IBM Security Guardium V10.1 en
IBM
Table of Contents
Welcome
Product overview
IBM Guardium
What's new in this release
Release Notes
Getting Started
Getting Started with the User Interface
Customizing the User Interface
Quick start for monitoring and compliance
System View
Data Activity Monitoring
Policies and Rules
Workflows
Auditing
Classification
File Activity Monitoring
Overview and concepts for file activity monitoring
Prerequisites for file activity monitoring
High level workflow for file activity monitoring
Key Concepts and Tools
Queries and Reports
Access Control
User Roles
Groups
Data Archive and Purge
Guardium Installation Manager
Discover
Datasources
Creating a datasource definition
Working with existing datasources
Reporting on datasources
Defining a datasource using a service name
Managing KDC definitions
Cloud database service protection
Cloud database service protection workflow
AWS IAM definition
Create, modify, delete cloud accounts
Discover cloud databases
Catalog and manage databases
Manage Classification and Vulnerability Assessment
Configure database auditing
Modify limit of objects added automatically and collector
Enable auditing on one database
Disable auditing on one database
Starting and stopping DB audit ownership
Manage object auditing
Managing object audit in one database
Managing object audit in multiple databases
Database Auto-discovery
Classification
Classification Process Performance
Classification Rule Handling
Working with Classification Processes
Working with Classification Policies
Working with Classification Rules
Working with Classification Rule Actions
Discover Sensitive Data
Discovery scenarios
Name and description
What to discover
Rule Criteria
Actual Member Content
Where to search
Run discovery and review report
Audit
Scheduling
Regular Expressions
Discover and classify sensitive data in file servers
Installing and activating FAM components
File discovery and classification GIM parameters
Customizing FAM decision plans
Entitlement Optimization
Enable and configure entitlement optimization
Entitlement Optimization What's New
Entitlement Optimization Users and Roles
Entitlement Optimization Recommendations
Entitlement Optimization Browse entitlements
Entitlement Optimization What If
Protect
Baselines
Policies
Special pattern tests
Rule actions
Creating policies
Installing Policies
Rule definition fields
How to integrate custom rules with Guardium policy
How to use the appropriate Ignore Action
Character sets
Correlation Alerts
How to signify events through Correlation Alerts
Incident Management
How to manage the review of multiple database security incidents
Query rewrite
How query rewrite works
Using query rewrite
Enabling query rewrite
Creating query rewrite definitions
Testing query rewrite definitions
Defining a security policy to activate query rewrite
Creating a custom report to validate query rewrite results
File Activity policies and rules
File Activity Policies and rules functionality
Create a FAM policy and its rules from scratch
Creating a FAM policy rule from the Investigative Dashboard Entitlements tab
Reports
Report parameters
Creating dashboards
Viewing a report
Refreshing reports
Exporting a report
Viewing Drill-Down Reports
Creating a report
Creating reports for z/OS
Data Mart
Audit and Report
Queries
Using the Query Builder
Query Conditions
Domains, Entities, and Attributes
Domains
Custom Domains
Entities and Attributes
Database Entitlement Reports
How to take advantage of predefined reports
Predefined Reports
Predefined admin reports
Predefined user Reports
Predefined Reports Common
How to ask questions of the data
How to report on dormant tables and columns
How to Generate API Call from Reports
How to use Constants within API Calls
How to use API Calls from Custom Reports
Optional External Feed
Mapping an External Feed
Distributed Report Builder
How to create a Distributed Report
Getting started
Product overview
Product legal notices
What's new
Release notes
Installing
Upgrading
Common tasks
More information
Product overview
Product and release information for Guardium® Solutions.
IBM Guardium
IBM Guardium prevents leaks from databases, data warehouses and Big Data environments such as Hadoop, ensures the integrity of information and automates
compliance controls across heterogeneous environments.
What's new in this release
New features, functions, and enhancements.
Release Notes
Learn about the latest features and enhancements, system requirements, and upgrade, installation, and support information.
IBM Guardium
IBM Guardium prevents leaks from databases, data warehouses and Big Data environments such as Hadoop, ensures the integrity of information and automates
compliance controls across heterogeneous environments.
It protects structured and unstructured data in databases, big data environments and file systems against threats and ensures compliance.
It provides a scalable platform that enables continuous monitoring of structured and unstructured data traffic as well as enforcement of policies for sensitive data access
enterprise-wide.
A secure, centralized audit repository combined with an integrated workflow automation platform streamlines compliance validation activities across a wide variety of
mandates.
It leverages integration with IT management and other security management solutions to provide comprehensive data protection across the enterprise.
The Guardium products are intended to enable continuous monitoring of heterogeneous database and document-sharing infrastructures, as well as enforcement of your policies for sensitive data access across the enterprise, utilizing a scalable platform. A centralized audit repository designed to maximize security, combined with an integrated compliance workflow automation application, enables the products to streamline compliance validation activities across a wide variety of mandates.
IBM Security Guardium is designed to help safeguard critical data. Guardium is a comprehensive data protection platform that enables security teams to automatically
analyze what is happening in sensitive-data environments (databases, data warehouses, big data platforms, cloud environments, files systems, and so on) to help
minimize risk, protect sensitive data from internal and external threats, and seamlessly adapt to IT changes that may impact data security. Guardium helps ensure the
integrity of information in data centers and automate compliance controls.
IBM Security Guardium File Activity Monitoring (FAM) - Use Guardium file activity monitoring to extend monitoring capabilities to file servers.
Automatically locate databases and discover and classify sensitive information within them;
Automatically assess database vulnerabilities and configuration flaws;
Ensure that configurations are locked down after recommended changes are implemented;
Enable high visibility at a granular level into database transactions that involve sensitive data;
Track activities of end users who access data indirectly through enterprise applications;
Monitor and enforce a wide range of policies, including sensitive data access, database change control, and privileged user actions;
Create a single, secure centralized audit repository for large numbers of heterogeneous systems and databases; and
Automate the entire compliance auditing process, including creating and distributing reports as well as capturing comments and signatures.
The Guardium solution is designed for ease of use and scalability. It can be configured for a single database or thousands of heterogeneous databases located across the
enterprise.
This solution is available as preconfigured appliances shipped by IBM® or as software appliances installed on your platform. Optional features can easily be added to
your system after installation.
These are the key functional areas of Guardium's database security solution:
Vulnerability assessment. This includes not just discovering known vulnerabilities in database products, but also providing complete visibility into complex
database infrastructures, detecting misconfigurations, and assessing and mitigating these risks.
Data discovery and classification. Although classification alone does not provide any protection, it serves as a crucial first step toward defining proper security
policies for different data depending on its criticality and compliance requirements.
Data protection. Guardium addresses data encryption at rest and in transit, static and dynamic data masking, and other technologies for protecting data integrity
and confidentiality.
Monitoring and analytics. This includes monitoring of database performance characteristics and complete visibility into all access and administrative actions for each instance. On top of that, advanced real-time analytics, anomaly detection, and security information and event management (SIEM) integration can be provided.
Threat prevention. This refers to methods of protection from cyberattacks such as distributed denial-of-service (DDoS) or SQL injection, mitigation of unpatched
vulnerabilities and other database-specific security measures.
Access management. This goes beyond basic access controls to database instances, providing more sophisticated, dynamic, policy-based access management capable of identifying and removing excessive user privileges, managing shared and service accounts, and detecting and blocking suspicious user activities.
Audit and compliance. This includes advanced auditing mechanisms beyond native capabilities, centralized auditing and reporting across multiple database
environments, enforcing separation of duties, and tools supporting forensic analysis and compliance audits.
Performance and scalability. Although not a security feature per se, it is a crucial requirement for all database security solutions to be able to withstand high loads,
minimize performance overhead and support deployments in high-availability configurations.
Amazon Oracle v11 RDS DBaaS Monitoring with Cloud database service protection
Classifier enhancement
Enhance Enterprise Load Balancer to verify sniffer is up before allocating Managed Units
Deploy monitoring agents - Quickly prepare for database monitoring by discovering and activating GIM clients, installing S-TAPs, creating inspection engines,
and mapping the S-TAPs to collectors.
Cloudera Hadoop - Guardium was the first to provide Vulnerability Assessment in the NoSQL space with its support of MongoDB. Now Guardium is expanding into
the Hadoop/Big Data space with support for the Cloudera platform. Guardium Vulnerability Assessment helps organizations feel more confident in using Cloudera
by empowering them to assess and correct the system to align with security best practices. Combined with the Guardium Activity Monitor for real time audit,
compliance, and security analytics, Guardium can provide a holistic security solution for Cloudera and for most common databases and data warehouses in typical
enterprise environments.
Guardium S-TAP for z/OS - IBM Security Guardium extends data security on mainframes with enhanced:
Data protection to block against unauthorized DB2 for z/OS user activities
Auditing and filtering capabilities to further extend data protection and real-time analytics
An outlier is defined as behavior by a particular source (a database, a particular user on a database, a server, or an OS user) in a particular time period that is outside of the "normal" timeframe or scope of that source's activity. Outlier detection extends traditional database monitoring with increased intelligence that provides early detection of possible attacks during operation by analyzing changes in source behavior. This release introduces:
FAM support
Outlier mining status page, providing the current status of the outlier mining process on all managed units, and drill-down into outlier processes that
did not complete successfully
Two tabs in the Results Table of the Investigation Dashboard: Summary tab has one row per source per hour in which an anomaly was found, with
anomaly score and reasons; Details tab has one row per outlier with the anomaly score, outlier reason(s) and details (source program, object, verb,
etc.)
This release expands Guardium support for monitoring Hadoop data with Cloudera integration using Cloudera Navigator and Hortonworks integration using
Apache Ranger. These integrations allow SSL encryption for clients that need to access Hadoop data and are supported by a new Hadoop Monitoring UI.
Guardium now supports running multiple classifier processes concurrently. The ability to run more than one classifier process at a time allows more efficient
use of available system CPU resources.
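As a rough illustration of why concurrent classification makes better use of CPU resources, the fan-out can be sketched with a worker pool. This is a hypothetical sketch only: the datasource names and the classify function are invented for illustration, and Guardium's actual classifier runs as separate managed processes, not Python threads.

```python
from concurrent.futures import ThreadPoolExecutor

def classify(datasource: str) -> str:
    """Placeholder for one classifier scan of one datasource (hypothetical)."""
    return f"{datasource}: scanned"

datasources = ["oracle_prod", "db2_hr", "mysql_web"]

# Several scans in flight at once keep more cores busy than scanning
# the datasources one at a time. pool.map preserves input order.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(classify, datasources))

print(results)
```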
By default, Guardium classification processes now exclude several system databases and schema used by database software providers. By excluding these
databases and tables, classification processes run more efficiently and may return fewer errors.
Cleversafe backup/archive supports the Amazon S3 interface using the same SDK. The Guardium interface to Cleversafe is analogous to the Amazon S3 interface (which is also supported by Guardium). Guardium cloud support now includes Cleversafe, SoftLayer, and Amazon S3.
The new Deployment Health Dashboard expands existing deployment health views by providing an at-a-glance summary of health issues from across an
entire Guardium deployment. The dashboard is especially useful for identifying patterns and trends in the health data before investigating individual systems
where problems are identified.
UID chain for Windows FAM - Currently the Windows FAM agent returns the username of the process assigned to a file event. Now the Windows FAM agent changes that single username into a chain of usernames that belong to the history of the process (UID chain). For instance, if Process 1 (user janedoe) spawns Process 2 (user johndoe), then for file events related to Process 2, FAM reports the UID chain consisting of {janedoe, johndoe}.
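The UID chain above can be sketched as a walk up the process ancestry. This is an illustrative sketch only; the Process type here is hypothetical, not the actual FAM agent data structure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Process:
    """Hypothetical stand-in for a monitored process record."""
    user: str
    parent: Optional["Process"] = None

def uid_chain(proc: Process) -> List[str]:
    """Collect usernames from the oldest ancestor down to proc."""
    chain: List[str] = []
    node: Optional[Process] = proc
    while node is not None:
        chain.append(node.user)
        node = node.parent
    chain.reverse()  # oldest ancestor first
    return chain

# Process 1 (janedoe) spawns Process 2 (johndoe):
p1 = Process(user="janedoe")
p2 = Process(user="johndoe", parent=p1)
print(uid_chain(p2))  # ['janedoe', 'johndoe']
```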
Multi-Action Rule for FAM - Multi-action rules comprise multiple actions, each applying to a specified command category or a specified group. The commands in a FAM context are: Read, Write, Delete, Execute, and File Operation.
6. Entitlements optimization
Entitlement Optimization mediates between the role of the DBA in providing users the entitlements required to perform their jobs efficiently, and the role of
Security in keeping entitlements as accurate and as minimal as possible to prevent system vulnerabilities. Navigate to Entitlements optimization by Discover
> Database Entitlements > Entitlement Optimization
7. HP Vertica support
HP Vertica is a big data system that competes with Hadoop. HP Vertica provides a standard PostgreSQL interface with its proprietary extensions.
HP Vertica is used for data warehouses to provide very fast query performance. HP Vertica is used for user interaction analysis, ad tracking, click stream
applications, threat assessment and financial forecasting.
KTAP request updates supported via existing processes (increments package version).
Shell and GIM installers will refuse to install if RPM installation is detected.
9. GDPR Accelerator
Data privacy and security are among the most pressing concerns that any organization must face. Previously, each country within the European Union required different levels of compliance; the newly announced General Data Protection Regulation (GDPR) expands and standardizes data protection rules across the whole European Union.
The Guardium GDPR accelerator provides predefined reports based on GDPR groups and policies. To begin working with the GDPR accelerator, assign the
GDPR role to a Guardium user, then navigate to Accelerators > GDPR with that user account.
Data in-sight introduces a revolutionary paradigm that utilizes human visual capabilities to gain an overall view on data flow and to identify unexpected
behaviors. Guardium already provides robust machine learning and data-analysis features to assist audits and detect attacks, based on accumulated
experience and knowledge. Data in-sight adds the flexibility of human visual perception to spot associations and movements in the raw data, irrespective of known attack types, that would otherwise go unnoticed.
For example, an object recognition project to identify potholes in city streets would not identify an elephant wandering the neighborhood. The human eye,
however, would spot it immediately. Similarly, when reviewing audited data in bar charts, users look for known issue types, but can easily overlook new
(unknown) aberrations.
Data in-sight converts audited data to a 3-D chronological visualization of data sources and destinations, showing data transactions unfold exactly as they
occurred.
The visualization space contains two planes, each represents entities of the audit domain of a given type. Every entry in the audit data is represented as a
moving ‘flash line’ from an object of the upper plane (one of client IP, OS user, DB user, source program) to an object of the lower plane (one of
database, object, server). The flash line between the source and the destination leaves a trail (a dotted line) indicating the presence of interaction between
the specific source and destination, which gradually fades into the background. The trails form an overview of the interaction between sources and
destinations in the selected time period. The sources are located near their destinations, and near other similar sources. The size of the destination entity is
proportional to the volume of transactions relative to the other destination entities. There are many ways of modifying the display including: color-code the
top entity (color changes as data source details change), filter from the data in-sight chart, and the investigation dashboard facets. You can also view data in-sight with VR headsets.
To access data in-sight: in the Investigation Dashboard, click Add Chart > data in-sight chart.
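Conceptually, the trails and destination sizes described above boil down to aggregating audit entries by source and destination pair. The sketch below is purely illustrative; the field names and sample records are hypothetical, not Guardium's actual audit schema.

```python
from collections import Counter

# Each audit entry links a source (e.g. a DB user) to a destination
# (e.g. an object). These sample records are invented for illustration.
audit_entries = [
    {"db_user": "joe", "object": "PATIENTS"},
    {"db_user": "joe", "object": "PATIENTS"},
    {"db_user": "app1", "object": "ORDERS"},
]

# A trail is the aggregated interaction between one source and one
# destination; destination volume drives the entity's relative size.
trails = Counter((e["db_user"], e["object"]) for e in audit_entries)
dest_volume = Counter(e["object"] for e in audit_entries)

print(trails[("joe", "PATIENTS")])  # 2
print(dest_volume["PATIENTS"])      # 2
```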
Support for a Guardium appliance running in Hyper-V environment. Hyper-V is a virtualization solution from Microsoft.
Stability and reliability are enhanced for the S-TAP agents and the collection parsing.
Central Manager Health View provides a central dashboard to assess the status of the deployed Guardium components.
S-TAP Watchdog (guard_monitor) for UNIX/Linux and Windows is a process designed to monitor S-TAP performance and responsiveness. If S-TAP
CPU utilization exceeds the configured threshold, or if S-TAP does not respond to a console request, the following actions can be taken:
Enterprise readiness enhancements make Guardium components easier to deploy and use in large environments, including:
Updates to the automatic load balancing to improve granularity and rebalancing requests
Finer grain access to user interface (UI) console to help customers divide roles and access to Guardium
Template and profile configurations to ease deployment and control from the Central Manager
Improved failover, encryption, and reporting from the S-TAP agent on System i
Enhanced filtering, UID chaining, and usability for the S-TAP agents for z/OS data sources
Additional data security functions for more big data platforms: dynamic data masking for MongoDB, blocking for HortonWorks and integration with
Ranger security platform, and Cassandra Kerberos
Ranger integration - Ranger offers a centralized security framework to manage fine grained access control over Hadoop and related
components (Hive, HBase, HDFS, Yarn). Using Ranger administration console, users can easily manage policies around accessing a resource
(file, folder, database, table, column etc) for a particular set of users and/or groups, and enforce the policies within Hadoop. They also can
enable audit tracking and policy analytics for deeper control of the environment.
Support for the S-TAP agent RedHat 7.1 on Power 8 (big and little endian) architecture. Endianness refers to the order of the bytes, comprising a
digital word, in computer memory. Words may be represented in big-endian or little-endian format. Little-endian format stores the least significant
byte at the lower memory address with the most significant byte being stored at the highest memory address.
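The two byte orders can be demonstrated with Python's standard struct module (an illustrative example, independent of Guardium):

```python
import struct

# Pack the 32-bit word 0x01020304 in both byte orders.
word = 0x01020304

big = struct.pack(">I", word)     # most significant byte first
little = struct.pack("<I", word)  # least significant byte first

print(big)     # b'\x01\x02\x03\x04'
print(little)  # b'\x04\x03\x02\x01'
```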
Security integration that provides synergistic use cases for the challenging security problems across IT silos:
Insider Threat Protection. Leverage integration with IBM Security Privileged Identity Manager to uncover insider threats.
Threat Protection System. Work in conjunction with IBM Security QRadar and IBM Security XGS to detect threats before they reach the data source to
prevent data breaches or heighten monitoring alertness.
Investigation Center provides a central place to run forensic tracking based on the audit records.
Updated security awareness with new Common Vulnerabilities and Exposures (CVE) and other vulnerability tests
Shared common framework for vulnerability assessment from the Application layer to the backend infrastructure, with an integration with IBM Security
AppScan
Support for FAM discovery on AIX 6.1 and AIX 7.1 (no classification). Support for shared drive discovery and classification on the FAM crawler.
Release Notes
Learn about the latest features and enhancements, system requirements, and upgrade, installation, and support information.
Announcement
See the IBM Guardium release announcement for the following information:
Product-positioning statement
System requirements
For Guardium V10.1 system requirements and supported platforms information, see http://www-01.ibm.com/support/docview.wss?uid=swg27047801.
Upgrading Guardium
Installing Guardium
See Installing your Guardium system for information about installing the latest version of Guardium.
Known issues
Known issues are documented and made available through the IBM Support website.
As problems are discovered and resolved, the IBM Support website is updated. Search the IBM Support website to quickly find workarounds or solutions to problems as
well as other documents such as downloads and detailed system requirements.
Support lifecycle
If you are using an older version of Guardium software, plan ahead to allow time for upgrades. You can find information about end-of-support dates for IBM products at
the IBM Software Support Lifecycle website.
Getting Started
Getting Started with the User Interface
Learn the basics of the Guardium user interface, including logging in for the first time, banner and navigation menus, and the user interface and data search.
Customizing the User Interface
Guardium supports customizing the navigation menu for specific users and roles.
Quick start for monitoring and compliance
Learn how to deploy monitoring agents to your database servers and configure database monitoring for compliance with security standards and regulations.
System View
The System View is the default initial view for many users. It enables you to see key elements of system status.
Data Activity Monitoring
Information about key security concepts used in Guardium data activity monitoring.
File Activity Monitoring
File Activity Monitoring discovers the sensitive data on your servers; classifies content using pre-defined or user defined definitions; configures rules and policies
about data access, and actions to be taken when rules are met.
Key Concepts and Tools
Information about key concepts pertaining to Guardium administration.
Related information:
Guardium overview, architecture, and user interface (video)
Navigation
When you first log in to the Guardium user interface, there are two main menus - the banner and the navigation menu.
You can expand and collapse the navigation menu by clicking the chevron icon, or you can hide the navigation menu completely by clicking the show / hide icon.
The initial layout of your screen is determined by the license applied, the access allowed based on roles, the machine type and a visibility factor. Examples of roles are
user, admin, access manager, and CLI. Roles are assigned to users and applications to grant users specific access privileges.
Banner Menu
The banner contains the following items:
To-Do List - Contains the Audit Process To-Do List, which can be filtered by user, and the Processes With No Pending Results.
Help - Get information about your Guardium system, such as the version number, by clicking Help > About Guardium. For help content specific to a screen or feature you're working with, click the small help icon that is embedded in the screen's pane. Note: Both help icons take you to the same Information Center, where you can search and access all help content.
User interface / data / file search - Search for a part of the user interface, a piece of data, or a file. For example, if you want to find the Policy Builder, toggle the search to User Interface, and start typing policy builder. Click any of the results to go to that part of the user interface.
Account type - Indicates what type of account you have. Edit your account details, such as your password or name, customize UI layout, and sign out of Guardium securely.
Machine type - Indicates what type of machine you are on, such as stand-alone, managed unit, central manager, or aggregator.
The banner menu also contains important startup messages such as Low RAM memory, Quick Search memory and CPU 4-cores minimum requirement, Certificate
expiration, Central Management failure, SSLv3 enabled or disabled, and No License.
Note: Guardium recommends that SSLv3 be disabled. However, in dealing with older Guardium versions that do not have the latest release installed, if SSLv3 is disabled,
the Central Management functionality will be impaired between the Central Manager and the managed units.
Navigation Menu
Each icon in the navigation menu represents one phase of the Guardium security lifecycle; click any icon to expand it and see the components within the phase. The lifecycle-centric navigation menu is one way to navigate the user interface and is consistent across roles. Menu items may be customized and may or may not appear based on your role.
Setup - Configure your network settings, check the status of your services, and set up datasource definitions, groups, aliases, and alerts.
Manage - Manage your environment's overall health, S-TAPs, data, modules, maintenance, and reports.
Discover - Automatically discover new databases that are introduced to your environment, and find and classify sensitive data.
Harden - Assess your environment's current weaknesses with Vulnerability Assessment and monitor changes made to your environment with Configuration Auditing System (CAS).
Investigate - Monitor database activities and investigate suspicious activity in any part of your environment.
Protect - Protect your environment with data security policies that block suspicious activity and prevent unauthorized access to data. For more information about policies, see Policies.
Comply - Reach compliance initiatives with audit processes and granular reporting.
Reports - Create your own report or use one of many predefined reports to report on any part of your environment. For more information about reports, see Reports.
My Dashboards - Create your own dashboards to easily review reports that are of primary interest to you. For more information about dashboards, see Creating dashboards.
Note: When modifying items, the best practice is to clone the item, and then modify the clone.
The Customize Navigation Menu and Customize User/Role tools allow you to conveniently change the content and organization of the navigation menu. You can access
these tools in several locations:
All users can customize their own navigation menu by opening the User menu in the Guardium banner and selecting Customize.
Administrative users can customize the navigation menu for other users and roles by opening the User menu and selecting Customize User/Role or by navigating to
Setup > Tools and Views > Customize User/Role.
Users logged in as accessmgr can customize the navigation menu for other users and roles by navigating to Access > Access Management, selecting Role Browser,
and clicking the Customize Navigation Menu link.
The Navigation Menu list reflects the organization and contents of the Guardium navigation system. Select tools and reports from the Available Tools and Reports list and
use the icon to add items to the Navigation Menu list. Remove items from the Navigation Menu list by clicking the icon next to an item. Use drag and drop or the
icon controls to rearrange items within the Navigation Menu list.
It is possible to define a new Guardium home page (that is, the first page seen after logging into the system) by selecting an item from the Navigation Menu list and clicking
the icon.
After clicking the OK button, the Guardium navigation menu is updated to reflect any changes you made in the Navigation Menu list.
You cannot delete the My Dashboards group, but you can delete individual dashboards within the group.
New groups that are empty will not be saved.
Empty groups shown in the Navigation Menu list will not appear in the Guardium navigation menu.
Use the Deploy Monitoring Agents tool to automatically activate GIM clients, install S-TAPs, and begin monitoring database traffic.
The deploy monitoring agents tool simplifies the process of establishing a Guardium deployment. Building on existing Guardium Installation Manager (GIM) infrastructure, the deploy monitoring agents tool helps you quickly find database servers, install monitoring agents (S-TAPs), and configure inspection engines for your databases. In addition, the tool provides a centralized view for tracking and reviewing deployment status.
Compliance monitoring
After deploying monitoring agents (S-TAPs), use the Compliance Monitoring tool to establish monitoring for specific security standards and regulations.
Guardium provides several compliance monitoring templates--groups, security policies, and reports corresponding to specific standards and regulations--including
the following:
These quick start compliance monitoring templates are especially useful for organizations that must comply with one of the associated standards or regulations in a
short period of time. After installing security policies, the compliance monitoring tool guides administrators or compliance officers through the initial setup and
population of groups with organization-specific information such as client IP addresses and specific privileged user IDs. In addition, the compliance monitoring tool
periodically checks your Guardium environment for new databases that can be monitored using the compliance monitoring templates.
Procedure
Results
After successfully deploying monitoring agents and configuring compliance monitoring for your database servers, Guardium begins monitoring your database traffic.
For more information about interpreting what you see on the compliance monitoring page, see Understanding the compliance monitoring views.
System View
The System View is the default initial view for many users. It enables you to see key elements of system status.
Three tabs under the System View display different types of status information:
The S-TAP Status Monitor displays summary data about S-TAPs that are deployed in your environment. Icons represent the high-level status, and you can drill down
to view information about inspection engines.
The Unit Utilization tab displays information about the usage of each Guardium system.
The System Monitor tab displays up-to-date details about incoming data, CPU usage, and other information.
Policies and Rules
Each rule in a policy defines a conditional action. The condition can be a simple test, for example a check for any access from a client IP address not found in an
Authorized Client IPs group, or the condition can be a complex test that evaluates multiple message and session attributes such as database user, source program,
command type, time of day, etc. Rules can also be sensitive to the number of times a condition is met within a specified timeframe.
The action triggered by the rule can be a notification action (e-mail to one or more recipients, for example), a blocking action (the client session might be disconnected), or
the event might simply be logged as a policy violation. Custom actions can be developed to perform any tasks necessary for conditions that may be unique to a given
environment or application.
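As an illustration (a hypothetical sketch, not Guardium's rule engine), a rule that fires only once an unauthorized client IP has been seen a threshold number of times within a timeframe might be modeled as follows; the group contents and event attribute names are invented for the example:

```python
from collections import deque

class ThresholdRule:
    """Hypothetical sketch of a policy rule: fires when the condition
    is met `threshold` times within a sliding `window` of seconds."""
    def __init__(self, condition, threshold, window):
        self.condition = condition
        self.threshold = threshold
        self.window = window
        self.hits = deque()  # timestamps of recent condition matches

    def evaluate(self, event, now):
        if not self.condition(event):
            return False
        self.hits.append(now)
        # discard matches that fell out of the sliding window
        while self.hits and now - self.hits[0] > self.window:
            self.hits.popleft()
        return len(self.hits) >= self.threshold

# Invented example group: the rule matches any client IP not in it.
authorized_ips = {"10.0.0.5", "10.0.0.6"}
rule = ThresholdRule(
    condition=lambda e: e["client_ip"] not in authorized_ips,
    threshold=3,
    window=60,
)
```

With this sketch, the first two unauthorized accesses are only counted; a third arriving within the 60-second window triggers the rule.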
Workflows
Workflows consolidate several database activity monitoring tasks, including asset discovery, vulnerability assessment and hardening, database activity monitoring and
audit reporting, report distribution, sign-off by key stakeholders, and escalations.
Workflows are intended to transform database security management from a time-consuming manual activity performed periodically to a continuously automated process
that supports company privacy and governance requirements, such as PCI-DSS, SOX, Data Privacy and HIPAA. In addition, workflows support the exporting of audit
results to external repositories for additional forensic analysis via Syslog, CSV/CEF files, and external feeds.
For example, a compliance workflow automation process might address the following questions: what type of report, assessment, audit trail, or classification is needed,
who should receive this information and how sign-offs are handled, and what is the schedule for delivery?
For each table in which changes are to be tracked, you can select which SQL value-change commands to monitor (insert, update, delete). Before and after values are
captured each time a value-change command is executed against a monitored table. This change activity is uploaded to Guardium on a scheduled basis, after which all of
Guardium's reporting and alerting functions can be used.
You can view value-change data from the default Values Changed report, or you can create custom reports using the Value Change Tracking domain.
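The before and after capture described above can be sketched as follows; this is a simplified illustration with invented names, not the actual capture and upload mechanism:

```python
class ChangeTracker:
    """Hypothetical sketch: record before and after images for
    value-change commands (insert/update/delete) on a monitored table."""
    def __init__(self):
        self.rows = {}      # primary key -> row dict
        self.audit = []     # captured change records

    def apply(self, op, key, new_row=None):
        before = self.rows.get(key)
        if op == "insert":
            self.rows[key] = new_row
        elif op == "update":
            self.rows[key] = {**before, **new_row}
        elif op == "delete":
            self.rows.pop(key, None)
        after = self.rows.get(key)
        # each change record keeps both images for later reporting
        self.audit.append({"op": op, "key": key,
                           "before": before, "after": after})

t = ChangeTracker()
t.apply("insert", 1, {"salary": 100})
t.apply("update", 1, {"salary": 200})
```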
Classification
Guardium supports the discovery and classification of sensitive data to allow the creation and enforcement of effective access policies.
A classification policy is a set of rules designed to discover and tag sensitive data elements. Actions can be defined for each rule in a classification policy, for example to
generate an email alert or to add a member to a Guardium group, and classification policies can be scheduled to run against specified datasources or as tasks in a
workflow.
Discovery and classification routines become important as the size of an organization grows and sensitive information like credit card numbers or personal financial data
become present in multiple locations, often without the knowledge of the current administrators responsible for that data. This frequently happens in the context of
mergers and acquisitions, or when legacy systems have outlasted their original owners. Guardium classification discovers and tags this sensitive data so appropriate
access policies can be applied.
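A minimal sketch of a classification rule paired with an action, such as adding the matched object to a group; the pattern, object names, and group are invented for illustration:

```python
import re

class ClassificationRule:
    """Hypothetical sketch: a rule pairs a search pattern with actions
    to run on each match (e.g. add the object to a Guardium group)."""
    def __init__(self, name, pattern, actions):
        self.name = name
        self.pattern = re.compile(pattern)
        self.actions = actions

    def scan(self, objects):
        """objects maps object names to sampled content."""
        matched = []
        for obj_name, sample in objects.items():
            if self.pattern.search(sample):
                matched.append(obj_name)
                for action in self.actions:
                    action(obj_name)
        return matched

sensitive_objects = set()   # stands in for a Guardium group
rule = ClassificationRule(
    name="Find card-like columns",
    pattern=r"\b\d{4}-\d{4}-\d{4}-\d{4}\b",   # illustrative pattern only
    actions=[sensitive_objects.add],
)
found = rule.scan({
    "orders.card_no": "4111-1111-1111-1111",
    "orders.notes": "shipped on time",
})
```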
File Activity Monitoring
Discovery includes collecting metadata and entitlements for files and folders.
Classification uses decision plans to identify potentially sensitive data in the files, such as credit card information or personally identifiable information.
Monitoring includes the collection of audit information according to policy rules, with real-time alerts or blocking of suspicious users or connections.
Use case 1
Critical application files can be accessed, modified, or even destroyed through back-end access to the application or database server
Solution: File Activity Monitoring can discover and monitor your configuration files, log files, source code, and many other critical application files and alert or block
when unauthorized users or processes attempt access.
Use case 2
Need to protect files containing Personally Identifiable Information (PII) or proprietary information while not impacting day-to-day business.
Solution: File Activity Monitoring can discover and monitor access to your sensitive documents stored on many file systems. It will aggregate the data, give you a
view into the activity, alert you in case of suspicious access, and allow you to block access to select files and folders and from select users.
Use case 3
Solution: File Activity Monitoring can discover, monitor, and block back-end access to your documents, which are normally accessed through an application front-
end (for example, web portal).
File activity monitoring for file servers consists of the following capabilities:
The basic discovery scan identifies the list of folders and files, their owner, access permissions, size, and the date and time of the last update. It also identifies user
permissions and group permissions. Discovery supports all file types. Classification is defined by decision plans. Each decision plan contains rules for recognizing a
certain type of data. (Decision plans for File Activity Monitoring are analogous to classification policies for Data Activity Monitoring.) Classification supports many
types of files, including plain text, HTML, Office, and PDF. Default decision plans exist for HIPAA, PCI, SOX, and Source Code. You can change the classification entities from
the resulting reports/investigation dashboard, using the default decision plans. In addition, you can create new plans, or modify existing plans, using the Content Classifier
Workbench, a Windows application you upload to your collector appliance. See the requirements for IBM Content Classification Version 8.8, in this IBM Content
Classification technote. Plans are activated and configured through the Guardium Installation Manager (GIM).
Discovery and classification are handled by a discovery agent, called the file crawler. The file crawler sends the file metadata and data from its discovery and classification
processes to the Guardium system. The scan schedule is configurable. Subsequent (incremental) scans, after initial discovery and classification, identify incremental
changes of new and changed files only. Install and configure the file crawler with the Guardium Installation Manager (GIM) just as you would any other bundle.
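The incremental-scan idea can be sketched by comparing file modification times against the previous scan; this is a simplified illustration, not the file crawler's actual algorithm:

```python
def incremental_scan(previous, current):
    """Hypothetical sketch of an incremental crawl: given file metadata
    from the last scan and the current one (path -> mtime), return only
    files that are new or have changed since then."""
    changed = {}
    for path, mtime in current.items():
        if previous.get(path) != mtime:
            changed[path] = mtime
    return changed

# Invented example paths and timestamps:
last_scan = {"/data/a.txt": 100, "/data/b.txt": 150}
this_scan = {"/data/a.txt": 100, "/data/b.txt": 180, "/data/c.txt": 200}
delta = incremental_scan(last_scan, this_scan)
```

Only the changed file and the new file appear in the delta; the unchanged file is skipped.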
File activity monitoring is implemented by the S-TAP, running on the file server. (Activity monitoring does not require the FAM bundle used by discovery and classification).
For NFS volumes, it is important to have an S-TAP installed and configured on all machines that access those volumes. S-TAP manages ongoing monitoring, alerting, and
blocking of file access, according to the Guardium policy rules. The rules specify which file servers and files to monitor and what actions to take if policy rules are violated,
for example log the violation, alert, or block access. Monitored Operations are Read, Write, Execute, Delete, Change Owner, Permissions, Properties. Any activity that
matches the security policy rules criteria is sent to the Guardium collector where it is stored in the Guardium repository. (In database activity monitoring, the S-TAP sends
all data activity to Guardium, where it is monitored.) All events recorded in the Guardium repository are audited events.
Because the file monitoring rules are activated in the S-TAP, blocking occurs immediately. The data that is requested by the user is never read from disk; the S-TAP blocks
and prevents the operation. Access to files can also be blocked, even if the operating system permissions allow access.
Monitoring activities are presented in the predefined reports: Users privileges, File privileges, Count of activity per user, Count of activity per client, Files open to
"public", Dormant users, Dormant Files, etc., and the FAM – Access report (log of all monitored activity), and in the Investigation Dashboard.
Important: Windows Administrator and Linux ROOT user activities are not monitored or blocked by File Activity Monitoring.
One S-TAP agent manages both file server and database activity monitoring. If you have licenses for both capabilities you can use the same S-TAP agent for both file and
database activity monitoring. Install and configure S-TAP with the Guardium Installation Manager (GIM) just as you would any other bundle.
File activity monitoring supports UID chain: The FAM agent changes a single user name into a chain of user names that belong to the history of the process (UID chain). For
instance if Process 1 (user janedoe) creates Process 2 (user johndoe), then for file events that are related to process #2, FAM reports the UID chain of {janedoe, johndoe}.
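The janedoe/johndoe example can be sketched as a walk up the process parent chain; the data structures here are invented for illustration:

```python
def uid_chain(processes, pid):
    """Hypothetical sketch: walk parent processes to build the chain of
    user names associated with a process, oldest ancestor first."""
    chain = []
    while pid is not None:
        proc = processes[pid]
        chain.append(proc["user"])
        pid = proc["parent"]
    return list(reversed(chain))

procs = {
    1: {"user": "janedoe", "parent": None},   # Process 1
    2: {"user": "johndoe", "parent": 1},      # Process 2, created by 1
}
```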
rhel-4-linux-i686 yes no no
rhel-4-linux-ia64 no no no
rhel-4-linux-x86_64 yes no no
rhel-5-linux-ia64 no no no
rhel-5-linux-ppc64 yes no no
rhel-6-linux-ppc64 yes no no
rhel-6-linux-s390x yes no no
suse-10-linux-ppc64 yes no no
suse-10-linux-s390x yes no no
suse-11-linux-s390x yes no no
UNIX: The debug level is configured by tap_debug_output_level. FAM error and debug logs are named guard_stap.fam.txt. The default location on UNIX is /tmp,
and is configured by tap_log_dir.
Windows: The FAM agent log file is called StapAT.ctl and resides in the C:\Program Files\IBM\Windows S-TAP\Logs folder.
Queries and Reports
Guardium queries describe a set of information obtained from the collected data. Queries comprise three elements: entities, fields, and conditions. Entities define
the scope of a query, fields list the columns of data to be returned by the query, and conditions define tests to match against the data (greater than, less than, contains,
etc.)
A report defines how the data collected by a query is presented. The default report is a tabular report that reflects the structure of the query, with each attribute displayed
in a separate column. All runtime parameters and presentation components of a tabular report can be customized.
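A rough sketch of the entity/field/condition model; the session attributes and condition operators here are invented, simplified stand-ins:

```python
def run_query(rows, fields, conditions):
    """Hypothetical sketch of a Guardium-style query: `fields` are the
    columns to return, `conditions` are tests matched against each row."""
    tests = {
        ">": lambda a, b: a > b,
        "<": lambda a, b: a < b,
        "contains": lambda a, b: b in a,
    }
    out = []
    for row in rows:
        # a row is returned only when every condition matches
        if all(tests[op](row[f], v) for f, op, v in conditions):
            out.append({f: row[f] for f in fields})
    return out

sessions = [
    {"db_user": "appuser", "sql": "SELECT * FROM orders", "failed_logins": 0},
    {"db_user": "scott",   "sql": "DROP TABLE payroll",   "failed_logins": 4},
]
report = run_query(
    sessions,
    fields=["db_user", "sql"],
    conditions=[("sql", "contains", "DROP"), ("failed_logins", ">", 2)],
)
```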
Access Control
Guardium provides access maps as a way to conveniently show data access between database clients and database servers.
Data access by applications and tools can be categorized along many dimensions, including what data is being accessed, how it is accessed, and how many SQL calls
are made. In an enterprise environment, understanding database access is important, whether to secure access for compliance initiatives or to tune and optimize your
database environment. Because there can be many databases and a very large number of database clients, mapping the data access paths can be difficult.
The deployment health topology and table views show the data flow relationships between systems in your environment. These views make it easy to identify problematic
systems and investigate the underlying issues. Access the topology view by navigating to Manage > System View > Deployment Health Topology. Access the table view by
navigating to Manage > System View > Deployment Health Table.
User Roles
A role defines a group of Guardium users who share the same access privileges.
When a role is assigned to an application or the definition of an item (a specific query, for example), only those Guardium users who are also assigned that role can access
that component. If no security roles are assigned to a component (a report, for example), only the user who defined that component and the admin user can access it.
At installation time, Guardium is configured with a default set of roles and a default set of user accounts. The Guardium access manager can create new roles and modify
existing roles as needed.
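The access rule described above can be sketched as follows; this is a simplified illustration, not Guardium's implementation:

```python
def can_access(component_roles, owner, user, user_roles):
    """Hypothetical sketch of the rule above: if a component has roles
    assigned, only users holding one of them may access it; otherwise
    only the component's owner and the admin user can."""
    if component_roles:
        return bool(component_roles & user_roles)
    return user == owner or user == "admin"

# A report with the (invented) "auditor" role assigned:
report_roles = {"auditor"}
```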
Groups
Guardium supports the grouping of elements to simplify creating and managing policies and to clarify the presentation of reports.
Grouping can simplify the process of creating policy and query definitions. It is often useful to group elements of the same type, and grouping can make the presentation
of information on reports more straightforward. Groups are used by all subsystems, and all users share a single set of groups.
For an example of grouping, assume that your company has 25 separate data objects containing sensitive employee information, and you need to report on all access to
these items. You could formulate a very long query testing for each of the 25 items. Alternatively, you could define a single group called sensitive employee info containing
those 25 objects. That way, in queries or policy rule definitions, you only need to test if an object is a member of that group.
An additional benefit of groups is that they can ease maintenance requirements when the group's composition changes. To continue the example, if your company decides
that two more objects need to be added to the sensitive employee info group, you only need to update the group definition and not all of the queries, reports, and policies
that reference the group.
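The sensitive employee info example can be sketched as a single membership test; the object names are invented for illustration:

```python
# Hypothetical sketch: a group collapses many per-object tests into one
# membership check.
sensitive_employee_info = {
    "EMP_SALARY", "EMP_SSN", "EMP_BANK_ACCT",   # ... up to 25 objects
}

def violates(event):
    """One test against the group, instead of one test per object."""
    return event["object"] in sensitive_employee_info

# When two more objects become sensitive, only the group changes; the
# queries and policies that reference it are untouched.
sensitive_employee_info.update({"EMP_MEDICAL", "EMP_REVIEWS"})
```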
Data Archive and Purge
There are two archive operations: Data Archive and Results Archive. The path to these archive operations is Manage > Data Management > Data Archive or Results Archive
(Audit).
Data Archive
With Data Archive, data is typically archived at the end of the day on which it is captured, which ensures that in the event of a catastrophe, only the data of that day
is lost. The purging of data depends on the application and depends on business and auditing requirements, but in most cases data can be kept on the machines for
more than six months.
Results Archive
In an aggregation environment, data can be archived from the collector, from the aggregator, or from both locations. Most commonly, the data is archived only once, and
the location from where it is archived varies depending on the customer's requirements.
Guardium Installation Manager
The GIM component includes a GIM server, which is installed as part of the Guardium system, and a GIM client, which must be installed on servers that host databases
and file servers you want to monitor. After you install the GIM client, it works with the GIM server to perform the following tasks:
If your environment includes a Guardium system configured as a Central Manager, you must decide which Guardium systems you want to use as GIM servers. You can
either manage all of your GIM clients from a single Guardium system, such as the central manager, or you can manage them in groups from the different Guardium
systems. If you manage all of your GIM clients from a single Guardium system, then you can view the status of all the GIM clients and perform related tasks from a single
interface. If you choose to manage your GIM clients in groups from separate Guardium systems, then you can use each system to work with the GIM clients that it
manages, but no overall or environment-wide view is available.
Discover
Discovery refers to processes of locating and identifying objects in your environment that must be tracked for security and compliance purposes.
Discovery is the process of finding important objects such as privileged users, sensitive data, and datasources. Classification is the process of appropriately identifying
what is discovered for security and compliance purposes. These processes of discovery and classification are important in large organizations where mergers, acquisitions,
and legacy systems introduce new objects to your environment in unstructured or unpredictable ways. Guardium® helps you incorporate these objects into
your environment so you can enforce effective security policies and ensure compliance.
A common scenario involves the discovery of sensitive data. Sensitive data refers to regulated information like credit card numbers, personal financial data, social security
numbers, and other information that requires special handling. Guardium supports two different approaches for discovering sensitive data: by using the Discover Sensitive
Data workflow builder, or by using the Policy Builder with other Guardium tools. The Discover Sensitive Data workflow builder is intended as an all-inclusive tool for
establishing discovery and classification processes for sensitive data. Use it to specify rules for discovery, define actions to take on discovered data, specify which data
sources to scan, distribute reports, and run the workflow on an automated schedule. For more advanced users, the Policy Builder supports more granular discovery and
classification rules that can be easily incorporated into existing processes and Guardium applications.
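As an illustration of what a discovery rule might look for, here is a card-number-like regular expression; the pattern is invented for the example, and real discovery rules would be broader and typically validated (for example, with a Luhn check):

```python
import re

# Illustrative pattern only: 16 digits, optionally grouped by dashes or
# spaces, bounded so it does not match inside longer digit runs.
CARD_LIKE = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")

def find_card_like(text):
    """Return all card-number-like substrings in the given text."""
    return CARD_LIKE.findall(text)
```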
Datasources
Datasources store information about your database or repository such as the type of database, the location of the repository, or credentials that might be
associated with it. You must define a datasource in order to use it with Guardium applications.
Cloud database service protection
Cloud database protection provides classification, vulnerability assessment, and object auditing on cloud databases.
Database Auto-discovery
The Auto-Discovery application scans and probes your servers for open ports to prevent unknown or unwanted connections to your network. You can run auto-
discovery processes on demand, or schedule the processes on a periodic basis.
Classification
Classification policies and processes define how Guardium discovers and treats sensitive data such as credit card numbers, social security numbers, and personal
financial data.
Discover Sensitive Data
Create an end-to-end scenario for discovering and classifying sensitive data.
Regular Expressions
Regular expressions can be used to search traffic for complex patterns in the data.
Discover and classify sensitive data in file servers
File activity monitoring ensures integrity and protection of sensitive data on UNIX and Windows file servers.
Entitlement Optimization
Entitlement Optimization mediates between the role of the DBA in providing users the entitlements that are required to perform their jobs efficiently, and the role of
Security in keeping entitlements as accurate and as minimal as possible to prevent system vulnerabilities.
Datasources
Datasources store information about your database or repository such as the type of database, the location of the repository, or credentials that might be associated with
it. You must define a datasource in order to use it with Guardium® applications.
Procedure
1. Open the Datasource Builder by navigating to Setup > Datasource Definitions.
2. Click to open the Create datasource dialog. Use the Create datasource dialog to provide information about the datasource to be stored for future use.
Depending on the application and database type that you select, and the type of datasource you use, the dialog varies slightly.
3. Select an Application Type.
4. Enter a unique Name for the datasource.
5. From the Database Type menu, select the database or type of file. For some applications, the datasource must be a database, and cannot be a text file. Depending
on the type of database you select, some fields on the panel are disabled, or the labels change. For example, Assign Credentials can be either optional or
mandatory. When mandatory, it is disabled and the user name and password fields are mandatory. When optional, user name and password are disabled until you
select Assign Credentials.
6. Select Share Datasource to share the datasource definition across all applications. If you do not share the datasource, the definition you create can be used only
with the application you chose.
7. Optionally, configure additional credentials.
Use SSL: Select to use SSL. Then optionally select import server SSL certificate, and click add certificate to select the certificate.
Use LDAP: Select to use LDAP. Then click Assign credentials, and enter the user name and password
Use Kerberos: Select to use a predefined Kerberos configuration. Select a Kerberos configuration, and enter the Realm and KDC. The datasource compares
this with its own KDC and Realm to make sure they match.
8. Select Save Password to save and encrypt your authentication credentials on the Guardium appliance. Save password is required if you are defining a datasource
with an application that runs as a scheduled task (as opposed to on demand). When save password is selected, login name and password are required.
9. Enter your credentials for Login Name and Password.
10. For the Host Name/IP field, enter the host name or IP address for the datasource.
11. Use the table to complete Port based on your datasource type.
Datasource type and port number table
Database type Port number
DB2 50000
Note: For DB2 UDB, Guardium supports count_big(*). On very large tables, a standard count(*) could fail.
DB2 for i 446
GreenplumDB 5432
Hadoop 21000-21050
Informix 1526
MS SQL Server (Dynamic ports) and MS SQL Server (DataDirect - Dynamic ports): Port number grayed out. This datasource allows a client, without a defined port value or where the dynamic function is enabled on the MS SQL Server database server, to connect dynamically to an MS SQL Server database. To define a dynamic port, go to the database server for MS SQL Server, define 0 for the Dynamic port type, and remove TCP/IP, which by default is port 1433. Setting the Dynamic port value to 0 and restarting the services sets a dynamic port.
Note: For MS SQL, Guardium supports count_big(*). On very large tables, a standard count(*) could fail.
Previously, the jTDS driver had to be downloaded in order to support Windows authentication using NTLM and NTLMv2.
Parameters
If the Guardium user wants to use Windows authentication, then add this parameter to the
Connection Property:
domain=domain_name;AuthenticationMethod=ntlmjava
If using NTLMv2 for Windows authentication, then add this parameter to the Connection Property:
domain=domain_name;AuthenticationMethod=ntlm2java
AuthenticationMethod
Purpose
Determines which authentication method the driver uses when establishing a connection. If the
specified authentication method is not supported by the database server, the connection fails and
the driver throws an exception.
Valid Values
Notes
The User property provides the user ID. The Password property provides the password.
The values type4, type2, and none are deprecated, but are recognized for backward compatibility.
Use the kerberos, ntlm, and userIdPassword values, respectively, instead.
CodePageOverride=UTF-8
encryptionMethod=SSL;validateServerCertificate=false
MongoDB 27017
MySQL 3306
Netezza 5480
PostgreSQL 5432
Sybase 4100
Sybase IQ 2638
Teradata 1025
Text 0
Text:HTTP 8000
Text:FTP 21
Text:SAMBA 445
Text:HTTPS 8443
N_A 0
WEBHDFS 50070
Note: When attempting to connect using an SSL datasource for the first time, you may encounter this error when testing the connection:
error
Connection unsuccessful
Could not connect to: 'jdbc:db2://su11u1x64t-va:55000/VA_DB' for user: '(DELETE ME) db2 10.1 SSL_DB2(Security Assessment)'.
DataSourceConnectException: Could not connect to: 'DB2 (DELETE ME) db2 10.1 SSL 9.70.146.39:55000' for user: 'db2inst1'.
Exception: com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2030][11211][4.15.134] A communication
error occurred during operations on the connection's underlying socket, socket input stream,
This happens because the GUI does not have the correct keystore file for the certificate loaded into memory. To correct this, restart the GUI; the error should go away
and the connection should succeed.
12. Depending on the datasource type, the dialog varies slightly for the fields after port.
If DB2, enter the database name.
If DB2 iSeries or Oracle, enter the service name.
If Informix, enter the Informix server name.
For a non-text Database Type, in the Database box, enter the database name (Informix, Sybase, MS SQL Server, PostgreSQL, or Teradata only). For Sybase or MS SQL
Server, if the box is left blank, the default is master. (This works for Entitlement Reports and Classifier; for VA, use the database instance name.)
For DB2, DB2 iSeries, or Oracle enter a valid schema name in the Schema box to use.
For a text file Database Type, in the File Name box, enter the file name.
13. Use the Connection Property box only if additional connection properties must be included on the JDBC URL to establish a JDBC connection with this datasource.
The required format is property=value, where each property and value pair is separated from the next by a semicolon.
For a Sybase database with a default character set of Roman8, enter the following property: charSet=utf8.
For an Oracle Encrypted Connection you need to define a Connection Property as:
oracle.net.encryption_client=REQUIRED;oracle.net.encryption_types_client=RC4_40 (Replacing with an encryption algorithm required by the monitored
instance, regardless of its type).
Note: 3DES168 encryption is problematic. A datasource defined to use 3DES168 encryption incorrectly throws an ORA-17401 protocol error or ORA-17002 checksum error when it encounters any SQL error. Thereafter, the connection will not work until it is closed and reopened.
For a DB2 Encrypted Connection you need to define a Connection Property as: securityMechanism=13
For a DB2 iSeries Connection, define a Connection Property as: property1=com.ibm.as400.access.AS400JDBCDriver;translate binary=true
For DB2 z/OS datasource, add a Connection Property to improve database performance: resultSetHoldability=2
In Oracle, sys is an Oracle default user, is owner of the database instance, and has super user privileges, much like root on Unix. SYSDBA is a role and has
administrative privileges that are required to perform many high-level administrative operations such as starting and stopping the database as well as
performing such operations as backup and recovery. This role (SYSDBA) can also be granted to other users. The phrase sys as SYSDBA refers to the
connection method required to connect as the sys user.
For monitor values for Oracle 10 (sys as SYSDBA) (this is for the Oracle open source driver), enter the following: internal_logon=sysdba
For DataDirect (Oracle driver), enter the following: SysLoginRole=sysdba
In addition, if using CRYPTO_CHECKSUM_TYPES in your sqlnet.ora, use the following examples:
oracle.net.encryption_client=aes256;oracle.net.crypto_checksum_types_client=SHA1
oracle.net.encryption_client=rc4_256;oracle.net.crypto_checksum_types_client=MD5
oracle.net.encryption_client=aes256;oracle.net.crypto_checksum_types_client=MD5
oracle.net.encryption_client=rc4_256;oracle.net.crypto_checksum_types_client=SHA1
Example: Use authentication to Oracle LDAP which is known as OID. Values needed are: the LDAP server host or IP, the LDAP server port, the Oracle instance
name and the realm. The custom URL must be properly entered:
jdbc:guardium:oracle:@ldap://wi3ku2x32t4:389/on0maver;cn=OracleContext;dc=vguardium;dc=com
14. If needed, enter a Custom URL connection string to the datasource. When the Custom URL field is blank, the connection is made using the properties entered in the
other datasource definition fields (for example, host, port, instance, etc.).
Important:
When specifying a Custom URL field with the Oracle Open Source format, specify jdbc:guardium:oracle://;SID=<SID>.
When creating a datasource for an Oracle database with Oracle Advanced Security enabled, specify EncryptionLevel=required in the Custom URL field
of the datasource definition.
15. Click Show Advanced Options to display the Roles and CAS options.
Because vendors offer flexibility during installation, users should be asked to help determine the two fields required on the datasource definition.
CAS needs two pieces of information: a database instance account to run some of the database tools on Unix, and the name of the database instance directory in
order to find the files it is to monitor. Generally, if the Database Instance Account and Directory are not correctly entered in the Datasource Definition, you will see
No CAS data available messages for tests where CAS could not find data.
a. Enter a Database Instance Account (software owner) and a Database Instance Directory (directory where database software was installed) that will be used
by CAS.
These are suggestions for how to find the needed information to fill in the CAS information for datasources. This information may vary from one installation to
another. One of the ways used on Unix is to list the /etc/passwd file for specific database installations that can be used to identify the database instance
account and instance directory. Sometimes during the installation an environment variable is defined in the database instance account identifying the
instance directory, such as ORACLE_HOME. In this case, enter $ORACLE_HOME in the database instance directory field of the datasource definition form and
the variable will be expanded to find the correct directory name on the database server.
Note: To search multiple directories, you can define multiple file paths for Database Instance Directory. Refer to the MongoDB row for an example.
Table 1. Database Instances

DB2
Additional hints: The program db2cmd.exe must be on the system path, or in the bin subdirectory of the Database Instance Directory.

MongoDB
Database Instance Account: Often mongodb or mongos.
Database Instance Directory: With MongoDB, you must specify multiple paths for the database instance directory. Indicate a separate path by using a pipe "|" with spaces. The /var/lib/mongo path is required, as it is the home path for the mongo user. MongoBinary=/usr/bin is the path to the mongo binary; specify the variable (which is case sensitive), then an equals sign and the path. dbpath=/var/lib/mongo is the path to the data files; in this case, it happens to be the same as the MongoDB home directory. You do not need to define all the listed paths; whichever paths are not defined will not be analyzed.

Oracle
Database Instance Account: Often oracle, or version specific such as oracle9 or oracle10.
Database Instance Directory: For example, /home/oracle9 on Unix, or C:\oracle\product\10.2.0\db_1 on Windows. An environment variable ORACLE_HOME may be defined.

SQL Server
Database Instance Account: Not needed unless Windows Authentication is being used. In that case, it must be in the form acceptable to Windows Authentication: DOMAIN/Username.
Database Instance Directory: There are two scenarios when populating the Database Instance Directory for CAS usage in SQL Server. If the datasource is being used for Vulnerability Assessment tests, this field needs to be populated with the database instance home directory (for example, MSSQL2008). If the datasource is being used for CAS monitoring of files or the registry, rather than Vulnerability Assessment tests, this field is the Microsoft SQL Server directory under Program Files.
Note: You must have two datasources if you want to do both Vulnerability Assessment tests and CAS file monitoring.

Sybase
Database Instance Account: Often sybase.
Database Instance Directory: /home/sybase for Unix, or C:\sybase for Windows. An environment variable SYBASE may be defined.

Netezza
Database Instance Account: Not needed. The installation is in the same location on all machines.

PostgreSQL
Database Instance Directory: This is the most flexible of the installations. The user is required to define two environment variables on the Postgres database server: PostgreSQL_BIN should be the location of the binaries for the installation, and PostgreSQL_DATA the location of the data.

Note: A MySQL datasource with a Unicode database name is not supported. The datasource name in MySQL must be ASCII.
Note: If an environment variable is to be used within the Database Instance Directory field, that environment variable must be defined on the database
server.
b. Select a Severity Classification (or impact level) for the datasource. Severity classification can be used to sort, filter, or focus datasources while you are
viewing reports and results.
c. Click Save to save the datasource definition (you cannot add roles or comments until the definition has been saved).
d. Optionally click Add Comments to add comments to the definition.
e. Optionally click Test Connection to test connectivity of the defined datasource.
f. Click Close when you are finished with the definition.
Procedure
Open the Datasource Builder by navigating to Setup > Datasource Definitions.
The Application Selection menu lists all applications with which you can use a datasource definition. Choose the application for which the datasource you want to
modify was created, and click Next, bringing you to the Datasource Finder.
Cloning a datasource
Procedure
Select the datasource that you want to clone from the Datasource Finder, and click Clone.
The information that you entered when the datasource definition was created appears in the Datasource Definition dialog, with "copy Of" appearing before the
original name of the datasource. Change whatever fields you like.
Click Apply to save the cloned datasource.
Modifying a datasource
Procedure
Select the datasource that you want to modify from the Datasource Finder, and click Modify.
The information that you entered when the datasource definition was created appears in the Datasource Definition dialog. Change whatever fields you like.
Click Apply to save the changes that you made to the datasource.
Removing a datasource
Procedure
Select the datasource that you want to remove from the Datasource Finder, and click Delete.
Reporting on datasources
Guardium® provides reports on the datasources that are in your environment and any changes made to them.
Procedure
Open the Datasources report by navigating to Reports > Report Configuration Tools > Datasources. The table that appears lists all datasources, and the information
that is stored in each datasource definition.
Right-click any cell in the table and you are given two options: Datasource Version History, and Invoke.
Click Datasource Version History to view changes made to the datasource definition.
Click Invoke to select and run one of the available APIs for the datasource.
Note: You can customize the run time and presentation parameters of the Datasources report by clicking the pencil icon.
Procedure
1. Determine the Oracle service name. You can use commands like these:
You can define up to 5 Kerberos Key Distribution Centers (KDC) on a Central Manager, and one on a standalone Guardium. To add a Key Distribution Center to Guardium
you specify:
Procedure
1. Click Setup > Tools and Views > Kerberos configuration
What to do next
After you have created a Kerberos KDC, you can select it when configuring your datasource setup.
Once you set up the Guardium connection with the cloud, you can:
AWS permissions are required to perform Guardium functions on the cloud DB. See AWS IAM definition.
In on-premises databases, the S-TAP installed in the database sends all database traffic to the Guardium system. In the cloud environment, Guardium pulls log files from
the cloud DB, and processes the data similarly to S-TAP data. The difference is that the S-TAP records all database activity, whereas in the cloud environment, only the
tables that you select are audited. Another difference is that there can be a slight delay in data retrieval from the cloud.
Activity on audited databases and objects is written to the database logs. The volume of log activity increases with the number of monitored items. High volume log
activity can impact the database performance. You need to ensure that you are capturing all relevant data, while not overloading the system.
You can run cloud database service protection in a CM environment and on a standalone Guardium collector.
In the context of cloud DB service protection, database refers to the database on the cloud, and datasource refers to the Guardium cataloged database.
Only one Guardium system can own the DB audit and object audit of any one DB. Other Guardium systems can access the same cloud account and see the DB details, but
cannot disable the DB audit or access the object audit data. You can move ownership from one Guardium system to another, for example if one goes down without
expectation of recovery.
Discovery, Classification, and VA are supported for all AWS RDS database engines.
You must keep the RDS definition up to date, for example, DB instance deletions, or changes in credentials.
Guardium v10.1.4 supports only Oracle V.11 databases on an AWS cloud.
Extrusion rules are not supported, including redaction and testing for patterns in the returned data.
Return data is not supported, including records affected and logging of bind variable values.
Rule actions that interact with S-TAP are not supported; for example, S-GATE Terminate, Ignore, and query rewrite.
Failed logins are not captured by the Oracle audit, and therefore are not forwarded to Guardium.
Statements not captured by the Oracle audit, for example statements with syntax errors, cannot be monitored.
Audit data includes bind variable values but not their type (for example, 123), so when a value is replaced in the SQL, surrounding quotes are always added.
When variable values contain ASCII control characters (for example, '\001') or multibyte characters, the audit file is not downloadable.
Blob bind variable values are not supported.
Procedure
1. Create a cloud account.
2. Discover its database instances.
3. Catalog the databases you want to work with. Cataloging creates a datasource within Guardium, so that you can manage the cloud database Guardium functions on
the specific database.
4. Optionally add the datasource to a new or existing VA process (requires Vulnerability Assessment license).
5. Optionally add the datasource to a new or existing Classification process.
6. Optionally enable DB Audit on relevant databases and restart the databases either now from the Guardium UI, or later from the DB console. Once DB auditing is
enabled, it performs standard Oracle auditing. When you enable DB Auditing, your Guardium system becomes the unique owner of the DB Audit on this DB. No
other Guardium system can modify the DB Audit or the object audit. To see Classification results, run Classification once (Run once now) after you enable the DB
Audit, or wait for the next scheduled run. (The datasource must be assigned to a Classification process.)
7. Review the Classification results of your datasources (requires a classification process and DB Audit):
View the objects, grouped either by the object or by the classification process that identified the objects, using filters to further refine the results.
Enable or disable object auditing, individually or by table.
Drill down from the objects grouping to open a list of all databases that contain the selected object in their classification results. In this view you can also enable and disable object auditing.
8. Periodically repeat steps 2 through 7.
9. Review the datasources periodically, checking for New objects, and optionally adding or removing objects from the object audit. For example, you might remove
objects if the automatically added objects include objects you have decided do not need auditing, or if a database is having performance issues. Or you could
identify a suspicious object that is not audited, and add it to the object audit.
The minimum IAM permissions include viewing configuration and changing tags. They do not include enabling the DB audit, or restarting a DB. This JSON defines the
minimum permissions, without which you cannot run cloud database service protection.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"rds:DescribeDBParameters",
"rds:DescribeDBInstances",
"rds:DescribeDBParameterGroups",
"rds:DownloadDBLogFilePortion",
"rds:DescribeDBLogFiles",
"rds:ListTagsForResource",
"rds:RemoveTagsFromResource",
"rds:AddTagsToResource",
"ec2:DescribeSecurityGroups",
"ec2:DescribeVpcs"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
Additional actions are required for the optional operations:
Enable DB auditing:
"rds:CopyDBParameterGroup",
"rds:CreateDBParameterGroup",
"rds:ModifyDBInstance",
"rds:ModifyDBParameterGroup"
Restart DB instance:
"rds:RebootDBInstance",
"rds:ModifyDBInstance"
Configure the security group:
"rds:AuthorizeDBSecurityGroupIngress",
"rds:CreateDBSecurityGroup",
"rds:ModifyDBInstance",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CreateSecurityGroup"
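As a sketch of how you might sanity-check a policy document before attaching it, the helper below verifies that the minimum-permission JSON above grants every required action. The function name and the check itself are illustrative, not part of Guardium:

```python
import json

# Minimum actions Guardium needs (taken from the policy JSON above).
REQUIRED_ACTIONS = {
    "rds:DescribeDBParameters",
    "rds:DescribeDBInstances",
    "rds:DescribeDBParameterGroups",
    "rds:DownloadDBLogFilePortion",
    "rds:DescribeDBLogFiles",
    "rds:ListTagsForResource",
    "rds:RemoveTagsFromResource",
    "rds:AddTagsToResource",
    "ec2:DescribeSecurityGroups",
    "ec2:DescribeVpcs",
}

def missing_actions(policy_json):
    """Return the required actions that the policy does not grant via an Allow statement."""
    policy = json.loads(policy_json)
    granted = set()
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") == "Allow":
            actions = stmt.get("Action", [])
            granted.update([actions] if isinstance(actions, str) else actions)
    return REQUIRED_ACTIONS - granted
```

An empty result means the policy grants at least the minimum action set; the sketch does not check Resource scoping.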
When configuring these parameters, Guardium creates an inbound rule in the RDS instance security group, with collector public IP CIDR mask = 24.
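To illustrate the mask of 24: the inbound rule covers the collector's public IP rounded down to its /24 network. Python's standard ipaddress module shows the effect; the collector IP here is a placeholder from the documentation address range:

```python
import ipaddress

# Hypothetical collector public IP; CIDR mask of 24 as configured by Guardium.
collector_ip = "203.0.113.45"
cidr = ipaddress.ip_network(f"{collector_ip}/24", strict=False)

print(cidr)                # 203.0.113.0/24
print(cidr.num_addresses)  # 256 addresses covered by the inbound rule
```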
Prerequisite: Define the AWS IAM policy, see AWS IAM definition.
Tip: If you are managing a large number of databases in this account, consider defining a default classification process. This saves you defining the properties for each
discovered database.
Procedure
What to do next
Discover databases and catalog them, set up classification and vulnerability assessment, and object auditing.
Procedure
1. Select the cloud account under Cloud DB Service Accounts, and click in the right pane.
2. Modify the configuration.
3. If any credentials were modified, test access to the cloud by clicking Test Access.
4. Click Save.
Procedure
1. Select the account in the Cloud DB Service Accounts pane, click , and confirm.
2. Restart the DB from the DB console. If you do not have Amazon access to the DB, ask your DBA to disable DB auditing and to restart the DB. It's important to stop
auditing and restart the DB so that the DB stops writing to the log files used by Guardium.
Every time you navigate to Discovery > Database Discovery > Cloud DB Service Protection, Guardium informs you if the DB auditing status in the cloud is different from the
status reported in the UI, with a message above the Database table: DB auditing status has changed for some databases. Click Refresh to update
the table. When you see this message, click Refresh to refresh the display.
You can also perform this check on demand by clicking Retrieve status. The retrieve can take a few minutes. When it's complete, a message appears only if any of the DB
audit statuses have changed. If there are changes, click Refresh.
You can also upload cloud database definitions by CSV file. The required parameters are listed in GuardAPI Cloud Datasource Functions; the API parameter cloudTitle
must be replaced with the parameter environmentTitle (they have the same function but different names). For the upload procedure, see Create Datasource for CSV;
upload your file via the Upload CSV menu in Customer Uploads, using the path Harden > Vulnerability Assessment > Customer Uploads.
Procedure
1. Navigate to Discovery > Database Discovery > Cloud DB Service Protection, and click the service account name. When you create a cloud account, the Discover
Databases table is open, showing a list of all of the regions, with their RDS endpoints.
2. When you access this page subsequently, the table is closed. Click Discover Databases. The table opens showing the regions.
3. Select the row of each region whose databases you want to discover. Use the filter if relevant.
4. Click Discover. Guardium searches the regions, and adds any databases that were not previously discovered to the databases table.
Procedure
1. Catalog the databases you want to audit.
a. In the Databases table, select one or more databases.
b. Click Datasource > Catalog Datasource.
c. Enter the case-sensitive DB user and password that you received from your DBA. If you selected more than one database, be sure you want them to use the
same user and password pair.
d. Optionally select, modify or clear the default Classification process.
e. Click Catalog.
The Guardium datasource name appears in the Databases table.
2. Update the user or password:
a. In the Databases table, select one or more datasources.
b. Click Datasource > Update User and Password and modify the details. Both fields must be specified.
c. Click Catalog.
3. Modify one datasource definition.
a. Select the datasource and click Datasource > Open Datasource Definition
b. Modify as relevant. See parameter details in Creating a datasource definition.
c. Optionally test connectivity to the database by clicking Test Connection.
d. Click Save.
A green icon indicates the process is running. A yellow icon means there is no schedule defined for the process. A red icon in the Classification Process or VA column
indicates no classification or VA assigned, or an error. View VA errors in Harden > Vulnerability Assessment > Assessment Builder > View Results. View classification errors
in Discover > End-to-End Scenarios > Discover Sensitive Data > Review Report ribbon > Process Log.
If you get a classification error such as file bdump-file-listing in BDUMP not found or Unable to retrieve results for: 'RDSADMIN.TRACEFILE_, add
RDSADMIN to the predefined schema group Excluded Classification schemas - Oracle in the Group Builder.
Procedure
1. Assign one or more datasources to an existing Classification process.
a. Select one or more datasources.
b. Click Classification > Add to Classification.
c. Select the Classification Process and click Save.
d. Optionally click Edit/View to modify or run the classification process.
e. If you want to enable object auditing automatically for the objects found by classification process, click Edit/View to open the classification process; in the
Where to search ribbon, select the checkbox Enable object auditing for Cloud DBs.
f. Alternatively, run the classification: click Run Now in the Run Discovery ribbon in the Discover > End-to-End Scenarios > Discover Sensitive Data.
2. Create a new Classification process, and assign one or more datasources to it.
a. Select one or more datasources.
b. Click Classification > Create Classification.
c. Follow procedure in Discover Sensitive Data. Enable object auditing for Cloud DBs is selected by default. Leave it selected.
d. Run the classification: after you define Where to Search, click Run Now, or after you save the process click Run Now in the Run Discovery ribbon.
3. Assign one or more datasources to an existing Vulnerability Assessment.
a. Select one or more datasources.
b. Click Vulnerability Assessment > Add to Vulnerability Assessment.
c. Select the Vulnerability Assessment process and click Save.
d. Run the process: navigate to Harden > Vulnerability Assessment > Assessment Builder, select the process and click Run once now.
4. Create a new Vulnerability Assessment, and assign one or more datasources to it.
a. Select one or more datasources.
b. Click Vulnerability Assessment > Create Vulnerability Assessment.
c. Enter a description of the vulnerability assessment; enter one or more email addresses, separated by commas, to receive the results as part of an audit
process that you define.
d. Click Save.
The VA process is created with all tests, the selected datasources, and the receivers you defined.
e. Run the process: navigate to Harden > Vulnerability Assessment > Assessment Builder, select the process and click Run once now.
If there is a collector defined for the datasource, it appears in the Active Collector column if you are the owner. Otherwise the column is blank.
The DB Audit Owner is the CM host name in a CM environment. In a standalone system the value is the collector's host name.
Enabled. When followed by pending restart, indicates that the status will take effect upon instance restart.
Disabled. When followed by pending restart, indicates that the status will take effect upon instance restart.
Configuration does not match requirement. (The AWS audit trail parameter is not configured according to Guardium's requirement, XML,EXTENDED. Ask your DBA
to modify this value.) When followed by pending restart, indicates that the status will take effect upon instance restart.
Not supported for this db engine. Activity monitoring is not currently supported by Guardium.
If you own the instance, a classification process is assigned, and DB audit is enabled, you should see results in the Objects column. The total is the number of objects
identified by the classification processes assigned to this instance; Audited is the number of those objects that are enabled for Object Audit; New is the number of objects
that have been found by a classification process but have not been enabled automatically. These objects require review. See Manage object auditing.
You should see results in the Objects column if the datasource is assigned to a classification process, the process has run since enabling the DB audit, and you are the
owner. If you don't see objects, verify the classification process and run it again.
Procedure
You can configure the parameter Limit objects added automatically or the collector with any permission level. Other changes require DB permissions. Your access keys
may or may not include these permissions. The instructions below cover all levels of permission.
When you enable DB Auditing, your Guardium system becomes the unique owner of the DB Audit on this DB. No other Guardium system can modify the DB Audit or the object
audit. Another system can forcefully take ownership by clicking Start owning DB Audit.
Run classification at least once after enabling DB audit to see and manage objects for auditing. If no objects are found, check your policies.
CAUTION:
When you start managing the database, the Amazon RDS tag IBM Guardium IP is created with the value of your Guardium hostname. This tag should not be modified or
removed.
Procedure
When you stop owning or disable the DB Audit, the entire object audit is disabled as well, and the list of objects that can be audited (they come from the classification
results) is deleted.
Procedure
Results
If there were changes, a message appears: DB auditing status has changed for some databases. Click Refresh to update the table. Click Refresh.
The status changes to disabled or disabled pending restart, the icon in the DB Auditing turns red, and the DB Audit Owner column is blank.
Owning the DB Audit gives you exclusive rights to the DB Audit and Object Audit definitions, and access to the object audit data (see Manage object auditing). Other
Guardium systems can access the same cloud account but can only see the DB details.
With full access rights, when you enable the DB audit, you also take ownership of the DB. If your access keys do not provide full access rights, then you take ownership
without enabling the DB audit. When DB audit is enabled (by the DBA) you will have access to the audit data. Conversely, when you disable the DB Audit, you relinquish ownership.
If you are transferring ownership between two live systems, first stop owning the DB Audit on the current owner, then take ownership on the second Guardium system. All
auditing is stopped when one Guardium system relinquishes ownership. You'll need to define the auditing process on the new Guardium system: assign the DB to a
classification, run the process, and add objects to the Object Audit.
CAUTION:
Stop owning the DB Audit on one before starting to own it on the second. Otherwise the data will go to the previous collector, as well as the new collector. Two collectors
with different policies (different CMs) receiving the same activities, produce different, or incomplete, results on each collector.
If you are transferring ownership from a Guardium system that has gone down without expectation of recovery, you can start owning the DB Audit from another
Guardium system while maintaining the audit definitions; only the ownership changes. In this scenario, stop the original Guardium system from owning the DB Audit in the DB
console.
Procedure
New objects are objects that have been found by classification processes that have not been enabled for auditing. You can filter for all new objects, and then either enable
them for object auditing, or clear the New flag. When there are no New objects, then you are up to date with evaluating the new objects. Remember, Guardium could
receive new data every time the classification process runs. When new objects are found that were not added automatically to the object audit, there is a notice New
objects were found.
The Found by Classification column lists all the classification processes that identified this object.
The status Mixed in the Object Audit Status column means the object audit is enabled in some datasources and disabled in other datasources.
Enabling and disabling object auditing is a heavy process, and can take a few minutes. There is a waiting icon while the cloud processes the auditing changes.
You can review objects found in one datasource or multiple datasources by selecting the rows of the datasources you want to review from the Databases table. The object
audit windows shows all objects found by all classification processes on the selected database or databases.
When objects have been identified by the classification process but were not enabled automatically for object audit, New objects found appears above the objects
table. Click New Only to filter for all new found objects that require handling. New objects could be found every time the classification runs. When there are no New
objects, you are up to date with the new objects evaluation.
Review the datasources periodically, checking for New objects, and optionally adding or removing objects from the object audit. For example, you might remove objects if
the automatically added objects include objects you have decided do not need auditing, or if a database is having performance issues. Or you could identify a suspicious
object that is not audited, and add it to the object audit.
Use the classification filter for objects that you know must be audited. Select all objects in the filtered view, and enable object auditing.
Procedure
1. If you assigned the Classification process before you enabled DB Audit, run the Classification once now and wait a few minutes (or wait for the next scheduled run)
for Guardium to identify objects.
2. Select one datasource. Consider using the filter New objects found to identify datasources with new objects.
3. Select DB Auditing > Manage Object Auditing. The Manage Object Auditing window opens listing all objects found by the classification processes to which this
datasource is assigned.
4. Consider using the filter New only to identify all objects classified as New.
5. Select one or more objects (rows) in the table.
6. To enable audit trail, select Actions > Enable Audit. The system responds with the success or failure of the operation.
7. To clear the New flag, click Actions > Clear New flag.
8. To disable audit trail, select Actions > Disable audit. The system responds with the success or failure of the operation.
When objects are identified by the classification process but were not enabled automatically for object audit, New objects found appears above the objects table. Click
New Only to filter for all new found objects that require handling. Review the New objects and either enable object auditing, or clear the New flag.
New objects could be found every time the classification runs. When there are no New objects, you are up to date with the new objects evaluation.
Review the datasources periodically, checking for New objects, and optionally adding or removing objects from the object audit. For example, you might remove objects if
the automatically added objects include objects you have decided do not need auditing, or if a database is having performance issues. Or you could identify a suspicious
object that is not audited, and add it to the object audit.
Group by Object: To view all new found objects, type New in the text filter.
To enable or disable the object audit on one object in all the selected datasources, select the row(s) and click Action > Enable / Disable
To take action per datasource, click Present in # datasources to view all datasources whose classification processes have identified the selected object
Group by Classification is especially useful when you have almost identical datasources, or classification policies, whose objects need auditing without any further
evaluation, for example GDPR.
Procedure
1. If you assigned the Classification process before you enabled DB Audit, run the Classification once now (or wait for next scheduled run) and wait a few minutes for
Guardium to identify objects.
2. When grouped by object:
a. Select multiple datasources that have New objects in the Objects column of the Databases Table. Use the filter New objects found to identify these
datasources.
b. Click DB Auditing > Manage Object Auditing. The Manage Object Auditing window opens.
c. If the object must always be audited in all the datasources, select the row(s) and click Actions > Enable Audit. The system responds with the success or
failure of the operation.
d. If you want to enable the object audit on individual databases, click the number in the Present in # Datasources column, in the row of the object to open the
Datasources containing <object> window. This window shows all datasources whose classification processes have identified the selected object. Select one
or more datasource rows and click Actions > Enable Audit.
3. For a classification process whose identified objects always need auditing without further evaluation: Click the Classification radio button (above the table); select
one or more rows of classification processes, and click Actions > Enable Audit.
Database Auto-discovery
The Auto-Discovery application scans and probes your servers for open ports to prevent unknown or unwanted connections to your network. You can run auto-discovery
processes on demand, or schedule the processes on a periodic basis.
Auto-discovery uses scan and probe jobs to ensure that no database goes undetected in your environment.
A scan job scans each specified host (or hosts in a specified subnet), and compiles a list of open ports that are specified for that host.
A probe job uses the results of the scan to determine whether there are database services that are running on the open ports. A probe job cannot be completed
without first running a scan. View the results of this job in the Databases Discovered predefined report.
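Conceptually, the open-port check that a scan job performs resembles a TCP connect test. This is a simplified sketch of the idea, not Guardium's implementation:

```python
import socket

def port_is_open(host, port, timeout=0.5):
    """Attempt a TCP connection; connect_ex returning 0 means the port accepted it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# A probe job would then talk to each open port to identify whether a database
# service is listening there, before reporting it as discovered.
```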
Before you begin, you must download and install the patch for the Auto-discovery application. The patch is available at IBM Fix Central.
1. Create an Auto-discovery process to search specific IP addresses or subnets for open ports.
2. Run the Auto-discovery process on demand or on a scheduled basis.
3. View the results of the process with Auto-discovery reports, or create custom reports.
Auto-discovery has its own processes that are independent of audit processes, but they work in exactly the same way as audit processes.
You can only enter IP addresses when doing a scan, not host names, but Guardium does detect host names and includes them in the report. Guardium does not truncate
host names, although it may be necessary to configure the report with wider columns.
Guardium auto-discovery does not guess at which database it has found during a probe. If Guardium auto-discovery reports that it has found a database, it is certain of
the database type.
Note: Discovery only finds running databases. Databases will need to be started if discovery is to be used during the installation. Due to how the AIX KTAP interception
works, the databases need to be restarted after the first time S-TAP runs. If the databases are not restarted, some interception will not work.
Auto-discovery Reports
Open the Auto-discovery reports by clicking Discover > Reports and selecting from the available reports.
You can create custom reports with the Auto-discovery Query Builder. Open the Auto-discovery Query Builder by clicking Discover > Database Discovery > Auto-discovery
Query Builder.
The main entity for this report is the Discovered Port. Each individual port that is discovered has its own row in the report. The columns that are listed are: Time Probed,
Server IP address, Server Host Name, DB Type, Port, Port Type (usually TCP), and a count of occurrences.
There are no special runtime parameters for this report, but it excludes any discovered ports with a database type of Unknown.
When an auto-discovery process definition changes, the statistics for that process are reset.
Classification
Classification policies and processes define how Guardium® discovers and treats sensitive data such as credit card numbers, social security numbers, and personal
financial data.
Discovery and classification processes become important as the size of an organization grows and sensitive information like credit card numbers and personal financial
data become present in multiple locations, often without the knowledge of the current administrators responsible for that data. This frequently happens in the context of
mergers and acquisitions, or when legacy systems have outlasted their original owners. Creating workflows for discovering sensitive data allows you to identify sensitive
data in your environment and take appropriate actions, such as applying access policies.
Classification processes consist of classification policies that have been associated with one or more datasources. Classification processes can be submitted to be run
once or, if login credentials have been stored for all the datasources used in the process, scheduled to run on a periodic basis in a compliance workflow automation
process.
Classification policies consist of classification rules and classification rule actions designed to find and tag sensitive data in specified datasources.
Classification rules use regular expressions, Luhn algorithms, and other criteria to define rules for matching content when applying a classification policy.
Classification rule actions specify a set of actions to be taken for each rule in a classification policy. For example, an action might generate an email alert or add an object
to a Guardium group. Each time a rule is satisfied, that event is logged, and thus can be reported upon (unless ignore is specified as the action to be taken, in which case
there is no logging for that rule).
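As an illustration of the kind of matching a classification rule performs, the sketch below combines a regular expression with a Luhn check to flag candidate credit card numbers. It is a simplified stand-in for the classifier's own rule engine, and the function names are hypothetical:

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right, subtract 9 if > 9."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_card_number(value: str) -> bool:
    """Pattern match first (13-16 digits, spaces/dashes allowed), then apply Luhn."""
    digits = re.sub(r"[ -]", "", value)
    return bool(re.fullmatch(r"\d{13,16}", digits)) and luhn_valid(digits)

print(looks_like_card_number("4111 1111 1111 1111"))  # True (well-known test number)
print(looks_like_card_number("4111 1111 1111 1112"))  # False (fails the Luhn check)
```

Combining a cheap pattern match with a checksum is what keeps false positives down: the regex alone would match any 16-digit value.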
When the classifier runs, you have the option of specifying how it samples records. The default behavior takes a random sampling of rows using an appropriate statement
for the database platform in question. For example, the classifier samples using a rand() statement for SQL databases. The alternative behavior is sequential sampling,
which reads rows, in order, up to the specified sample size. Random sampling is the default behavior and is generally recommended because it provides more
representative results. However, random sampling may incur a slight performance penalty compared to sequential sampling.
For both random and sequential sampling, the default sample size is 2000 rows or the total number of available rows, whichever is fewer. Larger or smaller sample sizes
may be specified. If you check the random sampling box, the classifier selects 2000 rows randomly from the table or view and then scans them. If the table contains fewer
than 2000 rows, it scans all the rows. If you uncheck the random sampling box, it selects the first 2000 rows from the table or view and then scans them. The default query
time-out value is 3 minutes (180 seconds). If the process is running but stuck for 30 minutes, the entire process is halted.
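The two sampling modes can be pictured as query construction. The MySQL-style RAND() syntax below is an assumption for illustration only; the actual statement the classifier issues varies by database platform:

```python
def build_sample_query(table: str, sample_size: int = 2000, random: bool = True) -> str:
    """Sketch of the two sampling modes: random order versus first-N rows."""
    if random:
        # Random sampling: more representative, but the ORDER BY adds some cost.
        return f"SELECT * FROM {table} ORDER BY RAND() LIMIT {sample_size}"
    # Sequential sampling: cheaper, reads the first rows in storage order.
    return f"SELECT * FROM {table} LIMIT {sample_size}"

print(build_sample_query("customers"))
print(build_sample_query("customers", random=False))
```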
To further minimize the impact of classification processes on the database server, long running queries will be cancelled, logged, and the remainder of the table skipped.
Any rows acquired up to that point will be used while evaluating rules for the table. Similarly, if a classification process runs for an extensive period of time without
completing, the entire process is halted, logged with the process statistics, and the next classification process is started. This is an uncommon occurrence and usually only
happens on servers that are already experiencing performance problems.
The classifier periodically throttles itself to idle so it does not overwhelm the database server with requests. If many classification rules are sampling data, the load on the
database server should remain constant but the process may take additional time to run.
The classifier handles false positives by using excluded groups for schema, table and table columns. Previously, it could be a complex process to set up Guardium to
ignore false positive results for future classification scans. Now, when you review classifier results, you can easily add false positive results to an exclusion group, and add
that group to the classification policy to ensure those results are ignored in future scans.
Multi-thread classifier
Guardium can run more than one classifier process on a server, based on the number of cores the server is configured with. You can run multiple classifier
processes almost simultaneously (processes still start about 10 seconds apart).
To find the number of cores on your server, run one of the following commands:
nproc
lscpu
grep -c processor /proc/cpuinfo
grep "cpu cores" /proc/cpuinfo | sort -u | cut -d":" -f2
Multiply the number of cores by 2 to determine the number of concurrent classifier processes you can define and run at the same time.
Use these CLI commands to set and display the concurrency limit:
grdapi set_classification_concurrency_limit limit=4 (allows up to 4 classifier processes to run at the same time)
grdapi get_classification_concurrency_limit (displays the current concurrency limit; the default on any server is 1)
The Fire only with Marker is a constant value, can be any value, and must have the exact same value across the rules you want to group. This means that if one rule has
a marker of ABC, then any other rule you want to group with it must also have the marker ABC; any other marker value and the rules are no longer grouped.
A marker group must contain at least two rules, and all rules in the group must look for data within the same table.
Continue on Match
The Fire only with Marker also interacts with Continue on Match. As an example, if rules were defined such that Rule 3 does not satisfy Continue on
Match, then no results are returned regardless of whether all three marker rules were positive: Rule 4 never runs, and the grouping does not fire, because
all Fire only with Marker rules must execute with positive results.
No | N/A | Table. Classifier will stop processing rules after the first hit in the table.
Yes | Yes | Table and column. Classifier will record the first hit for any given column and ignore it thereafter for subsequent rules.
Yes | No | Detailed. Classifier will record hits for all columns for all rules.
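The marker-grouping behavior described above can be sketched as follows. This is a simplified illustration with hypothetical rule and table names, not Guardium's actual implementation; the real classifier evaluates rules against sampled data.

```python
from collections import defaultdict

def rules_to_log(rule_hits):
    """rule_hits maps rule name -> (marker, set of tables where the rule matched).
    Rules sharing a marker are logged only on tables where ALL of them matched;
    unmarked rules (marker None) are logged wherever they matched."""
    by_marker = defaultdict(dict)
    logged = []
    for rule, (marker, tables) in rule_hits.items():
        if marker is None:
            logged.extend((rule, t) for t in sorted(tables))
        else:
            by_marker[marker][rule] = tables
    for group in by_marker.values():
        common = set.intersection(*group.values())  # tables where every grouped rule hit
        for rule, tables in group.items():
            logged.extend((rule, t) for t in sorted(tables & common))
    return logged

# Hypothetical rules: both carry marker "ABC", so they fire together only on
# CUSTOMERS, where both matched; the lone ssn_rule hit on ORDERS is suppressed.
hits = {
    "ssn_rule":     ("ABC", {"CUSTOMERS", "ORDERS"}),
    "license_rule": ("ABC", {"CUSTOMERS"}),
}
```

This captures the all-or-nothing rule: a marker group fires on a table only when every rule in the group hit that same table.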
Procedure
Open the Classification Process Builder by navigating to Discover > Classifications > Classification Process Builder.
Parent topic: Classification
Procedure
1. From the Classification Process Builder, click the icon to open the Define Classification Process panel.
2. Enter a name for the process in the Process Description field.
3. Select a classification policy from the list. You can click Modify to view and edit the policy if needed.
4. Optionally clear the Random sampling check box. This feature applies only when the number of records in a table exceeds the sample size. Random sampling will
randomly search a number of records in the table up to the defined sample size. This is a high quality search because the results are more representative of the
data. Clearing the Random sampling check box changes the behavior to sequentially search records in the table up to the defined sample size. A sequential search
may be faster than a random sampling, but the results may not be as representative of all the available data.
5. Enter a Sample size to use when searching for data (see Define Classification Policy Rules / Define a Search for Data Rule). If the number of records in a table is
less than or equal to the Sample size, all of those records are searched for a match. When the number of records in a table exceeds the Sample size, random sampling may be used.
6. Click the Add Datasource button to add one or more datasources.
7. Click Save. This completes the definition of the classification process.
8. Optionally add comments to the definition. See Comments in the Common Tools help book.
9. Optionally add security roles. See Security Roles in the Access Management help book.
10. Optionally submit the classification process for execution. See Run a Classification Process.
11. Click Done when you are finished.
On demand from the Classification Process Builder, which is described in this task.
As a task within a Compliance Workflow Automation Process, described elsewhere.
As part of a Discover Sensitive Data Workflow, described elsewhere.
Procedure
1. From the Classification Process Builder, select the process to run, and click Modify to open the Classification Process Builder.
2. Click the Run Once Now button to submit the job. This places the process on the Guardium Job Queue, from which the Guardium system runs a single job at a time.
You can view the job status using the Guardium Job Queue.
3. Click the Done button when you are finished.
Procedure
The Guardium Job Queue is available from the administrator portal only.
Procedure
To view the report, open the Guardium Job Queue by navigating to Discover > Classifications > Guardium Job Queue.
Procedure
1. Select the classification policy to be cloned, and click the Clone button.
2. Type over any of the items as appropriate for the cloned policy. We recommend that you replace the default name for the clone, which is the name of the selected
policy prefixed with Copy of.
3. Click the Save Clone button to save the new classification policy. The policy will be re-displayed in the Classification Policy Definition panel.
4. See Modify a Classification Policy for instructions on how to change components of the new classification policy definition.
Procedure
1. Click the Add Rule button to open the Classification Rule definition panel.
2. Enter a Rule Name.
3. Optionally enter a new Category and/or Classification for the rule. The defaults are taken from the Classification Policy Definition for the policy.
4. If the next rule in the classification policy should be evaluated after this rule is matched, mark the Continue on Match checkbox. The default is to stop evaluating
rules when a rule is matched.
5. Select a Rule Type. For a new rule, no Rule Type is selected. Once a Rule Type is selected, the panel expands to include the fields needed to define that type of rule.
For the specifics of how to define each type of rule, see one of the following sections:
A catalog search rule searches the database catalog for table and/or column names matching specified patterns. Wildcards are allowed: % for zero to any number of
characters, or _ (underscore) for a single character.
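The wildcard semantics (% for zero or more characters, _ for exactly one character) match SQL LIKE behavior. A rough illustration, with a hypothetical helper and hypothetical table names:

```python
import re

def like_to_regex(pattern):
    """Translate a classifier wildcard pattern into an anchored regex:
    % matches zero or more characters, _ matches exactly one character."""
    parts = (".*" if c == "%" else "." if c == "_" else re.escape(c)
             for c in pattern)
    return re.compile("^" + "".join(parts) + "$", re.IGNORECASE)

names = ["CREDIT_CARDS", "CREDITS", "DEBIT_CARDS"]
matcher = like_to_regex("CREDIT%")
matches = [n for n in names if matcher.match(n)]  # CREDIT_CARDS and CREDITS
```

Because the pattern is anchored at both ends, CREDIT% matches any name starting with CREDIT but not DEBIT_CARDS.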
Procedure
1. In the Table Type row, mark at least one type of table to be searched: Synonym, Table, or View. (Table is selected by default.)
2. Optionally enter a specific name or a wildcard based pattern in the Table Name Like box. If omitted, all table names will be selected.
3. Optionally enter a specific name or a wildcard based pattern in the Column Name Like box. If omitted, all column names will be selected.
4. Click the Accept button when you are done.
A search for data rule searches one or more columns for specific data values. Wildcards are allowed: % for zero to any number of characters, or _ (underscore) for a single
character. For example, the Rule Type is Search for Data, the Table Type is Table, and the Table Name Like is CREDIT%.
Procedure
1. In the Table Type row, mark at least one type of table to be searched: Synonym, Table, or View. (Table is selected by default.)
2. In the Table Name Like row, optionally enter a specific name or a wildcard based pattern. If omitted, all table names will be selected.
3. In the Data Type row, select one or more data types to search.
4. In the Column Name Like row, optionally enter a specific name or wildcard pattern. If omitted, all column names will be selected.
5. Optionally enter a Minimum Length. If omitted, no limit.
6. Optionally enter a Maximum Length. If omitted, no limit.
7. In the Search Like field, optionally enter a specific value or a wildcard based pattern. If omitted, all values will be selected.
8. In the Search Expression field, optionally enter a regular expression to define a pattern to be matched. To test a regular expression, click the (Regex) button to open
the Build Regular Expression panel in a separate window. For detailed information about how to use regular expressions, see Regular Expressions.
9. In the Evaluation Name field, optionally enter a fully qualified Java™ class name that has been created and uploaded. The Java class is then used to
evaluate the string. There is no validation that the class name entered was loaded and conforms to the interface. See Custom Evaluation and Manage Custom
Classes for more information on creating and uploading Java class files.
10. Optionally enter a Fire only with Marker name. See Fire only with Marker.
11. In the Hit Percentage field, optionally enter the percentage of matching data that must be reached for this rule to fire. Data is returned if the percentage of
matching data examined is greater than or equal to (>=) the percentage entered. An empty entry means the percentage is not a condition and does not affect
whether the rule fires; a percentage of 0 causes the rule to fire and return data; a percentage of 100 requires that all examined data match.
12. In the Compare to Values in SQL field, optionally enter a SQL statement. The SQL entered, which must return information from one and only one
column, is then used as a group of values to search against the selected tables and columns. If used, the Compare to Values in SQL statement must follow these
rules:
The SQL statement MUST begin with SELECT.
The SQL statement SHOULD NOT include a semicolon (;).
The SQL statement MUST specify a schema name in order to return accurate results.
13. In the Compare to Values in Group field, optionally select a group. The group selected is then used as a group of values to search against the selected tables and
columns. As long as one value within the group, which can be either a public or a classifier group, matches, the rule returns data.
14. Mark the Show Unique Values checkbox to add, to the Comments, details on what values matched the classification policy rules and fired. Use a regular expression
in the Unique Values Mask field to redact the unique values. For example, mark the Show Unique Values checkbox and use ([0-9]{2}-[0-9]{3})-[0-9]{4} in the Unique
Values Mask field to log the last four digits and redact the prefix digits.
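The masking idea can be illustrated as follows. Assuming a hypothetical identifier format 12-345-6789, the capture group in the pattern ([0-9]{2}-[0-9]{3})-[0-9]{4} marks the prefix to redact while the last four digits stay readable. This is a sketch of the behavior, not Guardium's internal masking code.

```python
import re

# Hypothetical 9-digit identifier; the capture group marks the prefix to redact.
MASK = re.compile(r"([0-9]{2}-[0-9]{3})-[0-9]{4}")

def redact(value):
    """Overwrite the captured prefix with '*', keeping the last four digits."""
    m = MASK.fullmatch(value)
    if not m:
        return value                      # unmatched values pass through unchanged
    prefix = m.group(1)
    return "*" * len(prefix) + value[len(prefix):]

print(redact("12-345-6789"))              # the prefix digits are starred out
```

Only the matched prefix is replaced; values that do not fit the pattern are left as-is.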
Procedure
1. In the Search Like box, optionally enter a specific value or a wildcard based pattern. If omitted, all values will be selected.
2. In the Search Expression box, optionally enter a regular expression to define a pattern to be matched. To test a regular expression, click the icon to open the
Build Regular Expression panel in a separate window. For detailed information about how to use regular expressions, see Regular Expressions.
Each time the classification rule is matched, a member will be added to the selected Object-Field group on the Guardium system. You have the option of replacing all
members, or adding new members.
For a database file, the object component of the member will be the database table name, and the field component will be the column name.
For an unstructured data file, the object component of the member will be the file name (in quotes), and the field component will be the column name, but if column
names cannot be determined, the columns will be named column1, column2, etc.
Procedure
Each time the classification rule is matched, a member will be added to the selected Object group on the Guardium system.
For a database file type, the member will be the database table name. For an unstructured file type, the member name will be the file name.
You have the option of replacing all entries, or only adding new entries.
Procedure
Each time the classification rule is matched, an access rule will be inserted into an existing security policy definition. The updated security policy will not be installed (that
task is performed separately, usually by a Guardium administrator).
Procedure
Each time the classification rule is matched, the selected privacy set's object-field list will be replaced.
For a database file, the object component of the privacy set will be the database table name, and the field component will be the column name.
For an unstructured data file, the object component of the privacy set will be the file name (in quotes), and the field component will be the column name, but if column
names cannot be determined, the columns will be named column1, column2, etc.
Procedure
1. Select the previously defined Privacy Set whose contents you want to replace.
2. Click the Accept button to add the action to the rule definition, close the Action panel, and return to the rule definition panel.
Each time the classification rule is matched, a policy violation will be logged. This means that classification policy violations will be logged (and can be reported) together
with access policy violations (and optionally correlation alerts) that may have been produced.
Procedure
Click the Accept button to add the action to the rule definition, close the Action panel, and return to the rule definition panel.
Procedure
Sensitive data discovery scenarios span three critical aspects of enterprise security:
Discovery: locating the sensitive data that exists anywhere in your environment
Protection: monitoring and alerting when sensitive data is accessed
Compliance: creating audit trails for reviewing the results of sensitive data discovery processes
The Discover Sensitive Data end-to-end scenario builder streamlines the processes of discovery, protection, and compliance by integrating several Guardium tools into a
single user-friendly interface.
Discover
Name and Description: Provide a name and description for the scenario and its related processes and policies. Creates a classification process and classification policy.
What to discover: Create rules and rule actions for discovering and classifying data. Optionally creates new datasource definitions.
Run discovery: Run the scenario, review the results, and define ad hoc grouping and alerting actions.
Protect
Review report: Creates an access policy.
Comply
Audit: Define recipients, a distribution sequence, and review options. Creates an audit process.
This sequence of tasks guides you through the processes of creating a new discovery scenario. This includes creating classification policies consisting of rules and rule
actions for discovering sensitive data, creating classification processes by identifying datasources to scan for sensitive data, defining ad hoc policies (for grouping and
alerting, for example), and creating audit processes that distribute results to different stakeholders at scheduled intervals.
While a discover sensitive data scenario creates underlying policies and processes that can be accessed using other Guardium tools (for example the Classification Policy
Builder or through GuardAPI commands), there are no GuardAPI commands for creating or modifying a discovery scenario.
1. Discovery scenarios
Create a new discovery scenario or select an existing discovery scenario to copy or edit.
2. Name and description
Provide a name and description for your discovery scenario.
3. What to discover
Create policies consisting of rules and rule actions for discovering and classifying sensitive data.
4. Where to search
Identify datasources to scan for sensitive data.
5. Run discovery and review report
Optionally run your discovery scenario and review the results.
6. Audit
Optionally create an audit process by defining receivers, a distribution sequence, and review options for the discovery and classification report.
7. Scheduling
Optionally activate the audit process by scheduling it to run at defined intervals.
What to do next
Continue to the next section and provide a Name and description for your discovery and classification scenario.
Parent topic: Discover
Discovery scenarios
Create a new discovery scenario or select an existing discovery scenario to copy or edit.
Procedure
1. Navigate to Discover > End-to-End Scenarios > Discover Sensitive Data.
2. Create, copy, or edit a discovery scenario.
GDPR [template]
The GDPR [template] scenario provides the latest set of discovery rules and language support for your GDPR compliance strategy. Templates can be copied
or edited and saved under a different name, and the GDPR [template] will always receive the latest GDPR discovery rules and language support.
GDPR
The GDPR scenario provides a basic set of discovery rules that can be used as part of a GDPR compliance strategy. You can edit and save changes to the
GDPR scenario, but the scenario will not receive updated rules or language support over time.
Attention: If the GDPR [template] is available, using the older GDPR scenario is not recommended because the GDPR scenario does not receive updates.
During this step, you may also specify security roles that can access the discovery scenario.
Procedure
1. Open the Name and Description section and provide or edit the name and optional description of the scenario. The name you provide here will also be used to name
underlying classification processes and policies created by the discovery scenario.
What to do next
Continue to the next section of the discovery scenario, What to discover.
Parent topic: Discover Sensitive Data
Previous topic: Discovery scenarios
Next topic: What to discover
What to discover
Create policies consisting of rules and rule actions for discovering and classifying sensitive data.
This task guides you through the processes of creating and editing classification rules and rule actions for use in your discovery scenario.
Procedure
1. Open the What to discover section to define rules for discovering data.
2. Use the Language menu to filter rule templates by the selected language and by countries where the selected language is a national language. Templates for universal
patterns like credit card numbers and email addresses are displayed for all Language menu selections.
3. Add rules to your discovery scenario by doing one of the following:
Select rules from the Classification Rule Templates table and click the icon to add predefined rules.
4. Define a new rule, or edit a rule template by selecting the template and clicking the icon.
a. Select a Rule type based on the type of search being performed.
Search for data matches specific patterns or values in the data
Catalog search matches table or column names in the database catalog
Search for unstructured data matches specific values or patterns in an unstructured data file, for example CSV, TXT, or CEF files
b. Provide a name and description while optionally specifying a special pattern test at the beginning of the Name field. The rule name will also be used to name
the rule associated with the classification policy in the Classification Policy Builder. If you require a special pattern test, it is recommended that you work
with its corresponding template (for example, use Bank Card - Credit Card Number for credit card numbers).
c. Open the Rule Criteria section to define a regular expression and other search criteria for the rule. If you are working with a rule template, an appropriate
regular expression is provided by default.
Attention: For rules created in the discover sensitive data scenario, the default Data type includes both Number and Text.
d. Open the Actions section and define any rule actions that should be taken when rule criteria match.
e. When defining multiple rule actions, you can optionally click the icon and use the up and down arrow icons to change the order in which the actions are
executed.
f. Click Save when you are finished adding or editing rule definitions to return to the What to discover section of the discovery scenario.
5. Optionally click the icon and use the up and down arrow icons to change the order in which rules are applied. Rule order is important: by default, rule
execution stops after the first match unless Continue on match is selected under Rule criteria.
6. When you are finished working with rules, click Next to begin working on the next section of the discovery scenario.
What to do next
Continue to the next section of the discovery scenario, Where to search.
Parent topic: Discover Sensitive Data
Previous topic: Name and description
Next topic: Where to search
Related concepts:
Regular Expressions
Related tasks:
Working with Classification Rule Actions
Related reference:
Actual Member Content
Rule Criteria
Special pattern tests
Rule Criteria
Table 1.
Attribute Description
Table type Select one or more table types to search: Synonym, Table, or View. Table is selected by default.
Data type Select one or more data types to search: Number, Text, or Date. Number and Text are selected by default.
Search expression Optionally enter a regular expression to define a search pattern to match. To test a regular expression, click the RE button to open the regular
expression editor.
Table name like Optionally enter a specific name or wildcard pattern. If omitted, all table names are selected.
Column name like Optionally enter a specific name or wildcard pattern. If omitted, all column names are selected.
Continue on match If the next rule in the classification policy should be evaluated after this rule is matched, mark the Continue on Match checkbox. The default is
to stop evaluating rules once a rule is matched.
Search wildcard Optionally enter a specific value or a wildcard pattern. If omitted, all values are selected.
Evaluation name Optionally enter a fully qualified Java™ class name that has been created and uploaded. The Java class is then used to evaluate
the string.
Note: There is no validation that the class name entered was loaded and conforms to the interface.
Fire only with marker The Fire only with marker allows for the grouping of classifier rules: rules with the same marker fire at the same time. Additionally, all
rules using a marker must return data based on the same table name. If two or more rules are defined with the same marker and all of them
fire on the same table, they are all logged and their actions invoked. On the other hand, if only one rule fires on a table, none of the rules in
the group is logged or has its actions invoked. Having multiple rules fire together is important when you care about sensitive data
appearing together within the same table. For example, you may want to know when a table has both a social security number and a
Massachusetts driver's license.
The Fire only with marker is a constant value, can be any value, and must have the exact same value across the rules you want
grouped. This means that if one rule has a marker of ABC, then any other rule you want to group with it must also have the marker
ABC.
The Fire only with marker also interacts with the Continue on match flag. For example, if rules were defined such that Rule 3 does
not satisfy Continue on match, then no results are returned regardless of whether all three marker rules were positive: Rule 4 never
runs, and the grouping does not fire, because all Fire only with marker rules must execute with positive results.
Hit percentage Optionally enter the percentage of matching data that must be reached for this rule to fire. Data is returned if the percentage of matching
data examined is greater than or equal to (>=) the percentage entered. An empty entry means the percentage is not a condition and does
not affect whether the rule fires; a percentage of 0 causes the rule to fire and return data; a percentage of 100 requires that all
examined data match.
Compare to values in SQL Optionally enter a SQL statement. The SQL entered, which must return information from one and only one column, is then
used as a group of values to search against the selected tables and columns.
Note: If used, the Compare to values in SQL statement must begin with SELECT, should not include a semicolon (;), and must specify a
schema name in order to return accurate results.
Compare to values in group Optionally select a group. The group selected is then used as a group of values to search against the selected tables and columns. As long
as one value within the group, which can be either a public or a classifier group, matches, the rule returns data.
Show unique values Mark the Show unique values checkbox to add details on what values matched the classification policy rules to the comments field of the
resulting report.
Unique values mask Use a regular expression in the Unique values mask field to redact the unique values. For example, mark the Show unique values checkbox and
use ([0-9]{2}-[0-9]{3})-[0-9]{4} in the Unique values mask field to log the last four digits and redact the prefix digits.
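The hit-percentage semantics in the table can be sketched as follows. This is a simplified illustration of the stated rules, not Guardium's internal computation; the function name and counts are hypothetical.

```python
def rule_fires(matched, examined, hit_percentage=None):
    """Return True if the rule's hit-percentage condition is satisfied.
    None (an empty entry) means the percentage is not a condition;
    0 always fires; 100 requires every examined value to match."""
    if hit_percentage is None:
        return True
    if examined == 0:
        return hit_percentage == 0
    return (matched / examined) * 100 >= hit_percentage

rule_fires(5, 100)            # empty entry: percentage is not a condition
rule_fires(0, 100, 0)         # 0 always fires
rule_fires(99, 100, 100)      # 100 requires all to match, so this does not fire
```

The comparison is inclusive (>=), so a rule with a 50% threshold fires when exactly half the examined values match.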
Parent topic: What to discover
Table 1.
Actual Member Content Selection Value in Group
%/%.Name %%.tableName
%/Full %%.schemaName.tableName
Read/%.Name Read/%.tableName
Change/%.Name Change/%.tableName
Read/Full Read/schemaName.tableName
Change/Full Change/schemaName.tableName
If your rules return the table name JJ_CREDIT_CARD from the schema DB2INST1, and you have specified an Add to Group of Objects action, the Actual Member Content
selections behave as shown in the table, substituting JJ_CREDIT_CARD for tableName and DB2INST1 for schemaName in the Value in Group column.
Where to search
Identify datasources to scan for sensitive data.
In this task, identify the datasources you would like to search for sensitive data.
Procedure
1. Open the Where to search section to identify the datasources you would like to search for sensitive data.
2. Add datasources to your discovery scenario by doing one of the following:
Click the icon to open the Create Datasource dialog and add a new datasource definition.
Select datasources from the Available Datasources table and click the icon to add existing datasources.
3. Define a new datasource, or edit an existing datasource by selecting the datasource and clicking the icon. New datasources defined through the discovery
scenario can also be viewed or edited through the Datasource Definitions tool.
a. Provide or edit the name of the datasource.
b. Select the appropriate database type from the Database type menu and provide the requested information to complete the datasource definition. The
available fields differ depending on the selected database type.
c. When you are finished editing the datasource definition, click Save to save your work and optionally click Test Connection to verify the datasource
connection.
d. When you are finished working with the datasource definition, click Close to close the dialog.
4. If you are also using this classification process for cloud databases, select Enable object auditing for Cloud DBs.
5. When you are finished adding datasources, click Next to begin working on the next section of the discovery workflow.
Results
A classification process is created after adding datasources to your discovery scenario and saving the scenario. To view or edit this process directly, use the Classification
Process Builder.
What to do next
Continue to the next section of the discovery workflow, Run discovery.
Parent topic: Discover Sensitive Data
Previous topic: What to discover
Next topic: Run discovery and review report
Related concepts:
Datasources
Related tasks:
Creating a datasource definition
Procedure
1. Open the Run discovery section to test your discovery scenario.
2. Click Run Now to begin.
Attention:
Depending on the policies you have specified and the number of datasources you have selected to search, it may take several minutes or more to complete
the process of identifying sensitive data. The process status is indicated next to the Run Now button, or you can monitor the process using the Guardium Job
Queue.
You can also run the classification process by visiting the Classification Process Builder, selecting your classification process, and clicking Run Once Now.
3. When the discovery scenario has finished running, open the Review report section to see the results.
4. While reviewing the results, you can define additional rules and actions based on the results. Use the Filter to refine results (filtering is not supported with more
than 10,000 results).
a. Select the row(s) containing data you want to define actions against.
b. Click Add to Group to define a grouping action, or click Advanced Actions to define other actions such as alerting, logging, or ignoring.
c. After completing the dialog to define an action, click OK to return to the results report.
Attention:
Actions added from the results table are considered ad hoc actions that run only as invoked from the results table. These actions will not appear in the
What to discover > Edit rule > Actions section of your discovery scenario, and they will not run automatically as part of the discovery scenario or
related classification processes.
Use the Policy Builder to review, edit, and install alerting actions and access rules.
Use the Group Builder to review and edit grouping actions.
Use the Privacy Set Builder to review privacy set actions.
Use the Incident Management tool to review policy logging actions.
5. When you are finished reviewing the results report, click Next to begin working on the next section of the discovery scenario.
Results
After running the search for sensitive data, monitor its status next to the Run Now button or using the Guardium Job Queue. You can use the Group Builder to review any
grouping actions or the Policy Builder to review and install any alerting actions that were added from the results table.
What to do next
Optionally, continue to the next section of the discovery scenario, Audit.
Parent topic: Discover Sensitive Data
Previous topic: Where to search
Next topic: Audit
Audit
Optionally create an audit process by defining receivers, a distribution sequence, and review options for the discovery and classification report.
The audit process created by adding receivers to a discovery scenario inherits the name of the scenario. For example, adding receivers to a discovery scenario named "Find
PCI" creates an audit process named "Find PCI Audit process" followed by a date and time stamp.
Procedure
1. Open the Audit section to define receivers for discovery reports.
2. Add receivers to your discovery scenario by clicking the icon and defining options for how the reports are delivered.
If sending the report to Guardium users, roles, or groups, you will need to define process control options.
If sending the report to email recipients, provide their email address and filter the report by a Guardium username that is appropriate for the email recipient.
3. Click OK to add the receiver to the discovery workflow. Continue adding additional receivers to the scenario if needed.
4. Optionally click the icon and use the up and down arrow icons to change the order in which reports are distributed to recipients. This is important when using
sequential distribution, as it determines which receivers must review or sign the report before it is sent to subsequent receivers.
5. When you are finished adding, editing, and ordering receivers, click Next to begin working on the next section of the discovery workflow.
Results
An audit process is created after defining receivers and saving the discovery scenario. To view, edit, or run this process directly, use the Audit Process Builder.
The audit process remains inactive until it is scheduled using the Schedule section of the discovery scenario or using the Audit Process Builder. You can also run the audit
process by visiting the Audit Process Builder, selecting the audit process, and clicking Run Once Now.
What to do next
Optionally, continue to the next section of the discovery workflow, Schedule.
Parent topic: Discover Sensitive Data
Previous topic: Run discovery and review report
Next topic: Scheduling
Scheduling
Optionally activate the audit process by scheduling it to run at defined intervals.
Procedure
1. Open the Schedule section to define a schedule for discovering data.
2. Use the Schedule by menu to set daily or monthly intervals for the audit process.
3. Use the Start schedule every and Repeat every check boxes to define how many times per day and how many times within each hour to run the audit process.
4. Use the Start date and time controls to define an explicit date and time for the schedule to begin.
5. Clear the Activate schedule check box to deactivate the audit process while retaining scheduling information for later use. The Activate schedule box is checked by
default, meaning that the audit process becomes active after saving the schedule.
6. When you have defined a schedule, click Save to finish editing and close the workflow editor.
Results
An audit process is created after defining a schedule and saving the discovery scenario. To view or edit this audit process directly, use the Audit Process Builder. Review
the Scheduled Jobs report to see the status, start time, and next fire time for scheduled audit tasks.
Parent topic: Discover Sensitive Data
Previous topic: Audit
Related concepts:
Building audit processes
Regular Expressions
Regular expressions can be used to search traffic for complex patterns in the data.
The IBM Guardium implementation of regular expressions conforms with POSIX 1003.2. For more detailed information, see the Open Group web site:
www.opengroup.org. See Policies for examples.
This help topic provides instructions for using the Build Regular Expression Tool, and several tables of commonly used special characters and constructs. It does not
provide a comprehensive description of how regular expressions are constructed or used. See the Open Group web site for more detailed information.
The important point to keep in mind about pattern matching or XML matching with regular expressions is that the search for a match starts at the beginning of a string
and stops when the first sequence matching the expression is found. The same or different regular expressions can be used for pattern matching and XML matching at the
same time.
Note: IBM Guardium does not support regular expressions for non-English languages.
To open the Build Regular Expression tool, click the icon next to the field that will contain the regular expression. If you have already entered anything in the field, it will
be copied to the Regular Expression box in the Build Regular Expression panel.
literal: Match an exact sequence of characters (case sensitive), except for the special characters described below. Example: can. Matches: can. Does not match: Can, cab, caN.
. (dot): Match any character, including carriage return or newline (\n) characters. Example: ca. Matches: can, cab. Does not match: c, cb.
*: Match zero or more instances of the preceding character(s). Example: Ca*n. Matches: Cn, Can, Caan. Does not match: Cb, Cabn.
|: Match either the preceding or the following pattern. Example: Can|cab. Matches: Can, cab. Does not match: Cab.
(x ...): Match the sequence enclosed in parentheses. Example: (Ca)*n. Matches: Can, XaCan. Does not match: Cn, CCnn.
{n}: Match exactly n instances of the preceding character(s). Example: Ca{3}n. Matches: Caaan. Does not match: Caan, Caaaan.
{n,}: Match n or more instances of the preceding character(s). Example: Ca{2,}n. Matches: Caan, Caaaan. Does not match: Can, Cn.
{n,m}: Match from n to m instances of the preceding character(s). Example: Ca{2,3}n. Matches: Caan, Caaan. Does not match: Can, Caaaan.
[a-ce]: Match a single character in the set, where the dash indicates a contiguous sequence; for example, [0-9] matches any digit. Example: [C-FL]an. Matches: Can, Dan, Lan. Does not match: Ban.
[^a-ce]: Match any character that is NOT in the specified set. Example: [^C-FL]an. Matches: aan, Ban. Does not match: Can, Dan.
[[.char.]]: Match the enclosed character or the named character from the Named Characters Table. Example: [[.~.]]an or [[.tilde.]]an. Matches: ~an. Does not match: @an.
[[:class:]]: Match any character in the specified character class, from the Character Classes Table. Example: [[:alpha:]]+. Matches: abc. Does not match: ab3.
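The constructs in the table above can be checked quickly with Python's re module. Note that Python's engine is PCRE-flavored rather than strictly POSIX 1003.2 as in Guardium, but these basic constructs behave identically:

```python
import re

# A few rows from the constructs table, verified as full-string matches.
checks = [
    (r"Ca*n",     "Cn",     True),   # * allows zero instances of 'a'
    (r"Ca*n",     "Cabn",   False),
    (r"Ca{2,3}n", "Caan",   True),   # bounded repetition
    (r"Ca{2,3}n", "Caaaan", False),
    (r"[C-FL]an", "Dan",    True),   # character set with a range
    (r"[C-FL]an", "Ban",    False),
    (r"Can|cab",  "cab",    True),   # alternation
]
for pattern, text, expected in checks:
    assert bool(re.fullmatch(pattern, text)) == expected, (pattern, text)
print("all constructs behave as described")
```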
NUL \0
SOH \001
STX \002
ETX \003
EOT \004
ENQ \005
ACK \006
BEL \007
alert \007
BS \010
backspace \b
HT \011
tab \t
LF \012
newline \n
VT \013
vertical-tab \v
FF \014
form-feed \f
CR \015
carriage-return \r
SO \016
SI \017
DLE \020
DC1 \021
DC2 \022
DC3 \023
DC4 \024
NAK \025
SYN \026
ETB \027
CAN \030
EM \031
SUB \032
ESC \033
IS4 \034
FS \034
IS3 \035
GS \035
IS2 \036
RS \036
IS1 \037
US \037
space ' '
exclamation-mark !
quotation-mark "
Phone Number (North America; matches 3334445555, 333.444.5555, 333-444-5555, 333 444 5555, (333) 444 5555, and all combinations thereof): \(?[0-9]{3}\)?[-. ]?[0-9]{3}[-. ]?[0-9]{4}
Zip Code (US; 5 digits required, hyphen followed by four digits optional): [0-9]{5}(?:-[0-9]{4})?
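The two patterns above, exactly as given, can be exercised with Python's re module (PCRE-flavored, but equivalent for these expressions):

```python
import re

# Patterns copied verbatim from the examples above.
PHONE = r"\(?[0-9]{3}\)?[-. ]?[0-9]{3}[-. ]?[0-9]{4}"
ZIP = r"[0-9]{5}(?:-[0-9]{4})?"

# All documented North American phone formats match.
for number in ["3334445555", "333.444.5555", "333-444-5555",
               "333 444 5555", "(333) 444 5555"]:
    assert re.fullmatch(PHONE, number), number

assert re.fullmatch(ZIP, "12345")        # 5 digits required
assert re.fullmatch(ZIP, "12345-6789")   # optional +4 extension
assert not re.fullmatch(ZIP, "1234")     # too short
print("phone and zip patterns behave as documented")
```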
Tip: To install the FAM discovery agent successfully on AIX, it is recommended to set the process data size to unlimited by modifying the following lines in the /etc/security/limits file:
default:
    data = -1
Procedure
1. Install the GIM client on the file server. See Guardium installation manager.
2. Download the FAM bundle and save in an accessible drive. Choose the correct module for your file server OS. The UNIX bundle has a name like: guard-bundle-
FAM_r*****_trunk_*****.gim. The Windows bundle looks like: guard-FAM-guardium_r*****Windows-Server-x86_x64_ia64.gim.
3. If you are also installing the S-TAP, install it before installing the FAM bundle. Download the S-TAP from Fix Central and follow the instructions in the next step.
4. On the Central Manager if there is one, otherwise on an appliance, upload and import the FAM bundle:
a. Navigate to Manage > Module Installation > Upload Modules.
b. Under Upload Module, click Browse and navigate to the FAM bundle. Click Upload.
c. Under Import uploaded modules, select the FAM bundle and click Install/Update.
5. Install and configure the FAM bundle:
a. Navigate to Manage > Module Installation > Set up by Client (Legacy). To see all registered clients, click Search.
b. Select your file server and then click Next.
c. Choose the FAM module you uploaded. (For Windows, you may need to uncheck the Display Only Bundles checkbox.)
d. Configure parameters for the FAM discovery agent. Set SOURCE_DIRECTORIES to the directories that you want to scan. By default, the agent does only
basic scanning for entitlement information. To enable scanning based on decision plans, such as for SOX or HIPAA, set FAM_IS_DEEP_ANALYSIS
to true. By default, all of the default decision plans are used; you can specify which decision plans to use. The default scanning schedule is
every 12 hours, starting immediately upon configuration. You can change these settings by using the GIM parameters FAM_SCHEDULER_HOUR_TIME_INTERVAL,
FAM_SCHEDULER_START, and FAM_SCHEDULER_REPEAT. See the full parameter list in File discovery and classification GIM parameters.
Note: You can also configure GIM parameters using the grdapi command: gim_update_client_params.
e. Click Apply to Selected then click Install/Update, where you can install immediately or schedule a later time.
6. For v10.4 S-TAP installed by GIM, enable FAM monitoring on the S-TAP by changing the parameter fam_enable to 1 (enabled). See Windows: Editing the S-TAP
configuration or Linux/UNIX: Editing the S-TAP configuration. This is required even if you are only using the FAM discovery agent.
7. Verify that the FAM discovery agent installed successfully by viewing the Guardium report, S-TAP Status Monitor (add the report from My Dashboards). Look for the
FAM_Agent suffix in the IP address of the S-TAP host.
8. To trigger file rediscovery later without uninstalling and reinstalling the FAM bundle:
a. Remove the files under the work directory. If Guardium is installed in the default directory, the files to be removed are in this directory on the file server:
/usr/local/IBM/modules/FAM/current/files/work
b. Change any FAM parameter in GIM, for example, changing the time interval from 5 to 10 minutes
c. Click Apply to Selected then click Install/Update.
Results
Discovery and classification results: when the installation of the FAM discovery agent (file crawler) is complete, a basic run of the file crawler begins, using the initial path
that you specified during the installation. Each time the crawler completes its run, it sends a status message that is included in the Files Crawler Configuration report.
Configure file discovery and classification per collector. These parameters can be configured during installation, or at a later time using GIM (Manage > Module Installation
> Set up by Client) or using the GuardAPI command gim_update_client_params. You can only update one collector at a time when using the GuardAPI.
Each parameter is listed with its default value; (GUI) indicates that the parameter can be set from the GIM user interface.

FAM_DEBUG (default: 0; GUI)
    Logs on the file server are collected and sent to the Guardium appliance. 0 = OFF, 1 = ON.
    Note: The S-TAP parameter fam_enable must be enabled for the discovery agent to function.
    Windows: The FAM service restarts, as shown in the Event Viewer (Windows logs > System: "The IBM Guardium FAM service entered the stopped state" and "The IBM Guardium FAM service entered the running state"). There is no new entry in the predefined GUI report "Files Crawler Configuration" and the configuration stays at 2 in the GIM GUI. For the next restart, change the parameter to 1.

FAM_ICM_CLASS_DECISION_PLANS (no default; GUI)
    Enable the decision plans by including their plan names and their classification entities.
    Format: a colon-delimited list of decision plans, with the entities for each decision plan listed in curly braces, comma-delimited:
    DecisionPlanName1{Entity1.1,Entity1.2,..}:DecisionPlanName2{Entity2.1,Entity2.2,..}
    When curly braces are empty or missing for a decision plan, all classification entities are presented in the classification results in the FAM report / Investigation Dashboard. Examples of empty or missing curly braces: DecisionPlanName1{}:DecisionPlanName2{} or DecisionPlanName1:DecisionPlanName2

FAM_ICM_CLASS_THREAD_COUNT (default: 5; GUI)
    Number of threads for the classifier to use. The default of 5 is the recommended value.

FAM_INSTALL_DIR (no default)
    The location in which the File Activity Monitoring software is installed. Windows only.

FAM_SCAN_EXCLUDE_EXTENSIONS (default: NULL; GUI)
    Excludes the specified file extensions, or documents without extensions, from the FAM scan. Relevant for both Windows and Linux.
    Format: semicolon-delimited list; the setting is case sensitive. Example of excluded extensions: pdf;txt;doc. To exclude documents without an extension, set to "NO_EXTENSION".

FAM_SCAN_MAX_DEPTH (no default; GUI)
    Limits the depth of the scan relative to the specified starting directories (FAM_SOURCE_DIRECTORIES).

FAM_SCHEDULER_HOUR_TIME_INTERVAL (default: 12; GUI)
    Frequency, in hours, at which the discovery and classification scan runs. Format: integer. The default is 12 hours.

FAM_SCHEDULER_MINUTE_TIME_INTERVAL (no default; GUI)
    Along with the hour interval, the time interval between scans. For example, if you want scans to occur 12 hours and 30 minutes apart, specify 12 for the hour and 30 here for the minute. Format: integer.

FAM_SCHEDULER_REPEAT (no default; GUI)
    True = repeat the discovery process at the specified time interval. False = do not repeat the scan.
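The decision-plan format described above (plan names delimited by colons, with an optional brace-enclosed, comma-delimited entity list per plan) can be illustrated with a small parser. This is a hypothetical helper for understanding the format, not part of Guardium:

```python
import re


def parse_decision_plans(value):
    """Parse a FAM_ICM_CLASS_DECISION_PLANS-style string into a dict
    mapping each plan name to its entity list. Empty or missing braces
    yield an empty list (meaning: present all classification entities)."""
    plans = {}
    for part in value.split(":"):
        # plan name, then an optional {entity,entity,...} suffix
        m = re.fullmatch(r"([^{}]+)(?:\{([^{}]*)\})?", part.strip())
        if not m:
            raise ValueError(f"malformed plan entry: {part!r}")
        name, entities = m.group(1), m.group(2)
        plans[name] = [e.strip() for e in entities.split(",")] if entities else []
    return plans


print(parse_decision_plans("HIPAA{SSN,CreditCard}:PCI"))
# {'HIPAA': ['SSN', 'CreditCard'], 'PCI': []}
```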
During File Activity Monitoring, the GIM installation user must configure the ICM Decision Plan setting on the File Activity Monitoring GIM configuration page.
Configure the list of decision plans (categories), with the entities (NVP fields) for each decision plan, delimited by colons.
All possible entities for each decision plan template, available during the File Activity Monitoring installation, can be configured.
Decision plan classification appears only when the file is sensitive and the classification is not empty.
After File Activity Monitoring installation, there are four Decision Plan templates available:
The "Source" decision plan refers to two knowledge bases (CodeKB and DocumentTypeKB) which are loaded by default once the Source decision plan is configured.
The following is the list of possible entities for each decision plan supplied out of the box with File Activity Monitoring; it can be configured via GIM.
HIPAA
SSN, Name, License, GovermentID, PassportContext, BankAccount, Address, IPAddress, EmailAddress, URL, Phone, CreditCard, possibleHealthPlan, Confidential_match,
HIPAA_match
PCI
SSN, Name, License, GovermentID, PassportContext, BankAccount, Address, IPAddress, EmailAddress, URL, Phone, BankAccountContext, CreditCard, CreditContext,
containCardIssuer, PCI_match, Confidential
SOX
SSN, Name, License, GovermentID, PassportContext, BankAccount, Address, IPAddress, EmailAddress, URL, Phone, BankAccountContext, CreditCard, CreditContext,
containCardIssuer, piiMatch, Confidential, SOXContext, SOX_match
Source
A decision plan is a collection of rules that you configure to determine how IBM Classification Module classifies content items. Rules consist of triggers and actions. A
trigger determines the conditions that must be met to initiate an action. An action determines how the document is to be classified. A decision plan can also refer to one or
more knowledge bases to combine rule, keyword-based classification with statistical, text-based classification.
A knowledge base is a set of collected data that is used to analyze and categorize content items. The knowledge base reflects the kinds of data that the system is
expected to handle. Before the knowledge base can analyze text, it must be trained with a sufficient number of sample content items that are properly classified into categories.
Note: ICM cannot work with decision plans that have Chinese names. Content documents in Chinese and decision plan rules in Chinese are supported, but not decision
plan names in Chinese.
Note: Distribution of decision plans from the Central Manager to managed units is unsupported.
Note: The classification results for each decision plan should be specified by properly configured and recognized entities. Classification appears only when the file is
sensitive and the classification is not empty. At debug level, ICM errors and decision plan failures are logged.
Procedure
1. Use the Windows Start menu to open the IBM Content Classification 8.8 Classification Workbench.
2. In the Open Project dialog, click New....
3. In the New Project dialog, choose Decision Plan for the project type. Enter a name for this decision plan, such as ProjectA_DP. Enter a description if you want one.
4. In the New Project Options dialog, select Create an empty project.
5. In Project Explorer click Word and string list files. In the Word and string list files dialog, click New... to create a new file. In the New File dialog, choose Word list for
the file type and choose a name for the file. In this example we call the file Names. Wordlist_Names.txt appears in the list of files.
6. Double-click the file name to edit the file. Insert a single line with the string ~ProjectA~ and save the file.
7. In Project Explorer click DecisionPlan > New Group > New Rule. Change the name of the rule to ProjectA.
8. In the New Rule dialog, open the Trigger tab. Click condition.
9. Choose Trigger when fields contains specific words or phrases. Choose Word list file. Click OK.
10. Open the Action tab. Click Add new rule.
11. Select Advanced Actions from the Action Type list. Choose the Set content field action. This content field is created when the specified trigger fires. The content
field can be viewed in FAM reports.
12. In the Add action dialog, enter ProjectA_match as the content field name and enter found in the Value field.
13. Import the content set into the decision plan project.
a. Create a text document that contains the string "ProjectA."
b. In the Project Explorer, expand the ProjectA_DP project. Right-click Content Set and choose Import Content Set.
c. Click Files from a file system folder. Browse to the file that you created in step a. Click Next, then Next, then Next, then Finish.
14. Verify that your definition is successful.
a. In the Project Explorer, open the Content Set tab. Right-click your file and choose Run Item through Decision Plan.
b. In the Analyzed item dialog, expand Decision Plan and the group. Verify that Rule:ProjectA is marked [Triggered].
c. Click Content Fields.... In the Select Content Fields dialog, verify that "ProjectA_match" is displayed in the Changed fields box, and "found" is displayed in the
content box.
15. In the Project Explorer, click Project > Save to save the ProjectA_DP project.
16. In the Project Explorer, click Project > Export to export the ProjectA_DP project to a dpn file.
17. Use GIM to push the dpn file to the file servers where you want to use the decision plan.
Entitlement Optimization
Entitlement Optimization mediates between the role of the DBA in providing users the entitlements that are required to perform their jobs efficiently, and the role of
Security in keeping entitlements as accurate and as minimal as possible to prevent system vulnerabilities.
Situations naturally arise during day to day management of the system that result in vulnerabilities, for example:
Over-generalized access
A privilege that was given to a user needed for one-time use but not removed afterward
Changes over time of users and tables, resulting in dormant users and tables
Privileges that are passed from one user to another
Entitlements require constant ongoing vigilance. For example, advanced persistent threats (APT) usually originate with one of these back door entries into the system.
Entitlement optimization constantly analyzes users’ privileges and actions, and produces recommendations that pinpoint specific actions that aim to minimize user
access to only that which is required. The analysis is entirely performed by the system. The admin reviews the results, examines each case, and takes the appropriate
actions, for example, removing privileges from a DB user, or deleting dormant roles.
You can also investigate entitlement changes over the past week, a complete list of users and roles, data source privileges alongside their actual usage, and a simulated
justification of a specific user-role combination. These views provide information relevant to the recommendations, and are also starting points for other investigations.
The advantage of entitlement optimization over Guardium reports is that it consolidates information for all database types (which otherwise appears in multiple Guardium
reports) and adds new analyses in its own comprehensive, consolidated reports, simplifying entitlement management and thereby increasing system security.
Entitlement optimization supports the Microsoft SQL Server and Oracle database types. It does not support SQL contained databases. (Guardium reports are per database type.)
Entitlement optimization activity monitoring is limited to the data currently monitored by Guardium. The accuracy of the Recommendations, Entitlement browse and What
if analyses depend on the relevance of the monitored data. To fully maximize the potential of this tool, configure the userScope and objectScope parameters, and consider
modifying the security policy.
Users that are dormant from the time you start monitoring with Entitlement optimization are not included in the entitlement optimization reports. To watch a specific user
that is monitored but doesn't have any recommendations, manually check the activity of the user either through entitlement browse or any of the other Guardium activity
monitoring tools. The tools have the full information if the policy is correctly defined.
Entitlements analysis is per Collector, and operates only on the data sources that you configure by grdapi.
Access entitlement optimization from Discover > Database Entitlements > Entitlement Optimization
All commands are run on the Collector, and use the already defined Guardium data sources. First you enable the feature on the Collector, then specify the data sources
and enable the specific features.
The most accurate results are obtained by fine-tuning the data that is included in the entitlement optimization.
Users and Roles, and Browse entitlements, are enabled by default, however you must set extractActivity and extractEntitlement to true to extract the relevant data. The
other three features (What's New, Recommendations, What If) are enabled individually. For example, you can enable Recommendations while leaving What If disabled.
Entitlement recommendations uses a subset of data, filtered by the userScope and objectScope parameters. Browse Entitlements uses the userScope parameter to filter
data. Both parameters specify one or more Guardium groups. Most likely, you will create specific groups to use for this purpose. Define the groups to extract only the data
you want, to minimize storage and processing. The groups should have Full Audit, so that all data is analyzed and the results are conclusive. When you use groups with Full
audit, the Browse Entitlements shows all rights of all users, regardless of their activity. A user that is outside of the userScope definition appears in the window, but its
activity count is "unknown."
The best practice is to carefully evaluate and design your data collection scheme such that you only rarely change it. This is for two reasons: every time you change the
configuration, it takes a week to generate data for reports; the data is compared to data of the previous 3 weeks, and when you change the data definition the comparison
is less meaningful for the first 3 weeks.
Data is present in each tab from the first Sunday after you enable the individual feature.
Prerequisites
Quick Search is enabled. (Required for What-if, Recommendations, and updating activity in Entitlement Browse.)
The user that configures the entitlement optimization must have permission to all the meta data and schema tables that are in the configured datasources.
Syntax:
grdapi enable_entitlement_optimization
Syntax:
grdapi disable_entitlement_optimization
Syntax:
Use this table to determine which extractions you require, per feature:
extractActivity    X X
extractEntitlement X X X X
Syntax:
isEnabled
userScope
objectScope
extractActivity
extractEntitlement
generateRoleClusters
generateNews
generateRecommendations
filterTempObjects
filterIgnoreVerbs
grdapi get_entitlement_optimization_info
Data is presented in the tab from the first Sunday after you enabled the feature.
The number of new Users, Roles, Objects, and the number of databases associated with these additions
The number of new Grantees and Grantors and the number of grants
Click Details in any topic to open a detailed table of the additions. For example, the details on new users are the server and service name.
Data is presented in the tab from the first Sunday after you enabled the feature.
This tab is based on the standard Guardium user and roles report that presents data on only one database type. It presents:
Host
Service Name
DB type
Grantee
Grantee type
Role
You can use the standard Report Builder functions, which are accessed by the icons above the table.
Data is presented in the tab from the first Sunday after you enabled the feature.
The system continuously evaluates users and privileges. The weekly Entitlement recommendations report is based on the last 3 weeks of data (by default), so that
each new report overlaps with data of the previous report. The Recommendations tab is equivalent to the Recommendations report in Reports, which can be
enabled as a distributed report.
If you customized the userScope parameter, the recommendations include only users from the specified user groups. The userScope and objectScope parameters
explicitly define the scope of recommendations. To maximize the accuracy of recommendations regarding users and objects, the users and
objects in the specified groups should have Full Audit.
All recommendations must be thoroughly investigated by the admin, by drilling-down for specific server, database, object, and recommendation type, before
implementation.
The top of the tab contains a pie graph that shows the recommendations by type. The table at the bottom of the window lists the recommendations. You can modify the
recommendations report using the standard reports icons, export the report by clicking Export, and map to API by clicking Actions.
ANOMAL USER
    Message: User {object} has anomal activity within role {source}
    The user's activity count within a specific role is anomalous: the user is either much more active or much less active than other users.
ALERT ACTIVITY (ad hoc user)
    Message: User {source} used the privilege {verb}-{object} but no entitlement was found
    A typical ad hoc user gives itself permission, performs an action, and then removes the permission. Users can be erroneously identified as ad hoc because of time differences between the entitlement changes and their activities. Use the Guardium activity monitoring tools to determine whether the privilege is justified.
DORMANT_USER
    Message: Remove inactive or empty user {object}
    The user has no assigned privilege or had no activity within the given interval.
DORMANT_ROLE
    Message: Remove inactive or empty role {role}
    The role has no users, no activity by any of its users, or empty privileges.
REVOKE_FROM_USER
    Message: Revoke {verb}-{object} from user {source}
    The user did not perform any activity on the relevant object and verb.
REVOKE_FROM_ROLE
    Message: Revoke {verb}-{object} from role {source}
    None of the users within the specific role performed any activity on the object and verb.
REMOVE_FROM_ROLE
    Message: Remove user {object} from role {source}
    The user did not use any of the privileges granted by the role.
INACTIVE DATABASE
    Message: Database has no activity
    If the unused database cannot be justified, remove it.
Parent topic: Entitlement Optimization
Data is presented in the tab from the first Sunday after you enabled the feature. After the first Sunday, the activities are updated daily.
This information is useful for general entitlement investigation, and to further evaluate recommendations in the Recommendations report. The default view in this window
is a bar chart of the datasources with the highest rates of unused privileges.
Entitlement browse shows all the entitlements of the data sources defined via grdapi that have extractEntitlement enabled. This is true even if activity collection is off
and the user and object scopes are defined. You can always search and see the permissions of all the users.
Users that are included in the userScope:
Active users appear green and have numerical results in the activity count column
Non-active users appear red and the activity count is "Not active"
Users that are not included in the userScope:
Active users appear green and have numerical results in the activity count
Non-active users appear gray and the activity count is "unknown"
Determine which objects a user has permissions for and whether he uses them
Determine whether a user utilized his permission on an object at the specific time it was permitted
Are there permissions that are used more than expected?
Are there permissions that are used only once?
What is the lineage of the permissions that have been unusually utilized: explicit, or implicit, inherited from a parent role, or role hierarchy?
To get more details on how a specific privilege is used, with full SQL, you can search for Data Activity (Investigate > Search for Data Activity), right-click the DB User or
Source program in the Results Table, and select Full SQL by DB User.
Action rarely performed, but a valid entitlement, for example generating a quarterly report
Unused and therefore not justified (point of vulnerability)
The table presents the Grantee type, Grantee, Verb, Name, Activity count, and Lineage. A user can have multiple privilege lineages: explicit, or implicit, inherited from a
parent role, or role hierarchy.
Data is available in this tab from the first Sunday after you enabled the feature.
Guardium analyzes the behavior of similar users to produce the probable justification, which in some cases provides highly relevant information. The analysis can be useful
when you examine unused entitlements and the REVOKE_FROM_USER recommendation. It is a general indication, and should be used together with other entitlement
optimization functions.
User name
Object name
Verb (one or more)
Server IP
Service name
The probability that this DB user will use this privilege is n%. A probability of 100% indicates that the user used the privilege at least once.
Protect
After you identify databases and file systems that contain sensitive data, you can take several steps to protect that data. Protection options include masking data, alerting
personnel based on data access, and establishing policies that enforce access restrictions.
Baselines
A baseline is a profile of access commands executed in the past, helping to identify normal activity and anomalous behavior (inconsistent with or deviating from
behavior that is usual, normal, or expected).
Policies
A security policy contains an ordered set of rules to be applied to the observed traffic between database clients and servers. Each rule can apply to a request from a
client, or to a response from a server. Multiple policies can be defined and multiple policies can be installed on a Guardium® appliance at the same time.
Correlation Alerts
An alert is a message indicating that an exception or policy rule violation was detected.
How to signify events through Correlation Alerts
Trigger a correlation alert if there are more than fifteen SQL Errors in the last three hours from any individual user of the application.
Incident Management
The Integrated Incident Management (IIM) application provides a business-user interface with workflow automation for tracking and resolving database security
incidents.
How to manage the review of multiple database security incidents
Incident management - track and resolve database security incidents.
Query rewrite
Query rewrite functionality provides fine-grained access control for databases by intercepting database queries and rewriting them based on criteria defined in
security policies.
File Activity policies and rules
File activity monitoring ensures integrity and protection of sensitive data on UNIX and Windows file servers.
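The correlation alert example above ("more than fifteen SQL Errors in the last three hours from any individual user") boils down to a sliding-window count per user. A minimal sketch of that logic, for illustration only (not Guardium's implementation):

```python
from collections import defaultdict
from datetime import datetime, timedelta


def users_to_alert(error_events, now, window=timedelta(hours=3), threshold=15):
    """error_events: iterable of (timestamp, user) pairs for logged SQL
    errors. Returns the users with more than `threshold` errors inside
    the trailing window ending at `now`."""
    counts = defaultdict(int)
    for ts, user in error_events:
        if now - window <= ts <= now:
            counts[user] += 1
    return {user for user, n in counts.items() if n > threshold}


now = datetime(2024, 1, 1, 12, 0)
events = [(now - timedelta(minutes=i), "appuser1") for i in range(16)]
events += [(now - timedelta(minutes=i), "appuser2") for i in range(5)]
print(users_to_alert(events, now))  # only appuser1 exceeds the threshold
```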
Baselines
A baseline is a profile of access commands executed in the past, helping to identify normal activity and anomalous behavior (inconsistent with or deviating from behavior
that is usual, normal, or expected).
The Baseline Builder generates a baseline by examining activity previously logged and currently available, on the Guardium system.
When included in a security policy, the baseline becomes a baseline rule, which allows all database access that has been included in the baseline.
Attention: The Baseline Builder and related functionality is deprecated starting with Guardium V10.1.4.
The Policy Builder can generate suggested policy rules from the baseline. The suggested rules can be edited and included in the policy ahead of the baseline rule, so that
alternative actions (alerts, for example) can be taken for some commands that were seen in the baseline period. In addition, an examination of the suggested rules
provides valuable insight into the actual traffic patterns observed (types of commands and frequency).
The Baseline Builder provides the ability to control what gets included in the baseline, in several ways:
By specifying a threshold to control how many occurrences of a command must be seen before the command will be included in the rule. A threshold of one
includes every command observed, while a threshold of 1,000 includes only those commands occurring 1,000 times or more.
By controlling sensitivity to one or more attributes. For example, if the baseline is sensitive to the database user, it will include commands for specific users only.
Users who did not execute the command during the baseline period would not be allowed by the baseline rule.
By limiting the connections included to subsets of server and client IP addresses. The baseline always specifies a single client network mask and a single server
network mask. Each mask can be as inclusive or as exclusive as required.
By merging data from different time periods. There may be traffic that occurs during non-contiguous time periods that should be included in the baseline. You can
merge the data from any number of time periods into a single baseline. In addition, the data can be filtered for specific client and server addresses.
Database User
Database Protocol
Database Protocol Version
Time Period
Source Program
Sequence
Baseline sensitivity depends on a specified threshold, which defines the minimum number of times a command must be observed during the baseline period in order to
include that command in the baseline.
If a single type of sensitivity is selected, a separate count of each command will be maintained for each value of the sensitivity type (database user, for example).
If multiple types of sensitivity are selected, separate counts of each command are maintained for each combination of values for each selected type (for each combination
of database user and source program, for example). Thus for each type of sensitivity included, the number of combinations can increase dramatically.
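The threshold and sensitivity-combination counting described above can be sketched as follows (a simplified illustrative model, not Guardium's implementation; the event field names and the build_baseline function are hypothetical):

```python
from collections import Counter

def build_baseline(events, sensitivity_keys, threshold):
    """Count each command once per combination of the selected
    sensitivity attributes, then keep only the combinations whose
    command was observed at least `threshold` times."""
    counts = Counter()
    for ev in events:
        key = (ev["command"],) + tuple(ev[k] for k in sensitivity_keys)
        counts[key] += 1
    return {key for key, n in counts.items() if n >= threshold}

events = [
    {"command": "SELECT abc FROM xyz", "db_user": "scott", "src_app": "sqlplus"},
    {"command": "SELECT abc FROM xyz", "db_user": "scott", "src_app": "sqlplus"},
    {"command": "SELECT abc FROM xyz", "db_user": "joe",   "src_app": "sqlplus"},
]

# Sensitive to db_user and src_app: scott's command is counted
# separately from joe's, so only scott's reaches a threshold of 2.
baseline = build_baseline(events, ["db_user", "src_app"], threshold=2)
```

With a threshold of 1, both combinations would be included; raising the threshold prunes rarely seen combinations, which is why the number of combinations matters when multiple sensitivity types are selected.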
Sequence             Included in baseline
A-B                  Y
A - everything else  N
B-C                  Y
B - anything else    N
Anything but A       N
To illustrate how the Baseline Builder assigns requests to time periods, assume that Saturday is included in three time periods: Saturday (24 hours), Week End, and 7x24.
Since the time period named Saturday is the most restrictive (24 hours only), all requests time-stamped on Saturday will be counted in that time period, and not in the
more inclusive Week End or 7x24 time periods.
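The most-restrictive-period assignment can be sketched like this (a hypothetical model for illustration only; the period definitions and the assign_period function are not part of the product):

```python
from datetime import datetime

# Hypothetical time periods: (name, weekdays covered, hours covered).
# Monday is weekday 0, so Saturday is 5 and Sunday is 6.
periods = [
    ("7x24",     {0, 1, 2, 3, 4, 5, 6}, range(24)),
    ("Week End", {5, 6},                range(24)),
    ("Saturday", {5},                   range(24)),
]

def assign_period(ts: datetime) -> str:
    """Of all periods containing the timestamp, pick the most
    restrictive one, i.e. the one covering the fewest day/hour slots."""
    matches = [p for p in periods
               if ts.weekday() in p[1] and ts.hour in p[2]]
    return min(matches, key=lambda p: len(p[1]) * len(p[2]))[0]

# A Saturday timestamp is counted under "Saturday", not the broader periods.
assign_period(datetime(2024, 6, 8, 10, 0))  # 2024-06-08 is a Saturday
```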
Baselines are generated using only the data currently available on the appliance that is generating the baseline.
A baseline generated on a collector will be built using the traffic available on that unit only.
A baseline built on an aggregator will be built from the data currently available on the aggregator, which typically will have been sent from multiple collectors over a
period of time.
A baseline generated on a Central Manager that is not also an aggregator will be empty, since a Central Manager does not collect data (unless it is also an
aggregator).
In a Central Management environment, a baseline generated on a managed unit will be built using data from that unit only, but the baseline will be stored on the
Central Manager, and it will be available for use on any other unit.
In a Central Management environment, to generate a single baseline from multiple managed units, the baseline can be built with data from the first managed
appliance, and then merged using data from the other appliances, one at a time.
You may want to modify the suggested rules if you discover an activity that occurred during the baseline period that you would like to monitor or alert upon in the future.
You simply tailor the appropriate rule suggested from the baseline, and assign the desired action. By default, the suggested rules will be positioned before the baseline
rule, so that the action specified will be taken before the baseline rule executes to allow that command with no further testing of rules.
Note: The Policy Builder can also generate rules from the database ACL. See Policies for more information.
You can display the membership of a suggested object group, and you have the option of accepting or rejecting each group. In the example just given, if you reject the
suggested object group, the single rule that references it will be replaced by three suggested rules (one each for AAA, BBB, and CCC).
Creating a Baseline
If the approach you are taking in building your security policy is to always allow the most commonly issued commands from the past, then set this number upwards
to the appropriate level. If, on the other hand, you want to ensure that the baseline is comprehensive, then leave this value set to 1. In either case, you can have the
Policy Builder suggest rules from the baseline. The suggested rules are sorted in descending order by frequency in the baseline period, so you can decide at that
time whether to include or modify rules for each unique command issued.
6. Use the Baseline Network Information pane to identify the servers and clients to be included in the baseline. The method used to select which IP addresses to use
to construct the baseline is the same for servers and clients.
For each address encountered in the baseline data, membership in an optional tagged group is considered first. A tagged group is a specific list of IP addresses for
which baseline constructs will be generated. If a tagged group is selected, and if an IP address encountered in the baseline data is included in the corresponding
tagged group, that element will be included in the baseline for that specific IP address. For example, assume that the Tagged Client IP Group named ZoneAGroup
has been selected, and that group includes a client address of 192.162.14.33. If the baseline generator encounters the command SELECT abc FROM xyz from that
IP address, that command will be counted for that specific address.
In contrast, if no tagged group is selected, or if an IP address is encountered in the baseline data that is not a member of the selected tagged group, that command
may be counted with identical commands from other IP addresses as directed by the corresponding network mask.
The network mask is required to group both client and server IP addresses. Choices include all the different variations of subnet masks between 255.255.255.255
(all four octets must match) and 0.0.0.0 (all octets can be anything).
For example, suppose the client network mask is 255.255.0.0 and the command SELECT abc FROM xyz arrives from client 192.168.9.5. When generating the baseline, this command will be included in the count of all SELECT abc FROM xyz commands for all client IP addresses from the 192.168.0.0
subnet.
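The subnet grouping described above can be sketched with Python's standard ipaddress module (an illustration of the masking behavior, not Guardium code; the group_key function is hypothetical):

```python
import ipaddress

def group_key(ip: str, mask: str) -> str:
    """Collapse an IP address to its network prefix under the given
    mask, so identical commands from the same subnet share one counter."""
    net = ipaddress.ip_network(f"{ip}/{mask}", strict=False)
    return str(net.network_address)

# With a 255.255.0.0 client mask, these two clients share one counter:
group_key("192.168.9.5", "255.255.0.0")    # -> "192.168.0.0"
group_key("192.168.200.7", "255.255.0.0")  # -> "192.168.0.0"

# With 255.255.255.255, every address is counted separately:
group_key("192.168.9.5", "255.255.255.255")  # -> "192.168.9.5"
```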
7. Click Save to validity-check and save the baseline definition. If you have omitted required fields or entered invalid values, the definition will not be saved and you
must resolve any problems before attempting to save again.
8. Optionally click Roles to assign roles for the policy.
9. Optionally click Comments to add comments to the definition.
10. After a baseline has been saved successfully, the Baseline Generation and Baseline Log panes appear on the panel.
11. Click anywhere on the Baseline Generation pane title to expand the pane.
12. Supply both From and To dates to define the time period from which the baseline is to be generated. There are a number of ways to enter dates; for more
information see Dates and timestamps. Regardless of how you enter dates, any minutes or seconds specified will be ignored.
13. Click the Generate button to generate the baseline. If you have modified the baseline definition, you will be prompted to save the definition before generating the
baseline.
Note: After you successfully generate the baseline for the first time, additional fields are displayed in the Baseline Generation panel. These fields allow you to merge data
from additional time periods into the baseline, and to restrict the client and server IP addresses used during each additional time period.
1. Click Protect > Security Policies > Baseline Builder to open the Baseline Finder.
2. From the Baseline Definition list, select the baseline into which additional baseline information is to be merged.
3. Click Modify to open the Edit Baseline panel.
4. Do not modify the Baseline Sensitivity selections. If you modify the baseline sensitivity, you are prompted to generate a completely new baseline to replace the
existing one.
5. Optional. Set the Minimum number of occurrences for addition to Baseline value in the Baseline Threshold pane. The value entered here has no impact on
information previously included in the baseline. Once something is added to the baseline, it is not removed during a merge operation.
6. Optional. Enter alternative network information in the Baseline Network Information pane. The displayed values are from the last generate or merge operation. If
the merged information comes from the same set of servers and/or clients, leave these fields unchanged. Otherwise, make the appropriate changes in this pane to
select the traffic to be included in the baseline.
7. Click anywhere on the Baseline Generation pane title to expand the pane.
8. Supply both From and To dates to define the time period from which the baseline is to be generated. There are a number of ways to enter dates; for more
information see Dates and timestamps. Regardless of how you enter dates, any minutes or seconds specified will be ignored.
9. Select the Merge radio button.
10. Optional. In the Filter Selection pane, limit the baseline generation to specific client and/or server IP addresses by entering an IP address followed by a network
mask. For example, to select all client IP addresses from the 192.168.9.x subnet, enter 192.168.9.1 in the first Client IP box, and 255.255.255.0 in the second box.
To include additional addresses, click the Add button, then enter the additional address information.
11. Click Generate to generate the baseline. If you have modified the baseline definition, you will be prompted to save the definition before generating the baseline.
Modify a Baseline
Caution: Before modifying a baseline definition, be sure that you understand the implications of modifying it, particularly if the baseline whose definition you want to
modify and re-generate is used in an installed policy. If you modify and re-generate a baseline contained in an installed policy, when you re-install that policy it will use the modified baseline.
1. Click Protect > Security Policies > Baseline Builder to open the Baseline Finder.
2. From the Baseline Definition list, select the baseline to be modified. Click the Modify button to open the Edit Baseline panel. Apart from the panel title, this panel is
identical to the Add Baseline panel. See Create a Baseline for instructions on using this panel.
Clone a Baseline
There are a number of situations where you may want to define a new baseline based on an existing one, without modifying the original definition. See the caution.
1. Click Protect > Security Policies > Baseline Builder to open the Baseline Finder.
2. From the Baseline Definition list, select the baseline to be cloned.
3. Click Clone to open the Clone Baseline panel.
4. Enter a unique name for the new baseline in the New Baseline Description box. Do not include apostrophe characters in the new baseline description.
5. To clone the baseline constructs (the commands, basically) that have been generated for the baseline being cloned, mark the Clone Constructs checkbox.
6. Click Accept to save the new baseline. You can then open and edit the new baseline by using the Baseline Finder.
Remove a Baseline
1. Click Protect > Security Policies > Baseline Builder to open the Baseline Finder.
2. From the Baseline Definition list, select the baseline to be removed.
3. Click Delete. You are prompted to confirm the action.
Policies
A security policy contains an ordered set of rules to be applied to the observed traffic between database clients and servers. Each rule can apply to a request from a client,
or to a response from a server. Multiple policies can be defined and multiple policies can be installed on a Guardium® appliance at the same time.
Each rule in a policy defines a conditional action. The condition tested can be a simple test - for example it might check for any access from a client IP address that does
not belong to an Authorized Client IPs group. Or the condition tested can be a complex test that considers multiple message and session attributes (database user, source
program, command type, time of day, etc.), and it can be sensitive to the number of times the condition is met within a specified timeframe.
The action triggered by the rule can be a notification action (e-mail to one or more recipients, for example), a blocking action (the client session might be disconnected), or
the event might simply be logged as a policy violation. Custom actions can be developed to perform any tasks necessary for conditions that may be unique to a given
environment or application. For a complete list of actions, see Rule Actions Overview.
A policy violation is logged each time that an alert or log-only action is triggered. Optionally, the SQL that triggered the rule (including data values) can be recorded with
the policy violation. Policy violations can be assigned to incidents, either automatically by a process, or manually by authorized users (see the Incident Management tab in
the Guardium GUI. For further information, see Incident Management.
Note: Correlation alerts can also be written to the policy violations domain (see Correlation Alerts).
In addition to logging violations, policy rules can affect the logging of client traffic, which is logged as constructs and construct instances.
Constructs are basically prototypes of requests that Guardium detects in the traffic. The combinations of commands, objects and fields included in a construct can
be very complex, but each construct basically represents a very specific type of access request. The detection and logging of new constructs begins when the
inspection engine starts, and by default continues (except as described) regardless of any security policy rules.
Each instance of a construct detected in the traffic is also logged, and each instance is related to a specific client-server session. No SQL is stored for a construct
instance, except when a policy rule requests the logging of SQL for that instance, or for a particular client/server session of instances (with or without values).
In addition to controlling the inclusion of SQL in client construct instances, a security policy rule can disable the logging of constructs and instances for the remainder of a
session.
In heavy volume situations, the parsing and aggregating of information into constructs and instances can be deferred by using the Log Flat (Flat Log) option. When used,
the production of alerts and reports will be delayed until the logged information has been aggregated. See Log Flat discussed later in this topic.
To completely control the client traffic that is logged, a policy can be defined as a selective audit trail policy. In that type of policy, audit-only rules and an optional pattern
identify all of the client traffic to be logged. See Use Selective Audit Trail discussed later in this topic.
In addition to any policies you install from the Policy Installer screen (Administration Console > Policy Installation), a default policy exists on a new installation only (not
on upgrades). It has no rules, but Selective Audit is checked, which means that the Guardium system will not collect any traffic under the default policy. The default policy
on 64-bit Guardium (new installation) is Default - Ignore Data Activity for Unknown Connections.
An access rule applies to client requests - for example, it might test for UPDATE commands issued from a specific group of IP addresses.
An exception rule evaluates exceptions returned by the server (responses) - for example, it might test for five login failures within one minute.
An extrusion rule evaluates data returned by the server (in response to requests) - for example, it might test the returned data for numeric patterns that could be
social security or credit card numbers.
To deal with thresholds, a minimum count and a reset interval can be specified for each policy rule. This can be used, for example, to trigger the rule action after the count
of login failures exceeds 100 (the minimum count) within one minute (the reset interval). If omitted, the default is to execute the rule action each time the rule is satisfied.
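The minimum count and reset interval behavior can be sketched as follows (a hypothetical model for illustration; the RuleCounter class is not part of the product):

```python
class RuleCounter:
    """Fire the rule action only after `min_count` matches occur
    within `reset_interval` seconds (illustrative sketch)."""
    def __init__(self, min_count: int, reset_interval: float):
        self.min_count = min_count
        self.reset_interval = reset_interval
        self.count = 0
        self.window_start = None

    def on_match(self, now: float) -> bool:
        # Start a fresh counting window if this is the first match
        # or the reset interval has elapsed since the window began.
        if self.window_start is None or now - self.window_start > self.reset_interval:
            self.window_start = now
            self.count = 0
        self.count += 1
        return self.count >= self.min_count  # True -> trigger the action

# 100 login failures within one minute trigger the action on the 100th.
rule = RuleCounter(min_count=100, reset_interval=60)
```

With min_count omitted (effectively 1), on_match returns True every time, matching the default of executing the rule action on each match.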
Note: Continue to Next Rule applies to access rules following access rules and to exception rules following exception rules, but not to an exception rule following an
access rule or an access rule following an exception rule.
Extrusion rules are processed regardless of whether an access or exception rule preceding the extrusion rule ends rule processing. See the extrusion rules revoke entry in the Rule Definitions
Reference table at the end of this topic for information on excluding from logging a response that has already been selected for logging by a previous rule in the policy.
Note: The full SQL with values will be available only in the policy violation record, within the policy violations reporting domain. It will not be available in the client traffic
log, or on reports from the data access domain. To include full SQL (with or without data values) in the client traffic log, use the Log Full SQL rule actions.
For more information about working with rules, see the following topics:
Be aware that a group member may contain wildcard (%) characters, so each member of a group may match multiple actual values.
When a Group is selected, be aware that the group may contain wildcards.
Negative Rule: Mark the Not box to create a negative rule; for example, not the specified App User, or not any member of the selected group, or neither the
specified App User nor any member of the selected group.
Empty Value: Enter the special value guardium://empty to test for an empty value in the traffic. This is allowed only in the following fields: DB Name, DB User, App
User, OS User, Src App, Event Type, Event User Name, and App Event Text.
To define a new group to be tested: Click the Groups button to define a new group, and then select that group from the Group list.
To match any value: Leave the value box blank, and select nothing from the Group list (be sure that the line of dashes is selected, as in the example).
To match a specific value only: Enter that value in the value box, and select nothing from the Group list.
To match any member of a group: Leave the value box blank, and select the group from the list. If the minimum count is greater than 1, there will be a single
counter, and it will be incremented each time any member of the group is matched.
To match an individual value or any member of a group: Enter a specific value in the value box, and select a group from the list. If the minimum count is greater than
1, there will be a single counter, and it will be incremented each time the individual value or any member of the group is matched.
If the minimum count is greater than 1, count each individual value separately: Enter a dot (.) in the value box, and select nothing from the group list. Note that the
dot option cannot be used for the Service Name or Net Protocol boxes.
If the minimum count is greater than 1, count each member of a group separately: Enter a dot (.) in the value box, and select a group from the list. Again, the dot
option cannot be used for the Service Name or Net Protocol boxes.
Note: You can also use regular expressions in the following fields (DB user, App User, SRC App, Field name, Object, App Event Values Text) by typing the special value
guardium://regexp/(regular expression) in the text box that corresponds to the field.
Note: IBM Security Guardium does not support regular expressions for non-English languages.
For detailed information about how to use regular expressions, see Regular Expressions.
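As an illustration only (the field value shown is a hypothetical example, not taken from the product documentation), a guardium://regexp value behaves like an ordinary regular expression applied to the field's observed value. A value of guardium://regexp/^SYS.* in the DB User box would match any database user whose name starts with SYS:

```python
import re

# The expression portion of a hypothetical guardium://regexp/^SYS.* value:
pattern = re.compile(r"^SYS.*")

bool(pattern.match("SYSADMIN"))  # matches
bool(pattern.match("scott"))     # does not match
```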
Each policy rule can include a single special pattern test. To use one of these tests, begin the rule name with one of the special pattern test names, followed by a space
and one or more additional characters to make the rule name unique. For example, if you are searching for Social Security numbers of your employees, you could name the
rule guardium://SSEC_NUMBER employee. You can still specify all other components of the rule, such as specific client and server IP addresses.
These tests match a character pattern, and that match does not guarantee that the suspected item, such as a Social Security number, has been encountered. There can be
false positives under a variety of circumstances, especially if longer sequences of numeric values are concatenated in the data.
guardium://CREDIT_CARD
Detects credit card number patterns. It tests for a string of 16 digits or for four sets of four digits, with each set separated by a blank. This special pattern test also
works with American Express 15-digit credit card number patterns (first digit 3 and second digit either 4 or 7). For example: 1111222233334444 or 1111 2222
3333 4444
When a rule name begins with "guardium://CREDIT_CARD", and there is a valid credit card number pattern in the Data pattern field, the policy uses the Luhn
algorithm, a widely-used algorithm for validating identification numbers such as credit card numbers, in addition to standard pattern matching. The Luhn algorithm
is an additional check and does not replace the pattern check. A valid credit card number is a string of 16 digits or four sets of four digits, with each set separated by
a blank. There is a requirement to have both the guardium://CREDIT_CARD rule name and a valid [0-9]{16} number in the Search Expression box in order to have the
Luhn algorithm involved in this pattern matching.
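The Luhn check that supplements the pattern match can be sketched as follows (a simplified model covering only the 16-digit form, not the 15-digit American Express variant; the function names are illustrative, not Guardium APIs):

```python
import re

def luhn_valid(number: str) -> bool:
    """Standard Luhn check: double every second digit from the right,
    subtract 9 from any double greater than 9, and require that the
    total is divisible by 10."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_card(text: str) -> bool:
    """Pattern check first (16 digits, optionally as four groups of
    four separated by blanks), then the Luhn check as an additional
    validation, mirroring the two-step behavior described above."""
    m = re.fullmatch(r"(\d{4}) ?(\d{4}) ?(\d{4}) ?(\d{4})", text)
    return bool(m) and luhn_valid("".join(m.groups()))
```

Note that 1111222233334444, the sample pattern used above, happens to pass the Luhn check, while an arbitrary digit string such as 1234 5678 9012 3456 does not.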
guardium://PCI_TRACK_DATA
Detects two patterns of magnetic stripe data. The first pattern consists of a semi-colon (;), 16 digits, an equal sign (=), 20 digits, and a question mark (?), such as:
;1111222233334444=11112222333344445555?
The second pattern consists of a percent sign (%), the character B, 16 digits, a caret (^), a variable-length character string terminated by a forward slash (/), a
second variable-length character string terminated by a caret (^), 31 digits, and a question mark (?), such as:
%B1111222233334444^xxx/xxxx x^1111222233334444555566667777888?
guardium://SSEC_NUMBER
Detects numbers in Social Security number format: three digits, dash (-), two digits, dash (-), four digits, such as 123-45-6789. The dashes are required.
guardium://CPF
The Cadastro de Pessoas Físicas (CPF), a Brazilian personal identifier. It contains 11 digits of the format nnn.nnn.nnn-nn, where the last two digits are check
digits. Check digits are computed from the original nine digits to provide verification that the number is valid. The formatting characters within the expression are
optional. If there is a match on the expression, the check digits are validated.
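The CPF check-digit computation is a standard published algorithm, sketched here for illustration (cpf_valid is a hypothetical name, not a Guardium API):

```python
import re

def cpf_valid(cpf: str) -> bool:
    """Validate a CPF's two check digits. Weights run from 10 down to 2
    for the first check digit and 11 down to 2 for the second; each
    check digit is 0 when sum % 11 < 2, otherwise 11 - (sum % 11)."""
    digits = [int(d) for d in re.sub(r"\D", "", cpf)]
    if len(digits) != 11:
        return False
    for pos in (9, 10):
        weights = range(pos + 1, 1, -1)  # 10..2, then 11..2
        total = sum(d * w for d, w in zip(digits, weights))
        check = 0 if total % 11 < 2 else 11 - (total % 11)
        if digits[pos] != check:
            return False
    return True

cpf_valid("111.444.777-35")  # a commonly cited valid example number
```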
guardium://CNPJ
Cadastro Nacional de Pessoas Jurídicas (CNPJ), an identification number used for Brazilian companies. It contains 14 digits of the format 00.000.000/0001-00,
where the first eight digits identify the company, the four digits after the slash identify a branch or unit, and the final two digits are check digits.
The formatting characters within the expression are optional. If there is a match on the expression, the check digits are validated.
Rule actions
There are a number of factors to consider when selecting the action to be taken when a rule is satisfied.
Note: With S-TAP TERMINATE, the triggering request usually will not be blocked, but additional requests from that session will be blocked (on high rate, sometimes more
than one request may go through before the session is terminated).
S-GATE Actions
S-GATE provides database protection via S-TAP for both network and local connections.
Attached (S-GATE is "on") – S-TAP is in firewalling mode for that session, it holds the database requests and waits for a verdict on each request before releasing
its responses. In this mode, latency is expected. However, it assures that rogue requests will be blocked.
Detached (S-GATE is "off") - S-TAP is in normal monitoring mode for that session, it passes requests to the database server without any delay. In this mode latency
is not expected.
S-GATE configuration in the S-TAP defines the default S-GATE mode for all sessions, as well as other defaults related to S-GATE verdicts when the collector is not
responding. (See Linux and UNIX systems S-TAP firewall parameters and Windows S-TAP firewall parameters.) Other than the default S-GATE configuration, S-GATE is
controlled through the real-time policy mechanism using the following S-GATE Policy Rule Actions:
S-GATE ATTACH: sets S-GATE mode to "Attached" for a specific session. Intended for use when criteria are met that raise the need to closely watch (and, if needed, block) the traffic on that session.
S-GATE DETACH: sets S-GATE mode to "Detached" for a specific session.
Intended for use on sessions that are considered as "safe" or sessions that cannot tolerate any latency.
S-GATE TERMINATE: Has effect only when the session is attached. It drops the reply of the firewalled request, which will terminate the session on some databases.
The S-GATE TERMINATE policy rule will cause a previously watched session to terminate.
Note:
S-GATE/S-TAP termination does not work on a client IP group whose members have wildcard characters; it works only with single IP addresses. To cover multiple IP
entries, create groups of trusted or untrusted users or clients (without wildcards) to handle your business needs in the policies.
For ATAP and S-GATE, there are limitations on older Linux kernels: for S-TAP 10.1.2 and higher, S-GATE is supported everywhere except Linux with ATAP
and kernels earlier than 2.6.36.
For MySQL databases, note that the default command-line connection is mysql -u<user> -p<pass> <dbname>.
In this mode, MySQL first maps all the objects and fields in the database to support auto-completion (with TAB). If a terminate rule matches any object or field
involved in this mapping, it immediately terminates the connection session. To avoid this, connect to MySQL with the -A flag, which disables the auto-complete
feature and will not trigger the terminate rule. Another option is to fine-tune the rule so that it does not terminate on ANY access to these objects and fields, and
instead uses narrower criteria that will not trigger the rule during the login sequence.
Alerting Actions
Alert actions send notifications to one or more recipients.
For each alert action, multiple notifications can be sent, and the notifications can be a combination of one or more of the following notification types:
Email messages, which must be addressed to Guardium® users, and will be sent via the SMTP server configured for Guardium. Additional receivers for real-time
email notification are Invoker (the user that initiated the actual SQL command that caused the trigger of the policy) and Owner (the owner/s of the database). The
Invoker and Owner are identified by retrieving user IDs (IP-based) configured via Guardium APIs. The choice Data Security User - Database Associations (available
from accessmgr) displays the mapping (this is similar to what is displayed if running the Guardium API command "list_db_user_mapping").
SNMP traps, which will be sent to the trap community configured for the Guardium appliance.
Syslog messages, which will be written to syslog.
Custom notifications, which are user-written notification handlers, implemented as Java™ classes.
Note: Alert definitions and notifications are not subject to data level security. Reasons for this include: alerts are not evaluated in the context of a user; an alert may be
related to databases associated with multiple users; and this avoids situations where no one receives the alert notification.
Message templates are used to generate alerts. Multiple Named Message Templates are created and modified from Global Profile. There are several types of alert actions,
each of which may be appropriate for a different type of situation.
Alert Daily sends notifications only the first time the rule is matched each day.
Alert Once Per Session sends notifications only once for each session in which the rule is matched. This action might be appropriate in situations where you want to
know that a certain event has occurred, but not for every instance of that event during a single session. For example, you may want a notification sent when a
certain sensitive object is updated, but if a program updates thousands of instances of that object in a single session, you almost certainly would not want
thousands of notifications sent to the receivers of the alert.
Alert Only - For Alert Only with type syslog, the message goes directly to /var/log/messages. For other types of Alert Only, the message is sent to the MESSAGE
table. Alert Only does not notify of policy violations.
Alert Per Match sends notifications each time the rule is satisfied. This would be appropriate for a condition requiring attention each and every time it occurs.
Alert Per Time Granularity sends notifications once per logging granularity period. For example, if the logging granularity is set to one hour, notifications will be sent
for only the first match for the rule during each hour. (The Guardium administrator sets the logging granularity on the Inspection Engine Configuration panel.)
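The four alert-frequency behaviors can be modeled as deduplication keys (a hypothetical sketch for illustration, not the product's implementation):

```python
def alert_key(action, rule_id, session_id, timestamp, granularity=3600):
    """Return a dedup key for the alert, or None if the alert is
    never deduplicated. Timestamps are seconds since the epoch."""
    if action == "per_match":
        return None                                  # notify every time
    if action == "daily":
        return (rule_id, timestamp // 86400)         # once per day
    if action == "once_per_session":
        return (rule_id, session_id)                 # once per session
    if action == "per_time_granularity":
        return (rule_id, timestamp // granularity)   # once per period

seen = set()

def should_notify(action, rule_id, session_id, timestamp):
    key = alert_key(action, rule_id, session_id, timestamp)
    if key is None:
        return True
    if key in seen:
        return False
    seen.add(key)
    return True
```

For example, with per_time_granularity and a one-hour granularity, two matches 30 minutes apart produce one notification, while a match in the next hour produces another.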
The Log and Ignore actions are generally always available, but the Audit Only action is available only for a Selective Audit Trail policy. Access rules, exception rules, and
extrusion rules differ in which actions are permitted. Click the Add Action button to see the actions available for each rule type.
Audit Only: Available for a Selective Audit Trail policy only. Log the construct that triggered the rule. For a Selective Audit Trail policy, no constructs are logged by
default, so use this selection to indicate what does get logged. When using the Application Events API, you must use this action to force the logging of database
user names, if you want that information available for reporting (otherwise, in this case, the user name will be blank).
Allow: When matched, do not log a policy violation. If "Allow" action is selected, no other actions can be added to the rule. Constructs are logged.
FAM Alert and Audit - two rule actions: Alert triggers an alert (using receiver and template) on a matching event, and Audit logs the construct that triggered the rule.
FAM Audit only - log the construct that triggered the rule.
FAM Ignore - Do not log this event.
FAM Log Only Access Violations - log FAM access violations.
Log only: Log the policy violation only. We refer to the fact that the rule was triggered as a policy violation. Except for the Allow action, a policy violation is logged
each time a rule is triggered (unless that action suppresses logging).
Note:
Redaction (Scrub) is supported as of version 9.1. For Windows and UNIX/Linux platforms, Scrub is supported only with ANSI character sets.
Redaction (Scrub) rules should be set at the session level (that is, trigger rules on session attributes such as IPs and users), not on SQL-level attributes (such as
OBJECT_NAME or VERB). If you set the scrub rule on the SQL that needs to be scrubbed, it can take a few milliseconds for the scrub instructions to reach the S-TAP,
during which some results may go through unmasked.
To guarantee that all SQL is scrubbed, set the S-TAP (S-GATE) default mode to "attach" for all sessions (in guard_tap.ini). This guarantees that no command goes through
without being inspected by the rules engine, which holds each request and waits for the policy's verdict. This deployment introduces some latency,
but it is the way to ensure 100% scrubbed data.
For the Informix database, when char is used as the data type, there is no null terminator at the end of each column, so all four columns are captured in the sendmsg
system call as one piece. KTAP always tries to redact whatever data it captures. This is a limitation when using redaction with the Informix database.
For HTTP support, there are Policy action limitations. The following policy actions are not supported for HTTP: S-TAP terminate and Skip logging.
Ignore Responses Per Session: because HTTP does not support exception and extrusion.
Ignore SQL Per Session: because HTTP does not contain SQLs.
Quarantine: This action quarantines a user, but HTTP does not support DBUser and OSUser.
Quick Parse: This action applies to logging SQL.
SGate Terminate: This action is not supported for Hadoop; none of the terminate actions work for HTTP.
For policy conditions - these conditions are not supported for HTTP:
Client MAC; DB Name; DB User; App User; OS User; Src App; Masking Pattern; Replacement Character; Quarantine for minutes; Records Affected Threshold; XML Pattern;
Event Type; Event User Name; App Event Values Text; App Event Values Text Group; App Evert Values Text and Group; Numeric; Date.
By default, Guardium masks values in logged SQL: for example, an insert into tableA is logged as insert into tableA (name,ssn,ccn) values (?, ?,?). This is the default behavior for two reasons:
1. Values should not be logged by default because they may contain sensitive information.
2. Logging without values can provide for increased system performance and longer data retention within the appliance. Very often, database traffic consists of
many SQL requests, identical in everything except for their values, repeated hundreds, thousands, or even millions of times per hour. By masking the values,
Guardium is able to aggregate these repeated SQL requests into a single request, called a "construct". When constructs are logged, instead of each individual
SQL request/construct being logged separately, it is only logged once per hour (per session) with a counter of how many times the construct was executed.
This can save a tremendous amount of disk space because, instead of creating hundreds (or millions) of lines in the database, only one new line is added.
With Log Full Details, Guardium logs the data with the values unmasked and each separate request. Log Full Details also provides the exact timestamp whereas
logging without details provides the most recent timestamp of a construct within the logging granularity time period (usually 1-hour).
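The collapse of repeated statements into a construct can be sketched as follows. This is a simplified illustration of the idea, not Guardium's actual parser: masking literal values makes otherwise-distinct statements identical, so they can be counted instead of stored separately.

```shell
# Replace quoted strings with "?" so that statements differing only in their
# values collapse into one "construct" (simplified sketch of the masking idea).
normalize() {
  printf '%s\n' "$1" | sed -E "s/'[^']*'/?/g"
}
a=$(normalize "insert into tableA (name,ssn,ccn) values ('Bob','123456789','4111111111111111')")
b=$(normalize "insert into tableA (name,ssn,ccn) values ('Alice','987654321','5500005555555559')")
echo "$a"
# Both statements normalize to the same construct string, so a counter can be
# incremented instead of logging a second line.
[ "$a" = "$b" ] && echo "same construct: count = 2"
```

In real traffic this is why millions of per-row INSERTs from an application can be stored as a single construct row per session per hour.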
Ignore S-TAP Session - Ignore S-TAP Session causes the collector to send a signal to the S-TAP instructing it to stop sending all traffic, except for the logout
notification, for specific sessions. For example, if you have a rule that says where DBUserName=scott, Ignore S-TAP Session:
When Scott logs into the database server, S-TAP sends the connection information to the collector.
The collector logs the connection. Session information (logins/logouts) is always logged.
The collector sends a signal to S-TAP to stop sending any more traffic from this specific session. This means that any commands run by Scott against the
database server and any responses (result sets, SQL errors, etc.) sent by the Database server to Scott will be discarded by S-TAP and will never reach the
collector.
When Scott logs out of the database server, S-TAP will send this information to the collector (log in/log out information is always tracked even if the session is
ignored).
When Scott logs in again, these steps are repeated. The logic on which sessions should be ignored is maintained by the collector, not the S-TAP.
It is important to note that Ignore Session rules are still very important to include in the policy even if using a Selective Audit Trail. Ignore Session rules decrease
the load on a collector considerably because by filtering the information at the S-TAP level, the collector never receives it and does not have to consume resources
analyzing traffic that will not ultimately be logged. A Selective Audit Trail policy with no Ignore Session rules would mean that all traffic would be sent from the
database server to the collector, causing the collector to analyze every command and result set generated by the database server.
Limitation
The success or failure of SQL commands in MS-SQL or Sybase batch statements may not show correctly.
MS-SQL or Sybase SQL batch statements are primarily used when creating complex procedures.
When executing SQL statements separately, the status of each statement is tracked separately and will have the correct success or failure value.
When a batch of SQL statements (used in MS-SQL or Sybase) are executed together, the status returned is the single status of the last transaction in the batch.
Guardium example
In the Guardium application, only the success or failure of the last SQL statement is reported in a MS-SQL or Sybase batch statement. In this case, success is
reported for the MS-SQL or Sybase batch statement, even though SQL 1 and SQL 2 failed.
As a result, an extrusion rule is attached to the session and the Analyzer will use EUC-JP in the session, if there is no other character set.
As a result, an extrusion rule is attached to the session and the Analyzer will use the EUC-JP character set in the session in any case; the character set used previously
will be substituted by EUC-JP.
Keep in mind that extrusion rules usually attach to the session with some delay. Therefore, short sessions, or the beginning of a session, may not immediately reflect a
character set change. The scheme works for Oracle, Sybase, MySQL, and MS SQL Server.
Analyzer rules
Certain rules can be applied at the analyzer level. Examples of analyzer rules are: user-defined character sets, source program changes, and issuing watch verdicts for
firewall mode. In previous releases, policies and rules were applied at the end of request processing on the logging state. In some cases, this meant a delay in decisions
based on these rules. Rules applied at the analyzer level means decisions can be made at an earlier stage.
Log Flat
The Log Flat option listed in Policy Definition of Policy Builder allows the Guardium appliance to log information without immediately parsing it.
This saves processing resources, so that a heavier traffic volume can be handled. The parsing and merging of that data to Guardium's internal database can be done later,
either on a collector or an aggregator unit.
There are two Guardium features involving the Flat Log Process - Flat Log by policy definition and Flat Log by throttling mechanism.
Flat Log by throttling mechanism - This is the feature implemented by running the CLI command, store alp_throttle 1. The same policy that is applicable to real-time S-TAP
traffic is used to process traffic that was logged into the GDM_FLAT_LOG table.
For Flat Log by throttling mechanism, the Flat Log checkbox should NOT be checked in Policy Builder.
Flat Log by policy definition - Selection of this feature involves the Policy Builder menu in Setup > Tools and Views and the Flat Log Process menu in Manage > Activity
Monitoring.
Note: Rules on Flat do not work with policy rules involving a field, an object, an SQL verb (command), an Object/Command Group, or an Object/Field Group. In the Flat Log
process, "flat" means that a syntax tree is not built. If there is no syntax tree, then the fields, objects, and SQL verbs cannot be determined.
The following actions do not work with rules on flat policies: LOG FULL DETAILS; LOG FULL DETAILS PER SESSION; LOG FULL DETAILS VALUES; LOG FULL DETAILS
VALUES PER SESSION; LOG MASKED DETAILS.
When the Log Flat (Flat Log) checkbox option in the Policy Definition screen of the Policy Builder is checked, Guardium logs the data without immediately parsing it, and the Rules on Flat option becomes available.
Rules on Flat
This section describes the differences in behavior when Rules on Flat is used.
Policy rules will fire at processing time using the current installed policy.
A selective audit trail is appropriate when the traffic of interest is a relatively small percentage of the traffic being accepted by the inspection engines, or when all of
the traffic you might ever want to report on can be completely identified.
Without a selective audit trail policy, the Guardium appliance logs all traffic that is accepted by the inspection engines. Each inspection engine on the appliance or on an S-
TAP is configured to monitor a specific database protocol (Oracle, for example) on one or more ports. In addition, the inspection engine can be configured to accept traffic
from subsets of client/server connections. This tends to capture more information than a selective audit trail policy, but it may cause the Guardium appliance to process
and store much more information than is needed to satisfy your security and regulatory requirements.
When a selective audit trail policy is installed, only the traffic requested by the policy will be logged, and there are two ways to identify that traffic:
If the Guardium security policy has Selective Audit Trail enabled, and a rule has been created on a group of objects, the string on each element in the group is checked. If
there is a match, a decision is made to log the information and continue. If the Guardium security policy has Selective Audit Trail enabled, and a rule has been created on a
group of objects using a NOT designation on the object group, there is still a need to check the string on each element in the group, and decide to log and continue only if
none of the elements match. NOT designated rules behave the same as normal rules when used with Selective Audit Trail.
This includes:
Note: Any SELECT statements with query hints, such as SELECT /*+ ORDERED USE_MERGE(m) */, SELECT /*+ ORDERED */, or SELECT /*+ all_rows */, are allowed to pass
through the parser and are logged regardless of any rule definition to skip them (at least in selective audit mode). This is because a selective audit policy should not prevent
logging of certain SQL that may be needed for other functions, such as application user translation.
The policy will ignore all of the traffic that does not fit the application user translation rule (for example, not from the application server).
Only the SQL that matches the pattern for that policy will be available for the special application user translation reports.
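The group-matching decision for Selective Audit Trail rules with a NOT designation, described earlier, can be sketched like this. Group and object names are hypothetical, and this is a simplified illustration of the matching logic, not Guardium internals:

```shell
# Decide whether to log, for a rule defined on an object group with NOT:
# log and continue only if NONE of the group's elements match the object.
matches_group() {            # usage: matches_group OBJECT member...
  obj="$1"; shift
  for member in "$@"; do
    [ "$obj" = "$member" ] && return 0
  done
  return 1
}

group="SALARIES CREDIT_CARDS"            # hypothetical object group
obj="EMPLOYEES"
if ! matches_group "$obj" $group; then   # NOT designation: no element matched
  echo "log and continue"
fi
```

The point of the sketch is that a NOT rule still requires checking every element of the group before the log-and-continue decision can be made.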
Creating policies
In addition to creating policies, you can modify, clone, or remove a policy.
Create a policy
Use this section to create a policy. The steps follow the menu fields on the Policy Builder screen.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect > Security Policies > Policy Builder to open the Policy Finder.
2. A series of predefined policies (available for cloning) with access, exception, and extrusion rules has been created for database events that demonstrate
attempts to defeat the protection mechanisms. Events that will generate log actions or alerts include: failed logins and SQL errors from certain groups or servers,
access of certain database objects by certain users or groups, attempts to change SQL GRANT commands, and more. These predefined policies facilitate quicker
creation of compliance policies, for example for GDPR, Basel II, and PCI.
Attention: If a [template] version of a predefined policy is available, using the older version (not marked as a [template]) is not recommended because it will not
receive updates. Instead, clone the [template] version and customize it as needed.
3. Clone a predefined policy or click New to open the Policy Definition panel.
4. Enter a unique name for the policy in the Policy Description box. Do not include apostrophe characters in the description.
5. Optional. Enter a category in the Category box. A category is an arbitrary label that can be used to group policy violations for reporting purposes. The category
specified here will be used as the default category for each rule (and it can be overridden in the rule definition).
6. Optional. Select a baseline to use from the Policy Baseline list. Be sure that the baseline selected has been generated. If it has not been generated, the Policy
Builder will not be able to suggest rules from that baseline. If the baseline you want to use does not display in the list, your Guardium user ID has not been assigned
a security role authorized to use that baseline. Contact your Guardium® Administrator for further information.
If the policy includes a baseline, the policy definition will initially contain only the baseline, and the action for a baseline is always allow without continuing to the
next rule.
When adding a baseline to an existing policy, it will be added as the first rule. You can move the baseline rule to any location in the policy. (Be aware that if the
baseline is moved to be the last rule, it will have no effect.)
Attention: The Baseline Builder and related functionality is deprecated starting with Guardium V10.1.4.
7. Optionally mark Log Flat to indicate that Guardium is to log data, but not analyze and aggregate the data to the internal database.
8. If Log Flat is selected, optionally mark Rules on Flat to apply the policy rules to the flat log data (as opposed to the aggregated data).
9. Optionally mark Selective Audit Trail to restrict what will be logged when this policy is installed:
When marked, only traffic requested by this policy will be logged. This is appropriate when the traffic of interest is a relatively small percentage of the traffic
being seen by the inspection engines. When marked, there are two ways to signal what traffic to log: by specifying a string that can be used to identify the
traffic of interest, in the Audit Pattern box; or by specifying Audit Only or any of the Log actions for one or more policy rules (rule actions are described later).
When not marked (the default situation), the Guardium appliance logs all traffic that is seen by the inspection engines. This provides comprehensive audit
trail capabilities, but may result in capturing and analyzing much more information than is needed.
Modify/Clone/Remove a Policy
Use this section for the steps on how to modify, clone or remove a policy.
Modify a policy
Use caution before modifying a policy definition: be sure that you understand the implications of modifying a policy that is in use. If the existing policy has to be re-
installed before all revisions have been completed, the policy may not install, or it may not produce the desired results when installed. For this reason, it is preferable to
clone the policy, so that the original is always available to reinstall.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect > Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be modified.
3. Do one of the following:
To edit overall policy settings (Category, Log Flat option, etc.) click Modify. To change any of these settings, see Create a Policy.
To edit the rules only, click Edit Rules. To modify any components of the rule definitions, see Add or Edit Rules.
Clone a policy
There are a number of situations where you may want to define a new policy based on an existing one, without modifying the original definition.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect > Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be cloned.
3. Click Clone to open the Clone Policy panel.
4. Enter a unique name for the new policy in the New Name box. Do not include apostrophe characters in the name.
5. To clone the baseline constructs (the commands, basically) that have been generated for the baseline being cloned, mark the Clone Constructs checkbox.
6. Click Save to save the new policy. You can then open and edit the new policy via the Policy Finder. See Modify a Policy.
Remove a policy
1. Click Setup > Policy Builder to open the Policy Finder or click Protect > Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be removed.
3. Click the Delete button. You will be prompted to confirm the action.
Add or edit rules
1. Click Setup > Policy Builder to open the Policy Finder or click Protect > Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be edited.
3. Click the Edit Rules button to open the Policy Rules panel.
4. Do one of the following:
To edit a rule, click the Edit this rule individually button.
To add a new rule, click one of the following buttons:
Add Extrusion Rule (will only be available if the administrator user has set the Inspection Engine configuration to Inspect Returned Data)
Extrusion matches allow the user to define how many matched records will be grouped together when logged and reported on by Guardium. Extrusion rules
must have an action of LOG FULL DETAILS and a rule name that includes guardium://(some text)?split=(number) where (some text) is any text or one of the
predefined words such as CREDIT CARD and (number) is the number of returned data records per Guardium log record.
5. The attributes that can be tested for in each type of rule vary, but regardless of the rule type, each rule definition begins with the following four items:
Rule Description - Enter a short, descriptive name for the rule. To use a special pattern test, enter the special pattern test name followed by a space and one
or more additional characters to make the rule name unique, for example: guardium://SSEC_NUMBER employee.
Category - The category will be logged with violations, and is used for grouping and reporting purposes. If nothing is entered, the default for the policy is
used.
Classification - Optionally enter a classification in the Classification box. Like the category, these are logged with exceptions and can be used for grouping and
reporting purposes.
Severity - Select a severity code: Info, Low, Med, or High (the default is Info).
6. Use the remaining fields of the Rule Definition panel to specify how to match the rule. Many of the same fields are available for Access, Exception, and Extrusion
Rules; and some fields are available only after selecting various other options. For an alphabetical reference of all fields available in the rules definition panels, see
Rule Definition Reference. Also, for instructions on how to use combinations of groups and individual values, see Specify Values and/or Groups of Values in Rules.
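As an illustration of the extrusion rule naming pattern described in step 4, an extrusion rule name might look like the following. CREDIT CARD is one of the predefined words mentioned above; the split value of 5 is an arbitrary example:

```shell
# Build an example extrusion rule name following guardium://(some text)?split=(number).
# Here 5 means five returned data records are grouped per Guardium log record.
rule_name='guardium://CREDIT CARD?split=5'
echo "$rule_name"
```

Remember that such a rule must also carry the LOG FULL DETAILS action, as stated above, for the split grouping to apply.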
Filter the rules display
The Filter box in the Rules Definition panel can be used to filter which rules are displayed. The process of defining a filter is similar to the process of defining a rule.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect > Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to be viewed or modified.
3. Click Edit Rules.
4. In the Filter box, do one of the following:
Select a filter from the Filter list.
Click Edit to modify a filter definition.
Click New to define a new filter.
Once the filtered set of rules is displayed, you can perform any of the actions described in this section on the displayed rules.
Copy Rules
Use this procedure to copy selected rules from one policy to another, or to a different location in the same policy.
All of the rules copied will be copied to a single location - after rule 3, for example. To copy rules to different locations in the receiving policy, either perform multiple copy
operations, or copy all of the rules in one operation, and then edit the receiving policy to move the rules as necessary.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect > Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy from which you want to copy one or more rules.
3. Click Edit Rules.
4. Mark the checkbox for each rule to be copied.
5. Click Copy Rules.
6. From the Copy selected rules to policy list, select the policy to receive the copied rules.
7. From the Insert after rule list, select the rule after which the copied rules should be inserted, or select Top to insert the copied rules at the beginning of the list.
8. Click Copy. You will be informed of the success of the operation.
9. You should now edit the policy to which you copied the rules, to verify that you have copied the correct rules to the correct location.
Suggest rules from a baseline
1. Click Setup > Policy Builder to open the Policy Finder or click Protect > Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to work with. (It must include a baseline.)
3. Click the Edit Rules button.
4. Set the Rule minimum count value. This is the minimum number of like commands that the system should find in order to suggest a rule. The default is zero. The
smaller the number entered, the more suggested rules the system will generate. (Be aware that the Count that displays in the suggested rules panel does not
reflect this value.)
5. Set the Object Group minimum count value, to determine how many instances of an object group the system should find to generate a suggested object group. The
default is one. The smaller the number entered here, the greater the number of suggested object groups.
6. Click the Suggest Rules button. The suggested rules display in a separate window, in the Suggested Rules panel.
7. The suggested rules are sorted in descending order by the count of occurrences in the baseline period, which is listed for each suggested rule. If you select one or
more of the suggested rules and click Save, they are inserted in the same order, just before the BASELINE rule in the Policy Rules panel. You can then change the
order of the suggested rules or edit them as necessary, from the Policy Rules panel.
8. Expand the rules and check the membership of the suggested object groups. In the Object column of the Suggested Rules panel, if any suggested object groups
have been created, these begin with the name Suggested Object Group and are displayed as hypertext links. For information about how to view, accept, or reject
suggested object groups, see Using Suggested Object Groups.
9. Mark the Select box for each suggested rule to include in the policy.
10. Click Save to accept the selected rules.
11. You can now edit or modify the suggested rules as you would any rules that you added manually.
Before accepting a suggested object group, you can edit the generated Group Description field (for example, Suggested Object Group followed by a generated date and time) to
provide a more meaningful name. After accepting a suggested object group, you can view its membership. You can reject the use of that group within any suggested rule, but
you cannot edit the membership of that group.
If you reject a suggested object group, the suggested rule for that group is replaced with a separate suggested rule for each member of the rejected group. You can accept
or reject each of those suggested rules separately. After accepting a suggested rule, you can edit that rule.
Suggested object groups display in the Object column of the Suggested Rules panel as hypertext links beginning with the words Suggested Object Group.
To view a suggested object group's membership, click the hypertext link for that group. If the group has not yet been accepted, the group membership displays in
the Edit Group panel. If the group has already been accepted, it displays in the View Group panel.
1. Enter a meaningful name in the Group Description field in the Edit Group panel. (Not required, but strongly recommended). Do not include apostrophe
characters in the name. This is the only opportunity you have to name this group. Otherwise, the group gets a name beginning with Suggested Object Group
and followed by a number, as described previously.
2. Click Save to accept the edited group for the suggested rule, or click Save for All to accept the edited group for all suggested rules in which it appears. The
new object name will replace the old one in the rule.
To reject the group for this suggested rule only: Click the Reject button.
To reject the group for all suggested rules: Click the Reject for All button.
Note: If you accept a suggested object group in one rule, open that same suggested object group again from another rule, and then click the Reject for All button, that
group will be retained in any rule where it was explicitly accepted, but rejected in the remaining rules in which it was used.
The Policy Builder can also suggest rules from the database ACL. It does this by examining the permissions granted to user groups and database objects (tables, procedures,
and views) within the DBMS, and then grouping the database objects into suggested object groups so that the total number of suggested rules can be minimized. You can
accept or reject any suggested object group (see Using Suggested Object Groups). You can also accept or reject any suggested rule.
To have the Policy Builder suggest rules from the database ACL:
Note: When suggesting rules from the database ACL, the system does not use the Rule minimum count or the Object Group minimum count fields. Those fields are used
only when suggesting rules from the baseline.
1. Click Suggest from DB to open the Database Definition panel in a separate browser window.
2. Click Add Datasource to select the database from which you want to access the DB ACL.
Note: If adding an Oracle, DB2® or DB2 for z/OS® datasource to access the DB ACL, the Query Parameters section, in the Database Definition pop-up window, will
be disabled.
3. Click Suggest Rules to generate the rules. The Suggested Rules panel opens in a separate window (as described previously, for the Rules Suggested from Baseline).
If you select one or more of the suggested rules and click Save, they will be inserted in the same order into the list of rules in the Policy Rules panel, just before the
BASELINE rule. If there is no BASELINE rule, they will be inserted at the beginning of the list. Once the suggested rules have been inserted into the Policy Rules
panel, you can change the order of the rules or edit them, as necessary.
4. Check the membership of the suggested object groups. In the Object column, any suggested object groups that have been created begin with the name Suggested
Object Group and display as hypertext links (in blue and underlined). For information about how to view, edit, accept, or reject suggested object groups, see Using
Suggested Object Groups.
5. Mark the Select box for each suggested rule you want included in the policy. Click Save to accept the selected rules.
The Policy Simulator does not test exception rules or extrusion rules. The simulator replays logged network traffic and applies all access rules in the policy. It produces a
special report in a separate window, listing the SQL that triggered alert or log only actions. The report includes the following columns: Timestamp, Category Name, Access
Rule Description, Client IP, Server IP, DB User Name, Full SQL String, Severity Description, and Count of Policy Rule Violations. Use the CLI command, store
allow_simulation, to make the Policy Simulation button active in the GUI.
The Policy Simulator can be used to test only the following types of access rule actions:
Log Only
Any Alert action: Alert Daily, Alert Once Per Session, Alert Per Match, Alert Per Time Granularity
The Policy Simulator will not produce any results if the policy includes logging actions other than Log Only. To use the simulator for such a policy, temporarily change all
logging actions to Log Only.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect > Security Policies > Policy Builder to open the Policy Finder.
2. From the Policy Description list, select the policy to work with.
3. Click Edit Rules.
Installing Policies
Use this topic to install the policy on the Guardium collector and modify the schedule.
Multi-policy support
1. Click Setup > Policy Builder to open the Policy Finder or click Protect > Security Policies > Policy Installation to open the Policy Installer.
2. Select the policy to be installed from the Policy Description box.
3. Do one of the following:
Click Install to install the policy immediately.
If using the Policy Installer, you can click Modify Schedule to open the general-purpose scheduling utility, to schedule the policy installation.
The order of appearance can be controlled during the policy installation (for example, first, last, or somewhere in between), but the order of appearance cannot be edited
at a later date.
On the Policy Installation page, click the icon to remove a previously-installed policy.
The first installed policy has a special meaning, as it sets the value of the global policy parameters. These parameters are: Global pattern; Is it a selective audit; Client and
Server net mask; Tagged Client and Server group ID.
This multi-policy support is available through the GUI (Setup > Tools and Views > Policy Installation) and through GuardAPI.
1. Click Setup > Policy Builder to open the Policy Finder or click Protect > Security Policies > Policy Installation to open the Policy Installer.
2. Click the Installed Policy link to display the policy rules. Authorized users will have an additional button enabled: To open the policy for editing in the Policy Builder,
click the Edit installed policy button.
Feature Highlights
User marks a scheduled job to find and run dependencies at run time.
When the scheduler runs the job, it automatically finds all the subordinate jobs and runs them in order.
Find dependencies
Policy Installation - Groups that are defined in any of the (to be installed) policies and are either scheduled or not scheduled to be populated by the Populate From
Query mechanism. Prerequisite: Policy rules that use groups must have up-to-date group data before being installed.
Policy Installation - Audit processes that include a Classification audit task, where the classification task has an action of Add To Group of Object, Add To Group of
Object/Field, or Add To Access Rule. Prerequisite: Policy rules that use groups must have up-to-date group data before they are installed.
Audit Process - Custom table upload jobs where the custom table name is referred to (in the "from" clause) by an audit task of type Report. Prerequisite: Custom tables
that are referred to by an audit task of type Report must be populated with up-to-date data before an audit process is run.
Audit Process - Groups that are defined in a condition of an audit task of type Report and are either scheduled or not scheduled to be populated by the Populate From
Query mechanism. Prerequisite: Groups that are referred to by a query condition must be populated with up-to-date data before an audit task of type Report is run.
Populate From Query - Custom upload tables that contain any of the entities of the query that is used to populate a group.
Audit Process - Import. Prerequisite: Relevant for an aggregator only. This prerequisite guarantees that information is imported from all aggregated units before any
audit process can run.
Scheduler enhancements
Direct dependencies are objects that are tied together by definition, for example, Policy depends on Rule and Rule depends on Groups.
Indirect dependencies are objects that are logically tied, for example, run Audit processes before installing policies.
GUI support
1. Mark the check box option, Auto run dependent jobs, after selecting Create Schedule from Policy Installation.
2. Click Save to schedule the process. This notifies the user of the dependencies status.
GuardAPI support
The scheduler-related GuardAPI functions take combinations of the following parameters:
dependOnJobExecutedWithin - String
intervalBetweenRetries - Integer
jobRetries - Integer
runIfDependOnJobReturns - String
dependOnTrigger - String
api_target_host - String
To obtain a list of all the scheduled jobs/triggers, run the GuardAPI command; its only parameter is api_target_host - String.
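GuardAPI functions are invoked from the Guardium CLI with the grdapi command. The following is a hedged sketch of the invocation syntax using the scheduler parameters listed above; the function name schedule_job_with_dependency is a placeholder (the real scheduler function names are not shown in this excerpt), and the host name and SUCCESS value are illustrative:

```shell
# Compose a GuardAPI call using the scheduler dependency parameters.
# "schedule_job_with_dependency" is a placeholder, not a documented function name.
cmd="grdapi schedule_job_with_dependency \
jobRetries=3 intervalBetweenRetries=3 \
runIfDependOnJobReturns=SUCCESS \
api_target_host=collector1.example.com"
echo "$cmd"   # on the appliance CLI you would run this command directly
```

The general shape, grdapi function_name param=value param=value, is the same for all GuardAPI functions.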
Run Scheduler
Scheduler will check for job dependencies when it is time to run a job.
Policy Install (Runnable)
    Audit Task
        Classification Process
            Classification Policy
                Classification Policy Action
Execution order will be : Populate from Query Group → Audit Process → Policy Install
Scheduler will run each one of the dependencies and wait for it to finish.
Running a full dependency tree might take a long time to complete, but it is guaranteed all dependencies are executed in the correct order.
Handle errors
If any of the dependencies fails to execute, the job currently being executed by the scheduler does not run.
The number of retries for a job that depends on previous jobs can be set; the default is 3, and any value ≥ 0 is valid. The interval, in minutes, between retries can also be set; the default is 3, and any value ≥ 0 is valid.
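The dependency-and-retry behavior described above can be sketched as follows. This is an illustrative model only, not Guardium code; the callable-based job interface and function names are assumptions, while the defaults of 3 retries and a 3-minute interval come from the text.

```python
import time

def run_with_dependencies(job, dependencies, job_retries=3,
                          retry_interval_s=180, accepted=("OK",)):
    """Run each dependency in order, retrying failures, then run the job.

    job_retries and retry_interval_s default to 3 (retries) and 3 minutes,
    mirroring the defaults above; accepted models runIfDependOnJobReturns.
    """
    for dep in dependencies:
        for attempt in range(job_retries + 1):
            if dep() in accepted:
                break                      # dependency succeeded; move on
            if attempt == job_retries:
                return None                # a dependency failed; skip the job
            time.sleep(retry_interval_s)   # wait between retries
    return job()                           # all dependencies succeeded in order
```

Each dependency runs to completion before the next one starts, matching the scheduler's guarantee that dependencies execute in the correct order.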
Action Indicates the action to be taken when the rule is true. For a comprehensive description of all rule actions, see Rule
Actions Overview.
App Event Exists Match for an application event only. See the App Event Note.
App Event Values Match the specified application event Text, Numeric, or Date values. Also allow a Group to be chosen for the event string
as an option. See the App Event Note.
(App) Event Type Match the specified application event. See the App Event Note.
(App) Event User Name Match the specified application event user name only. See the App Event Note.
App Event Note The App Event fields cannot be used when the Flat Log box is marked.
App. User Application User. See Specify Values and/or Groups of Values in Rules.
Category An arbitrary label that can be used to group policy violations for reporting purposes. A default category can be specified
in the policy definition, but the default can be overridden for each rule.
Classification An arbitrary label that can be used to group policy violations for reporting purposes. A default classification can be
specified in the policy definition, but the default can be overridden for each rule.
Client Info DB2® client info: For access rules only. For z/OS® only, a CLIENT INFO field (and CLIENT_INFO_GROUP_ID) will be
visible if DB_TYPE is either DB2, DB2 COLLECTION Profile, or VSAM COLLECTION Profile.
The type of information that can be placed in this field is USER=x; WKSTN=y; APPL=z.
Client IP Clear the Not box to include, or mark the Not box to exclude:
Any client: Leave all client fields blank. The count will be incremented every time any client satisfies the rule. (You
cannot leave all fields blank if the Not box is marked.)
All clients selected by an IP address and mask: Enter a client IP address in the first box and network mask in the
second box. The count will be incremented each time that any of the specified clients satisfies the rule. For
example, to select all clients in subnet 192.168.9.x, enter 192.168.9.1 in the first box and 255.255.255.0 in the
second box. For more information selecting IP addresses, see Selecting IP Addresses Using a Mask.
A group of clients: Select a group of client IP addresses from the Group drop-down list, or click the Groups button
to define a new group and then select that group. The count will be incremented each time that any member of
the selected group satisfies the rule.
All clients selected by an IP address and mask AND a group of clients: Use both the Client IP and Group fields.
The count will be incremented each time that any client specified using either method satisfies the rule.
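The address-and-mask selection above amounts to a bitwise AND comparison. A minimal sketch using Python's standard ipaddress module; the function name is ours, not part of the product:

```python
import ipaddress

def client_matches(client_ip, rule_ip, rule_mask):
    """True when client_ip is selected by rule_ip with rule_mask,
    i.e. (client AND mask) == (rule AND mask)."""
    mask = int(ipaddress.IPv4Address(rule_mask))
    client = int(ipaddress.IPv4Address(client_ip))
    rule = int(ipaddress.IPv4Address(rule_ip))
    return (client & mask) == (rule & mask)
```

With 192.168.9.1 in the first box and 255.255.255.0 in the second, any client in subnet 192.168.9.x matches.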
Client IP/Src App/DB User/Server IP/Svc. Name 5-tuple group type, available for access, exception and extrusion rules. A 7-tuple group adds OS User and DB Name (Client IP/Src App/DB User/Server IP/Svc. Name/OS User/DB Name).
A tuple allows multiple attributes to be combined together to form a single group member.
Tuple supports the use of one slash and a wildcard character (%). It does not support the use of a double slash.
Wildcard % is permitted in a policy for a Client IP/Source Program/DB User/Server IP/Service Name group.
Client MAC To make the rule sensitive to a single client MAC address, enter the address in nn:nn:nn:nn:nn:nn format, where each n
is a hexadecimal digit (0-F) OR Enter a dot (.) in the Client MAC box to indicate that a separate count should be
maintained for each client MAC address OR Leave the Client MAC box empty to ignore client MAC addresses.
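The nn:nn:nn:nn:nn:nn form described above can be validated with a simple regular expression. This checker only illustrates the documented format; it is not part of the product:

```python
import re

# Six pairs of hexadecimal digits (0-F), separated by colons.
MAC_FORMAT = re.compile(r"[0-9A-Fa-f]{2}(:[0-9A-Fa-f]{2}){5}")

def is_valid_client_mac(value):
    """True for addresses in nn:nn:nn:nn:nn:nn format, n a hex digit."""
    return MAC_FORMAT.fullmatch(value) is not None
```

Note that a lone dot (.) and an empty box are the two special inputs described above, and neither matches this format.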
Command The command. See Specify Values and/or Groups of Values in Rules. If a commands group cannot be edited, the and/or Group label changes to Collect Only, indicating that only commands from the selected group are to be selected.
If the Every box is checked, every field in the SQL statement must be a member of the group.
Continue to Next Rule If marked, rule testing will continue with the next rule, regardless of whether or not this rule is satisfied. This means that
multiple rules may be satisfied (and multiple actions taken) by a single SQL statement or exception. If not marked (the
default), no additional rules will be tested for the current transaction when this rule is satisfied.
Data Pattern Every type of rule (Access, Exception, Extrusion) can have a Data pattern, but it is required for Extrusion rules.
For use in defining Extrusion Rules - A regular expression to be matched, in the Data Pattern box. Click the Regex button
to open the Build Regular Expression tool, which allows you to enter and test regular expressions. This enables more
complex masking patterns. Put parentheses around the section that should be masked. Use this function to mask data
retrieved from the database.
For example,
Additional regular expressions (Regex) for use only in Data Patterns with an action of Redact (Scrub):
UNIX S-TAP
Name: Pattern: Masked to:
SCRUB_SSN_ANSI AAA-AA-AAAA ***-***-AAAA
SCRUB_SSN_UNICODE UUU-UU-UUUU ***-***-UUUU
SCRUB_CC_SPACES_ANSI AAAA AAAA AAAA AAAA A*** **** **** 1234
SCRUB_CC_SPACES_UNICODE UUUU UUUU UUUU UUUU U*** **** **** ****
SCRUB_CC_SOLID_ANSI AAAAAAAAAAAAAAAA A***************
SCRUB_CC_SOLID_UNICODE UUUUUUUUUUUUUUUU U***************
SCRUB_AMEX_SOLID_ANSI AAAAAAAAAAAAAAAA A***************
SCRUB_AMEX_SOLID_UNICODE UUUUUUUUUUUUUUUU U***************
Regex with Redact - Regular expressions (regex) in the IBM Security Guardium solution (including the masking in the policy) are executed on the appliance, and allow advanced regexp capabilities.
However, the regex library used with Redaction is executed in the kernel of the database server and is limited to the most basic regex. Only basic regex patterns can be used with Redaction.
For example, the regular expression nomenclature [0-9]* cannot be used to indicate any number of digits. It is necessary to use basic regular expression nomenclature [0-9][0-9][0-9]... to specify a sequence of digits.
Note: S-TAP® only accepts the predefined SCRUB pattern names, ignoring any other name.
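Expanding a quantifier into the basic form the redaction engine accepts can be automated. A trivial, illustrative helper for the digit case (the function name is ours):

```python
def basic_digit_run(n):
    """Expand the extended pattern [0-9]{n} into the basic form
    [0-9][0-9]...[0-9] usable where only basic regex is supported."""
    return "[0-9]" * n
```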
Access rule, data pattern and replacement character - Using a data pattern, for example, [a-z,2]{3}([_][0-9]{1,2}) with a
replacement character of * will change the values between the parentheses in the data pattern to ***. Use this function
to mask values.
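The parenthesized-group masking just described can be illustrated with Python's re module. `mask_group` is a hypothetical helper, not a Guardium API; it mimics the documented behavior of replacing only the text captured between the parentheses:

```python
import re

def mask_group(pattern, replacement_char, text):
    """Replace the text captured by group 1 of each match with the
    replacement character, leaving the rest of the match intact."""
    def repl(match):
        start, end = match.span(1)          # absolute span of group 1
        base = match.start(0)
        whole = match.group(0)
        rel_s, rel_e = start - base, end - base
        return (whole[:rel_s]
                + replacement_char * (rel_e - rel_s)
                + whole[rel_e:])
    return re.sub(pattern, repl, text)
```

With the pattern [a-z,2]{3}([_][0-9]{1,2}) from the example above and a replacement character of *, the value abc_12 becomes abc***.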
Available for Oracle, Sybase, MySQL, and MSSQL, and for extrusion rules only: users may influence the character set used by defining special extrusion rules. These character set policy rules are used only to set the character set that traffic should be converted to; setting an action is irrelevant. In order to take an action on that traffic, the user must define additional rules after the character set rule. Two kinds of character set rule are possible, hint or force:
Hint: converts the traffic by character set, as defined in the extrusion rule of the installed policy, ONLY if the regular conversion failed.
Force: converts the traffic by character set, as defined in the extrusion rule of the installed policy, for ALL data.
Note: Keep in mind that extrusion rules are usually attached to the session with a delay. Therefore, short sessions or the beginning of a session may not be immediately affected by a character set change.
DB Name The database name. See Specify Values and/or Groups of Values in Rules.
For access rule: Cassandra, CIFS, CouchDB, DB2, DB2 COLLECTION PROFILE* (only for use with z/OS), FTP,
GreenPlumDB, Hadoop, HTTP, IBM® INFORMIX (DRDA), IBM iSeries, IMS™, IMS COLLECTION PROFILE (only for use
with z/OS), Informix®, MongoDB, MS SQL SERVER, MYSQL, NETEZZA, Oracle, PostgreSQL, Sybase, TERADATA, VSAM or
VSAM COLLECTION PROFILE* (only for use with z/OS).
For exception and extrusion rules: Cassandra, CIFS, CouchDB, DB2, FTP, GreenPlumDB, Hadoop, IBM INFORMIX
(DRDA), IBM iSeries, Informix, MongoDB, MS SQL SERVER, MYSQL, NETEZZA, Oracle, PostgreSQL, Sybase, or
TERADATA. Note: Informix supports two protocols: SQLEXEC (native Informix protocol) and DRDA (IBM protocol). These
protocols are automatically identified for Informix traffic with no additional settings. The Server Type attribute will show
INFORMIX (for the SQLEXEC protocol) and IBM INFORMIX (DRDA) (for the DRDA protocol).
Note: TERADATA has a silent login and allows clients to auto-reconnect. To block Teradata statements in a policy, use
the S-TAP firewall function with default state ON and un-watch safe users.
DB User The database user. See Specify Values and/or Groups of Values in Rules.
Error Code The error code (for an exception). See Specify Values and/or Groups of Values in Rules.
Note: A session closed by GUI timeout, in an Exception rule, will not produce a Session Error (Session_Error).
Field Name The field name. See Specify Values and/or Groups of Values in Rules.
If the Every box is checked, every field in the SQL statement must be a member of the group.
Min. Ct. The minimum number of times the condition contained in the rule must be matched before the rule will be satisfied
(subject to the Reset interval).
Net. Protocol The network protocol. See Specify Values and/or Groups of Values in Rules.
Object The object name. See Specify Values and/or Groups of Values in Rules.
For Sybase and MS SQL Server, there are two groups, MASKED_SP_EXECUTIONS_SYBASE and
MASKED_SP_EXECUTIONS_MS_SQL_SERVER respectively, that include names of stored procedures. If there is an
execution of an included procedure, then everything will be masked.
If the Every box is checked, every field in the SQL statement must be a member of the group.
OS User Operating system user. See Specify Values and/or Groups of Values in Rules.
Pattern A regular expression to be matched, in the Pattern box. You can enter a regular expression manually, or click the (Regex)
button to open the Build Regular Expression tool, which allows you to enter and test regular expressions.
Time Period To make the rule sensitive to a single time period, select a pre-defined time period from the Period list or click the
(Period) button to define a new time period.
Rec. Vals. When marked, the actual construct causing the rule to be satisfied will be logged, and available in reports, in the SQL
String attribute. For a policy violation only, if not marked, no SQL statements will be logged.
Records Affected Threshold Access rule only. Set a threshold value for matched records. Example: let 1000 instances take place before taking
action.
This field affects the output of the rule rather than the definition of the rule (that is, what happens when it is triggered,
rather than when it should trigger).
The records affected threshold is based on rule and session. It accumulates the returned rows from all queries that meet
the rule condition. Once the accumulated records affected reach the threshold, the rule triggers, and the records affected
on the statement (if the action logs full details) will be the accumulated value of the records affected.
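The per-rule, per-session accumulation described above can be modeled as a running total keyed by (rule, session). This sketch assumes row counts arrive one query at a time; the class and method names are ours:

```python
class RecordsAffectedThreshold:
    """Accumulate returned rows per rule and session; once the running
    total reaches the threshold, the rule fires and reports the
    accumulated value, as described above."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.totals = {}                    # (rule_id, session_id) -> rows so far

    def add(self, rule_id, session_id, rows):
        key = (rule_id, session_id)
        self.totals[key] = self.totals.get(key, 0) + rows
        if self.totals[key] >= self.threshold:
            return self.totals[key]         # trigger: report accumulated count
        return None                         # still below the threshold
```

With a threshold of 1000, two queries returning 400 and 700 rows in the same session trigger the rule with an accumulated value of 1100.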
Should the output produced by the extrusion rule match the regular expression, the portions that match sub-
expressions between parenthesis '(' and ')' will be replaced by the Masking character.
Reset Interval Used only if the Min. Ct. field is greater than zero. This value is the number of minutes after which the condition met
counter will be reset to zero.
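Min. Ct. and Reset Interval together behave like a windowed match counter. This toy model assumes the window starts at the first counted match, which is one plausible reading of the text; times are in seconds for easy testing:

```python
class MinCountRule:
    """Satisfy the rule only after min_ct matches; the counter is reset
    to zero reset_interval minutes after counting began."""

    def __init__(self, min_ct, reset_interval_min):
        self.min_ct = min_ct
        self.reset_s = reset_interval_min * 60
        self.count = 0
        self.window_start = None

    def on_match(self, now_s):
        if self.window_start is None or now_s - self.window_start > self.reset_s:
            self.window_start = now_s       # start a new counting window
            self.count = 0
        self.count += 1
        return self.count >= self.min_ct    # True once the rule is satisfied
```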
Revoke This checkbox appears on extrusion rules only. It allows you to exclude from logging a response that has already been
selected for logging by a previous rule in the policy. In most cases you can accomplish the same result more simply by
defining a single rule with one or more NOT conditions to exclude the responses you do not want, while logging the
remaining ones that satisfy the rule. (The Revoke checkbox pre-dates NOT conditions, and is provided mainly for
backward compatibility to support existing policies.)
Rule Description The name of the rule. To use a special pattern test in the rule, enter the special pattern test name followed by a space
and one or more additional characters to make the rule name unique, for example: guardium://SSEC_NUMBER
employee. (See Special Pattern Tests for more information.)
When displayed, the name will be prefaced with the rule number and the label Access Rule, Exception Rule, or Extrusion
Rule, to identify the rule type. If the rule was generated using the Suggest From DB function, the generated name is in
the format: Suggested Rule <n>_mm-dd hh:mm, consisting of the following components
Server IP Clear the Not box to include, or mark the Not box to exclude:
Any server: Leave all server fields blank. The count will be incremented every time any server satisfies the rule.
(You cannot leave all fields blank if the Not box is marked.)
All servers selected by an IP address and mask: Enter a server IP address in the first box, and network mask in
the second box. The count will be incremented each time that any of the specified servers satisfies the rule. For
example, to select all servers in subnet 192.168.3.x, enter 192.168.3.1 in the first box, and 255.255.255.0 in the
second box.
A group of servers: Select a group of server IP addresses from the Group drop-down list or click the Groups
button to define a new group and then select that group. The count will be incremented each time that any
member of the specified group satisfies the rule.
All servers selected by an IP address and mask AND a group of servers: Use both the Server IP and Group fields.
The count will be incremented each time that any server specified using either method satisfies the rule.
Service Name The service name. See Specify Values and/or Groups of Values in Rules.
Severity Select a severity code from the list: INFO, LOW, NONE, MED or HIGH. If HIGH is selected and email alerts are sent by
this rule, the email will be flagged Urgent.
SQL Pattern A regular expression to be matched, in the Pattern box. You can enter a regular expression manually, or click Regex to
open the Build Regular Expression tool, which allows you to enter and test regular expressions.
Restriction: SQL Pattern is not supported for redaction rules.
Src app Application source program. See Specify Values and/or Groups of Values in Rules.
Trigger Once Per Session Do not analyze the session for the same rule after the first match. Especially effective for "Selective Audit" policies.
XML Pattern A regular expression to be matched, in the Pattern box. You can enter a regular expression manually, or click Regex to
open the Build Regular Expression tool, which allows you to enter and test regular expressions.
A regular expression to be matched can be used in this box. The regular expression must be entered manually.
Full_SQL return values using MSSQL In MSSQL, the sp_cursoropen and sp_cursorfetch stored procedures are used for SELECT database queries.
Sp_cursoropen holds the original statement, while the FULL_SQL return value in an Extrusion rule will appear as
sp_cursorfetch instead of Select * from ___________.
Parent topic: Policies
This example takes a sample Oracle table (CUSTOM_ENTITLEMENT) as an example of custom entitlement data, uses an Oracle script to select data from this table,
and then generates a file with GuardAPI commands. The file will include commands for the creation of new policy rules or the modification of existing ones, a change of policy rule
order, and policy reinstallation. We'll then show you how to execute the generated script and then view the policy changes in the Guardium GUI.
Value-added: Guardium API provides access to Guardium functionality from the command line or script. This allows for the automation of repetitive tasks which is
especially valuable in larger implementations. Calling these GuardAPI functions enables a user to quickly perform operations such as maintenance of the Guardium policy.
1. Define a rule structure that logs full details for all database manipulation (DML) Commands. It will be used as a template for creating new rules. The
rule will belong to the template policy.
2. Create the Oracle script that will generate a file with the following GuardAPI commands:
copy_rule - add new rules to the installed policies as a copy of rule template
update_rule - update the copied rules with the relevant data from CUSTOM_ENTITLEMENT Oracle table
update_rule - update the existing rule with the data from that table
Steps:
As many actions are permitted for a given policy rule, it becomes very difficult to define a rule's complex hierarchical structure using the GuardAPI.
However, in most cases rules differ by their conditions, and the action/receiver structures usually fall into a small set of options. Therefore, the APIs are
based on cloning an existing rule that acts as a rule template: this defines the action/receiver structure, and the conditions are then changed using APIs.
Here we create a rule template (HowToTemplate), which includes rule action definition and will then be cloned and updated each time a new rule of that kind has to
be added to a policy.
Click Protect > Security Policies > Policy Builder to open the Policy Finder and create a template policy.
Click New to create the template policy: enter a Policy description, check the Selective audit trail check box, and click the Save button.
Click on the Add Access Rule button to display the Access Rule Definition panel and add a rule.
To add the rule, enter DML Command - Log Full Details Template in the Description box; choose (Public) DML Commands from the Commands box; highlight LOG
FULL DETAILS WITH VALUES in the Action section; and then click the Save button.
GuardAPI is a set of CLI commands, all of which begin with the keyword grdapi. To list all GuardAPI commands available, enter the command 'grdapi' with
no arguments. To display the parameters for a particular command, enter the command followed by '--help=yes'.
For example
ID=0
function parameters :
fromPolicy - required
ruleDesc - required
ok
Both the keyword and value components of parameters are case sensitive.
If a parameter value contains one or more spaces, it must be enclosed in double quote characters. For example:
There is no need to use all available parameters that a function supports. In addition to the required parameters, use the parameters that you want to
change.
Scripts, which invoke GuardAPI, may contain sensitive information, such as passwords for datasources. To ensure that sensitive information is kept
encrypted at all times, the grdapi command supports passing of one encrypted parameter to an API Function. This encryption is done using the System's
Shared Secret which is set by the administrator and can be shared by many systems, and between all units of a central management and/or aggregation
cluster; allowing scripts with encrypted parameters to run on machines that have the same shared secret. For more details about this issue please see
Guardium Help.
If multiple policies are installed, then the install policy command (policy_install) must include all installed policy descriptions, delimited by the pipe character. This
must be done even if only one policy has changes. The policy descriptions should be in the order you want the policies to be installed.
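Building the pipe-delimited parameter is mechanical and worth scripting when many policies are installed. A sketch in Python; the policy= parameter name follows common GuardAPI usage and should be checked against your version's grdapi policy_install --help=yes output:

```python
def policy_install_command(installed_policies):
    """Build a grdapi policy_install call that lists every installed
    policy description, pipe-delimited, in the desired install order.

    The whole value is double-quoted because policy descriptions
    may contain spaces."""
    return 'grdapi policy_install policy="{}"'.format("|".join(installed_policies))
```

For example, with policies HowTo and Basic Policy installed in that order, the generated command installs both even if only HowTo changed.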
Logic behind the writing of the script; changing the currently installed policy HowTo in the following way:
a. For each record in the CUSTOM_ENTITLEMENT table with IS_NEW_FLAG equal to '1', a new access rule with the description saved in the RULE_DESC column
will be added to the "HowTo" policy. The rule logs full details for all DML Commands from OS user (OS_USER field value), client IP (CLIENT_IP), server
IP (SERVER_IP) with service name (SERVICE_NAME).
b. If the IS_NEW_FLAG value is '0', the rule with a description equal to the value of the RULE_DESC column will be changed based on the relevant data from this
record of the table.
c. Rule3 will be set as the first rule, to show how to use the change_rule_order function.
a. Add a new access rule: Rule1. The rule logs full details for all DML Commands from user "user1", client IP "192.168.7.101" to the Oracle database
on the "192.168.7.201" server with service name "PROD1".
b. Add a new access rule: Rule2. The rule logs full details for all DML Commands from user "user2", client IP "192.168.7.102" to the Oracle database
on the "192.168.7.202" server with service name "PROD2".
c. Add a new access rule: Rule3. The rule logs full details for all DML Commands from user "user3", client IP "192.168.7.103" to the Oracle database
on the "192.168.7.203" server with service name "PROD3".
d. Change Rule2: set OS user to "user4", client IP to "192.168.7.104", server IP to "192.168.7.204", service name to "PROD4".
Oracle script
When the Oracle script is run within SQL*Plus, and spooled accordingly, it will produce a file (update_policy.txt) that looks like:
Note: The last grdapi command re-installs the policy to apply the rules to the system.
3. Run the generated script.
For example, to run update_policy.txt script on host 192.168.12.5 (password will be prompted for)
Sample output:
192.168.12.5> ok
ID=20002
192.168.12.5> 192.168.12.5> ok
ID=20015
192.168.12.5> 192.168.12.5> ok
ID=20002
192.168.12.5> 192.168.12.5> ok
ID=20016
192.168.12.5> 192.168.12.5> ok
ID=20002
192.168.12.5> 192.168.12.5> ok
ID=20017
192.168.12.5> 192.168.12.5> ok
ID=20016
192.168.12.5> 192.168.12.5> ok
ID=20002
192.168.12.5> 192.168.12.5>
Before running the script, there were no rules defined in the HowTo policy as shown in this preview
As a result of the copy_rule, the HowTo policy now has three Access Rules.
Expanding any of the policy rules, Rule1 here, we can validate the various fields that have been altered with the update_rule commands.
And as a result of the policy_install command, the currently installed policy is now the HowTo policy with three installed rules.
Value-added: Make clearer what happens when certain choices are made in Policy Rules for log or ignore actions, which control the level of logging, based on observed
traffic.
Ignore session
The current request and the remainder of the session will be ignored. This action does not log a policy violation, but it stops the logging of constructs and will not test for
policy violations of any type for the remainder of the session. This action might be useful if, for example, the database includes a test region, and there is no need to apply
policy rules against that region of the database.
Data logged or ignored between client and DB Server/S-TAP: Ignore - SQL commands, SQL errors, Result Sets.
Data sent from DB Server/S-TAP to Collector: Log in/Log out. Sniffer to S-TAP - one signal to S-TAP to stop sending activity for this session. If additional activity is sent by S-TAP, it is ignored at the sniffer level only.
Data from Span Port/Network TAP to Collector: Ignore - SQL commands, SQL errors, Result Sets. SQL commands and errors coming from a Span Port or Network TAP are filtered at the Sniffer.
The current request and the remainder of the S-TAP session will be ignored. This action is used in combination with specifying, in the policy builder, certain
machines, users, or applications that are producing a high volume of network traffic. This action is useful in cases where you know the database response from the S-TAP
session will be of no interest.
Data logged or ignored between client and DB Server/S-TAP: Ignore - SQL commands, SQL errors, Result Sets.
Data sent from DB Server/S-TAP to Collector: Log in/Log out. Sniffer to S-TAP - one signal to S-TAP to stop sending activity for this session. Additional signals to S-TAP to stop sending activity for this session.
Data from Span Port/Network TAP to Collector: Not applicable. If there is a need to ignore traffic from a Span Port/Network TAP, use Ignore session instead.
Responses for the remainder of the session will be ignored. This action logs a policy violation, but it stops analyzing responses for the remainder of the session. This action
is useful in cases where you know the database response will be of no interest.
Note: For ignore response per session, since the sniffer does not receive any response for the query or it is ignored, then the values for COUNT_FAILED and SUCCESS are
whatever the default for the table says they are, in this case COUNT_FAILED=0 and SUCCESS=1.
Table 3. Ignore responses per session
Data logged or ignored between client and DB Server/S-TAP: Log - SQL commands. Ignore - SQL errors, Result Sets.
Data sent from DB Server/S-TAP to Collector: Log in/Log out, SQL commands. Sniffer to S-TAP - one signal to S-TAP to stop sending activity for this session. Additional signals to S-TAP to stop sending activity for this session.
Data from Span Port/Network TAP to Collector: Not applicable. This rule action is for S-TAP-only implementations.
No SQL will be logged for the remainder of the session. Exceptions will continue to be logged, but the system may not capture the SQL strings that correspond to the
exceptions.
Data logged or ignored between client and DB Server/S-TAP: Ignore - SQL commands. Log - SQL errors, Result Sets.
Data sent from DB Server/S-TAP to Collector: Log in/Log out. Sniffer to S-TAP - one signal to S-TAP to stop sending activity for this session. If additional activity is sent by S-TAP, it is ignored at the sniffer level only.
Data from Span Port/Network TAP to Collector: Ignore - SQL commands. Log - SQL errors, Result Sets. SQL commands are filtered at the Sniffer.
Use a Selective Audit Trail policy to limit the amount of logging on the appliance. This is appropriate when the traffic of interest is a relatively small percentage of the traffic
being accepted by the inspection engines, or when all of the traffic you might ever want to report upon can be completely identified.
It is important to note that Ignore Session rules are still very important to include in the policy even if using a Selective Audit Trail. Ignore Session rules decrease the load
on a collector considerably because by filtering the information at the S-TAP level, the collector never receives it and does not have to consume resources analyzing traffic
that will not ultimately be logged. A Selective Audit Trail policy with no Ignore Session rules would mean that all traffic would be sent from the database server to the
collector, causing the collector to analyze every command and result set generated by the database server.
Data logged or ignored between client and DB Server/S-TAP: Ignore - SQL commands. Log - SQL errors, Result Sets.
Data sent from DB Server/S-TAP to Collector: Log in/Log out. Ignore SQL commands, except for those defined by Audit-Only or Log Full Details rules.
Data from Span Port/Network TAP to Collector: Ignore - SQL commands. Log - SQL errors, Result Sets. SQL commands are filtered at the Sniffer.
Log SQL errors
Character sets
You can use character set codes in extrusion rules.
Correlation Alerts
An alert is a message indicating that an exception or policy rule violation was detected.
A correlation alert is triggered by a query that looks back over a specified time period to determine if an alert threshold has been met. The Guardium® Anomaly
Detection Engine runs correlation queries on a scheduled basis. By default, correlation alerts do not log policy violations, but they can be configured to do so.
A real-time alert is triggered by a security policy rule. The Guardium Inspection Engine component runs the security policy as it collects and analyzes database
traffic in real time.
Regardless of how they are triggered, Guardium logs all alerts the same way: the alert information is logged in the Guardium internal database. The amount and type of
information logged depends on the specific alert type. The Guardium Alerter component, which also runs on a scheduled basis, processes each new alert, passing the
logged information for each alert to any combination of the following notification mechanisms:
SMTP – The SMTP (outgoing e-mail) server. The Alerter passes standard email messages to the SMTP server for which it has been configured.
SNMP – The SNMP (network information and control) server. When SNMP is selected for an alert notification, the Alerter passes all alert messages of that type to
the single trap community for which the Alerter has been configured.
Syslog – The alert is written to syslog on the Guardium appliance (which may be configured by the Guardium Administrator to write syslog messages to a remote
system).
Note: For SNMP or SYSLOG, the maximum message length is 3000 characters. Any messages longer than that will be truncated.
Custom – A user written Java™ class to handle alerts. The Alerter passes an alert message and timestamp to the custom alerting class. There can be multiple
custom alerting classes, and one custom alerting class can be an extension of another custom alerting class.
Note: Alert definitions and notifications are not subject to data level security. Reasons for this include: alerts are not evaluated in the context of a user; an alert may be
related to databases associated with multiple users; and this avoids situations where no one gets the alert notification.
Note: If an alert uses a query that contains 30 fields or more (including counters), anomaly detection will fail with an Array out of bound exception
error message. Queries with 30 or more columns cannot be used for alerts; such queries do not appear in the list of available queries for threshold alerts.
If the threshold is per report, the value for that interval is 0 (zero), and an alert will be generated if the threshold condition is met (for example, if the
condition specified is "Alert when value is < 1").
If the threshold is per line, no alert will be generated, regardless of the specified condition (this is because there are no lines of output).
Select As absolute limit to indicate that the threshold entered is an absolute number or select As a percentage change within period to indicate that the
threshold represents a percentage of change within the time period identified in the From and To fields.
If the As percentage change within period option is selected, use the date picker controls to select the From and To dates.
If As percentage change for the same "Accumulation Period" on a relative time is selected, one relative date is entered, and the alert will execute the
query for the current period and for the relative period (using the same interval), and will check the values as a percentage of the base period value.
Note: If a relative period is used, each time the alert is checked it will execute the query twice: once for the current period and once for the relative period.
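The percentage-change comparison between the two query runs can be sketched as follows. Whether the product compares the signed or the absolute change is not stated here, so this sketch assumes the absolute change; the function name is ours:

```python
def percent_change_met(current, base, threshold_pct):
    """Compare the current period's value to the base (relative)
    period's value as a percentage of the base value."""
    if base == 0:
        return current != 0                 # any change from zero counts
    change_pct = abs(current - base) / base * 100.0
    return change_pct >= threshold_pct
```

For example, a count that grows from 100 in the base period to 150 in the current period is a 50% change and meets a 40% threshold.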
19. Indicate in the Notification Frequency box how often (in minutes) the Alert Receivers should be notified when the alert condition has been satisfied.
20. Click Save to save the alert definition.
Note: You cannot assign receivers or roles, or enter comments until the definition has been saved.
21. In the Alert Receivers panel, optionally designate one or more persons or groups to be notified when this alert condition is satisfied. To add a receiver, click the Add
Receiver button to open the Add Receiver Selection panel.
Note: If the receiver of an alert is the admin user then admin needs to be assigned an email for the alert to fire.
Note: An additional receiver for threshold alerts is Owner (the owner/s of the database). If the query associated with the alert contains Server IP and Service Name,
and if the alert is evaluated Per Row, then the receiver can be Owner. The alert notification must have: Alert Notification Type: Mail, Alert User ID: 0, Alert
Destination: Owner. See Alerting Actions in Policies for additional receivers for real-time alerts.
22. Optionally click Roles to assign roles for the alert.
23. Optionally click Comments to add comments to the definition.
24. Click Apply and then Done when you have finished.
Prerequisites
Configure email (SMTP) server (Setup > Tools and Views >Alerter)
After fully configuring the correlation alert, make sure it is active and running (Setup > Tools and Views> Anomaly Detection)
An alert is a message indicating that an exception (correlation alert) or policy rule violation (real-time alert) was detected.
A correlation alert is triggered by a query that looks back over a specified time period to determine if an alert threshold has been met.
1. Create a custom query from Exceptions Tracking with a field of SQL Errors (with a count) and a condition of application users. In order to use this custom query in
the Alert Builder, a date field (timestamp) is required.
2. Click Protect > Database Intrusion Detection > Alert Builder to open the Alert Finder.
3. Click on New. Complete the fields per the instructions after the Alert Builder menu screen.
4. Add Receiver.
Procedure
1. Exceptions Tracking - Open the Query Finder
Users: Select Tools > Report Building, and then select the Exceptions domain only.
2. Open the drop-down choices for Query. Select SQL Errors. This opens a configuration screen with SQL Errors as the main title.
3. Clone this selection, typing in a unique name in the text box for the query. Do not include apostrophe characters in the query name.
4. In your custom query, under Query fields, from Client/Server entity list, add a date field (timestamp) and change the database error text field to count field mode.
Under Query conditions, change the run time parameters of exception types to attribute and choose Exception.App. User Name.
5. Click Save. This custom query for SQL Errors from any application user is now available for use in the Alert Builder.
Troubleshooting tip: If a custom query has been created in any Query Builder in Report Building, and it does not appear in the Query list, then make sure that the
custom query has a timestamp (date field).
18. If the selected query contains run-time parameters, a Query Parameters panel will appear in the Alert Definition pane. Supply parameter values as appropriate for
your application.
19. In the Accumulation Interval box, enter the length of the time interval (in minutes) that the query should examine in the audit repository, counting back from the
current time (for example, enter 10 to examine the last 10 minutes of data).
20. Mark the Log Full Query results box to have the full report logged with the alert.
21. If the selected query contains one or more columns of numeric data, select one of those columns to use for the test. The default, which will be the last item listed,
is the last column for the query, which is always the count of occurrences aggregated in that row.
22. In the Alert Threshold pane, define the threshold at which a correlation alert is to be generated, as follows:
In the Threshold field, enter a threshold number that will apply as described by the remaining fields in the panel.
From the Alert when value is list, select an operator indicating how the report value is to relate to the threshold to produce an alert (greater than, greater
than or equal to, less than, etc.).
Select per report if the threshold number applies to a report total.
If there is no data during the specified Accumulation Interval: If the threshold is per report, the value for that interval is 0 (zero), and an alert will be generated if the
threshold condition is met (for example, if the condition specified is "Alert when value is < 1").
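The per-report threshold check described in step 22, including the zero-value behavior for an empty accumulation interval, can be sketched as follows; the function and operator table are illustrative assumptions, not Guardium internals:

```python
# Illustrative sketch of per-report threshold evaluation for a correlation
# alert. Names are assumptions for illustration, not Guardium APIs.
OPERATORS = {
    ">":  lambda value, threshold: value > threshold,
    ">=": lambda value, threshold: value >= threshold,
    "<":  lambda value, threshold: value < threshold,
    "<=": lambda value, threshold: value <= threshold,
}

def should_alert(report_rows, threshold, operator):
    """Return True if the aggregated report total crosses the threshold.

    An empty accumulation interval counts as a total of 0, so a condition
    such as "Alert when value is < 1" still fires when there is no data.
    """
    total = sum(report_rows)  # per report: test the aggregated count
    return OPERATORS[operator](total, threshold)

# More than fifteen SQL errors in the interval -> alert
print(should_alert([4, 7, 6], 15, ">"))
# No data in the interval, condition "< 1" -> alert on the zero total
print(should_alert([], 1, "<"))
```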
23. Indicate in the Notification Frequency box how often (in minutes) the Alert Receivers should be notified when the alert condition has been satisfied.
24. Click the Apply button to save the alert definition.
Note: You cannot assign receivers or roles, or enter comments until the definition has been saved.
25. In the Alert Receivers panel, optionally designate one or more persons or groups to be notified when this alert condition is satisfied. To add a receiver, click the Add
Receiver button to open the Add Receiver Selection panel. For information about adding receivers, see notifications.
26. Optionally click the Roles button to assign roles for the alert. See Security Roles.
27. Optionally click the Comments button to add comments to the definition.
28. Click the Apply button and then the Done button when you have finished.
If there are more than fifteen SQL errors in the last three hours by any application user, then an alert will be sent to the designated receiver.
Incident Management
The Integrated Incident Management (IIM) application provides a business-user interface with workflow automation for tracking and resolving database security
incidents.
It simplifies incident management by allowing administrators to group a series of related policy violations into a single incident and assign them to specific individuals.
This reduces the number of separate policy violations that oversight teams need to review.
Incident generation processes can be defined and scheduled to read the policy violations log and generate new incidents. From an incident generation process, each
selected incident is:
In addition, policy violations can be assigned manually (by authorized users) to new incidents or existing incidents from the Policy Violations / Incident Management
report.
Once an incident has been generated, administrators and other users work with incidents from the Incident Management tab, which is included on both the admin and
user portals. From there, all other tasks can be performed (assign incidents, send notifications, assign status, and so forth).
The Incident Management functions can be accessed from the drill-down menus of the Incident Management reports. Each user may only have a subset of reports or
functions available, depending on the security roles assigned to the user account.
You can create your own copies of the Incident Management reports, but those copies will not have all of the capabilities available from the pre-configured reports on the
Incident Management tab. To assign incidents, severity codes, and so forth, use the reports on the Incident Management tab.
1. Click Comply > Tools and Views > Incident Generation to open Incident Generation Processes.
2. Click Add Process to open the Edit Incident Generation Process panel.
3. Select a query from the Query list. There are several restrictions that apply to queries used in an incident generation process. We suggest that you open the query in
the Query Builder to verify that it satisfies the following criteria:
The query must be from the Policy Violations domain.
The query must have the Add Count check box checked. See Queries for more information.
The main entity for the query must be the Policy Rule Violation entity.
The query fields for the query must not include a SQL string (from either the SQL entity or the Full SQL String attribute of the Policy Rule Violation entity).
4. Select a Severity for the incident (defaults to Info).
5. Optionally enter a Category for the incident (defaults to none).
6. Optionally enter a Threshold for generating the incident. The default is one, meaning every row returned by the query will generate an incident.
7. From the Assign to User list, select the user to whom the incident will be assigned.
8. Enter the From and To Dates for the query. For a scheduled query, use relative dates (for example: now -1 day and now).
9. Click Save to save the process definition. You cannot run or schedule the process until it has been saved.
10. To run the query now, click Run Once Now.
11. To schedule the query, click Modify Schedule to open the general-purpose scheduling utility.
A message is displayed when the change has been completed, and the Incident Management panel will be refreshed. If a new incident has been created, it will be
listed first in the Open Incidents report.
Assign to User
1. Double-click the incident to be assigned to another user, in one of the Incident Management reports.
2. Select Assign to user from the drill-down menu. When selected, this menu will be replaced by a new menu containing a list of users, and one additional option:
Unassign.
3. Select a user, or select Unassign to remove the current user assigned. When a user is assigned, the Status Description will be Assigned, and when unassigned the
Status Description will be Open.
A message is displayed when the change has been completed, and the Incident Management panel will be refreshed.
Change Severity
1. Double-click the incident on which the severity is to be changed, in one of the Incident Management reports.
2. Select Change Severity from the drill-down menu. When selected, this menu will be replaced by a new menu containing a list of severity codes: Info, Low, Med, and
High.
3. Select the new severity code.
A message is displayed when the change has been completed, and the Incident Management panel will be refreshed.
Notify
1. Double-click the incident a user is to be notified about, in one of the Incident Management reports.
2. Select Notify from the drill-down menu. When selected, this menu will be replaced by a new menu containing a list of users.
3. Select a user.
Change Status
1. Double-click the incident on which the status is to be changed, in one of the Incident Management reports.
2. Select Change Status from the drill-down menu. When selected, this menu will be replaced by a new menu containing a list of status codes:
ASSIGNED - Once an incident has this status, it cannot have additional policy violations added to it. To add policy violations, change the incident status back
to Open, add the violations, and then change the status back to Assigned.
CLOSED - Once an incident is marked Closed it cannot be modified, and is no longer listed.
OPEN - This is the initial status for a new incident.
3. Select the new status code.
A message is displayed when the change has been completed, and the Incident Management panel will be refreshed.
Add Comments
1. Double-click the incident to which comments are to be added, in one of the Incident Management reports.
2. Select Comments from the drill-down menu, to open the User Comment window. For instructions on how to add comments, see Comments.
Prerequisites
A security policy contains an ordered set of rules to be applied to the observed traffic between database clients and servers.
A policy violation is logged each time that a rule is triggered. Policy violations can be assigned to incidents, either automatically by a process, or manually by authorized
users (see Incident Management).
Summary of Steps
1. Click Comply > Tools and Views > Incident Generation to open Incident Generation Processes.
2. Edit Incident Generation Process (Query, Severity, Threshold, Scheduling).
3. Go to Incident Management tab for reports.
The Incident Management application provides a business-user interface with workflow automation for tracking and resolving database security incidents.
Incident generation processes can be defined and scheduled to read the policy violations log and generate new incidents. From an incident generation process, each
selected incident is:
In addition, policy violations can be assigned manually (by authorized users) to new incidents or existing incidents from the Policy Violations / Incident Management
report.
Once an incident has been generated, administrators and other users work with incidents from the Incident Management tab, which is included on both the admin and
user portals. From there, all other tasks can be performed (assign incidents, send notifications, assign status, and so forth).
The Incident Management functions can be accessed from the drill-down menus of the Incident Management reports. Each user may only have a subset of reports or
functions available, depending on the security roles assigned to the user account.
An incident generation process executes a query against the policy violations log, and generates incidents based on that query. By default, the definition and scheduling of
incident generation processes is restricted to users with the admin role.
Procedure
1. Click Comply > Tools and Views > Incident Generation to open Incident Generation Processes.
2. Click the Add Process button to open the Edit Incident Generation Process panel.
3. Select a query from the Query list. There are several restrictions that apply to queries used in an incident generation process. Open the query in the Query Builder to
verify that it satisfies the following criteria:
The query must be from the Policy Violations domain.
The query must have the Add Count checkbox checked. See Query Builder Overview (Queries) for more information.
The main entity for the query must be the Policy Rule Violation entity.
The query fields for the query must not include a SQL string (from either the SQL entity or the Full SQL String attribute of the Policy Rule Violation entity).
4. Select a Severity for the incident (defaults to Info).
5. Optionally enter a Category for the incident (defaults to none).
6. Optionally enter a Threshold for generating the incident. The default is one, meaning every "row" returned by the query will generate an incident.
7. From the Assign to User list, select the user to whom the incident will be assigned.
8. Enter the From and To Dates for the query. For a scheduled query, use relative dates (for example: now -1 day and now).
9. Click Save to save the process definition. You cannot run or schedule the process until it has been saved.
10. To run the query now, click Run Once Now.
11. To schedule the query, click Modify Schedule to open the scheduling utility. For instructions on how to use the scheduler, see Scheduling.
12. Assign/Reassign to Incident - Double-click on the policy violation to be assigned or reassigned, in one of the Incident Management reports.
13. Select Assign/Reassign to Incident from the drill-down menu. When selected, this menu will be replaced by a new menu containing a list of open incidents (for
example, Assign to Incident #123), and one additional option: Assign to a new incident.
14. Select an incident to assign this violation to, or select Assign to a new incident to assign this Policy Violation to the next incident number available (they are
numbered in sequence).
A message displays when the change has been completed, and the Incident Management panel will be refreshed. If a new incident has been created, it will be
listed first on the Open Incidents report.
From the Incident Policy Violations / Incident Management report, users can:
A message displays when the change has been completed, and the Incident Management panel will be refreshed.
18. Change Severity - Double-click on the incident on which the severity is to be changed, in one of the Incident Management reports.
19. Select Change Severity from the drill-down menu. When selected, this menu will be replaced by a new menu containing a list of severity codes: Info, Low, Med, and
High.
20. Select the desired severity code.
A message displays when the change has been completed, and the Incident Management panel will be refreshed.
Once a policy violation has been assigned to an incident the incident displays in the Open Incidents report. From the Open Incidents report, users can perform the
actions shown:
21. Notify - Double-click on the incident a user is to be notified about, in one of the Incident Management reports.
22. Select Notify from the drill-down menu. When selected, this menu will be replaced by a new menu containing a list of users.
23. Select a user.
24. Change Status - Double-click on the incident on which the status is to be changed, in one of the Incident Management reports.
25. Select Change Status from the drill-down menu. When selected, this menu will be replaced by a new menu containing a list of status codes:
ASSIGNED - Once an incident has this status, it cannot have additional policy violations added to it. To add policy violations, change the incident status back
to Open, add the violations, and then change the status back to Assigned.
CLOSED - Once an incident is marked Closed it cannot be modified, and is no longer listed.
OPEN - This is the initial status for a new incident.
26. Select the desired status code.
A message displays when the change has been completed, and the Incident Management panel will be refreshed.
27. Add Comments - Double-click on the incident to which comments are to be added, in one of the Incident Management reports.
28. Select Comments from the drill-down menu, to open the User Comment window. For instructions on how to add comments, see Commenting.
Each user portal displays a My Open Incidents report for that user. From the My Open Incidents report, users can perform the actions shown:
Query rewrite
The modification of queries happens transparently and on-the-fly, such that a user issuing queries seamlessly receives results based on rewritten SQL statements.
Query rewrite functionality is implemented through a combination of query rewrite definitions indicating how queries should be changed or augmented and a run-time
context indicating the specific circumstances where the query rewrite definitions should be applied.
Rewriting database queries on the fly allows administrators to implement several types of access control, as illustrated by the following examples.
Limiting access to rows by adding a WHERE clause: SELECT C from T becomes SELECT C from T WHERE [values]
Limiting access to columns by modifying the SELECT list: SELECT C1 from T becomes SELECT C2 from T
Restricting database activities by rewriting SQL statements to do nothing: SELECT EMAIL from T becomes SELECT++ EMAIL from T
Restricting what users can do by modifying query verbs (SELECT, INSERT, UPDATE, etc.): DROP TABLE T becomes UPDATE T SET [values]
Restricting what users can do by modifying query objects (TABLE, VIEW, COLUMN, etc.): SELECT C from T1 becomes SELECT C from T2
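The first two examples above can be sketched in Python. This regex-based rewrite is purely illustrative (Guardium parses SQL properly rather than applying regular expressions), and the predicate and column names are assumptions:

```python
import re

# Illustrative only: mirrors the row- and column-restriction examples above.
def add_where_clause(query, predicate):
    """Limit row access by appending a WHERE clause."""
    return f"{query} WHERE {predicate}"

def replace_select_list(query, columns):
    """Limit column access by replacing the SELECT list."""
    return re.sub(r"(?i)^SELECT\s+.+?\s+FROM", f"SELECT {columns} FROM", query)

print(add_where_clause("SELECT C from T", "tenant_id = 42"))
print(replace_select_list("SELECT C1 from T", "C2"))
```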
The ability to seamlessly rewrite database queries provides an extremely powerful and flexible form of access control that allows organizations to quickly address a wide
range of security concerns. For example, query rewrite definitions can be developed to accomplish any of the following:
enforcing security in multi-tenancy scenarios where multiple users and applications share a single database, but where not all users and applications should have
access to all data
exposing a database to a production environment for testing purposes without exposing the entire database
rapidly correcting critical security vulnerabilities while permanent solutions are developed at the database or application level
Please review the following sections to learn more about how query rewrite works and how to configure it for use within your Guardium environment.
Note: If the S-TAP is set for firewall_default_state=1, the default state for Query Rewrite, qrw_default_state=1 cannot be set at the same time.
Overview
Once query rewrite has been enabled on the S-TAP for supported database servers (see Enabling query rewrite), query rewrite functionality is implemented through three
policy rule actions:
These rule actions are installed as access policy rules. The access policy rules specify both query rewrite definitions that indicate how queries should be rewritten and a
run time context that indicates when those definitions should be applied.
Once query rewrite rules have been specified, sessions are handled as follows:
1. A SQL request triggers a QUERY REWRITE: ATTACH rule, and all subsequent activity in the session is watched by query rewrite.
2. While sessions are being watched by query rewrite, traffic is held at the S-TAP and the session information is checked against access policy rules.
3. If a query in the watched session matches a QUERY REWRITE: APPLY DEFINITION rule, the query is rewritten according to the definition and sent to the S-TAP.
4. The S-TAP releases the rewritten query to the database server.
5. When a QUERY REWRITE: DETACH rule is triggered, query rewrite stops watching activity for the remainder of the session or until another QUERY REWRITE:
ATTACH rule is triggered.
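The five-step session flow above can be modeled as a small state machine; the class name and rule predicates here are illustrative assumptions, not the S-TAP implementation:

```python
# Toy state machine mirroring the ATTACH / APPLY DEFINITION / DETACH flow.
class QueryRewriteSession:
    def __init__(self, attach_rule, apply_rules, detach_rule):
        self.attach_rule = attach_rule  # predicate: start watching the session?
        self.apply_rules = apply_rules  # list of (predicate, rewrite_fn) pairs
        self.detach_rule = detach_rule  # predicate: stop watching
        self.watching = False

    def handle(self, query):
        if not self.watching and self.attach_rule(query):
            self.watching = True        # step 1: ATTACH rule triggered
        if self.watching:
            if self.detach_rule(query):
                self.watching = False   # step 5: DETACH, stop watching
                return query
            for matches, rewrite in self.apply_rules:
                if matches(query):      # step 3: APPLY DEFINITION matched
                    return rewrite(query)
        return query                    # steps 2/4: released unchanged

session = QueryRewriteSession(
    attach_rule=lambda q: "EMPLOYEE" in q,
    apply_rules=[(lambda q: q.startswith("SELECT *"),
                  lambda q: q.replace("*", "EMPNO, LASTNAME"))],
    detach_rule=lambda q: q == "COMMIT",
)
print(session.handle("SELECT * from EMPLOYEE"))  # rewritten query
```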
Query rewrite is supported for the following database servers:
Oracle
DB2 (Linux and Unix only)
Microsoft SQL
For information about supported database servers and any associated restrictions, see Platforms supported for IBM Guardium 10.1. For detailed information about
database client support for query rewrite, contact IBM Guardium support.
Important: When query rewrite is watching a session, the sniffer is required to send engine verdicts to the S-TAP for each SQL request in the session. This process is
asynchronous and introduces latency between the sniffer and S-TAP. Create query rewrite rule conditions that avoid attaching to sessions for performance-sensitive or
trusted applications.
Parent topic: Query rewrite
This task guides you through the changes you need to make in your guard_tap.ini file.
Procedure
1. Open guard_tap.ini in a text editor.
2. Locate the parameter qrw_installed = 0 and change it to qrw_installed = 1. The parameter qrw_installed must be set to a value of 1 to enable query rewrite
functionality. Set qrw_installed = 0 to disable query rewrite functionality.
3. Save your changes to guard_tap.ini.
4. On the Guardium system, log in as the CLI user and restart the inspection engine using the restart_inspection_engines CLI command.
Results
Upon completion of this task, query rewrite functionality is enabled and will respond to policy rules that contain query rewrite actions.
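As a rough illustration of step 2, the following Python sketch flips the qrw_installed parameter in a guard_tap.ini-style file. The parameter name comes from the procedure above; the generic file handling is an assumption for illustration, not a supported editing method (edit the file directly as described):

```python
import re, tempfile, os

def set_qrw_installed(path, enabled):
    """Set qrw_installed = 1 (enable) or 0 (disable) in an ini-style file."""
    with open(path) as f:
        text = f.read()
    value = "1" if enabled else "0"
    new_text, count = re.subn(r"(?m)^qrw_installed\s*=\s*\d",
                              f"qrw_installed = {value}", text)
    if count == 0:  # parameter missing: append it at the end
        new_text = text.rstrip("\n") + f"\nqrw_installed = {value}\n"
    with open(path, "w") as f:
        f.write(new_text)

# usage on a throwaway file (tap_ip is a made-up sample parameter)
with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write("tap_ip=10.0.0.5\nqrw_installed = 0\n")
    path = f.name
set_qrw_installed(path, True)
print(open(path).read())
os.remove(path)
```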
Parent topic: Using query rewrite
Next topic: Creating query rewrite definitions
Procedure
1. Open Protect > Security Policies > Query Rewrite Builder.
2. Provide a unique and meaningful name for the query rewrite definition in the Name field.
3. Create and parse a model query.
a. Provide a model query in the Enter a model query field.
For example, to create a rewrite definition preventing the use of SELECT * from statements, enter SELECT * from EMPLOYEE as a model.
b. Click the DB Type menu and select a SQL parser to use with the model query.
c. Click Parse to process the model query.
Your model query will be broken down into individual components with each actionable component highlighted with underlined text.
Options:
Select and modify an individual verb, field, or object from the parsed query
Add a component to the query (shown as gray underlined text next to the parsed query)
Rewrite the entire query by clicking the gray underlined [R] next to the parsed query
In the example SELECT * from EMPLOYEE where we want to prevent the use of SELECT * from statements, click the * to provide rewrite content.
For example, to prevent the use of SELECT * from statements, replace the * component with a list of specific objects: EMPNO, FIRSTNME, MIDINIT,
LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB, EDLEVEL, SEX.
Important:
Rewrite definitions are based on syntax, so any statement with the form SELECT * from [OBJECT] will match the example. For instance, both SELECT *
from DEPARTMENT and SELECT * from EMPLOYEE statements match our example.
Query rewrite definitions can be restricted to specific objects using access policy rules. See Defining a security policy to activate query rewrite for
instructions.
c. Click Save to save the rewrite definition, then click Back to close the dialog.
5. Review the output of the query rewrite definition using the Real time preview field and make any changes as needed.
Using our example, SELECT * from EMPLOYEE is rewritten as SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE, JOB,
EDLEVEL, SEX from EMPLOYEE.
6. When you are satisfied with the results, click Save to save your query rewrite definition.
Your query rewrite definition is saved and displayed in the list of available query rewrite definitions in the Query Rewrite Builder.
What to do next
Continue working with query rewrite definitions:
Create additional definitions by clicking New and repeating the steps in this task.
Edit an existing query rewrite definition by double-clicking an item in the list of available query rewrite definitions.
Copy and edit an existing query rewrite definition by selecting the item in the list of available query rewrite definitions and clicking Clone.
Delete an existing query rewrite definition by selecting the item in the list of available query rewrite definitions and clicking Delete.
When you are finished working with query rewrite definitions, continue to the next step in this sequence to test and implement your definitions.
Parent topic: Using query rewrite
Previous topic: Enabling query rewrite
Next topic: Testing query rewrite definitions
Related tasks:
Defining a security policy to activate query rewrite
Procedure
1. Open Protect > Security Policies > Query Rewrite Builder.
2. Click Set Up Test to open a dialog and select query rewrite definitions for testing.
a. Drag and drop items from the Available query rewrite definitions field to the Test query rewrite definitions field.
b. Drag and drop items within the Test query rewrite definitions field to order multiple definitions as you would within an access policy.
c. Click Save to close the dialog when you are finished.
3. Type or paste test queries into the test field.
For example, to test a rewrite definition preventing the use of SELECT * from statements (see Creating query rewrite definitions), enter sample queries such as:
4. Click Run Test to process the sample queries and review the results.
For example, the sample queries provided in the previous step return the following results:
Rewrite definitions are based on syntax, so any statement with the form SELECT * from [OBJECT] will match the example. For instance, both SELECT * from
DEPARTMENT and SELECT * from EMPLOYEE statements match our example.
Query rewrite definitions can be restricted to specific objects using access policy rules. See Defining a security policy to activate query rewrite for instructions.
5. Continue entering sample queries to test your rewrite definitions. Click Set Up Test to change or reorder the rewrite definitions used for the test.
What to do next
When you are satisfied with the test results, create a security policy to begin using your query rewrite definitions with live queries.
Parent topic: Using query rewrite
Previous topic: Creating query rewrite definitions
Next topic: Defining a security policy to activate query rewrite
Related tasks:
Defining a security policy to activate query rewrite
Creating query rewrite definitions
Procedure
1. Open Protect > Security Policies > Policy Builder.
2. Create a new policy or modify an existing policy to use your query rewrite definitions.
Tip: Consider creating a new policy for testing query rewrite definitions. Add your rewrite rules to existing security policies once you are satisfied with the behavior
of the test policy.
3. Click Edit Rules to begin adding rewrite rules to the selected policy, then select Add Rules > Add Access Rule.
Note: Query rewrite rules are always classified as access rules.
4. Add a rule with a QUERY REWRITE: ATTACH rule action. Be sure to check the Continue to next rule checkbox. This rule identifies the specific session parameters
that must be matched in order to trigger a query rewrite session, for example a specific database user name or client IP address.
5. Add a rule with one or more QUERY REWRITE: APPLY DEFINITION rule actions and select the query rewrite definition(s) you would like to apply. This rule identifies
the specific objects or commands that must be matched in order to apply the rewrite definitions and modify the source query.
For example, you can limit the data that displays back to a user when a SELECT * from EMPLOYEE query is issued. To do so, set the Object field to EMPLOYEE and
create a query rewrite definition to replace the * with a list of defined columns for the data you want the user to have access to.
6. Add a rule with a QUERY REWRITE: DETACH rule action. This detaches the query rewrite session and prevents further monitoring of session traffic. The conditions
set for the detach rule should not be the same as the attach rule.
7. To install the new policy, return to the Policy Finder, select your security policy, and choose Select an installation action > Install and Override. Click OK when asked
to confirm installation of the policy.
8. Log in to your database server and run test queries to verify that your access policy rewrite rules are functioning as intended.
a. Log in to your database server.
b. Issue queries that should trigger (or should not trigger) the installed access policy rules and match the criteria of your query rewrite definitions.
For example, if you set the Object to EMPLOYEE and you issue SELECT * from EMPLOYEE, you should only see results for the columns you defined for * in
the query rewrite definition. In contrast, if you issue a SELECT * from DEPARTMENT, you should see all column data returned for the DEPARTMENT object.
Procedure
1. Open Reports > Report Configuration Tools > Query Builder
2. Select Query Rewrite from the Domain menu.
5. Select one of the available options from the Main Entity menu.
Include the following items as a starting point for a query rewrite report:
Client/Server: Timestamp
Client/Server: DB User Name
Client/Server: Server Type
Query Rewrite Log: Applied QR Definition Names
Query Rewrite Log: Input SQL
Query Rewrite Log: Output SQL
8. Click Save when you are done building your report.
9. Click Create Report to create the report.
10. Click Add to My Custom Reports to add the report to your custom reports.
11. Open Reports > My Custom Reports and select the report you created to view a report of query rewrite actions.
Groups: Guardium uses the concept of groups for policy and report creation.
Guardium groups are created and maintained on the Guardium collector or Central Manager. Do not confuse Guardium groups with file system groups.
It is recommended that you consider a naming strategy for your groups, including groups of data sources (file servers), groups of files (such as by sensitivity level or a combination of sensitivity level and application), and groups of users (a list of all known users, "authorized" users, users with special privileges).
An overly broad rule (a rule that monitors too many files) can overload the system and increase processing and response time.
A FAM rule can have more than one pattern in it. To protect both a directory and its contents, define a rule with two patterns /FAMtest/* and /FAMtest.
A group comprising file paths: each path must be unique irrespective of case. For example, the paths C:\ABC and C:\abcdef can coexist in a group, but C:\ABC and C:\abc cannot, because the Group builder is not case sensitive. You do not need to enter members in all uppercase or all lowercase characters. However, UNIX is case sensitive, so the path /IBM/Guardium is different from the path /ibm/guardium; if you want to monitor both of these paths, the current Group builder has a limitation and cannot treat them as two distinct paths.
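The limitation above amounts to requiring that group members be unique when compared case-insensitively, which can be sketched as:

```python
# Sketch of the Group builder constraint described above: file-path members
# must be unique ignoring case, so /IBM/Guardium and /ibm/guardium collide.
def can_coexist(paths):
    """Return True if all paths are unique when compared case-insensitively."""
    lowered = [p.lower() for p in paths]
    return len(lowered) == len(set(lowered))

print(can_coexist([r"C:\ABC", r"C:\abcdef"]))           # differ ignoring case
print(can_coexist([r"C:\ABC", r"C:\abc"]))              # collide
print(can_coexist(["/IBM/Guardium", "/ibm/guardium"]))  # UNIX paths still collide
```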
The ordering of rules in the security policy is very important. The rules are sent to the S-TAP as a set and are processed strictly in order. Any given user activity is
checked against each rule in the policy in order. The first rule that meets the criteria of this file access is applied and subsequent rules are ignored. In most cases,
put the most specific rule first and the most general rule last. For example, you have two rules:
Rule A: audit only all access to /data/*
Rule B: block, log violation and audit user 'joe' from accessing /data/salaries
If you put Rule A first, and Joe tries to read /data/salaries, there is no need to go to the next rule, and Joe will be audited. If you put Rule B first, Joe is blocked from
accessing /data/salaries and there is no need to go to the next rule.
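The first-match evaluation described above can be sketched as follows; the rule representation is an illustrative assumption, not the actual S-TAP rule format:

```python
# First-match policy evaluation: rules are checked in order, and the first
# rule whose criteria match this file access determines the action.
def evaluate(rules, user, path):
    for rule in rules:
        if rule["matches"](user, path):
            return rule["action"]
    return "ignore"  # no rule matched

rule_a = {"matches": lambda u, p: p.startswith("/data/"), "action": "audit"}
rule_b = {"matches": lambda u, p: u == "joe" and p == "/data/salaries",
          "action": "block"}

# Rule A first: Joe's access matches A before B is ever checked
print(evaluate([rule_a, rule_b], "joe", "/data/salaries"))
# Rule B first: the more specific rule wins, so Joe is blocked
print(evaluate([rule_b, rule_a], "joe", "/data/salaries"))
```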
Behavior of FAM when using a pre-10.1.2 S-TAP (no multi-action support) with a 10.1.2 or higher Sniffer (includes multi-action support)
If using a pre-10.1.2 S-TAP with a new 10.1.2 Sniffer/UI with multi-action rule, blocking is implemented correctly since this action is on the S-TAP side.
On the Sniffer side, the actions are cumulative across all actions specified.
For example, if you select Audit Only for READ command and Block, Log Violations and Audit for DELETE command, then the DELETE command is blocked, but not
the READ command. However, both the READ command and the DELETE command trigger audit, log violations and alerts even though the READ command was
Audit Only.
In the other case, where a 10.1.2 S-TAP is used with a pre-10.1.2 Sniffer/UI, there is no issue, because a multi-action rule cannot be defined (there is no UI or GuardAPI support for it).
Rule attributes
Rule name
A unique name
Datasource
Rule Action
The rule action is the action taken when the criteria are met. A rule has either:
One action for any file access that matches the rule criteria, or
A multi-action rule comprising multiple actions, each tied to a specified command category or a specified group. Note that Continue to next rule is not supported when using multi-action rules.
The available actions are:
Alert and audit: Send an alert generated directly from the sniffer with the specified behavior, and log the event.
Audit only: Log the event in GDM tables.
Block, log violation, and audit: Block access to the object, log a policy violation, and log the event. A blocking action also requires an alert configuration.
Ignore: Take no action.
Log as violation and audit: Log this as a policy violation and log the event.
Access commands: Because there are hundreds of file system commands, they are grouped into these categories:
Read
Write
Execute
Delete
File Operation, which includes any calls that affect file metadata, such as changing file ownership, changing file permissions, and similar calls.
These categories are fixed in the system and cannot be changed. However, you can create a Guardium group that contains any combination of categories, and use
that group in the security policy. For example, you can create a Guardium group that contains Write and Execute as members.
If you leave the command unspecified, all file system commands are counted as a match. Some calls, such as get system time, do not affect files at all and are
ignored.
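The category grouping above can be pictured as a lookup table. The command names and the mapping below are illustrative assumptions, not Guardium's actual internal tables.

```python
# Hedged sketch: grouping file system commands into the fixed FAM categories.
# Command names and the mapping are hypothetical examples.
CATEGORY = {
    "open_read": "Read",
    "write":     "Write",
    "exec":      "Execute",
    "unlink":    "Delete",
    "chmod":     "File Operation",   # metadata change
    "chown":     "File Operation",   # metadata change
}

def categorize(command):
    # Commands that do not touch files (e.g. get system time) map to None
    # and are ignored; a rule with no command specified matches every category.
    return CATEGORY.get(command)

print(categorize("chown"))          # File Operation
print(categorize("gettimeofday"))   # None (ignored)
```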
Rule criteria
For any given file access, rule criteria are used to evaluate whether a particular action should be taken. For any datasource or group of datasources (file servers), the
rule criteria that you can specify include:
User: The OS user who is accessing files. This can also be a group of users, as defined in a Guardium group. If this is left blank then the rule applies to all users
(except root).
File Path: This can be a Windows or UNIX file path, an individual file path, or a group of file paths, as defined in a Guardium group. This cannot be blank (except
when removable media is selected). You can also select to monitor the subdirectories in the file path.
Tip: Wild cards take extra processing. Excessive use of wild cards impacts performance.
UNIX example:
Directory: /
File name pattern: FAM*
The path /guardium/modules/SUPERVISOR/10.0.0/FAM.output matches: the file name, FAM.output, matches the pattern FAM*, and the file is located in a subdirectory of the given directory /.
Windows example: You must specify the drive, such as C:\. To monitor all files on the C drive, enter C:\ and mark the Monitor subdirectories checkbox.
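The UNIX example above can be sketched as a two-part check: is the file under the monitored directory, and does its base name match the pattern? Python's fnmatch stands in for the matcher here; Guardium's own wildcard handling may differ in details.

```python
# Sketch of the UNIX wildcard example: directory "/" with subdirectories
# monitored, file name pattern "FAM*". Illustrative only.
from fnmatch import fnmatch
import os

directory, name_pattern = "/", "FAM*"
path = "/guardium/modules/SUPERVISOR/10.0.0/FAM.output"

# "Monitor subdirectories" means the file may sit anywhere under the
# directory; the name pattern is applied to the base name only.
in_subtree = path.startswith(directory)
name_matches = fnmatch(os.path.basename(path), name_pattern)
print(in_subtree and name_matches)   # True
```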
Examples of multi-action rule definitions:
policy1 -> rule1 -> "DELETE" -> "Alert and Audit" -> "SYSLOG"
policy1 -> rule2 -> "READ" -> "Alert and Audit" -> "MAIL"
policy1 -> rule1 -> "DELETE, READ" -> "Alert and Audit" -> "SYSLOG"
policy1 -> rule1 -> "WRITE" -> "Alert and Audit" -> "MAIL"
Another action can be added using commandGroupId, assuming, for example, that commandGroupId=20000 exists and contains "DELETE, WRITE".
Procedure
1. On a standalone or managed unit, access the FAM policy builder: navigate to Protect > Security Policies > Policy Builder for Files.
2. Enter a name for the new policy. (You can save the policy once a rule is defined.)
3. To add existing rules to the policy:
a. Click Show Templates. The Rule Templates table opens.
b. Optionally filter the list with the filter function.
c. Click the icon, change the name, modify the other attributes as relevant, and click Save.
6. Change the order of the rules by using the icons provided.
Creating a FAM policy rule from the Investigative Dashboard Entitlements tab
You can use the monitored data, such as datasource names, user names, actions, and file paths, in the Investigation Dashboard Results Table to create policy rules.
Procedure
1. Choose File from the dropdown list in the product banner and click the search icon to open the Investigation Dashboard for file data.
2. Open the Results Table Entitlements tab. Click Details to see individual entries.
3. Choose one or more entries in the results that you want to use to populate a rule. You can use the Select all check box to include all the entries that are currently
displayed (not all the entries in the database).
4. Right-click and choose Add Policy Rule. The Build Rule dialog opens with values from the entries that you selected. If you selected multiple entries, a group is
created that contains the values from those entries. You can create a rule that is to be added to an existing policy, or create a new policy that includes your new rule.
Note: An overly broad rule (a rule that monitors too many files) will overload the system and increase processing and response time.
Note: A FAM rule can have more than one pattern. To protect both a directory and its contents, define a rule with two patterns: /FAMtest/* and /FAMtest.
Note: When using a group to define monitored file paths in a FAM policy, keep case sensitivity in mind; otherwise the group cannot be created successfully. The Group builder is not case sensitive, so it is not required to input members with all upper case characters or all lower case characters, and members that differ beyond case alone (for example, C:\ABC and C:\abcdef) can be created. Members that are identical except for case (for example, C:\ABC and C:\abc) cannot be created in the same group. On UNIX, which is case sensitive, the path /IBM/Guardium is different from the path /ibm/guardium; the Group builder will not treat them as two distinct paths, so the workaround for monitoring both is to create two different FAM policy rules.
5. Choose datasources, actions, and criteria. Overwrite any values that you want to change. Click Edit to modify each field.
6. To create a new policy and install it, click Create and Install. To create the policy but not install it, click OK.
Automate and integrate the following audit activities into a compliance workflow:
Group multiple audit tasks (reports, vulnerability assessments, and so on) into one process.
Schedule these processes to run on a regular basis.
Run these tasks in the background.
Write the task results to a comma-separated value (CSV) file or ArcSight Common Event Format (CEF) file, and/or forward the results to other systems using syslog.
Add comments and notations.
Assign the process to its originator for viewing (the originator gets a new item in their To-Do list once the result is ready).
Assign the process to other users, a group of users, or a role.
Require that these assignees sign off on the result.
Allow escalation of the result (assignment to someone outside of the original audit trail).
Transform the management of database security from time-consuming, periodic manual activities into a continuous, automated process that supports company privacy and governance requirements, such as PCI-DSS, SOX, Data Privacy, and HIPAA.
Export audit results to external repositories (syslog, CSV/CEF files, external feed) for additional forensic analysis.
The Audit Process Log report shows a detailed activity log for all tasks, including start and end times. This report is available to admin users via the Guardium® Monitor
tab. Audit tasks show start and end times; however, for Security Assessments and Classifications (which go to a queue), the start and end times are the same.
The results of each workflow process, including the review, sign-off trails, and comments can be archived and later restored and reviewed through the Investigation
Center.
A process definition
A distribution plan, which:
Defines receivers, who can be individual users, user groups, or roles. (See Process Receivers.)
Defines the review/sign responsibility for each receiver.
Defines the distribution sequence by setting the Continuous flag.
A set of tasks (see Process Task Types)
A schedule - The audit process can be run immediately, or a schedule can be defined to run the process on a regular basis
Reports, custom or predefined. Guardium provides hundreds of predefined reports, including more than 100 regulation-specific reports.
Security assessment report: The security database assessment scans the database infrastructure for vulnerabilities and provides an evaluation of database and data security health, with both real-time and historical measurements. It compares the current environment against preconfigured vulnerability tests based on known flaws and vulnerabilities, grouped using common database security best practices (such as STIG and CIS), and can also incorporate custom tests. The application generates a Security Health Report Card, with weighted metrics (based on best practices), and recommends action plans to help strengthen database security.
An entity audit trail: A detailed report of activity relating to a specific entity (for example, a client IP address or a group of addresses).
A privacy set: A report detailing access to a group of object-field pairs (for example, a Social Security number and a date of birth) during a specified time period.
A classification process: The existing database metadata and data are scanned, reporting on information that may be sensitive, such as Social Security numbers or credit card numbers.
An external feed: Data can be exported to an external specialized application for further forensic analysis.
Note: The Optional External Data Feed is an optional component enabled by product key. If this feature has not been enabled, this choice will not appear in Audit
Task selection and the Feed Type list will be empty.
Workflow Automation (audit processing) for the Aggregator server now includes the capability to create ad-hoc databases for each Aggregator task and specify only the
relevant days for that task.
Note: The ad-hoc databases for the Aggregation server may be kept in the system for up to 14 days (depending on the value of the CLI command, drop_ad_hoc_audit_db)
for post-run analysis by Guardium support services if required.
When defining reports in an audit process, the number of days covered by the report (defined by the FROM-TO fields) should not exceed a certain threshold (one month by default). If this threshold is exceeded, a run-time error results when the audit task runs on the Aggregator.
It is permissible to create an audit task with a FROM-TO range wider than the max_audit_reporting value (set in the CLI), because audit processes defined on the Aggregator may also run on managed collectors (when the aggregator is a manager), and audit tasks run on a collector unit have no max_audit_reporting limitation. So it is valid to save tasks beyond the allowed range, but you will get a run-time exception when the task executes on the Aggregator.
The audit report threshold can be configured using the CLI commands show max_audit_reporting and store max_audit_reporting. There is no warning message when a report is created with an invalid FROM-TO range. Instead, a fixed message appears in the Task Parameters panel of the Audit Process setup screen (Tools > Audit Process Builder; open Audit Tasks to display Task Parameters). The fixed message is:
On aggregators, only reports not exceeding the allowed time range (CLI: max_audit_reporting) will be executed.
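The threshold check described above can be sketched as a simple date-range comparison. The default of roughly 30 days and the date handling are assumptions for illustration; on a real system the value comes from show max_audit_reporting in the CLI.

```python
# Illustrative check of the max_audit_reporting threshold. The 30-day
# default is an assumption standing in for "one month".
from datetime import date

def exceeds_threshold(period_from, period_to, max_days=30):
    """True if the FROM-TO range is wider than the configured threshold."""
    return (period_to - period_from).days > max_days

# A task saved with a 60-day range is accepted at save time, but would
# raise a run-time error when executed on an aggregator:
print(exceeds_threshold(date(2024, 1, 1), date(2024, 3, 1)))   # True
```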
Note: When running a patch install, all audit processes are stopped.
Stop an audit process by invoking GuardAPI (place the cursor on any line and double-click for a drill-down) from the Comply > Tools and Views > Audit Process Log report.
When stopping an audit process, a user sees only the lines belonging to that user (just the tasks, not all the details), and can stop only their own audit processes. An admin user can see all the details and can stop anyone's audit processes.
Note:
Queries using a remote source cannot be stopped, nor can online reports using a remote source.
Stopping an audit process does not apply to Privacy Set or External Feed audit tasks: if these tasks have started, they finish even if the process is stopped.
Results Distribution
Audit process receivers will be notified via email and/or their To-Do list of pending audit process results. You can designate any receiver as a signer for a process, in which
case the results can optionally be held at that point on the distribution list, until that receiver electronically signs the results or releases them. Receivers can be individual
users, user groups, or roles.
There is also a button to delete any audit process results: on the Audit Process Finder screen, look for the Results button, next to the Run Once Now button (with choices of View or Delete).
Audit process results can be deleted, with tracking of who deleted the report. The audit-delete role is used to track and log when an audit process result has been deleted. Users with the audit-delete role, as well as admin users, can delete reports. Tracking is done through the User Activity Audit Trail report.
Note: Audit process results from remote sources are limited to 100,000 results. To go beyond that limit, use the CLI command store save_result_fetch_size (show save_result_fetch_size).
Process Receivers
If a group receiver is selected, and any workflow automation task uses the special run-time parameter ./LoggedUser in a query condition, the query will be executed
separately for each user in the group, and each user will receive only their results.
For example, assume that your company has three DBAs, and each DBA is in charge of a different set of servers. Using the Custom Data Upload facility, upload the areas of
responsibilities of each DBA (with server IPs) to the Guardium system, and correlate that to the database activity domain, and then use a report in this custom domain as
an audit task. If a user group that contains the three DBAs is designated as the receiver, each DBA will receive the report relevant for his or her collection of servers only.
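The per-user fan-out described above can be pictured as running the same query once per group member, with each member seeing only their own rows. The run_query helper and the activity data below are hypothetical stand-ins for a report whose condition references ./LoggedUser.

```python
# Conceptual sketch: a ./LoggedUser query condition executed separately
# for each member of a group receiver. Data and helper are invented.
ACTIVITY = [
    {"server_ip": "10.0.0.1", "dba": "alice"},
    {"server_ip": "10.0.0.2", "dba": "bob"},
    {"server_ip": "10.0.0.3", "dba": "alice"},
]

def run_query(logged_user):
    # Stands in for a report condition that references ./LoggedUser
    return [row for row in ACTIVITY if row["dba"] == logged_user]

group = ["alice", "bob", "carol"]
results = {user: run_query(user) for user in group}
print(len(results["alice"]), len(results["carol"]))   # 2 0
```

Each receiver gets only the rows filtered for them; a member with no assigned servers (carol here) receives an empty result set.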
If a group receiver is selected, and sign-off is required, each group member must sign the results separately (as explained earlier, each member of the group may be
looking at a different set of results).
A receiver can be solely an email address, in which case results are sent to that address. When entering an email address, you are required to enter a user that will be used to filter the data; this must be either the logged-in user or a user beneath the logged-in user in the data hierarchy.
If a role receiver is selected, only one user with that role will need to sign the results, and other users with that role will be notified when the results have been signed.
Note:
When a workflow event is created, every status used by that event can be assigned a role (meaning that, while in that status, events can be seen only by that role). When an event is assigned to an audit process, it is important that every role assigned to a status of that event has a receiver on the audit process. Otherwise, an audit result row can be put into a status where none of its receivers are able to see the row or change its status.
If this occurs, the admin user (who can see all events, regardless of their roles) can see the row and change its status. However, if data level security is on, the admin user may not be able to see this row; the admin user would need either to turn data level security off (from Global Profile) or to have the dataset_exempt role. It is important to configure the audit process so that all roles that must act on an event associated with the audit process are receivers of that process.
Email Notification
Optionally, receivers can be notified of new process results via email, and there are two options for distributing results via email:
Link Only  - The email notification will contain a hypertext link to the results stored on the Guardium system. For the link to work, you must access your mail from a
system that has access to the Guardium system. See the following section for more information about email links.
Full Results - A PDF file or generated CSV file containing the results is attached to the email, except for an escalation that specifies a receiver not included in the original distribution list, in which case no PDF or CSV file is attached. Take care when selecting the Full Results option, since sensitive and private data may be included in the PDF or CSV file. When running an audit process with a receiver that has Full Results with CSV checked, CSV files are not generated for tasks of type Assessment, Classifier, or External Feed; these task types also cannot generate CSV/CEF/PDF files for export. CSV files are generated only for tasks of type Report, Privacy Set, or Entity Audit Trail, and only if there is a receiver with Full Results via CSV checked.
Note: When viewing audit results, if a generated PDF already exists, a Recreate PDF button will appear for the user to recreate and download the regenerated PDF.
If you are accessing email from a location where you cannot normally access the Guardium system, the links will not work. For example, when out of the office, you
may have access to your email over the Internet, but not to your company's private network or LAN, where the system is installed.
If you have not accessed your email for a longer period of time than the report results are kept, those results will not be available when you click the link. For
example, if the results are kept for seven days but you have been on vacation for two weeks, your email may contain links to results older than seven days, and
those links will not work.
If the Continuous check box is marked, distribution continues to the next receiver on the list without interruption.
If the Continuous check box is cleared, distribution to the next receiver is held until the current receiver performs the required action (review or sign).
DBAs - All DBAs should receive their results at the same time, with each DBA receiving a different result set based on the server IPs associated with that DBA.
Only when ALL DBAs have signed should the DBA Manager see the results.
Only when the DBA Manager releases the report should the Auditors see the results.
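The DBA example above is a chain of held and continuous hand-offs, which can be sketched as a loop that stops at the first unsigned, non-continuous receiver. The data structure and distribute function are illustrative assumptions, not Guardium's distribution engine.

```python
# Sketch of the distribution sequence controlled by the Continuous flag:
# delivery moves past a receiver immediately when Continuous is set, and
# otherwise waits for that receiver's review/sign action.
def distribute(receivers, signed):
    """Return the receivers who have the results so far."""
    delivered = []
    for r in receivers:
        delivered.append(r["name"])
        if not r["continuous"] and r["name"] not in signed:
            break                     # held until this receiver signs
    return delivered

chain = [
    {"name": "DBAs",        "continuous": False},  # all DBAs must sign first
    {"name": "DBA Manager", "continuous": False},  # held until release
    {"name": "Auditors",    "continuous": True},
]
print(distribute(chain, signed=set()))             # ['DBAs']
print(distribute(chain, signed={"DBAs"}))          # ['DBAs', 'DBA Manager']
```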
In addition, CEF and CSV file output can be written to syslog. If the remote syslog capability is used, this will result in the immediate forwarding of the output CEF/CSV file
to the remote syslog locations. The remote syslog function provides the ability to direct messages from each facility and severity combination to a specific remote system.
See the remotelog (syslog) CLI command description for more information.
Each record in the CSV or CEF file represents a row on the report.
The exported file is created in addition to the standard task output; it does not replace it. These files are useful when you need to:
Integrate with an existing SIEM (Security Incident and Event Manager) in your infrastructure (Qradar, ArcSight, Network Intelligence, LogLogic, TSIEM, etc.).
Review and analyze very large compliance task results sets. (Task results sets that are intended for Web presentation are limited to 5,000 rows of output, whereas
there is no limit to the number of rows that will be written to an exported CSV or CEF file.)
Exported CSV and CEF files are stored on the Guardium system, and are named in the format:
process_task_YYYY_MMM_DD-HHMMSS.<csv | cef>
Where process is a label you define on the audit process definition, task is a second-level label that you can define for each task within the process, and YYYY_MMM_DD-
HHMMSS is a date-time stamp created when the task runs.
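The naming scheme above can be reproduced with a standard strftime format. The process and task labels below are placeholders for the labels you define on the audit process; the helper itself is illustrative.

```python
# Reconstructing the exported-file name format:
#   process_task_YYYY_MMM_DD-HHMMSS.<csv | cef>
from datetime import datetime

def export_name(process, task, ext, when=None):
    when = when or datetime.now()
    stamp = when.strftime("%Y_%b_%d-%H%M%S")   # YYYY_MMM_DD-HHMMSS
    return f"{process}_{task}_{stamp}.{ext}"

print(export_name("pci", "failed_logins", "csv",
                  datetime(2024, 3, 5, 14, 30, 12)))
# pci_failed_logins_2024_Mar_05-143012.csv
```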
You cannot access the exported CSV or CEF files directly on the Guardium system. Your Guardium administrator must use the CSV/CEF Export function to move these files
from the Guardium system to another location on the network. To access those files, check with your Guardium administrator to determine the location to which they have
been copied.
The fact that exported files are sent outside of the Guardium system has two important implications:
The release of these files is not connected to the results distribution plan defined for the audit process. These files are exported on a schedule defined by the
Guardium administrator.
Once the CSV/CEF Export function runs, all exported files will be available to anybody (Guardium user or not) who can access the destination directory defined for
the CSV/CEF Export operation. For this reason, your Guardium administrator may want to schedule additional jobs (outside of the Guardium system) to copy sets of
exported files from the Guardium CSV/CEF Export destination directory, to directories with appropriate access permissions.
Note: If observed data level security has been enabled, then audit process output (including files) will be filtered so users will see only the information of their assigned
databases. Files sent to an email receiver as an attachment will be filtered. However, files downloaded locally on the machine and then moved elsewhere using the Results
Export function are not subject to data level security filtering. See CSV/CEF Export later in this topic for further information on CSV/CEF Export.
The following summarizes what happens when exporting an audit process file to CSV/CEF/PDF: if a report is empty and the receiver's Approve if Empty setting is yes, the export is not affected, and empty files are still exported, regardless of whether the receiver's Full Details option includes the PDF check box.
How Zip for Email and Compress work for Audit Task Output
Zip for Email is the highest level of control for audit task export; it produces a set of CSV or CEF files in a single zip. PDF files are never zipped and never compressed.
Note: For CSV attachments, when Zip for Email is cleared, Compress can still be applied, and Compress can be set per task. Thus one audit task may send a .csv file while another sends a .csv.gz file in the same email.
With Zip for email checked (regardless of whether Compress is also checked), the attachment is one zip file of CSV files.
With Zip for email not checked, and Compress checked, the attachment is a set of csv.gz files.
With Zip for email not checked, and Compress not checked, the attachment is a set of csv files.
With Compress checked, Download All will be csv.gz.
With Compress cleared, Download All will be csv.
With Compress checked or cleared, Download displayed will still be csv.
With Compress checked, export of CSV/CEF files will be gzipped.
With Compress cleared, export of CSV/CEF files will not be gzipped.
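The bullets above form a small decision table for CSV attachments, which can be sketched as a function (PDF files, as noted, are never zipped or compressed). The wording of the return values is illustrative.

```python
# Decision-table sketch of the Zip for Email / Compress interaction
# for CSV attachments, as described in the bullets above.
def attachment_form(zip_for_email, compress):
    if zip_for_email:
        return "one .zip of CSV files"     # regardless of Compress
    return "a set of .csv.gz files" if compress else "a set of .csv files"

print(attachment_form(True,  True))    # one .zip of CSV files
print(attachment_form(False, True))    # a set of .csv.gz files
print(attachment_form(False, False))   # a set of .csv files
```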
SCAP is the Security Content Automation Protocol. AXIS is the Apache eXtensible Interaction System, which is used by QRadar.
Upon entering a subject, the system checks whether any variables (starting with %%) are present and ensures that all are valid variables.
For example, to define an assessment task in the Audit Process Builder, you must first go to the Security Assessment Builder to create assessment tests, and then to Datasource Definitions to identify the database(s) to be assessed. Save your work when creating the audit workflow and then go to the other tasks, or perform those other tasks first and then create the audit workflow process.
Add Receivers
1. In the Receiver column, select a receiver from the drop-down list of Guardium individual users, groups, or roles. If you select a group or a role, all members of the
group or users with that role will receive the results; and if signing is required, only one member or user will need to sign the results.
2. In the Action Required column, select one option:
Review (the default) - Indicates that this receiver does not need to sign the results.
Review and Sign - Indicates that this receiver must sign the results (electronically, by clicking the Sign Results button when viewing the results online).
1. Select title.
2. Enter an optional label for the file in the CSV/CEF File Label box. The default is from the Description for the task. This label will be one component of the generated
file name (another will be the label defined for the workflow automation process).
3. Mark either Export CSV file or Export CEF file.
Note: CEF file output is appropriate for data access domain reports only (Access, Exceptions, or Policy Violations, for example). Other domains like the Guardium
self-monitoring domains (Aggregation/Archive, Audit Process, Guardium Logins, etc.) do not map to CEF extensions.
4. If Export CEF file was selected, optionally mark the Write CEF to Syslog box to write the CEF records to syslog. If the remote syslog facility is enabled, the CEF file
records will thus be written to the remote syslog.
5. If the Compress box is checked, then the CSV/CEF files to be exported will be compressed.
6. If the Export PDF file box is checked, then a PDF file (with similar name as CSV Export file) for this Audit Task is created and exported together with the CSV/CEF
files.
Note: The Export PDF file will not be compressed, even if the Compress box in the previous step is checked.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Report radio button.
3. There are a number of choices for CSV/CEF File Label, Export CSV/CEF, Export PDF, Write to Syslog, and Compress. See Export a CSV or CEF File.
4. The PDF Options are: Report (the current results), Diff (the difference between an earlier report and a new report), and Reports and Diff (both).
Note: The PDF Options apply to both PDF attachments and PDF export files. The Diff result applies only AFTER the first time the task is run; there is no Diff if there is no previous result. The maximum number of rows that can be compared at one time is 5000. If the number of result rows exceeds the maximum, a message appears.
Workflow Builder
The formal sequence of event types created in Workflow Builder is managed by clicking the Events and Additional Columns button in the Audit Tasks window. This button appears only after an audit task has been created and saved. Configure these workflow activities when adding an audit task:
1. Create and save an audit task. After saving, the Events and Additional Columns button appears.
2. Click this button.
3. On the next screen, place a checkmark in the box for Event & Sign-off. The workflow created in Workflow Builder appears as a choice in Event & Sign-off.
4. Highlight this choice and Apply (save) your selection.
5. If additional information (such as company codes or business unit labels) is needed as part of the workflow report, add it in the Additional Columns section of the screen and then click Apply (save). To select predefined or created group columns, change the Type column to Group. When done, close the screen.
The Events and Additional Columns button appears in all audit tasks. Placing the cursor over the button displays an information balloon telling you whether the audit task has an Event or a Sign-off column linked to it.
Note:
If data level security at the observed data level has been enabled, then audit process output will be filtered so users will see only the information of their databases.
Under the Report choices within Add an Audit Task are two procedural reports, Outstanding Events and Event Status Transition. Add these two reports to two new audit tasks to show details of all workflow events and transitions. These two reports are not filtered (observed data level security filtering is not applied), and they are available by default in the list of reports only to the admin user and users with the admin role.
Clone an audit task - If you are cloning a process and you make changes to a cloned task before the cloned process is saved, the workflow associated with the original task is not cloned.
Deletion of an event status is permitted only if the status is not the first status of any event and is not used by any action. The validation provides a list of the events/actions that prevent the status from being deleted.
The owner/creator of a workflow event can always see all statuses of this event, regardless of what roles have been assigned to these statuses.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Security Assessment button.
3. Select a security assessment from the Security Assessment list.
4. The PDF Content options are Report (the current results), Diff (the difference between an earlier report and a new report), and Reports and Diff (both).
5. Click Apply.
Note:
If data level security at the observed data level has been enabled, then audit process output will be filtered so users will see only the information of their databases.
If a security assessment task is empty (for example, a security assessment with no roles), it does not appear in the drop-down list in the Audit Builder.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Entity Audit Trail button.
3. Select the type of entity to be audited. Depending on the type selected, you will be required to supply the following information:
Object: Enter an object name.
Object Group: Select an object group from the list.
Client IP: Enter a client IP address.
Client Group IP: Select a client IP group.
Server IP: Enter a server IP address.
Application User Name: Enter an application user name.
4. There are a number of choices for CSV/CEF File Labels, Write CEF to Syslog, Compress, and Export PDF. See Export a CSV or CEF File.
5. In the Task Parameters pane, supply run-time parameter values (only the From and To periods are required).
6. Click Apply.
Note: If data level security at the observed data level has been enabled, then audit process output will be filtered so users will see only the information of their databases.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Privacy Set button.
3. Select a privacy set from the Privacy Set list.
4. Select either Report by Access Details or Report by Application User to indicate how you want the results sorted and displayed.
5. There are a number of choices for CSV/CEF File Labels, Write CEF to Syslog, Compress, and Export PDF. See Export a CSV or CEF File.
6. Enter starting and ending dates for the report in the Period Start and Period End boxes.
7. Click Apply.
Note: If data level security at the observed data level has been enabled, then audit process output will be filtered so users will see only the information of their databases.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click the Classification Process button.
Note: If data level security at the observed data level has been enabled, then audit process output will be filtered so users will see only the information of their databases.
Note: If this feature is used in a Central Manager environment, the External Feed Patch must be installed on the Central Manager, and on all managed units on which the
task will run.
For more information about how the data is mapped from Guardium to the external application, refer to the documentation for the option that was purchased.
If you have not yet started to define a compliance workflow automation process, create a workflow process before performing this procedure.
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click External Feed.
3. Select a feed type from the Feed Type list.
4. The controls that appear next depend on the feed type selected. See Optional External Feed for additional information on specific External Feed Types.
5. Select an event type from the Event Type list.
6. Select a report from the Report list. Depending on the report selected, a variable number of parameters will appear in the Task Parameters pane.
7. In the Extract Lag box, enter the number of hours by which the feed is to lag, or mark the Continuous box to include data up to the time that the audit task runs.
8. In the Datasources pane, identify one or more datasources for the external feed.
9. Enter all parameter values in the Task Parameters pane. The parameters will vary depending on the report selected.
10. Click Apply.
Note: If there are outstanding events, the results cannot be signed from either the audit viewer or the To-do list. If there are outstanding events and an attempt is made to sign the results, a message appears explaining this.
Note: When viewing audit process results, if a result has events associated with it, the Sign Results button is not available on this result until all events are in a Final state
or cannot be seen by this user (due to data-level security).
Note: This report also contains a date, or Last Action Time, located in a column between Receiver and Status. The report shows not only that the result was signed by user AAA, but also when user AAA signed it.
Regardless of who is a receiver of an audit result, an escalation can involve any user on the system, provided the Escalate result to all users box is checked in the Setup > Tools and Views > Global Profile menu. A check mark in this box escalates audit process results to all users, even if data level security at the observed data level is enabled. The default setting is enabled. If the check box is cleared, audit process escalation is only allowed to users at a higher level in the user hierarchy. If the check box is cleared and there is no user hierarchy, no escalation is permitted.
Also, escalated results can differ between users depending on event permissions. For example, if the infosec user can only see events in status1 and the dba user can only see events in status2, the dba user receives a different result than the one the infosec user saw when clicking Escalate. It is possible that infosec escalates to dba, and dba receives an audit result with 0 rows in it.
1. If the compliance workflow automation results you want to forward are not open, open them now.
Note:
Audit process results cannot be escalated to a group of users, only to users or roles.
When escalating to a user who already has the result in their to-do list, a popup message appears, asking if an additional email should be sent. If yes, an additional email is sent to the user, but the to-do list is not incremented.
1. Open the Audit Process Builder by navigating to Comply > Tools and Views > Audit Process Builder.
2. Select the audit process from the Process Selection List.
3. Click Modify.
4. In the Audit Process Definition panel, mark the Active box to start running the process according to the schedule; or clear the Active box to stop running the process
(ignoring any schedule defined).
Note: If you are activating the process but there is no schedule, click Modify Schedule to define a schedule for running the process.
5. Click Save.
See the Compliance Workflow Automation topic for additional information on this subject.
Procedure
1. Open the Audit Process Finder by navigating to Comply > Tools and Views > Audit Process Builder.
2. Click the New button to open the Audit Process Definition panel.
The Audit Process Definition panel is divided into three sections: General, Receiver Table, and Audit Tasks.
3. Go to the General section. Enter a name in the Description box. Do not include apostrophe characters.
4. Check the Active box to associate a schedule with the process. At least one audit task must be defined before you can save the process.
5. Mark the Archive Results box if you want to store the results offline after the retention period has expired. When results have been archived, you can restore them
to the appliance for viewing again, later.
6. In the Keep for a minimum of (n) days or (n) runs boxes, specify how long to keep the results, as either a number of days (0 by default) or a number of runs (5 by
default). After that, the results will be archived (if the archive box is marked) and purged from the appliance.
7. If one or more tasks create CSV or CEF files, you can optionally enter a label to be included in all file names, in the CSV/CEF File Label box. These files can also be
compressed, or Zipped, by clicking on the Zip CSV for mail box to add a checkmark.
8. The Email Subject field in the Audit Process definition is used in the emails for all receivers for that audit process. The subject may contain one (or more) of the
following variables that will be replaced at run time for the subject:
%%ProcessName will be replaced with the audit process description
%%ExecutionStart will be replaced with the start date and time of the first task.
%%ExecutionEnd will be replaced with the end date and time of the last task.
After you enter a subject, Guardium checks whether any variable (starting with %%) is present and ensures that all such variables are valid.
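As an illustration of the substitution described above, the following sketch expands the %% variables in a subject string. This is not Guardium code; the class and helper names are invented for the example, and Guardium performs the expansion internally at run time.

```java
import java.util.HashMap;
import java.util.Map;

public class SubjectExpansion {
    // Replace each %% variable in the subject with its run-time value.
    static String expand(String subject, Map<String, String> values) {
        String result = subject;
        for (Map.Entry<String, String> e : values.entrySet()) {
            result = result.replace(e.getKey(), e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> values = new HashMap<>();
        values.put("%%ProcessName", "Weekly DBA Review");       // audit process description
        values.put("%%ExecutionStart", "2016-05-01 02:00");     // start of first task
        values.put("%%ExecutionEnd", "2016-05-01 02:45");       // end of last task
        System.out.println(expand("Results for %%ProcessName (%%ExecutionStart)", values));
        // prints: Results for Weekly DBA Review (2016-05-01 02:00)
    }
}
```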
9. Go to the Receivers section. Open the drop-down box and add the receivers for the process. See Add Receivers in the Compliance Workflow Automation topic for further information. Check marks determine the action required, additions to the To-do list, notification via email, and continuous distribution. Again, see Add Receivers for complete information on setting these choices. In this example, do not check the Continuous boxes for the receivers. If the Continuous checkbox is marked, distribution continues to the next receiver on the list without interruption. If the Continuous checkbox is cleared, distribution to the next receiver is held until the current receiver completes the required action (for example, signing the results).
After the report has run, distribution status can be observed from the report. In the example, the DBA has viewed and signed the report and the supervisor has not.
Distribution Status
The Audit Process Log report shows a detailed activity log for all tasks, including start and end times. This report is available by navigating to Reports > Guardium Operational Reports > Audit Process Log.
Open your Workflow Automation To-do List (see Audit Process To-Do List) and click View for the results set you want to view or sign.
If you have received an e-mail notification containing hypertext links to your To-Do List or the results, click one of those links to open your To-Do List or the results
directly from the e-mail. You must have access to the Guardium system at the location from which you are accessing your e-mail (or these links will not work). If you
are not logged in, you will be prompted to log in to the Guardium system.
Note: When you register a new managed unit to a central manager, you might be unable to view audit results. The application does not show results that have a timestamp
before the managed unit was registered to the central manager. The timestamp of the registration uses the central manager time, and the timestamp of the audit result
uses the managed node time. So, if the central manager time is ahead of the managed unit time, results generated on the managed unit are not visible until the managed
unit time passes the time of registration. This should happen in no more than 24 hours, possibly less depending on the locations of the two machines. You should be able
to view the results of audit processes on the managed unit within 24 hours of registration.
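The visibility rule in this note can be sketched as a simple timestamp comparison. This is illustrative only; the class name and times are invented, and the actual comparison happens inside Guardium between the managed-node clock and the central-manager clock.

```java
import java.time.Instant;

public class AuditResultVisibility {
    // A result produced on a managed unit becomes visible only once its
    // timestamp (managed-node clock) passes the registration timestamp
    // (central-manager clock).
    static boolean isVisible(Instant resultTime, Instant registrationTime) {
        return !resultTime.isBefore(registrationTime);
    }

    public static void main(String[] args) {
        Instant registration = Instant.parse("2016-05-01T12:00:00Z"); // CM clock at registration
        // Managed unit clock lags: result stamped before registration is hidden.
        System.out.println(isVisible(Instant.parse("2016-05-01T11:30:00Z"), registration)); // false
        // Once the managed unit time passes the registration time, results appear.
        System.out.println(isVisible(Instant.parse("2016-05-01T12:30:00Z"), registration)); // true
    }
}
```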
Parent topic: Building audit processes
Value-added: Set up a single audit process and distribute the appropriate results to the appropriate manager. This saves having to create separate audit processes for separate receivers.
For example, consider a large organization with fifteen DBA managers who need to review the activities of the DBAs they manage without viewing the activities of other managers' DBAs. One solution would be to set up fifteen separate audit processes, one for each manager. This would take a lot of time to configure and would be difficult to manage: each audit process must be scheduled separately, and any global change would need to be made individually in all fifteen audit processes.
The user group distribution method, on the other hand, permits the setup of a single audit process and distributes the appropriate results to each manager based on a manager/DBA mapping. This approach requires more upfront configuration but reduces maintenance time: only one audit process needs to be scheduled, and changes only need to be applied in one location.
User mapping
The first step in the process is to map the users to the data elements within Guardium that will be the basis for report distribution. The example used in this document is based on objects, but you can apply these concepts to any data element within Guardium.
Example: Three users have responsibility over three different sets of tables, based on audit requirements (PCI, HIPAA, and CCI) within a database server, as follows:
User01 db2inst1.cc_numbers
User01 db2inst1.ccn
User02 db2inst1.ADDRESSES
User02 db2inst1.SSN_NUMBERS
User02 db2inst1.G_CUSTOMERS
User02 db2inst1.G_EMPLOYEES
User02 db2inst1.G_FUNDS
User03 db2inst1.doctor
User03 db2inst1.medicare
User03 db2inst1.med_history
This table must be added as a custom table within Guardium, either manually or through a data upload. The following steps demonstrate how to create a custom table manually. The screens shown are from the admin user interface, but they can also be accessed from within the user interface.
1. Navigate to Reports > Report Configuration Tools > Custom Table Builder and press the Manually Define button.
2. At the Custom Table Builder screen, define the table layout. Make sure that Group Type matches the correct data element in Guardium. Press Apply and Back
when complete.
3. Press Edit Data to manually add the records. Note: if you have a large amount of data, choose Upload Data to import from an external data source.
4. Enter each combination of values and press Insert until you have added all of the required records.
1. Navigate to Reports > Report Configuration Tools > Custom Domain Builder. Highlight [Custom] Access and press Clone.
b. Highlight the table under Domain entities to which you would like to join the custom table.
c. Under Join condition choose the fields on each table on which to create the join and press Add Pair.
Custom Report
Next, create a report to distribute to the users.
1. Navigate to Reports > Report Configuration Tools > Report Builder and select the new domain from the Domain drop-down menu.
2. Press New.
User Group
Create a new group of Guardium Users based on the custom table.
1. Navigate to Setup > Tools and Views > Group Builder and create a new group with Guardium Users as the Group Type.
4. In the run-time parameter, enter the special tag ./LoggedUser. This causes the results to be distributed based on the custom mapping.
When the audit process completes, each receiver should receive a different result set based on the mapping.
There are several ways to open the Audit Process To-Do List, including:
The following steps describe how to use the Audit Process To-Do List:
1. Select the user whose To-Do list you want to open, either by opening up the drop-down menu or clicking Search Users. You will be informed if the list is empty.
2. As an administrator, you can perform any actions on any to-do list entry. Any actions you perform are logged, indicating that the action was performed on behalf of
the user by the administrator.
3. The choices available per to-do list entry are View, Download as PDF and Sign viewed results.
The selections for PDF Content are: Report (the current results), Diff (the difference between an earlier report and the new report), and Reports and Diff (both).
Note: The selection of PDF Content applies to both PDF attachments and PDF export files. The Diff result only applies after the first time this task is run; there is no Diff with a previous result if there is no previous result. The maximum number of rows that can be compared at one time is 5000. If the number of result rows exceeds the maximum, the message "compare first 5000 rows only" appears in the diff result.
4. Click on the icon of arrows circling to Refresh the set.
Note: To send files to an external server without sending email and without adding results to the to-do list, define an audit process without receivers. Also clear the to-do list checkbox in the Add Receiver section, and do not add any receivers in the receiver section, so that results are not added to the To-do list.
When a user accesses another user's results, the data presented in the report is filtered according to the Data Level Security and the role of the user selected (for example,
in the case of a custom workflow, the data is filtered according to the role of the user selected and the status defined for that role).
If a user with the admin role accesses a result of a user who is below them in the hierarchy, the behavior is as explained in the previous paragraph. If the administrator accesses a result of a user who is not below them in the hierarchy, the result is shown using the Data Level Security of the administrator and is shown for all roles.
When a result is added to a user's to-do list because of a change in the status of an event, and the result was not previously in the to-do list, an email is sent to the user. The email does not contain a PDF, just a notification and a link.
If a user goes to some other user's to-do list, a message will indicate which user is determining the DLS filtering.
All domains and their contents are described in the Domains, Entities, and Attributes appendix.
There is a separate query builder for each domain, and access to each query builder is controlled by security roles. Regardless of the domain, the same general-purpose query-builder tool is used to create all queries. For detailed instructions on how to build queries, see Queries.
In addition to the standard set of domains, users can define custom domains to contain information that can be uploaded to the Guardium appliance. For example, your company might have a table relating generic database user names (hr23455 or qa4872, for example) to real persons (Paula Smith, John Doe). Once that table has been uploaded, its contents can be combined with Guardium data in queries and reports.
Many customers have valuable information in many different databases in their environment. It is extremely useful for an audit report to correlate that information with Guardium data, and to make these reports easy and useful to understand. External Data Correlation allows you to create custom tables on the Guardium appliance for enterprise information that is needed in addition to the existing Guardium internal data. You can do this either manually within the GUI or based on an existing table on a database server. Queries and reports can then be created for this information just as if it were predefined data.
There is a distinction between a custom table, a custom domain, and a custom query.
For example, perhaps a table exists on a database server containing all employees, their database usernames, and the department to which they belong (for example, Development, Financial, Marketing, HR). If you upload this table and all its data, you can cross-reference it with Guardium's internal tables to see, for example, which employees from Marketing are accessing the financial database (which may constitute suspicious activity).
Custom Tables
A custom table contains one or more attributes that you want to have available on the Guardium appliance. For example, you may have an existing database table relating
encoded user names to real names. In the network traffic, only the encoded names will be seen. By defining a custom table on the Guardium appliance, and uploading
data for that table from the existing table, you will be able to relate the encoded and real names.
Before defining a custom table, first verify that the data you need from the existing database is a supported data type. A data type is supported if it is mapped to one of the following SQL types by the underlying JDBC driver: INTEGER, BIGINT, SMALLINT, TINYINT, BIT, BOOLEAN, DECIMAL, DOUBLE, FLOAT, NUMERIC, REAL, CHAR, VARCHAR, DATE, TIME, TIMESTAMP. The following table summarizes some of the supported and unsupported data types for uploading to a custom table.
Oracle: float, number, char, varchar2, date, nchar, nvarchar2, long, clob, raw, nclob, longraw, bfile, rowid, urowid, blob
DB2®: char, varchar, bigint, integer, smallint, real, double, decimal, date, time, timestamp, blob, clob, longvarchar, datalink
Sybase: char, nchar, varchar, nvarchar, int, smallint, tinyint, datetime, smalldatetime, text, binary, varbinary, image, timestamp
MS SQL: bigint, bit, char, datetime, decimal, float, int, money, nchar, numeric, nvarchar, text, real, smalldatetime, smallint, tinyint, smallmoney, varchar, uniqueidentifier
Informix®: char, nchar, integer, smallint, decimal, smallfloat, float, serial, date, money, text, varchar, nvarchar, datetime
MySQL: bigint, decimal, int, mediumint, smallint, tinyint, double, float, date, datetime, timestamp, time, year, char, binary, enum, set, longtext, tinyblob, tinytext, blob, text, mediumblob, mediumtext, longblob
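Since support is determined by the generic SQL type the JDBC driver reports, a quick pre-check of a column's type can be sketched against the java.sql.Types constants. This is an illustrative helper, not part of Guardium; the appliance performs its own validation during upload.

```java
import java.sql.Types;
import java.util.Set;

public class CustomTableTypeCheck {
    // The generic JDBC SQL types accepted for custom table uploads,
    // per the supported-type list above.
    static final Set<Integer> SUPPORTED = Set.of(
            Types.INTEGER, Types.BIGINT, Types.SMALLINT, Types.TINYINT,
            Types.BIT, Types.BOOLEAN, Types.DECIMAL, Types.DOUBLE,
            Types.FLOAT, Types.NUMERIC, Types.REAL, Types.CHAR,
            Types.VARCHAR, Types.DATE, Types.TIME, Types.TIMESTAMP);

    // jdbcType would typically come from ResultSetMetaData.getColumnType().
    static boolean isSupported(int jdbcType) {
        return SUPPORTED.contains(jdbcType);
    }

    public static void main(String[] args) {
        System.out.println(isSupported(Types.VARCHAR)); // true
        System.out.println(isSupported(Types.BLOB));    // false
    }
}
```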
Note: A blob value (even a value of 1K) in dynamic SQL can be captured, but a blob value of the same size in static SQL cannot be captured.
The Custom Table Data Purge screen has a checkbox for Archive. Checking this box results in the data of the custom table being included in the normal data archive.
This custom table data is archived according to the date in SQLGUARD_TIMESTAMP column of the custom table.
The data of the custom table can be archived from a collector or an aggregator.
The data of the custom table archived from a collector can be restored to any collector or aggregator managed by the same Central Manager as the source collector (the
metadata must be present).
The data of the custom table archived from an aggregator can be restored to any aggregator managed by the same Central Manager as the source aggregator.
If the archive file to be restored to a Guardium system does not have the metadata, then the data of the custom table is not restored.
If the structure of the custom table has changed between the time of archive and the time of restore in a way that results in an SQL error (for example, columns removed
or type changed), then a warning message appears on the aggregation/archive activity report and the data is not restored.
If a custom table is set to be purged by the default purge, then the restored data will be kept for the number of days specified on the restore screen.
If the custom table is set to overwrite data when it uploads, then restored data will be deleted at the time an upload is performed.
Custom Domains
A custom domain contains one or more custom tables. If it contains multiple tables, you define the relationships between tables when defining the custom domain.
Custom Queries
Note: Custom Tables uploaded to Guardium are optional components enabled by product key. If these components have not been enabled, the Custom Tables choices
listed will not appear in the Custom Table Builder selection.
Do not include any newline characters in the SQL statement. All columns must be explicitly named, using a column alias if necessary.
6. Click Add Datasource to open the Datasource Finder in a separate window. This allows you to define where the external database is located, and the credentials needed to retrieve the table definition and content later in the process.
7. Use the Datasource Finder to identify the database from which the table definition will be uploaded.
8. Click Retrieve to upload the table definition. This will execute the SQL Statement and retrieve the table structure. The SQL request is sent to the external database
from the Guardium system. Remember that only the definition is being uploaded and you can upload data later.
Invalid Queries
If you modify the definition of a custom table, you may invalidate existing reports based on queries using that table. For example, an existing query might reference an
attribute that has been deleted, or whose data type has been changed. It is a good idea to check for invalid queries after the table modification process.
Note: Changing the engine type is disallowed (and the selection greyed out) if the row number in the table is greater than 1M.
The other selection in the Maintain Custom Table menu is Manage Table Index. Click Insert to open Table Index Definition. The pop-up screen suggests columns in the
table to add to indexes based on columns used on custom domains as Join conditions. Select the columns and save. Indexes will be created (or re-created).
Note: New installations do not automatically start Enterprise reports. There is one upload schedule for each custom table. The total amount of disk space reserved on the
Guardium appliance for custom tables is 4GB.
The Enterprise reports custom uploads are like other jobs. There are two ways to enable them:
In the Custom Table Upload GUI (requires a license for custom upload)
Use GuardAPI from the CLI:
Note: DB Entitlements Domains are optional components enabled by product key. If these components have not been enabled, the choices listed in the Custom Domains
help topic will not appear in the Custom Domain Builder selection.
From the Domain Entities box, select an entity. All of the attributes of that entity will become available in the field drop-down list of the Domain Entities box.
Select the attribute from that list that will be used in the join operation.
From the Available Entities list, select the entity you want to add. All of the attributes of that entity will become available in the field dropdown list of the
Available Entities box. Select the attribute from that list that will be used in the join operation.
Select = (the equality operator) if you want the join condition to be equal (e.g., domainA.attributeB = domainC.attributeD). Select outer join if you want the
join condition to be an outer join using the selected attributes.
Click Add Field Pair. Add Field Pair can be used to add more attribute pairs from these two entities to the join condition.
Repeat the steps for any additional join operations.
Note: When data level security is on, internal entities added to the custom domain cannot belong to different domains with filtering policies.
8. Select the Timestamp attribute for the custom domain entity.
Note: At least one entity with a timestamp must be used, since a timestamp is required to save a custom domain.
9. Click Apply.
1. Open the Custom Query Builder by navigating to Comply > Custom Reporting > Custom Query Builder.
2. Select a custom domain from the list.
3. Click Search to open the Query Finder
4. To view, modify or clone an existing query, select it from the Query Name list, or select a report using that query from the Report Title list.
5. To view all of the queries defined for a specific custom table, select that custom table from the Main Entity list and click the Search button (only the custom tables
included in the selected custom domain will be listed).
A customer of the IBM Guardium product can use a bidirectional interface to transfer identified sensitive data information from one product to another. Those customers
who have already invested the time in one InfoSphere product can transfer the information to the other InfoSphere product.
Note: In IBM Guardium, the Classification process is an ongoing process that runs periodically. In InfoSphere Discovery, Classification is part of the Discovery process, which usually runs once.
Export from Guardium - Run the predefined report (Export Sensitive Data to Discovery) and export as CSV file.
Import to Guardium - Load to a custom table against CSV datasource; define default report against this datasource.
1. As an admin user in the Guardium application, go to Tools > Report Building > Classifier Results Tracking > Select a Report > Export Sensitive Data to Discovery.
Note: Add this report to the UI pane (it is not there by default).
2. Click the Customize icon on the Report Result screen and specify the search criteria to filter the classification results data to transfer to Discovery.
3. Run the report and click Download All Records.
4. Save as CSV and import this file to Discovery according to the InfoSphere Discovery instructions.
Import to Guardium
1. Export the classification data as CSV from InfoSphere Discovery based on InfoSphere Discovery instructions.
2. Open the Custom Table Builder by navigating to either of the following:
Comply > Custom Reporting > Custom Table Builder
Reports > Report Configuration Tools > Custom Table Builder
Select ClassificationDataImport and click Upload Data.
3. In the Upload Data screen, click Add Datasource, click New, and define the CSV file imported from Discovery as a new datasource (Database Type = Text).
Note: Alternatively you can load the data directly from Discovery database if you know how to access the Discovery database and Classification results data.
4. After defining the CSV as a datasource, click Add in the Datasource list screen.
5. In the Upload data screen, click Verify Datasource and then Apply.
6. Click Run Once Now to load the data from the CSV.
7. Go to Report Builder, select the Classification Data Import report, click Add to Pane to add it to your portal, and then navigate to the report.
8. Access the report, click Customize to set the From/To dates, and run the report.
The report result contains the classification data imported from InfoSphere Discovery. Double-click to invoke APIs assigned to this report. The data imported from Discovery can be used for the following:
Type: DB2
Host: 9.148.99.99
Port: 50001
Datasource URL:
TableName: MK_SCHED
ColumnName: ID_PIN
ClassificationName: SSN
Privacy Sets
A privacy set is a collection of elements that can be used to do special monitoring.
It consists of one or more object-field pairs - for example, the salary field of the employee table, or all fields of the salary history table. All access to these elements within
a given timeframe can be reported.
1. Open the Identify Privacy Set panel by navigating to one of the following:
Comply > Tools and Views > Privacy Set Builder
Discover > Database Discovery > Privacy Set Builder
2. Do one of the following:
Click the New button to define a new privacy set (see Create a Privacy Set).
Select a privacy set from the list, and click one of the following buttons:
Clone - See Clone a Privacy Set.
Modify - Use this button to modify the definition or to run a report based on that definition. See Modify a Privacy Set, or Run a Privacy Set Report.
Remove - See Remove a Privacy Set.
1. Select the privacy set to be removed, in the Identify Privacy Set panel. See Open the Privacy Set Builder.
2. Click Delete and confirm the action.
3. Click Done.
1. Open the privacy set for the report, in the Privacy Set Builder. See Open the Privacy Set Builder.
2. Click Run.
3. In the Task Parameters, enter the starting and ending times for the task.
4. Select Report by Access Details, or Report by Application User, to specify how the results should be displayed. The first option is the default, in which case a count
of accesses is shown for each combination of client IP, server IP, server (name), server type, database protocol, source program name, and database user name. If
Application User is selected, the report will contain a separate column with that name (following DB User Name) and the output will be additionally qualified by the
application user.
5. Click Run Once Now. After the report has been executed, it will be displayed in a separate window.
6. Click Done.
Custom Alerting
Alert messages can be distributed via e-mail, SNMP, syslog, or user-written Java™ classes. The last option is referred to as custom alerting.
When an alert is triggered, a custom alerting class can take any action appropriate for the situation; for example, it might update a Web page or send a text message to a
telephone number.
To create a custom alerting class, first contact Technical Support to obtain the necessary interface file. The following topic describes how to implement the interface. See
Use the Custom Alerting Interface, and also the following topic which contains an example: Sample Custom Alerting Class.
Once the class has been compiled, it must be uploaded to the Guardium® appliance. See Manage Custom Classes.
For guidelines on testing a custom alerting class, see the Test a Custom Alerting Class section later in this topic.
Note: To reduce the risk of security vulnerabilities, do not take or run custom code from untrusted sources, and do not write a custom class that gets data from an untrusted source.
package com.guardium.custom;

public class YourClassNameHere implements CustomerDefinedAlertingIfc {
    // Implement the interface methods here (see the sample class below).
}
/*
 * Sample Custom Alerting Class
 */
package com.guardium.custom;

import java.text.DateFormat;
import java.util.Date;

public class HandleAlerts implements CustomerDefinedAlertingIfc {

    private String message = "";
    private Date timeStamp = null;

    public void processAlert(String message, Date timeStamp) {
        setMessage(message);
        setTimeStamp(timeStamp);
        System.out.println(getMessage() + " on "
                + DateFormat.getDateInstance().format(getTimeStamp()));
    }

    public void setMessage(String inMessage) {
        message = inMessage;
    }

    public String getMessage() {
        return message;
    }

    public void setTimeStamp(Date inDate) {
        timeStamp = inDate;
    }

    public Date getTimeStamp() {
        return timeStamp;
    }
}
1. Upload the custom class to the appliance. This is an administration function that is performed from the Administrator Console. See Manage Custom Classes.
2. Define a correlation or real-time alert to use the custom alerting class. Regardless of which alert type generates the alert, testing is easier if you assign a second
notification type (email, for example) against which you can compare the custom alerting results.
3. Check the environment by doing one of the following:
For a correlation alert:
Check that the Anomaly Detection polling interval is suitable for testing purposes and that Anomaly Detection has been started. If the polling interval
is too long (it may be 30 minutes or more), you may have a long wait before the query runs.
Check that the Alerter polling interval is suitable for testing purposes and that the Alerter has been started.
Check that the alert to be tested has been marked Active.
For a real-time alert:
Check that the policy containing the rule with the custom alert action is the installed policy.
Verify that the inspection engine was restarted after the updated policy was installed.
Check that the Alerter polling interval is suitable for testing purposes and that it has been started.
4. Take whatever action is necessary to trigger the alert (generate a number of login failures, for example).
This saves processing resources, so that a heavier traffic volume can be handled. The parsing and merging of that data into Guardium's internal database can be done later, either on a collector or an aggregator unit.
There are two Guardium features involving the Flat Log Process - Flat Log by policy definition and Flat Log by throttling mechanism.
Flat Log by throttling mechanism - This is the feature implemented by running the CLI command, store alp_throttle 1. The same policy that is applicable to real-time S-TAP
traffic is used to process traffic that was logged into the GDM_FLAT_LOG table.
For Flat Log by throttling mechanism, the Flat Log checkbox should NOT be checked in Policy Builder.
Flat Log by policy definition - Selection of this feature involves the Policy Builder menu in Setup > Tools and Views and the Flat Log Process menu in Manage > Activity Monitoring.
The following actions do not work with rules on flat policies: LOG FULL DETAILS; LOG FULL DETAILS PER SESSION; LOG FULL DETAILS VALUES; LOG FULL DETAILS
VALUES PER SESSION; LOG MASKED DETAILS.
When the Log Flat (Flat Log) checkbox option listed in the Policy Definition screen of the Policy Builder is checked,
Use this feature when you need to add a condition that is based not on the entire content of the attribute as is, but on part of the attribute, a function of the attribute, or a
function that combines more than one attribute.
An example is INSTR(:attribute, '150.1') = 5, which matches any Client IP in which the substring '150.1' begins at the fifth character. Type the character 5 in the entry box next to the Add Expression icon, type the INSTR(:attribute, '150.1') expression in the separate Build Expression window, and test the validity of the expression in the Build Expression window. Another example is LENGTH(:attribute) >= 40, which matches any SQL statement whose length is 40 characters or more. The expression may (or may not) contain references to the actual attribute and can also contain references to other attributes.
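The INSTR condition above can be checked with a short sketch. This hedged Java analogue (the sample IP address is made up; Oracle-style INSTR is 1-based while Java's indexOf is 0-based) shows why INSTR(:attribute, '150.1') = 5 matches an address whose '150.1' substring begins at the fifth character:

```java
public class InstrCheck {
    // Mimics an Oracle-style INSTR(value, needle): 1-based position, 0 if absent.
    static int instr(String value, String needle) {
        return value.indexOf(needle) + 1;
    }

    public static void main(String[] args) {
        // INSTR(:attribute, '150.1') = 5 matches Client IPs where '150.1'
        // starts at the fifth character, e.g. the made-up address below.
        System.out.println(instr("192.150.1.5", "150.1") == 5);  // true
        // LENGTH(:attribute) >= 40 matches statements of 40+ characters.
        System.out.println("SELECT * FROM t".length() >= 40);    // false
    }
}
```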
Along with authenticating users and restricting role-based access privileges to data, even for the most privileged database users, there is a need to periodically perform
entitlement reviews, the process of validating and ensuring that users only have the privileges required to perform their duties. This is also known as database user rights
attestation reporting.
Use Guardium’s predefined database entitlement (privilege) reports to see, for example, who has system privileges and who has granted these privileges to other users and roles. Database entitlement reports are important for auditors tracking changes to database access and for ensuring that security holes do not exist from lingering accounts or ill-granted privileges.
Custom database entitlement reports have been created to save configuration time and facilitate the uploading and reporting of data from the following databases: Oracle, MySQL, DB2®, Sybase, Sybase IQ, Informix®, MS SQL Server 2000/2005/2008, Netezza®, Teradata, PostgreSQL, and DB2 on z/OS.
For Microsoft SQL Server and Oracle databases you can also use Entitlement Optimization to access this information.
Follow these steps to use Guardium’s predefined database entitlement (privilege) reports with up-to-date snapshots of database users and access privileges:
1. Add datasources/databases to the appliance (navigate to Comply > Custom Reporting > Custom Domain Builder).
2. Assign datasources to entitlements (navigate to Comply > Custom Reporting > Custom Table Builder). Select the custom table listing of your entitlement. Click Upload Data. Assign datasources to the entitlement report at the Import Data menu screen. When done, click Run Once Now.
3. To see entitlement reports, log on to the user portal, and go to the DB Entitlements tab.
DB Entitlement Reports use the Custom Domain feature of Guardium® to create links between the external data on the selected database with the internal data of the
predefined entitlement reports. See External Data Correlation for further information on Custom Domain Builder/Custom Query Builder/Custom Table Builder.
User Identification
Guardium® provides several methods to identify application users, when the actual database user is not apparent from the database traffic.
Some database applications are designed to use or share a small number of database user accounts. These applications manage their users independently of the
database management system, which means that when observing database traffic from outside of the application, it can be difficult to determine the application user who
is controlling a database connection at any given point in time. However, when questionable database activities occur, you need to relate specific actions to specific
individuals, rather than to an account shared by groups of individuals. In other words, you must know the application user, not just the database user.
Guardium provides several methods to identify application users, when the actual database user is not apparent from the database traffic:
Identify Users via Application User Translation - For some of the most popular commercial applications (Oracle EBS, PeopleSoft, SAP, etc.), Guardium can identify
users automatically.
Within the enterprise, it may be necessary to employ several methods to identify users, depending on the applications used.
For some widely used applications, Guardium has built-in support for identifying the end-user information from the application, and thus can relate database activity to
the application end-users.
1. Define an Application User Translation configuration for the application. See Configure Application User Detection.
2. Populate any pre-defined groups required for that application. See Populate Pre-Defined Application Groups.
3. Regenerate any portlets for special reports for that application, and place the portlets on a page. See Regenerate Special Application Report Portlets.
The policy will ignore all of the traffic that does not fit the application user translation rule (for example, not from the application server).
Only the SQL that matches the pattern for that security policy will be available for the special application user translation reports.
Note: The first time Run Once Now is clicked after installing the Application User Translation setting(s), it retrieves the last update date for the tables it looks at. After that, it imports only new data. Otherwise, decades' worth of data could be imported needlessly, filling many tables and databases.
The examples in this section are for the EBS portlets, but the procedure is identical for other application types.
1. Do one of the following to open the Report Finder: Users with the admin role: Select Tools > Report Building > Report Builder. All others: Select Monitor/Audit > Build Reports > Report Builder.
2. Click Search to open the Report Search Results panel.
3. Select a report portlet for the application type (EBS Application Access, for example), and click Regenerate Portlet. You will be informed that the portlet has been regenerated.
4. Repeat the previous step for each application report (EBS Processes Database Access, or the PSFT Processes Database Access report, for example). Now add a tab
to your layout, and include the two regenerated portlets on that tab.
5. Click Customize to open the Customize pane.
6. Click Add Pane to define a new tab.
7. Enter a name for the tab - EBS Reports, for example - and click Apply. The new tab appears as the last tab in the list.
8. Click on the new tab name to edit that pane.
9. Click Add Portlet, and click Next until you locate the reports you want (the EBS reports, for example), and mark the checkbox next to each desired report.
10. Click Apply, and then click Save and Apply and then click Save to save the new pane layout. The new tab will appear at the end of the first row of tabs.
11. Click on the new tab name to open the tab.
12. Click Customize to set the runtime parameters (date range and Show Aliases, for example).
Supply the username and password that EBS uses to talk to Oracle (often APPS/$passwd).
If the customer does not want to supply/enter the password for the DB_USER that EBS uses to access Oracle, it is still possible to get Application User Translation; however, the process is more complicated.
1. Make/choose a login for Oracle that will permit access to the database for gathering aliases/users/responsibilities. That user needs access to the table [APPLSYS.]FND_USER and the view FND_RESPONSIBILITY_VL, which combines two tables: APPLSYS.FND_RESPONSIBILITY and APPLSYS.FND_RESPONSIBILITY_TL.
2. Run the following SQL statements directly from the Guardium system: select RESPONSIBILITY_ID, RESPONSIBILITY_NAME from FND_RESPONSIBILITY_VL order by RESPONSIBILITY_ID; and SELECT USER_ID, USER_NAME from FND_USER ORDER BY USER_ID;
Once the user is set up so that those two statements run successfully, two different Application User Translation entries are needed. Both need to have the same server IP, port, and instance name (and of course EBS and Oracle chosen for APP type and APP server type). It does not matter whether the Application Code is identical. One entry needs the username that EBS uses to connect to the database (usually APPS), but you can put in an incorrect (dummy) password. The second entry needs the username and password that has been created to access those tables.
3. Once both are entered with Active and Responsibility selected, click Run Once Now, and start or restart EBS (assuming there is an Inspection Engine (S-TAP® or net) looking at the traffic). The collection of data and the assignment of APPS user names to that data for the EBS traffic will now take place.
APPLSYS.FND_USER
APPLSYS.FND_RESPONSIBILITY
APPLSYS.FND_RESPONSIBILITY_TL
ABAP Stack and Java Stack systems will have different tables.
ABAP Stack
Traditional ECC (Enterprise Core Components) SAP systems are written in ABAP code and are predominantly accessed via the SAP GUI, although web access is possible.
SAP ABAP systems have direct (read/write/update) access to traditional SAP databases. The databases are very large and contain all the sensitive data. This is where IBM
Guardium will be best utilized.
The following screen will appear when you enter the SAP GUI (ABAP Stack):
To validate the ABAP Stack SAP Kernel module for Application User Translation, follow these steps:
1. Log in to SAP.
2. Go to System > Status
SAP with a DB2® backend is also available for SAP kernel 640, but the user needs to set DB6_DBSL_ACCOUNTING=1 (in kernel 700 and up, this
DB6_DBSL_ACCOUNTING value is 1 by default). SAP for Oracle backend requires a kernel of 710 or higher.
Data is placed into the application user field and the application event string.
Java Stack
SAP Portal systems are written in Java code and are the front end web applications utilizing pre-canned queries to display SAP related web pages.
Portal systems can only be accessed via a web browser. Portal system databases are much smaller with only a few tablespaces.
The following screen will appear when you enter SAP Portal System (Java Stack).
To validate the Java Stack SAP Kernel module for Application User Translation, follow these steps:
1. Click on System Information.
SAP sets similar client properties in the Java stack as it did for ABAP Stack.
The Application Events API provides simple calls that can be issued from within the application to signal Guardium when a user acquires or releases a connection, or when
any other event of interest occurs.
Note: If your Guardium security policy has Selective Audit Trail enabled, the Application Events API commands that are used to set and clear the application user and/or
application events will be ignored by default, and the application user names and/or application events will not be logged. To log these items so that they will be available
for reports or exceptions, include a policy rule to identify the appropriate commands, specifying the Audit Only rule action.
GuardAppEvent
GuardAppUser
These each have start and stop triggers, and the Event has sub-triggers to set Type, Username, StrValue, NumValue, and Date.
The Guardium system is able to read special Select statements for the AppUserName and the AppEvent details.
MS-SQL <blank>
Sybase <blank>
GDM_CONSTRUCT_INSTANCE
GDM_APP_EVENT
The Named template %%AppUserName parameter in Guardium (see Global Profile menu) is mapped to the Turbine table, GDM_CONSTRUCT_INSTANCE. In order to use it
in the Named Template, Guardium needs the APP_USER_NAME in the GDM_CONSTRUCT_INSTANCE table to be populated with the App User value.
SELECT 'GuardAppUser:<value>'
This will put the values into the right table and this will replace the %%AppUserName parameter in the Named template with the right value.
Example
........
........
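As a hedged illustration of the mechanism above, an application could build and issue the GuardAppUser marker on its monitored connection roughly as follows (the user name is a placeholder, and the JDBC execution is shown only as a comment since connection details vary; on Oracle, DB2, or Informix a dummy FROM clause must be appended, as described later):

```java
public class AppUserMarker {
    // Builds the dummy SELECT that identifies the current application user.
    static String guardAppUser(String userName) {
        return "SELECT 'GuardAppUser:" + userName + "'";
    }

    public static void main(String[] args) {
        String sql = guardAppUser("johndoe");  // placeholder user name
        System.out.println(sql);
        // In a real application, the statement would be executed on the
        // monitored connection, e.g. (connection setup omitted):
        //   connection.createStatement().execute(sql);
    }
}
```

The statement returns nothing useful to the application; its only purpose is to be observed by Guardium on the monitored connection.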
To signal when other events occur (you can define event types as needed), use the GuardAppEvent call, described in the following section.
user_name is a string containing the application user name. This string will be available as the Application User attribute value in the Access Period entity.
FROM location is used only for Oracle, DB2®, or Informix®. (Omit for other database types.) It must be entered exactly as follows:
Syntax:
'GuardAppEventType:type',
'GuardAppEventUserName:name',
'GuardAppEventStrValue:string',
'GuardAppEventNumValue:number',
Start | Released - Use the keyword Start to indicate that the event is taking control of the connection or Released to indicate that the event has relinquished control of the
connection.
type identifies the event type. It can be any string value, for example: Login, Logout, Credit, Debit, etc. In the Application Events entity, this value is stored in the Event
Type attribute for a Start call, or the Event Release Type attribute for a Released call.
name is a user name value to be set for this event. In the Application Events entity, this value is stored in the Event User Name attribute for a Start call, or the Event
Release User Name attribute for a Released call.
string is any string value to be set for this event. For example, for a Login event you might provide an account name. In the Application Events entity, this value is stored in
the Event Value Str attribute for a Start call, or the Event Release Value Str attribute for a Released call.
number is any numeric value to be set for this event. For example, for a Credit event you might supply the transaction amount. In the Application Events entity, this value is
stored in the Event Value Num attribute for a Start call, or the Event Release Value Num attribute for a Released call.
date is a user-supplied date and optional time for this event. It must be in the format: yyyy-mm-dd hh:mm:ss, where the time portion (hh:mm:ss) is optional. It may be the
current date and time or it may be taken from a transaction being tracked. In the Application Events entity, this value is stored in the Event Date attribute for a Start call, or
the Event Release Date attribute for a Released call.
FROM location is used only for Oracle, DB2, or Informix. (Omit for other database types.) See the following example. However, any dummy table name is acceptable for the
dummy SQL.
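The parameters described above can be assembled into a single Start marker. The following is a hedged sketch, not the product's verbatim syntax: all values are placeholders, and the GuardAppEventDate keyword is assumed from the naming pattern of the other parameters (the text gives the date format but the syntax listing is truncated):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class AppEventMarker {
    // Assembles a GuardAppEvent Start marker from the parameters described
    // above; all values passed in are illustrative placeholders.
    static String startEvent(String type, String name, String strVal,
                             long numVal, Date when) {
        // The document specifies the date format yyyy-mm-dd hh:mm:ss,
        // i.e. "yyyy-MM-dd HH:mm:ss" in Java pattern letters.
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        return "SELECT 'GuardAppEvent:Start',"
                + " 'GuardAppEventType:" + type + "',"
                + " 'GuardAppEventUserName:" + name + "',"
                + " 'GuardAppEventStrValue:" + strVal + "',"
                + " 'GuardAppEventNumValue:" + numVal + "',"
                + " 'GuardAppEventDate:" + fmt.format(when) + "'";  // keyword assumed
    }

    public static void main(String[] args) {
        // Execute the result on the monitored connection; for Oracle, DB2, or
        // Informix, append the dummy FROM clause described in the text.
        System.out.println(startEvent("Credit", "johndoe", "acct-42", 100, new Date()));
    }
}
```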
The GuardAppEvent call populates an Application Events entity (see Application Events Entity in the Entities and Attributes section of the Appendices). When creating
Guardium queries and reports, you can access the Application Events entity from either the Access Tracking domain or the Policy Violations domain.
If any Application Events entity attributes have not been set using the GuardAppEvent call, those values will be empty.
Event Date is set using the GuardAppEvent call, or from a custom identification procedure as described in the following section.
Timestamp is the time that Guardium stores the instance of the Application Event entity.
In the simplest case, an application might have a single stored procedure that sets a number of property values, one of which is the user name. A call to set the user name
might look like this:
set_application_property('user_name', 'JohnDoe');
In a custom procedure mapping (described later), you can tell Guardium to:
Watch for a stored procedure named set_application_property, with a first parameter value of user_name.
Set the application user to the value of the second parameter in the call (JohnDoe, in the example).
There may be multiple stored procedures for an application: one to start an application user session, one to end a session, and others to signal key events particular to
that application. Guardium’s custom identification procedure mechanism can be used to track any application events you want to monitor.
Since each of your applications may have a different way of identifying users, you may have to define separate custom identification procedure mappings for each
application. To do that, follow the procedure outlined.
The Value Change Auditing feature tracks changes to values in database tables. For each table in which changes are to be tracked, you select which SQL value-change
commands to monitor (insert, update, delete). Each time a value-change command is run against a monitored table, before and after values are captured. On a scheduled
basis, the change activity is uploaded to a Guardium® system, where all the reporting and alerting functions can be used. The basic steps to perform to use the Value
Change Auditing feature are:
1. Create an audit database on the database server. This database is where value-change data is stored until it is uploaded to the Guardium system. See Create an
Audit Database.
2. Identify the tables to be monitored, and for each table select the value-change commands (insert, delete, update) for which changes will be recorded. To record the
changes, a trigger is created for each table to be monitored, and that trigger writes the value-change data to the audit database. To allow updates to the audit
database (by the trigger), all users with update privileges for the monitored table are given appropriate privileges for the audit database. This has implications for
1. Open the Value Change Auditing Builder by navigating to Harden > Configure Change Control (CAS Application) > Value Change Auditing Builder.
2. Click Add Datasource to open the Datasource Finder panel.
3. Select a datasource on which an audit database is defined. If an audit database is not yet defined, see Create an Audit Database.
4. Click Add to close the Finder and add the selected datasource to the Value Change Audit panel.
5. Optionally enter a Schema Owner and/or Object Name to limit the number of tables that are displayed when choosing the tables to be monitored. You can use the
% (percent) wildcard character. For example, to limit the display to all tables that begin with the letter a, enter a% in the Object Name box.
6. Click Choose Tables To Monitor to open the Define Data Audit panel.
7. Mark the Select box for each table to be monitored.
Note: You cannot define a trigger for a table that contains one or more user-defined data types.
The Trigger Defined column indicates if a trigger is already defined for the table. The Audit Insert, Audit Delete, and Audit Update check boxes indicate if the trigger
will record changes for that command.
If the Trigger Defined column is not marked, marking the Select checkbox for a table automatically marks all three Audit checkboxes (Audit Insert, Audit Delete, and Audit Update). If you do not want to monitor one or two of those commands, clear the appropriate checkbox.
8. Click Add Selections to define triggers for the selected tables. You will be informed of the action taken.
9. Click OK to close the message box and re-display the Define Data Audit panel. The selected tables remain selected, and the Trigger Defined column is now marked for those tables. Note: The instant a trigger is defined for a table, it is active and recording changes for the selected commands in the audit database. The configuration of triggers is done entirely on the database server; this is unlike most other Guardium configurations, which are defined on the Guardium database and then activated or deactivated as a separate task.
10. To define additional actions, repeat these steps, or remove triggers by marking the appropriate Select check boxes and clicking Remove Selections.
11. Click Done after you complete all changes.
Note: The Cancel button does not back out any changes that you have made to triggers using the Add or Remove Selections buttons.
To update the audit database privileged users list, the database user ID that is used to log in to the monitored database must be the creator of any role to which new users
have been added. Otherwise, the members of that role will not be available.
1. Open the Value Change Auditing Builder by navigating to Harden > Configure Change Control (CAS Application) > Value Change Auditing Builder.
2. Click Add Datasource to open the Datasource Finder panel, select the appropriate Datasource from the list, and click Add.
3. Click Update Audit Tables Privileged Users. The permissions for all users who can run triggers to update the audit database tables are updated, and you are
informed when the operation completes.
4. Click OK to close the message box.
Value-Change Reporting
You can view value-change data from the default Values Changed report, or you can create custom reports using the Value Change Tracking domain. By default, the Value
Change Tracking domain is restricted to users having the admin role.
The main entity for the Values Changed report is the Changed Columns entity. In most cases, there is a separate row of the report for every column change that is detected
for every audit action (Insert, Update, Delete). However, for MS SQL Server and Sybase, if the monitored table does not have a primary key, there are two rows per change,
with the old and new values displayed on separate rows.
To create an audit database and perform value-change monitoring activities, you must have a user account with appropriate permissions to:
Log in to each database to be monitored
Create tables and triggers on each database to be monitored
You can use any other database space that has been defined, or create a new database space by performing one of the following procedures (depending on the operating system).
C:\Program Files\Informix\bin
5. Restart the Informix server, and use a suitable tool (Aqua Data Studio remote client, for example) to connect and verify that the space named guardium_dbs has been created. Your first connection attempt may fail with a message about the server running in Quiescent Mode. If this happens, attempt to re-connect at least two more times, and it should work.
6. To verify that the guardium_dbs database space has been created, use Aqua Data Studio, and look under Storage.
su - informix
cd demo/server
vi guardium_dbs
use master
go
disk init name="guardium_auditdev", size=8192
go
disk init name="guardium_auditlog",
use master
go
disk init name = 'guardium_auditdev', physname
 ='/home/sybase/data/guardium_auditdev' , size = 8192
go
disk init name = 'guardium_auditlog', physname
 ='/home/sybase/data/guardium_auditlog' , size = 8192
go
1. Open the Value Change Database Builder by navigating to Harden > Configuration Change Control (CAS Application) > Value Change Audit Database Creation.
2. Click Add Datasource to open the Datasource Finder panel. Datasources that have been defined from the Value Change Auditing application are labeled Monitor
Values. Datasources that have been defined for other applications will have different labels (Listener, or DBanalyzer, for example), and those datasources may not
have the appropriate set of database access permissions for Value Change Auditing application, which requires a user account having database administrator
authority. If a suitable datasource is not available, click the New button to define a new one for the database to be monitored (see Datasources in the Common
Tools book for detailed information on defining datasources).
Note: If a GUARDIUM_AUDIT database is already created on this dbserver, another one cannot be created. The GUARDIUM_AUDIT database/user must be dropped
before a new one can be created.
3. Select a datasource that uses an administrator account, and click Add, to add it to the Datasources pane on the Create Value Change Audit Database panel.
4. Enter an Audit Datasource Name. This is the name that will be used to identify the datasource later, to define monitoring tasks and to upload data. Do not confuse
this name with the name of the Datasource from the Datasources panel.
5. Optionally mark the Share Datasource box to share this datasource with other applications (Classification, for example). The default is not to share the datasource.
This type of datasource requires administrator privileges, so you may not want to share this datasource with other applications.
Note: To share a datasource with other users, assign security roles to that datasource.
6. For any database type other than DB2®, there will be additional fields in the Audit Configuration pane. All fields are required. Referring to the following table, enter
the appropriate values.
Table 1. Additional Audit Configuration Fields Table
Database Type Field: Description
Informix Database Space: Enter the name of an existing database space to use, or enter the name of the database space you created for
the audit database (guardium_dbs in the example shown previously). If you leave this blank, the default root_dbs space will be
used, which we do not recommend.
MS SQL Server Audit User Name: Enter a new database user name to use when accessing the audit database. This user will be given the
sysadmin role.
Compatibility Mode: Choices are Default or MSSQL 2000. This additional choice appears in the Value Change Audit Database Creation menu screen only when the datasource is MS SQL Server. The processor is told what compatibility mode to use when monitoring a table.
Use the GuardAPI command grdapi list_compatibility_modes to show the compatibility modes for MS SQL Server.
Oracle Audit Password: Enter the password for the system user, which will be the database account used to access the audit database.
Sybase Audit User Name: Enter a new database user name to use when accessing the audit database. This user will be granted the
sa_role.
Data Device Name: Enter the same data device name used when initializing the disk for the audit database (guardium_auditdev
in the disk initialization procedure described earlier).
Log Device Name: Enter the same log device name used when initializing the disk for the audit database (guardium_auditlog in
the disk initialization procedure described earlier).
7. Click Create Audit Database to create the audit database.
8. Use the selection Value Change Audit Database Update and Upload on the Config and Control tab to select the actions in this table.
Action Description
This feature uses Guardium’s External Feed that is preconfigured with the data (a predefined External Feed map), and an audit process to run it.
Note: The resulting table shows only the last run. The receiver count is the count of the receivers, not the count of run results since the last run.
IBM Guardium can detect external references to database objects, specifically tables. This capability, in conjunction with Optim Designer, can be used to manage the
retirement of inactive tables or archiving with certain retention policies.
Guardium® collects and maintains a list of tables with the date of last reference. The list is built using policies in Guardium that dictate the interval of last reference and
the frequency to be used for updating the list content. The information captured by Guardium is referred to as the "last reference" list and supplies the following
information: What tables are no longer referenced? What table access trends exist for retirement candidates?
Having the ability to accurately plan for the retirement of applications will help to:
This functionality of IBM Guardium has been added directly to the Optim Designer user interface.
The information supplied by Guardium to Optim consists of the following attributes per table entry:
Field Comment
DataSourceDesc Description
Server IP
Host Name
User Name for example, for Oracle it mostly defines the schema
Database Name
Schema
Table
    datasource_desc     varchar(100),
    server_ip           char(39),
    host_name           varchar(200),
    db_vendor           char(40),
);
Last_referenced_table
    user_name           char(32),
);
Guardium provides several compliance monitoring templates--groups, security policies, and reports corresponding to specific standards and regulations--including the
following:
These quick start compliance monitoring templates are especially useful for organizations that must comply with one of the associated standards or regulations in a short
period of time. After installing security policies, the compliance monitoring tool guides administrators or compliance officers through the initial setup and population of
groups with organization-specific information such as client IP addresses and specific privileged user IDs. In addition, the compliance monitoring tool periodically checks
your Guardium environment for new databases that can be monitored using the compliance monitoring templates.
After choosing a compliance monitoring template and indicating databases where that compliance type should be applied, the compliance monitoring tool takes the
following actions:
A security policy is created and installed for the selected compliance type. In a centrally-managed environment, the policy is installed on the collectors.
A policy installation schedule is defined for 10:30 AM daily. In a centrally-managed environment, the policy installation schedule runs on the collectors.
A server IP group is populated with the server IP addresses of the selected databases.
The current user is assigned to the selected compliance-type role. This role enables access to related reports and accelerators from the main Guardium navigation.
When supported, a discover sensitive data scenario is created.
If a discover sensitive data scenario is created and at least one of the selected databases has a datasource defined, the scenario is scheduled to run once per week
on Sundays at 10:30 AM. In a centrally-managed environment, the schedule runs on the central manager.
The following table summarizes the features supported for each of the available compliance types:
Table 1. Summary of features supported by the Compliance Monitoring tool per
compliance type.
 Basel II GDPR HIPAA PCI PII SOX
Security policy
Reports
The quick start for compliance monitoring tool uses templates to quickly establish compliance monitoring on new database servers in your environment. These templates
are optimized for use with new or expanding Guardium deployments. Ensure the easiest configuration and most complete functionality by verifying the following
prerequisites before you begin:
You are a Guardium user with administrative privileges running Guardium V10.1.3 or newer configured as a central manager or standalone system.
S-TAPs are installed and operational on the new database servers.
The database servers are supported by the compliance monitoring templates.
There are no policies installed other than the Default - Ignore Data Activity for Unknown Connections policy.
Warning:
You can install quick start compliance monitoring security policies alongside your preexisting policies only if the preexisting policies have the following Policy
Definition settings:
Installation of quick start security policies will fail if any preexisting policies have conflicting settings. When working with an existing deployment, consider uninstalling your existing policies before working with the quick start policies. This restriction should not impact new Guardium deployments.
The following sections describe the quick start compliance monitoring prerequisites in detail.
For more detailed information about S-TAPs, including other installation methods, see S-TAP administration guide.
Supported databases
The compliance monitoring tool detects databases in your Guardium environment based on the following criteria:
The detection method varies by supported database type, as summarized in the following table.
Table 1. Summary of supported database types and detection methods.
Database Active traffic Discovered instances
Informix
MySQL
Netezza
Oracle
PostgreSQL
Sybase
Teradata
Extrusion rules require that the Inspect returned data setting is enabled for all inspection engines using the policy. To use the extrusion rules included with the following
compliance templates, you must allow the inspection engines to inspect returned data:
GDPR
HIPAA
PCI
PII (data privacy)
Important: Enabling Inspect returned data increases network traffic with the returned results set.
Enable Inspect returned data from Manage > Activity Monitoring > Inspection Engines. For more information about the Inspect returned data setting, see Creating policies and
Inspection engine configuration.
Policies with conflicting Log flat, Rules on flat, or Selective audit trail settings cannot be installed in the same Guardium environment. As a result, you cannot use the quick
start compliance monitoring templates if you have installed any policies that use different settings.
For new Guardium deployments or deployments without user-defined policies, you are unlikely to encounter any conflicts with these policy settings. For existing Guardium
deployments, if you receive a conflicting policies message while using the Set up compliance monitoring tool, review your policy definition settings.
For more information about selective audit trails, see Rule actions.
Exception: If it is the only installed policy, the Default - Ignore Data Activity for Unknown Connections policy is overridden by the installation of compliance monitoring
policies.
Parent topic: Quick start for compliance monitoring
Procedure
1. Open the compliance monitoring page by navigating to Setup > Quick Start > Compliance Monitoring.
2. Open the compliance monitoring set-up tool by clicking the icon in the Set up compliance monitoring tile.
3. From the Compliance type section, use the Select the compliance type you want to enable menu to select the type of database monitoring you want to configure.
For example, to enable GDPR monitoring, select General Data Protection Regulation (GDPR). Click Next to continue.
4. From the Databases section, select databases from the Available databases table and click the icon to add them to the Selected databases table.
Tips:
Use the Exclude monitored databases check box to hide databases where compliance monitoring is already configured.
When using the General Data Protection Regulation for Db2 for z/OS (GDPR for Db2 for z/OS) compliance type, the list of available databases is filtered to
include only Db2 for z/OS databases. Similarly, Db2 for z/OS databases are not displayed when working with non-z/OS compliance types.
Select databases from the Selected databases table and click Provide credentials to store database credentials. Storing credentials enables the discovery
and classification of sensitive data for some compliance types. If automated configuration is not supported, the datasources created when you store
credentials can be used in your own discover sensitive data scenarios.
To disassociate databases from a compliance type, edit the configuration and remove databases from the Selected databases table or navigate from the
compliance type tile to View details > Databases and click the icon next to a database.
5. When you are finished identifying databases to monitor, click Run setup to install the policy, populate the server IP group, and run compliance monitoring reports.
6. From the Refresh the page to show new content? dialog, click Yes to refresh the page and complete the setup.
What to do next
After configuring compliance monitoring, you may notice several icons on the compliance monitoring tiles. These icons indicate that additional configuration is
required. Use the Populate group links to populate additional groups or the Datasource credentials link to provide database credentials for a discover sensitive data
scenario.
Important: A default server IP group is automatically created and populated when monitoring is configured using the compliance monitoring set-up tool. However, it is
important to define the users and applications that are allowed to access your databases by populating several additional groups. For information about populating groups
from the compliance monitoring page, see Populate groups.
Parent topic: Quick start for compliance monitoring
Related concepts:
Prerequisites for compliance monitoring
Related information:
Deploy monitoring agents
Populate groups
Learn how to populate groups for compliance monitoring.
Empty groups are not treated as wild cards and will not capture any traffic.
Hierarchical or nested groups are not supported.
Procedure
1. Use one of the following methods to identify unpopulated groups and open the Edit group dialog to begin.
In the Monitoring enabled section of a compliance monitoring tile, look for icons and click the associated Populate group link.
Click the View details link on a compliance monitoring tile to open the details panel, select the Summary tab, and click the icon next to a group.
Tip: In the details panel, unpopulated groups are highlighted with a small icon.
At this point, the details view and Edit group dialog will open on top of the compliance monitoring dashboard.
2. From the Edit group dialog, optionally provide a Category and Classification for the group. The Application type, Group type, and Description (used as the Group
name) fields are populated based on the group selected in the previous step and are not editable.
3. From the Edit group dialog, use one of the following methods to begin populating the selected group.
Click the icon to add an item to the Member table and manually specify a group member.
Click Import > From CSV to import group members from a CSV file.
Click Import > From group to import group members from another Guardium group of the same group type. For example, you can populate an authorized
users group from another group containing a list of users but not from a group containing a list of IP addresses.
Click Import > From external datasource to import group members from an external datasource. The Datasource menu will include all datasources marked
shared or of type custom domain. For more information, see Importing from external datasources.
4. When you are finished adding members to the group, click OK to return to the compliance monitoring dashboard.
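The CSV import option above expects a list of group members. The exact layout Guardium accepts can vary by group type, so treat the following Python sketch, which builds a hypothetical member file for an authorized-users group, as an illustration rather than the documented format:

```python
import csv
import io

# Hypothetical members of an authorized-users group; the real column
# layout may differ -- check the Import > From CSV dialog for details.
members = ["appuser1", "appuser2", "batch_svc"]

buf = io.StringIO()
writer = csv.writer(buf)
for m in members:
    writer.writerow([m])  # one group member per row

print(buf.getvalue().strip())
```

The same one-member-per-row shape applies regardless of group type, but a group of IP addresses would hold addresses, not user names, since imports are only allowed between groups of the same type.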
Procedure
1. Use one of the following methods to identify where database credentials are required.
In the Scanning for sensitive data section of a compliance monitoring tile, look for a icon and click the associated Datasource credentials link. The
compliance monitoring databases view will open to a filtered list of databases that require credentials.
2. From the compliance monitoring databases view, select databases and click Datasource actions > Provide credentials. If you select multiple databases, the
provided credentials are saved for all selected databases, so make sure that the selected databases all use the same credentials. Otherwise, databases that use
different credentials will fail the connection test.
Storing credentials enables the discovery and classification of sensitive data for some compliance types. If automated configuration is not supported, the
datasources created when you store credentials can be used in your own discover sensitive data scenarios.
3. From the Provide credentials dialog, use the User name and Password fields to provide credentials for the selected databases. Click OK to return to the compliance
monitoring database view.
4. From the compliance monitoring database view, select databases that have stored credentials and click Datasource actions > Test connection. Use Test connection
to validate that the stored credentials allow access to the database. If the connection test fails, the discovery and classification of sensitive data will not work.
Important:
Testing connections can be time-intensive. It is not recommended to test a large number of connections at once.
If a connection test fails, navigate to Setup > Tools and Views > Datasource Definitions, select the datasource, and validate the datasource definition. For
example, you may need to specify the correct port for Db2 for z/OS databases, correct mixed-case PostgreSQL database names, or set other connection
properties required for your environment.
If a Microsoft SQL Server connection test fails, verify that the SQL Server Browser Windows service is started.
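Before running Test connection against many datasources, it can help to confirm basic network reachability of the database listener first. The following sketch is not part of Guardium; it is a generic Python check, and the host and port values are assumptions about your environment:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Rough pre-check that a database listener is reachable.

    Illustrative only: this verifies TCP reachability, not that the
    stored credentials are valid or that the driver can connect.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: a SQL Server engine typically listens on TCP 1433
# (an assumption about your setup; adjust host and port as needed).
print(port_reachable("127.0.0.1", 1433))
```

A False result here points at a network, firewall, or listener problem rather than a credential problem, which narrows down why a Guardium connection test might be failing.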
Results
After enabling scanning for sensitive data, scan results and any changes made to the policy (including changes to groups and group membership) become available after
the policy is installed according to the policy installation schedule. By default, the quick start compliance monitoring tool defines a policy installation schedule that runs
daily at 10:30 AM.
Parent topic: Quick start for compliance monitoring
User interface
The Compliance Monitoring tool consists of the following views:
Dashboard view
This is the default view and provides an overview of the current status of compliance deployment, organized by compliance type. Individual tiles reflect the current
configuration status of several compliance monitoring components, making it easy to quickly identify which compliance types require additional configuration.
Database view
The database view provides a table indicating which databases are configured with any of the supported compliance monitoring templates.
Access the tool by clicking the icon on the Set up compliance monitoring tile of the dashboard view or by selecting databases and clicking the Set up
compliance monitoring button on the database view.
The compliance monitoring views provide several interrelated ways to complete the configuration tasks associated with establishing compliance monitoring. The following
table summarizes the tasks supported by the different views.
Table 1. Summary of tasks supported by compliance monitoring views.

Associate compliance type with databases
    Set up compliance monitoring: From the Databases section, select databases from the Available databases table.

Define datasources for discovering sensitive data
    Set up compliance monitoring: From the Databases section, select databases from the Selected databases table and click the Provide credentials button.
    Dashboard view: From a compliance type tile, click the Datasource credentials link, select databases, and click Datasource actions > Provide credentials.
    Database view: Select databases and click Datasource actions > Provide credentials.
Important: Once configured with a compliance monitoring template, databases that have been taken offline will continue to appear in the compliance monitoring tool.
Policies
The quick start compliance monitoring templates provide security policies that are designed to work effectively and without any modification. Use these policies to quickly
get up and running with compliance monitoring. From the compliance monitoring dashboard view, click View details > Policies to see the policies associated with a specific
compliance type.
When compliance monitoring is configured from a central manager, quick start security policies are automatically pushed down to all collectors. If policies other than the
default quick start security policies are installed, the quick start policies are installed last.
If you want to review the compliance monitoring policies in detail, they are available through the Policy Finder. Quick start compliance monitoring policies are identified
with the following naming convention: Quick Start compliance type. For example, the default GDPR policy is named Quick Start GDPR. It is also possible to edit the
compliance monitoring security policies using the Policy Builder for Data.
Restriction: Prior to Guardium V10.1.4, modifying the rules and groups used with quick start security policies may result in inaccurate configuration status in the
Compliance Monitoring tool.
If you have modified the compliance monitoring policies, revert to the default settings from the Compliance Monitoring dashboard view by clicking View details in the
desired compliance type tile, selecting the Policies tab, and clicking Reset to default. When the default settings are restored, any customized settings are retained in a policy
with the following naming convention: Quick Start compliance type timestamp (where timestamp indicates the date and time the default settings were restored). For example,
Quick Start GDPR 2017-05-01 19:17:59.
Important: Prior to Guardium V10.1.4, it may be necessary to reinstall the quick start security policy after using Reset to default. For more information, see Installing
security policies.
When compliance monitoring is configured from a standalone machine, a policy installation schedule is defined if there are no pre-existing policy installation schedules
(regardless of whether the schedules are active or paused). When compliance monitoring is configured from a central manager, the policy installation schedule is
configured for all collectors (regardless of whether existing policy installation schedules exist).
Groups
The compliance monitoring tool relies on several groups associated with each compliance type. These groups should be populated to establish effective compliance
monitoring. From the compliance monitoring dashboard view, click View details > Summary to see the groups associated with a specific compliance type.
Restriction:
You may notice a discrepancy between the number of databases and the members of the Server IP group shown on the View details > Summary tab for a compliance type.
This discrepancy reflects multiple databases running on a single database server or a Server IP group that has been updated outside of the compliance monitoring tool.
Reports
The quick start compliance monitoring templates provide several predefined reports for each compliance type. From the compliance monitoring dashboard view, click
View details > Reports to see the reports associated with a specific compliance type. These reports are also available under the Accelerators section of the main Guardium
navigation. This list of reports is predefined for each compliance type and does not reflect any custom reports you may have defined.
Restriction: The HIPAA compliance monitoring template does not provide any predefined reports.
Access to the reports and accelerators is restricted by compliance role. For example, if user1 configures GDPR and user2 configures PCI, user1 will not have access to the
PCI reports and accelerators because the PCI role has not been assigned to user1. For information about manually assigning specific roles to users, see Access management overview.
Sensitive data
You may notice a discrepancy between the Matches found value on a compliance type tile and the associated objects groups on the View details > Summary tab. Matches
found indicates the number of unique table and column name pairs that matched criteria from the sensitive data discovery scenario. The number of members in the
OBJECTS group is the number of unique table names and is a cumulative value from all scans.
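The discrepancy can be illustrated with a small sketch (the table and column names are invented): Matches found counts unique table and column name pairs, while the OBJECTS group holds unique table names.

```python
# Three sensitive columns found across two tables: Matches found is 3,
# but the OBJECTS group ends up with only 2 members (unique tables).
matches = {
    ("CUSTOMERS", "CARD_NO"),
    ("CUSTOMERS", "SSN"),
    ("ORDERS", "CARD_NO"),
}

objects_group = {table for table, _column in matches}
print(len(matches), len(objects_group))  # -> 3 2
```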
Important: In the Scanning for sensitive data section of a tile, icons indicate that one or more datasources have been configured for the discover sensitive data scenario.
Click View databases to investigate which databases have datasources defined for discovering sensitive data.
Parent topic: Quick start for compliance monitoring
PCI/DSS (Payment Card Industry/ Data Security Standard) is a set of technical and operational requirements designed to protect cardholder data.
Value-added: Give customers a whole view of PCI/DSS and provide predefined policies and reports to save configuration time.
2. In the user role form, check PCI, and then save the assignment.
Overview
Click Overview for an introduction to how the predefined reports address the compliance requirements.
1. Cardholder Server IPs List: lists the database servers that store cardholder information. According to your environment, populate the PCI Authorized Server IPs
group, which specifies the database servers that store cardholder information.
2. Cardholders Databases: lists the cardholder information databases. Populate the PCI Cardholder DB group with the databases that store cardholder
information.
3. Cardholder Objects: lists cardholder information objects. Populate the PCI Cardholder Sensitive objects group.
4. DB Clients to Servers Map: maps clients to servers. Populate the PCI Authorized Server IPs group, which specifies the database servers storing
cardholder information. Use this query to find clients accessing the cardholder databases.
5. Active DB Users: lists users, other than administrators, who accessed the cardholder databases. Populate the PCI Authorized Server IPs and
PCI Admin Users groups.
6. Cardholder DB Administration: lists cardholder database administration operations. Populate the PCI Authorized Server IPs and PCI Admin Users groups.
7. Authorized Source Programs: records authorized program access to the cardholder databases. Populate the PCI Authorized Server IPs and PCI Authorized
Source Programs groups.
8. Unauthorized Application Access: records access to the cardholder databases by programs that are not defined in the PCI Authorized Source Programs group.
Populate the PCI Authorized Server IPs and PCI Authorized Source Programs groups.
9. 8.5.8 Shared Accounts: PCI requirement 8 mandates that each person with computer access be assigned a unique ID. Populate the PCI Authorized Server IPs group to
count the number of times the same database user name attempts access from the cardholder database IPs.
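The counting logic behind the 8.5.8 Shared Accounts report can be sketched conceptually. This is not Guardium's implementation; the login events below are invented, and a database user name seen from many distinct client IPs is flagged as a possible shared account:

```python
from collections import defaultdict

# Hypothetical login events against servers in the PCI Authorized
# Server IPs group: (database user, client IP) pairs.
logins = [
    ("sa", "10.0.0.5"),
    ("sa", "10.0.0.9"),
    ("sa", "10.0.0.12"),
    ("app_user", "10.0.0.5"),
]

# Collect the distinct client IPs seen for each database user name.
clients = defaultdict(set)
for user, ip in logins:
    clients[user].add(ip)

# A user name appearing from more than one client IP may be shared.
shared = sorted(u for u, ips in clients.items() if len(ips) > 1)
print(shared)  # -> ['sa']
```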
For each report, click to view the report form, and then determine what specific group content needs to be filled in.
Navigate to Setup > Tools and Views > Group Builder, and in the Modify Existing Groups selection, select the group name.
Click Overview for an introduction to how the Guardium monitor and predefined reports address the compliance requirements.
1. 10.2 and 10.3 Automation - Use the online help Protect help book and Comply Help book to automate this section.
2. 10.2.1 Data Access - PCI Access to cardholder data, Set the PCI Authorized Server IPs and PCI Admin Users.
3. 10.2.2 Admin Activity - PCI Activity by Admin. user. Set the PCI Authorized Server IPs and PCI Admin Users.
4. 10.2.3 Audit Trail Access - To follow this section completely, at least four kinds of reports must be defined: Logins to SQLGuard; User activity audit trails on
Guardium server; Scheduled job exceptions; and User to-do lists. Navigate to Setup > Reports > Report Builder to create reports as you need.
5. 10.2.4 Invalid Access - PCI - Invalid Login Access Attempts: record the login failed try in the database. PCI - Unauthorized Application access: record the
database access not defined in PCI Authorized Source Programs.
6. For sections 10.2.6 Initialization Log, 10.5 Secure audit trails, and 10.6 Access Auditing, you can also use the Monitor and Audit Help Book in the embedded
online help.
Click Overview for a discussion of the importance of vulnerability assessment. Click Harden > Assessment Builder to build an assessment process.
Workflow Builder
The Workflow Builder is used to define customized workflows (steps, transitions and actions) to be used in the Audit Process.
For additional information, see Building audit processes. Follow these steps to create a customized workflow.
Note: If the task type in Audit Process Builder is Classification Process, then Workflow Builder cannot create customized workflows.
Warning: When a workflow event is created, every status used by that event can be assigned a role (meaning that events in that status can be seen only by that
role). When an event is assigned to an audit process, it is important that every role assigned to a status of the event have a receiver on the audit process.
Otherwise, an audit result row can be put into a status where none of its receivers are able to see the row or change its status.
If an audit row becomes inaccessible, the admin user (who is able to see all events, regardless of their roles) would be able to see the row and change its status. However,
if data level security is on, the admin user may not be able to see this row. The admin user would need to either turn data level security off (from Global Profile) or have the
dataset_exempt role. It is important to configure the audit process so that all roles who must act on an event associated with this audit process are receivers of this audit
process.
Note: Deletion of an event status is permitted only if the status is not the first or final status of any event, and if it is not used by any action. The validation will provide a list
of events and actions that prevent the status from being deleted.
Prerequisites
See How to create an Audit Workflow. For additional information, see Compliance Workflow Automation.
After creating this customized workflow, See How to combine Customized Workflow with Audit Workflow.
Procedure
1. Open the Workflow Builder by navigating to Comply > Tools and Views > Workflow Builder.
2. At the first screen (Event Type), click the Event Status button to go to the Event Status configuration.
3. Click Add Event Status to define a new Event Status. Multiple Event Statuses are expected. Fill in the status description and place a check mark in the Is Final
check box if the task is a final task in the workflow. When done, go to the next step.
An example of a simple three-step workflow is: Open, to Review State, to Approved or Not Approved. Each step of the workflow is a separately defined Event
Status.
The workflow statuses of the example are: Open, Review State, Approve after review, and Not Approved. If a status is the final status in a workflow, place a check
mark in the Is Final column. The final statuses in the example are Approve after review and Not Approved.
From the simple three-step workflow example, the Event Action Under Review has a prior status of Open and a next status of Review State. The Event Action
Approved follows Under Review, with a prior status of Review State and a next status of Approve after review. The Event Action Not approved has a prior status of
Review State and a next status of Not Approved. There is also a sign-off capability for designated reviewers per Event Action (continuous or sequential).
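The statuses and event actions of the three-step example can be sketched as a small state machine. The names mirror the example above; the code is a conceptual illustration, not Guardium's internal model:

```python
# Statuses from the example, mapped to their Is Final flag.
STATUSES = {
    "Open": False,
    "Review State": False,
    "Approve after review": True,   # Is Final
    "Not Approved": True,           # Is Final
}

# Event actions: name -> (prior status, next status).
ACTIONS = {
    "Under Review": ("Open", "Review State"),
    "Approved": ("Review State", "Approve after review"),
    "Not approved": ("Review State", "Not Approved"),
}

def apply_action(current: str, action: str) -> str:
    """Move to the next status, enforcing the prior-status rule."""
    prior, nxt = ACTIONS[action]
    if current != prior:
        raise ValueError(f"{action!r} is not allowed from {current!r}")
    return nxt

# Walk one path through the workflow: Open -> Review State -> final.
state = "Open"
state = apply_action(state, "Under Review")
state = apply_action(state, "Approved")
print(state, STATUSES[state])  # -> Approve after review True
```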
10. Fill in the Event Action Description and designate Prior status, Next status and if Sign-off of this event action is required. Click the Apply button.
11. Repeat Steps 9 and 10 until all event actions are described and designated.
12. Go to the Roles section of the Event Type menu screen. Roles involve defining who can see the event when it is in a particular Event Action. For example, who can
see events that are "Under Review" and who can see events that are "Approved".
13. Select the Event Type Status and click the Roles button.
14. In the Assign Security Roles panel, mark all of the roles you want to assign (you will only see the roles that have been assigned to your account). Click Apply to save
security role choices. Click the Back button.
15. Repeat steps 13 through 14 until all event type statuses have roles defined.
16. The configuration effort from Workflow Builder is done.
17. Open the Audit Process Builder by navigating to Comply > Tools and Views > Audit Process Builder to schedule the workflow and build and show workflow reports.
See the Audit Process Builder steps under Define a Report Task.
The formal sequence of event types created in Workflow Builder is managed by clicking the Event and Additional Column button in the Audit Tasks window. This button
appears only after an audit task has been created and saved.
Prerequisites
See How to create Customized Workflows. For additional information, see Workflow Builder.
See How to create an Audit Workflow. For additional information, see Compliance Workflow Automation.
Define an audit process that follows the customer's customized workflow practices by following the additional steps.
Procedure
1. Configure these workflow activities when Adding An Audit Task.
2. Create and save an Audit Task. After saving, an additional button, Events and Additional Columns, will appear.
3. Click this additional button.
4. At the next screen, place a checkmark in the box for Event & Sign-off. The workflow created in Workflow Builder will appear as a choice in Event & Sign-off.
5. Highlight this choice. Save your selection.
6. If additional information (such as company codes, business unit labels, etc.) is needed as part of the workflow report, add this information in the Additional Column
section of the screen and then click Apply (save). When done, close this window.
7. Apply (save) your Audit Task. Apply (save) the entire Audit Process Definition.
8. Click Run Once Now to create the report. Click View to see the report.
This Event and Additional Column button appears in all audit tasks.
Note:
If data level security at the observed data level has been enabled (see Global Profile settings), then audit process output will be filtered so users will see only the
information of their databases.
Under the Report choices within Add an Audit Task are two procedural reports, Outstanding Events and Event Status Transition. Add these two reports to two new
audit tasks to show details of all workflow events and transitions. These two reports will not be filtered (observed data level security filtering will not be applied).
These two reports are available by default in the list of reports only to admin user and users with the admin role.
Threat detection analytics scans and analyzes audited data to detect symptoms that may indicate SQL injection or Stored Procedure database attacks. Guardium does not
rely on a comparison against an ever-changing dictionary of attack signatures. Instead, Guardium analyzes audit data activity, exceptions, and outlier data (Outliers
Detection) over extended periods of time looking for patterns that indicate an attack. By tracking the suspicious events over time and correlating them, Guardium creates a
comprehensive picture of potential risks. This approach is more flexible and comprehensive, and does not require continual signature updates.
An attacker trying to identify the structure of a dynamic SQL query, for example the number of columns queried
An unusually large quantity of new queries, specifically queries that are uniquely or unusually structured
Access to tables containing information about the database structure
Examples of suspicious activity are: the creation of a stored procedure containing a DROP statement on sensitive objects; a DROP verb; SQL exceptions caused by missing
objects; a procedure that is modified after being dormant for an extended period of time.
Guardium tracks the activity around individual stored procedures, and together with Outlier mining data correlates the various symptoms and users. Guardium can detect
these typical symptoms of this malicious stored procedure use case (presented in the order they typically occur):
1. A database administrator creates a malicious Procedure A, which deletes data from the customer table
2. A month later the database administrator changes a commonly used Procedure B to call Procedure A
3. A different user calls the modified Procedure B, such that the customer table data is deleted by that innocent user
Ensure that you meet the minimum system requirements for search (4 CPUs and 24 GB of RAM).
Verify your system has logged application data. Specifically, SQLI requires application data because the injection initiates from the application. If the system
"trusts" the application and does not monitor it in Guardium, the injection cannot be identified.
Outlier detection is not required for SQL injection threat detection but it is required to fully support suspicious stored procedure detection. For more information,
see Enabling and disabling outliers detection locally on a Collector.
When upgrading to Guardium V10.1 through the upgrade patch process, you must enable threat detection scanning on each collector by using the following
Guardium API command: grdapi enable_advanced_threat_scanning. See GrdAPI Threat Detection Analytics Functions for more information about
parameters available for the enable_advanced_threat_scanning command.
Important: Threat detection relies on analysis and correlation of logged data. Thus any rules that filter out traffic before logging are not considered for threat detection.
Examine your use of IGNORE S-TAP SESSION rules carefully to determine the risk of not logging these sessions versus optimizing the capacity of the collector.
Policy rules must be installed to collect the necessary traffic for malicious stored procedure analysis.
Recommendation: Create the following rules in your policy in the suggested order. It is important to check the Continue to next rule checkbox for all these rules.
1. Access rule: Log Full Details where Command group filter is PROCEDURE DDL.
2. Access rule: Log Full Details where Command group filter is EXECUTE Commands. If your database is Oracle, include the command BEGIN in the rule.
Guardium analyzes the symptoms over time, correlates them, and assigns a score per identified possible attack. If the score indicates a likely attack, the set of events
becomes a case whose ID is unique per collector. Cases are externalized in case reports, one per suspected attack. Access case reports in one of the following ways:
Set up an audit process to receive notifications in your To Do list on the Central Manager, and open the report directly on the relevant associated collector. Note that
the To Do list is updated once an hour.
Access Investigate > Exceptions.
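The correlate-and-score behavior described above can be illustrated with a toy sketch. The symptom names, weights, and threshold are invented for the example; Guardium's actual scoring model is internal and not documented here:

```python
from collections import defaultdict

# Invented weights for symptom types seen in audited activity.
SYMPTOM_WEIGHT = {
    "procedure_ddl": 1.0,
    "missing_object_exception": 0.5,
    "dormant_procedure_modified": 1.5,
}

# Hypothetical symptom events collected over time: (db user, symptom).
events = [
    ("dba1", "procedure_ddl"),
    ("dba1", "dormant_procedure_modified"),
    ("dba1", "missing_object_exception"),
    ("user2", "missing_object_exception"),
]

# Correlate symptoms per user and accumulate a score.
scores = defaultdict(float)
for user, symptom in events:
    scores[user] += SYMPTOM_WEIGHT[symptom]

# Event sets whose score crosses the (invented) threshold become cases.
THRESHOLD = 2.0
cases = {u: s for u, s in scores.items() if s >= THRESHOLD}
print(cases)  # -> {'dba1': 3.0}
```

The point of the sketch is the shape of the approach: isolated symptoms score low, while a cluster of correlated symptoms around the same user crosses the threshold and surfaces as a case.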
The case reports window presents, by default, up to 3 cases, one per line. Each case includes a risk score from 1 to 3, with 3 being the most severe. You can:
Hover on the case ID to view a summary of the attack (only stored procedure cases).
Hover on the case ID and click Link to Symptoms to access the detailed symptoms report.
Click the ID to open the case-specific threat diagnostic dashboard. See Working with threat diagnostic dashboards.
Each process pulls the suspected cases for one attack type. You can customize these processes, or copy them and create your own.
Procedure
1. Navigate to Comply > Tools and Views > Audit Process Builder. Optionally filter the available audit processes by clicking the Inactive only radio button or typing
Suspected in the Filter box.
The default task for this process is the corresponding report (Suspected Malicious STP Cases or Suspected SQL Injection Cases). Do not modify the runtime
parameters of these reports. However, you can add additional tasks to this same audit process. For example, you can add both the threat reports into a single audit
process.
If you are defining these audit processes from a central manager, define a task for each collector for which you want to see threat data and use the Remote Data
Source option.
2. Click Send results to define the audit process receivers who will receive reports on suspected malicious stored procedures.
3. Select the default receiver (user) and then click the icon to define the appropriate receiver or receivers for your organization. When you are finished, click OK.
4. Click Schedule audit process and review the schedule for the audit process.
The recommendation is to run the process every day, every hour starting at 12:30 AM (after both outliers and threat detection usually run). Note that the check box
Auto run dependent jobs has no effect for this task.
A threat diagnostic dashboard performs much like other investigation dashboards, except that the dashboard for that case is populated with the data from the suspicious
events (db user, server, objects, etc.) and uses different charts to provide different views of the event and surrounding events that may be helpful in investigating the
possible attack. The relevant search and outlier data is also available on the same dashboard page as the charts.
In many cases, you will not need to change any of the preexisting filters for the predefined threat diagnostic dashboards. However, if you want to do some of your own
comparative analysis, you can modify the preexisting filters.
See Investigation Dashboard for more information on working with dashboard and chart filters.
Tip: The threat diagnostic dashboard can only be opened by clicking on the case number in the relevant threat report. You cannot save changes to this dashboard or any
other predefined dashboard. If you make changes and want to keep the dashboard for further investigation, you must copy it and save it under a new name. You must also
save the filters by clicking the Filters menu and selecting Save.
Reference data is a set of predefined, chart-specific filters, for Threat Detection Analytics only, that show data similar to the case you're investigating but not included
in the general dashboard filter. Reference data cannot be changed by users. Hover over the filter icon in each chart to see the Reference Data.
In a typical suspected SQL injection attack scenario, the threat diagnostic dashboard is filtered for this attack and includes the following general dashboard filters:
Server: 8.34.223.145
DB user: USER1
Database: 8.4.134.213:31.5.12
DB type: MYSQL
Object: stp1_name
The chart for DB user can include reference data for similar DB users, such as USER2, USER3 and USER4. This enables you to compare the activities of the suspected user
with similar users, even though those additional users are not included on the general dashboard filters.
Not all fields include associated reference data. Any field for which there is no predefined reference filter is filtered as on the dashboard.
In some charts, filters can be inactivated so that you can compare data regardless of the filters chosen for the entire dashboard. This gives a wider picture of the activity.
Click the filter icon to open the Chart Filter Settings, and make modifications.
Procedure
1. From the To Do list, or from Investigate > Exceptions, open the Suspected SQL injection Cases dashboard. Each line is a case, with a Confidence (%) rating of
certainty of an attack, and a risk level of the attack.
2. Click View to evaluate for false positives. Hover over the selected case id and click Symptoms to open the SQL Injection Case Symptoms page. Every suspicious
action is described, and the SQL string displayed. You can see the exact modifications the user made to strings. By progressing from string to string, you can
observe how the attacker methodically gained more data using errors returned from previous queries.
3. Click the id number to open the default diagnostic dashboard for SQL injection attacks, which is filtered by the incident's date and suspected web-application
connection details. This helps narrow the investigation to database traffic that occurred during the attack. You can change or drop the filter to broaden the scope of
investigation. Use the bottom grid to get more detailed information on the chart’s data. Note that if you move to a standard dashboard, all filters specific for the
suspected SQL injection attack are canceled.
4. Use these guidelines while investigating the charts:
Change the timescale to look for peaks at time of the attack
Look for violation of any security policy, and see if any violations correlate to other activity at the time of the attack
5. Drill down by changing filters, timeframe, etc. to see if there are differences across the system.
6. Evaluate the charts in the dashboard:
Procedure
1. From the To Do list, or from Investigate > Exceptions, open the Suspected malicious STP Cases dashboard. Each line is a case, with a Confidence rating of certainty
of an attack, and a risk level of the attack.
2. Click View to evaluate for false positives.
3. Hover over the selected case id to view the case details.
4. Click Symptoms to open the Malicious STP Case Symptoms page.
5. Click the ID number to open the default diagnostic dashboard for malicious stored procedure attacks, which is filtered by the incident's date and suspected web-application
connection details. This helps narrow the investigation to database traffic that occurred during the attack. You can change or drop the filter to broaden the scope of
investigation. Use the bottom grid to get more detailed information on the chart’s data.
6. Use these guidelines while investigating the charts:
Change the timescale to look for peaks at time of the attack
Look for violation of any security policy, and see if any violations correlate to other activity at the time of the attack
7. Drill down by changing filters, time frame, etc. to see if there are differences across the system.
8. Evaluate the charts in the dashboard:
Parameter Description
all  Optional. In a central management configuration only, enables all threat detection scanners on all managed units. Allowable values: true, false.
schedule_start  Optional. Specifies the date and time to start running the processes. The accepted format is yyyy-mm-dd hh:mm:ss (24-hour clock).
api_target_host  Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed unit; on a managed unit it is the host name or IP of the CM.
Example:
You will see the following message if threat analytics is enabled when outlier detection is not:
Warning - Enabling advance threat scanning (AKA Eagle Eye) when Analytic anomaly detection is disabled.
Advance threat scanning (AKA Eagle Eye) enabled.
ok
disable_advanced_threat_scanning
Disables threat detection scanners on the collector.
Parameter Description
all  In a central management configuration only, disables all threat detection scanners on all managed units.
api_target_host  Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed unit; on a managed unit it is the host name or IP of the CM.
get_eagle_eye_info
Displays the current settings for threat detection parameters.
Parameter Description
api_target_host  Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed unit; on a managed unit it is the host name or IP of the CM.
Example:
grdapi get_eagle_eye_info
Eagle Eye Parameters Values:
EI_CASES_DISPLAY_LIMIT = 3
EI_CONFIDENCE_PCT_CHANGE_TO_REDISPLAY_CASE = 30
EI_EAGLE_EYE_ENABLED = 1
EI_PROCESSOR_TIMEOUT_SEC = 420
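The get_eagle_eye_info output shown above is plain NAME = value text. As an illustration only (this helper and its name are invented, not part of Guardium), a short Python sketch can turn such output into a dictionary, for example after capturing it from a CLI session:

```python
def parse_eagle_eye_info(output: str) -> dict:
    """Parse 'KEY = value' lines from get_eagle_eye_info output."""
    params = {}
    for line in output.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip()
            if key.isupper():  # keep only the EI_* parameter lines
                params[key] = int(value) if value.isdigit() else value
    return params

sample = """Eagle Eye Parameters Values:
EI_CASES_DISPLAY_LIMIT = 3
EI_CONFIDENCE_PCT_CHANGE_TO_REDISPLAY_CASE = 30
EI_EAGLE_EYE_ENABLED = 1
EI_PROCESSOR_TIMEOUT_SEC = 420"""

print(parse_eagle_eye_info(sample)["EI_PROCESSOR_TIMEOUT_SEC"])  # 420
```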
set_eagle_eye_parameter
Use under the direction of IBM personnel. Changes configuration parameters for threat detection. These parameters must be set explicitly using parameter_name and
parameter_value as follows:
Parameter Description
EI_CASES_DISPLAY_LIMIT  The number of cases to be displayed in the to-do list report. Default is 3.
EI_CONFIDENCE_PCT_CHANGE_TO_REDISPLAY_CASE  The percent of confidence change that will cause this case to be redisplayed in the to-do list report, even if it has already appeared before. This can happen if Guardium detects another symptom or symptoms that raise the confidence by this percentage value. Default is 30.
EI_PROCESSOR_TIMEOUT_SEC  Processors that run longer than this threshold are turned off. Default is 420 seconds.
EI_SCANNER_PATCH_DEF  To avoid false positives as a result of patch installation: if, in a single process run, the number of stored procedures created exceeds this parameter, the process assumes a patch is installed and stops analyzing symptoms. Default is 10 stored procedure creations detected in one run.
EI_SCANNER_TIMEOUT_SEC  Scanners that run longer than this threshold are turned off. Default is 300 seconds.
api_target_host  Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed unit; on a managed unit it is the host name or IP of the CM.
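The EI_SCANNER_PATCH_DEF safeguard described above is essentially a threshold check. The following Python sketch is an invented illustration of that logic, not the product implementation:

```python
EI_SCANNER_PATCH_DEF = 10  # default: stored-procedure creations per run

def should_stop_analysis(stp_creations_in_run: int,
                         threshold: int = EI_SCANNER_PATCH_DEF) -> bool:
    """Assume a patch install (not an attack) when a single process run
    creates more stored procedures than the threshold."""
    return stp_creations_in_run > threshold

print(should_stop_analysis(12))  # True: looks like a patch, stop analyzing
print(should_stop_analysis(3))   # False: keep analyzing symptoms
```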
get_eagle_eye_scanners_info
Return scanner settings information.
Parameter Description
api_target_host  Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed unit; on a managed unit it is the host name or IP of the CM.
Field Description
Status  I: in progress; D: done; K: killed
Enabled  True: enabled; False: disabled
Permanent disabled  If the scanner was disabled 3 times in 24 hours, then it is permanently disabled. True: disabled; False: enabled
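The Permanent disabled rule (three automatic disables within 24 hours) amounts to counting disable events in a sliding time window. This Python sketch is an invented illustration of that rule, not Guardium code:

```python
from datetime import datetime, timedelta

def is_permanently_disabled(disable_times, now, limit=3, window_hours=24):
    """A scanner is permanently disabled once it has been auto-disabled
    `limit` times within the trailing `window_hours` window."""
    cutoff = now - timedelta(hours=window_hours)
    recent = [t for t in disable_times if t >= cutoff]
    return len(recent) >= limit

now = datetime(2024, 1, 2, 12, 0)
events = [now - timedelta(hours=h) for h in (1, 5, 20)]
print(is_permanently_disabled(events, now))  # True: 3 disables in 24 hours
```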
Example:
grdapi get_eagle_eye_scanners_info
ID=0
ID:1, Name:SQLInjectionExceptionsScanner, Status:D, Enabled:true, Permanent disabled:false
ID:2, Name:NumNewConstructScanner, Status:D, Enabled:true, Permanent disabled:false
ID:3, Name:SQLInjectionSuspiciousObjectScanner, Status:D, Enabled:true, Permanent disabled:false
ID:4, Name:SqliQueryScanner, Status:Unknown, Enabled:false, Permanent disabled:true
ID:5, Name:EagleEyeSTPCreateProcedureScanner, Status:D, Enabled:true, Permanent disabled:false
ID:6, Name:EagleEyeSTPCallProcedureScanner, Status:D, Enabled:true, Permanent disabled:false
ID:7, Name:EagleEyeSTPExceptionProcedureScanner, Status:D, Enabled:true, Permanent disabled:false
ID:8, Name:EagleEyePreviousStpUsageProcedureScanner, Status:D, Enabled:true, Permanent disabled:false
ID:9, Name:EagleEyeSTPViolationProcedureScanner, Status:D, Enabled:true, Permanent disabled:false
ID:10, Name:EagleEyeSTPUserOutlierScanner, Status:D, Enabled:true, Permanent disabled:false
ok
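Each scanner line in the example above is a comma-separated list of key:value pairs. As an illustration (an invented helper, not a supported API), a short Python sketch can parse such output and flag scanners that were permanently disabled:

```python
def parse_scanners(output: str):
    """Parse 'ID:n, Name:..., Status:..., Enabled:..., Permanent disabled:...'
    lines from get_eagle_eye_scanners_info output."""
    scanners = []
    for line in output.splitlines():
        if not line.startswith("ID:"):
            continue  # skip "ID=0", "ok", and other wrapper lines
        fields = dict(part.split(":", 1) for part in line.split(", "))
        scanners.append({
            "id": int(fields["ID"]),
            "name": fields["Name"],
            "status": fields["Status"],
            "enabled": fields["Enabled"] == "true",
            "permanently_disabled": fields["Permanent disabled"] == "true",
        })
    return scanners

sample = """ID:1, Name:SQLInjectionExceptionsScanner, Status:D, Enabled:true, Permanent disabled:false
ID:4, Name:SqliQueryScanner, Status:Unknown, Enabled:false, Permanent disabled:true"""
bad = [s["name"] for s in parse_scanners(sample) if s["permanently_disabled"]]
print(bad)  # ['SqliQueryScanner']
```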
set_eagle_eye_scanner_parameter
Use under the direction of IBM personnel. Activate or deactivate a scanner. These parameters must be set explicitly using parameter_name and parameter_value as
follows:
Parameter Description
scanner_id  Required. The unique ID of the scanner, which you can get from the get_eagle_eye_scanners_info GuardAPI command.
is_active  Defines whether the scanner should run. Used to start a scanner that was stopped automatically because it timed out. 0: the scanner is stopped.
is_permanent_inactive  If the scanner was permanently disabled after being disabled 3 times in 24 hours, it can only be enabled again using this GuardAPI.
api_target_host  Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed unit; on a managed unit it is the host name or IP of the CM.
Example:
get_eagle_eye_symptom_period_hours
Show the value of the symptom period parameter in hours. The symptom period determines how far back the process looks when analyzing the collected symptoms
for one case.
Parameter Description
case_name  Required. The case type. The following values are allowed: STP: malicious stored procedure case.
api_target_host  Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed unit; on a managed unit it is the host name or IP of the CM.
Example:
set_eagle_eye_symptom_period_hours
Set a value for the symptom period parameter in hours. The symptom period determines how far back the process looks when analyzing the collected symptoms for a
case.
Parameter Description
case_name  Required. The case type. The following values are allowed: STP: malicious stored procedure case.
symptom_period_hours  Required. Integer. The number of hours in the past to analyze symptoms for a case.
api_target_host  Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed unit; on a managed unit it is the host name or IP of the CM.
Example:
get_eagle_eye_debug_level
For use by IBM Service personnel. Displays current debug level:
1: on
0: off
Parameter Description
api_target_host  Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed unit; on a managed unit it is the host name or IP of the CM.
Example:
grdapi get_eagle_eye_debug_level
ID=0
component=EAGLE_EYE level=1
ok
set_eagle_eye_debug_level
For use by IBM Service personnel. Sets the debug level.
Parameter Description
0: off
api_target_host  Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed unit; on a managed unit it is the host name or IP of the CM.
Example:
Investigation Dashboard
The Investigation Dashboard provides powerful tools for identifying and assessing problems that might exist in your Guardium environment. It uses either local or system-
wide unfiltered data, and provides numerous filter options to query data across an entire Guardium environment, potentially from any Guardium collector within that
environment.
The Investigation Dashboard provides inter-related charts that help reveal patterns, anomalies, and relationships across your data. It does not require detailed knowledge
of topology, aggregation, or load balancing schemes. It contains the original quick search for enterprise functions, and other tools for visualizing and analyzing data.
Operating Modes
The Investigation Dashboard supports three operating modes:
All machines
Queries that are submitted on a Central Manager return enterprise-wide results from all Guardium collectors with search enabled. Queries that are submitted on
managed units return local results.
Local only
This mode limits search queries to the local collector where the search is submitted: no data is retrieved from other collectors in the Guardium environment. On a
CM in local only mode, no data is displayed.
See GuardAPI Quick Search for Enterprise Functions for information about setting the search mode.
Dashboard Components
A dashboard is a collection of one or more of the following items:
Three-axis data graphs, which are known as trimetric charts. These graphs can be displayed as a color map, bar graph, bubble graph, line graph, pie graph, step
graph, and area graph.
Animated bubble chart - an animated visualization of data changes over the last 48 hours.
Activity chart - a line chart that displays the volume of activity and outliers. It is located above the Results table.
Results table - provides the search results and investigation features of the original quick search. The Results Table is always at the bottom of the dashboard. It can
be added to any dashboard.
Facet list of one or more of Where, Who, What, Exception, and When. It appears on the left side of every dashboard and cannot be removed.
There are four default DAM views and two default FAM views, each with different charts and tables. Select the view from the dashboard menu . The default views
cannot be modified.
64-bit architecture
24 GB RAM
4-core CPU
Restriction: The Investigation Dashboard and Data Level Security cannot be enabled concurrently.
Procedure
1. Log in to the machine as a user or administrator with the CLI role.
2. Use the following GuardAPI command to enable the Investigation Dashboard:
By default, violations are not included in search results. To include violations, set the includeViolations parameter to true:
Additional parameters may be specified, such as the search index update interval. For a complete list of parameters and descriptions, see the GuardAPI
Investigation Dashboard Functions reference information.
To disable the Investigation Dashboard, use: grdapi disable_quick_search
Results
Once enabled, see Accessing the investigative dashboard to learn more and begin using the investigation dashboard.
Attention:
Investigation Dashboard functionality opens ports 8983 and 9983 on both Central managers and collectors. The ports are opened when the Investigation
Dashboard is enabled and closed when it is disabled. To use the Investigation Dashboard, ensure that bidirectional communication between Central managers and
collectors on ports 8983 and 9983 is not blocked by any firewall.
Indexed search data is retained for 3 days. Use the purge object Guardium CLI command to change the retention period. For example, the following command
changes the retention period to 5 days: store purge object age 39 5. Note that 39 is the default object identification number associated with the search index. For
additional information, see Configuration and Control CLI Commands reference information.
Procedure
1. On the collector, at the CLI prompt, run the GuardAPI command:
grdapi enable_fam_crawler [extraction_start] [schedule_start] [activity_schedule_interval] [activity_schedule_units]
[entitlement_schedule_interval] [entitlement_schedule_units] Example: The following command sends updated discovery and classification results
to enterprise search for classification data every 2 minutes and for entitlement information every day.
grdapi enable_fam_crawler activity_schedule_interval=2 activity_schedule_units=MINUTE entitlement_schedule_interval=1
entitlement_schedule_units=DAY
By default, the extraction starts when you enter the command, extracting data from the moment (time) you entered the command.
2. Repeat on each collector.
Results
The default investigation dashboard for data or files opens. By default, the only filter that is applied to the entire dashboard is to show the last hour of data.
Parent topic: Investigation Dashboard
There are four default views for data activity monitoring, each with different charts and tables. Select the view from the dashboard menu . The default views cannot be
modified.
The default dashboards contain data for the last hour presented in one or more of:
Trimetric charts (3–axis data graphs). The default view is a color map. Additional views are bar graph, bubble graph, line graph, pie graph, step graph, and area
graph.
Activity: Summary and Details tabs. Each row in the Summary tab gives the number of instances of recorded activities per server–DB pair and the number
of DB types. The Detailed Summary adds the count of Source Programs, DB users, OS users, Client hostname, Client IP, and date. Each row in the Details tab
gives full details on one activity.
Outliers: see Interpreting data outliers in the investigation dashboard
Errors: Summary and Details tabs. Each row in the Summary tab gives the number of instances of reported errors per server and the number of DB types and
DB users. The Detailed Summary adds the number of Client IPS, error types and dates. Each row in the Details tab gives full details on one error.
Violations: Summary and Details tabs. Each row in the Summary tab gives the number of instances of recorded violations per server–DB pair and the
number of DB types. The Detailed Summary adds the count of Source Programs, DB users, OS users, Client hostname, Client IP, severity, violation, and date.
Each row in the Details tab gives full details on one violation.
Topology view (Search server status view): see Using the topology view
Animated bubble chart: an animated visualization of data changes over the last 48 hours. The chart depicts the behavior of objects over a period of 24 hours. Each
object is depicted as a circle, and its area and position (x and y axis) represent three user-selected variables. The animation represents the object's behavior over
the 24 hours. Access from the Add Chart drop-down.
Activity chart: a line chart that displays the volume of activity and outliers, located above the Results table. Access from the Add Chart drop-down.
Data in-sight: 3D visualization of data activity, see Using Data In-Sight. Access from the Add Chart drop-down.
A categorized facet list of Where, Who, What, Exception, and When, from the search results appears on the left side of every dashboard and cannot be removed.
Filter the entire dashboard by the specific facets by expanding the list and clicking individual facets.
The Active Filters row at the top of the window shows the current filters. Delete filters by clicking the X.
Search field: free text search that filters the results in all fields simultaneously, irrespective of facet
Distributed search: see Local and distributed search
Time period for which data is presented: modify by clicking the drop-down in the upper right corner. Options are last 1 hour, last 3 hours, last 1 day, last 3 days, any
time period you specify. Default is one hour.
Filters drop-down: see Filtering data and saving filters in the investigation dashboard
Add new dashboard / Save changes in dashboard / Save dashboard as: see Creating, saving, and exporting investigation dashboards
There are two default FAM views, each with different charts and tables. Select the view from the dashboard menu . The default views cannot be modified.
Note: The Server IP and Client IP are always the same in the dashboard, except for the case of connecting through remote desktop on Windows. Client IP is only
supported when connecting through a remote desktop session.
Note: The FAM queries the server for the server IP addresses and takes the first one it finds. There is no way to select the appropriate IP address from a host name when
the host has multiple IP addresses. Specify the IP address explicitly if you want to be guaranteed to see that IP address in the reports.
The default dashboards contain data for the last hour presented in one or more of:
Trimetric charts (3–axis data graphs). The default view is a color map. Additional views are bar graph, bubble graph, line graph, pie graph, step graph, and area
graph.
Results table: provides the search results and investigation features of the original quick search. The Results Table is always at the bottom of the dashboard. It can
be added to any dashboard. Tabs are:
Activity: Summary and Details tabs showing monitored data, based on the file server policy rules. Each row in the Summary tab gives the number of
instances of recorded access activities per server and OS user. The Details tab adds the Server Hostname, Server, Client Hostname, Client IP, OS user, File
Full Name, Command, Date and Time. Each row in the Details tab gives full details on one activity. Data in the Activity tab is consistent with the date and time
of the collector.
Outliers: see Interpreting file activity outliers
Errors: Summary and Details tabs. Each row in the Summary tab gives the number of instances of reported errors per server and client IP, and the date. The
Detailed Summary adds the error details, and the time. Each row in the Details tab gives full details on one error.
Violations: Summary and Details tabs. Each row in the Summary tab gives the number of instances of recorded violations per server, source program and OS
user combination. The Detailed Summary adds the Client IP, severity, violation and violation details, date, and time. Each row in the Details tab gives full
details on one violation. Data in the Violations tab is consistent with the data and time of the file server.
Entitlement: Summary and Details tabs. For file servers, this tab presents sensitive data based on the current FAM decision plans. Each row in the Summary
tab gives the number of instances of recorded access activities per server and owner. The Details tab adds the Server Hostname, full path, Type, Size,
Classification Entities (the decision plan that caused this file to be identified as sensitive), Owner, Client Hostname, Client IP, OS user, File Full Name, users
and groups with write, read, execute, and delete permissions, last modification, Version (Sharepoint only), creation time, Date, and Time. Each row in the
Details tab gives full details on one activity. You can use the data in this table to create policy rules and groups for file servers, see Creating a FAM policy rule
from the Investigative Dashboard Entitlements tab.
Topology view (Search server status view): see Using the topology view
A categorized facet list of Where, Who, What, Exception, and When, from the search results appears on the left side of every dashboard and cannot be removed.
Filter the entire dashboard by the specific facets, by expanding the list and clicking on individual facets.
The Active Filters row at the top of the window shows the current filters. Delete filters by clicking the X.
Search field: free text search that filters the results in all fields simultaneously, irrespective of facet
Distributed search: see Local and distributed search
Time period for which data is presented: modify by clicking the drop-down in the upper right corner. Options are last 1 hour, last 3 hours, last 1 day, last 3 days, any
time period you specify. Default is one hour.
Filters drop-down: see Filtering data and saving filters in the investigation dashboard
Add new dashboard / Save changes in dashboard / Save dashboard as: see Creating, saving, and exporting investigation dashboards
You can save filters for your future use. When you save a filter set, you choose if you want to share it, and choose the roles that you share it with.
Procedure
1. Use the rules and syntax to filter data:
To match an exact phrase, use double quotation marks around the search terms. For example, "Profiling Alert List" returns entries for Connection
Profiling Alert List but not for Profiling List Alert.
To match all specified search terms, separate the terms with a space. For example, Hadoop getlisting returns any entries that contain both Hadoop and
getlisting in any location or sequence.
To match any specified search terms, separate the terms with OR or a vertical bar (|). For example, Hadoop OR getlisting returns any entries that contain
either Hadoop or getlisting in any location.
To exclude a specified search term, use NOT or a period (.). For example, NOT Hadoop does not return any entries that contain Hadoop in any location.
Wildcards are supported by using asterisks (*) at the beginning or ending of a string. For example, 10.10.70.* returns any entries with the string 10.10.70.
followed by any additional characters.
Search rules can be used in combination. For example, 2016-5-08 (19.*|20.*) returns results in the time range of May 8 between the hours of 19:00:00
and 20:59:59.
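For intuition, the search rules above can be modeled roughly as follows. This Python sketch is a deliberately simplified stand-in for Guardium's actual search engine and does not reproduce its exact semantics:

```python
import re

def matches(entry: str, query: str) -> bool:
    """Very simplified model of the quick-search rules: double quotes for an
    exact phrase, space-separated terms that must all match, OR or | for
    any-of, a leading NOT for exclusion, and * as a wildcard."""
    text = entry.lower()
    q = query.strip()
    if q.startswith('"') and q.endswith('"'):
        return q[1:-1].lower() in text                # exact phrase
    if q.upper().startswith("NOT "):
        return not matches(entry, q[4:])              # exclusion
    if " OR " in q or "|" in q:
        alts = re.split(r"\s+OR\s+|\|", q)            # any term may match
        return any(matches(entry, a) for a in alts if a)
    terms = q.split()
    if len(terms) > 1:
        return all(matches(entry, t) for t in terms)  # all terms must match
    pattern = re.escape(q).replace(r"\*", ".*")       # * wildcard
    return re.search(pattern, text, re.IGNORECASE) is not None

print(matches("Hadoop file getlisting", "Hadoop getlisting"))  # True
print(matches("Hadoop file getlisting", "NOT Hadoop"))         # False
```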
Adding filters changes each view based on the RefFilter specified for the view. Current filters appear in the menu bar. Each one can be cleared by clicking its X.
2. Refine search results with any of the following methods:
Select specific filters based on the facets list:
Note: You can select one or more rows and right-click one of the server/DB user/Client IP cells to add them to an existing group, or to create a new group.
3. Drill down into individual results by right-clicking specific search results and exploring related outliers, errors, or violations, or viewing one of several available
drill-down reports.
You can filter an individual chart. The icon becomes red when specific filters are set for a chart that are different than the general dashboard filters. Hover over the icon
to see which filters are used in that chart.
A chart can have filters set as inactive, which means the chart data is not filtered by that field. This enables Guardium to display other items, in addition to the ones related
to the case, that may be similar or in some way provide additional insight into the investigation.
Example: While investigating activity on a server, you want to compare one of the charts with data from other servers. This is possible by deactivating the Server filter for
just that one chart. To do this, you would click the icon and select the Inactive radio button for the Server row.
Procedure
You can open the same dashboard and toggle through the different filter sets associated with that chart by using the and icons above the Active filters list.
Any investigation dashboards, including threat diagnostics, can be encrypted and exported for sharing. Only the dashboard definitions are exported, not the filters.
If you have a dashboard that is configured with a good set of charts for investigating particular incident types, you can share this knowledge with other Guardium users
without including actual attack data or revealing the filters.
Procedure
2. To save a dashboard with a different name for modification and subsequent use, click the icon, and save it with a descriptive name and optionally a category.
You can also define a category when you save the dashboard. The name and category can include spaces. To retrieve the dashboard later, click the icon to open
the dashboard menu.
3. To export investigation dashboards, go to Manage > Data Management > Definitions Export. From the Type menu, select Investigation Dashboard and select the
dashboard definitions to export. Then, click Export.
Procedure
1. To open the topology view, click the Search server status view icon in the toolbar of the investigation dashboard.
2. Hover the mouse over an object to display detailed information about that object.
3. Select an object to narrow the search results to only that object and its children if any exist. Use Ctrl + click to select or deselect multiple objects in the topology
view.
4. Close the topology view by clicking the close icon or clicking outside the topology browser. The search results update automatically to reflect the available data
based on the scope selected in the topology view.
Procedure
1. To toggle between local or distributed search, click the Enable / Disable search all appliances icon in the search window toolbar. Search results automatically
update to reflect the available data based on the selection of local or distributed search.
2. See Using the topology view for information about filtering global search results by a specific segment of the Guardium environment.
Data in-sight converts audited data to a 3-D chronological visualization of data flow, from sources to destinations, showing data transactions unfold exactly as they
occurred.
The visualization space contains two planes, each representing entities of a specific type from the audit domain. Every entry in the audit data is represented as a moving
‘flash line’ from an object of the upper plane (for example, client IPs) to an object of the lower plane (for example, databases). The flash line between the source
and the destination leaves a trail (a dotted line) indicating the presence of interaction between the specific source and destination, which gradually fades into the
background. The trails form an overview of the interaction between sources and destinations in the selected time period. The size of each source and destination is
relative to their level of activity. The sources are located near their destinations, and near other similar sources. The display can be modified in various ways, giving
additional information or aspects of the data. You can also view data in-sight with VR headsets.
Data in-sight is an answer to this constantly changing paradigm. It adds the flexibility of human visual perception to spot associations and movements in the raw data,
irrespective of known attack types, that would otherwise be unnoticed.
Data in-sight converts audited data to a 3-D chronological visualization of data sources and destinations, showing data transactions unfold exactly as they occurred. The
visualization space contains two planes, each representing entities of a single type from the audit domain. Each entry in the audit data is represented as a moving ‘flash
line’ from an object of the upper plane (client IP, OS user, DB user, or source program) to an object of the lower plane (database, object, or server). The flash line
between the source and the destination leaves a trail (a dotted line) indicating the presence of interaction between the specific source and destination, which gradually
fades into the background. The flash line has the same color as the destination database. The trails form an overview of the interaction between sources and destinations
in the selected time period. The sources are located near their destinations, and near other similar sources. The size of the destination entity is proportional to the volume
of transactions relative to the other destination entities. There are many ways of modifying the display, including: color-coding the top entities (the color changes as data source
details change), filtering from the data in-sight chart, and using the investigation dashboard facets. You can also view data in-sight with VR headsets.
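The sizing rule described above (each destination entity drawn proportionally to its share of transaction volume) can be sketched as follows. This is an illustrative Python sketch, not Guardium code; the size range and linear scaling are assumptions for illustration.

```python
# Illustrative sketch only (not Guardium code): map each destination's
# transaction count to a display size proportional to its share of the
# activity, as described above. The min/max size range is an assumption.
def entity_sizes(tx_counts, min_size=1.0, max_size=10.0):
    """Scale per-entity transaction counts into display sizes."""
    peak = max(tx_counts.values())
    return {
        name: min_size + (max_size - min_size) * count / peak
        for name, count in tx_counts.items()
    }

sizes = entity_sizes({"db1": 100, "db2": 50, "db3": 10})
# The busiest destination ("db1") is drawn largest; the others scale linearly.
```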
Procedure
1. In the Investigation Dashboard window, click Add Chart > Data in-Sight chart. The Chart Settings window opens.
2. In the Chart Settings pane, modify the object types that are represented in both planes, and the type of data flow between them. You can optionally color-sort the
entities in the top plane by a secondary criterion, providing another level of analysis. For example, if the objects of the top plane represent client IPs and you select
color-sorting for source program, you can see the usage of different source programs by a specific IP client, and the usage of a common source program by different
client IPs. An object whose color changes repeatedly indicates a frequent change of source program usage in a single client IP. Click Apply.
Table 1. Data In-Sight Chart Settings
Field: Description and Values
Data flow domain: The type of data flow displayed. One of: Activities, Errors, Violations, Outliers.
Top plane entities: The entity that is represented in the top plane. One of: Client IP, DB User, OS User, Source Program.
Bottom plane entities: The entity that is represented in the bottom plane. One of: Database, Object, Server.
Color sort top entities by: Extra (optional) color classification of top entities by: None, Client IP, DB User, OS User, Source Program.
Max. entities in top plane: Maximum number of entities that are shown in the top plane.
Max. entities in bottom plane: Maximum number of entities that are shown in the bottom plane.
Top entities color: Opens a color palette to select the color for top plane entities. Disabled if top entities are color sorted.
Planes color: Opens a color palette to select the color for the planes (one color for both planes).
3. Modify the display in these ways:
Click the magnifier icon to enter full-screen mode for more detail.
Rotate the view by holding down the left mouse button and dragging.
Pan by holding down the right mouse button and dragging.
Zoom in and out with the mouse wheel.
4. View entities in these ways:
Hover over an entity to show its details in the legend.
Click an entity to show only its data flows (other entities fade out); click the background to exit.
Double-click an entity to use it as the active filter (over the entire dashboard).
5. The information pane, which is located in the upper right corner, shows the time stamp of the currently displayed actions, the number of actions shown so far, and an
indication of the rate of events per second. You can also pause and restart the data flow.
Outliers Detection
Enable and start auditing outliers detection in two easy steps, letting Guardium do the work of identifying abnormal server and user behavior, and providing early detection
of possible attacks.
An outlier is behavior by a particular source (in DAM either a database or a particular user on a database, and from Guardium V.10.1.2 in FAM either a server or an OS
user), in a particular time period or scope that is outside of the “normal” time frame or scope of the particular database or user's activity. Outliers can indicate a
security violation that is taking place, even if the activities themselves do not directly violate an existing security policy.
Outlier Mining findings are available from the Investigation Dashboard (Quick Search) and in Reports.
Outlier mining operates on data that is already audited by a security policy. Make sure that the data you want evaluated for outliers is already audited by a security policy. Outliers detection can run on either:
An aggregator, with data from all its collectors (except a collector that is running outliers detection locally).
A collector, using only its own data.
Outlier detection is a separate process from security policy rules and enforcement, so you cannot set up real-time alerts on outliers. However, because outlier data is
included in reports, you can create a correlation alert. A correlation alert is triggered by a query that looks back over a specified time period to determine whether the alert
threshold has been met.
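The look-back-and-threshold idea behind a correlation alert can be sketched roughly. This assumed Python sketch is not Guardium's implementation; it only illustrates counting events inside a lookback window and firing when the threshold is met.

```python
# Minimal sketch of the correlation-alert idea described above (assumed
# logic, not Guardium internals): look back over a time window and fire
# when the number of matching events meets the alert threshold.
from datetime import datetime, timedelta

def correlation_alert_fires(event_times, now, lookback_hours, threshold):
    """Count events inside the lookback window; fire at or above threshold."""
    window_start = now - timedelta(hours=lookback_hours)
    hits = sum(1 for t in event_times if window_start <= t <= now)
    return hits >= threshold

now = datetime(2016, 1, 1, 12, 0)
events = [now - timedelta(minutes=m) for m in (5, 30, 90, 600)]
correlation_alert_fires(events, now, lookback_hours=2, threshold=3)  # True: 3 events fall within 2 hours
```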
Procedure
1. Enable outliers, see Enabling and disabling outliers detection on an Aggregator or Enabling and disabling outliers detection locally on a Collector.
Results
When the learning period is complete (about one week), there should be data in the reports, and alerts are sent.
Parent topic: Outliers Detection
When run on the aggregator, outliers detection data is extracted from the managed units, and the learning and analysis phases happen on the aggregator.
Outliers detection is disabled by default. This procedure is run on a central manager, to enable or disable outliers detection on all collectors that send their data to the
specified aggregator, except a collector that is running outliers detection locally. (For more details on local collection, see Enabling and disabling outliers detection locally
on a Collector).
If a collector has moved from one aggregator to another, or if you want to enable outliers detection locally on a collector, disable the outliers detection on the aggregator,
enable outliers detection locally if relevant, and then enable outliers detection on the aggregator. Whenever you enable outliers detection on the aggregator, it refreshes
the list of its collectors.
Procedure
1. Log in to the central manager as a user or administrator with the CLI role.
2. To enable the outliers detection function, enter grdapi enable_outliers_detection, where FAM_DAM is an optional parameter specifying the type of outliers. The default is DAM.
3. To disable the outliers detection function, enter grdapi disable_outliers_detection.
Results
The system starts collecting outlier data. Once the learning has completed (14 days), outliers data is available in the Investigation Dashboard (Interpreting data outliers in
the investigation dashboard and Interpreting file activity outliers) and the Outlier Analytic List Report.
Outliers detection is disabled by default. Follow the steps described below to enable or disable outliers detection locally on a collector. When outliers detection is enabled
locally on a collector, its data is not combined with the data on its aggregator.
To identify a collector that is running outliers mining locally, access the outlier mining status window, and look at the row of the individual collector (not under the
aggregator). The column Outlier Mining Enabled/Disabled shows green.
To change outliers detection from local to the aggregator, disable outliers detection locally, disable outliers detection on the aggregator, and refresh the list of collectors
by re-enabling outliers detection on the aggregator.
Procedure
1. Log in to the collector as a user or administrator with the CLI role.
2. To enable the outliers detection function, enter grdapi enable_outliers_detection, where FAM_DAM is an optional parameter specifying the type of outliers. The default is DAM.
3. To disable the outliers detection function, enter grdapi disable_outliers_detection.
Results
The system starts collecting outlier data. Once the learning has completed (7 days), outliers data is available in the Investigation Dashboard (see Interpreting data outliers
in the investigation dashboard and Interpreting file activity outliers), and the Outlier Analytic List Report.
Quick Search must be enabled (grdapi enable_quick_search) to see outlier detection data in the investigation dashboard.
The Activity chart includes red (high) and yellow (medium) indicators that reflect the severity or total outliers score for the selected time interval. Red indicators reflect
highly anomalous events requiring immediate attention. Yellow indicators represent less extreme anomalies that warrant attention as part of other or related
investigations.
Hover over an outlier icon to view detailed information about outliers detected during that time period. To filter the Results Table to activities or outliers that occurred
during the same time period, click Show details.
From Guardium V.10.1.2 the Outliers tab in the Results Table has two views:
Summary has one row per source per hour in which an outlier was found, with an anomaly score and one or more reasons. Note that not every outlier presented in
the Summary Tab has further details in the Details tab.
Details is a sample of events that occurred, with one row per event with a reason (except diverse, see table) and other details (source program, object, verb, etc.).
For example, for high volume, the sampling presents the events with the highest score. You can configure the number of samples (rows) that appear in the Details
Tab, per each outlier in the Summary tab.
Anomaly Score
Summary tab: A calculated aggregate value based on the volume of outliers, the severity of individual events, the predicted volume of outliers for a given time of day, and other factors. For example, on a system that typically identifies 0 outliers at 1am and 5-10 outliers at 1pm during weekdays, the presence of two additional outliers (2 outliers at 1am, or 12 outliers at 1pm) is more significant, and weighted more heavily, than the hourly total itself. Details tab: The anomaly score is only relevant for a high volume event.
Action: Right-click the score to open a menu with additional actions you can perform. In the Details tab the score can be 0, indicating that the individual events are not suspicious on their own, but the accumulated events in that hour are suspicious.
High volume Outlier
True or False. High volume of activities of some type, for example on an object, or by a DB user.
New Outlier
True or False. High volume of activities on new objects, for example an admin uncharacteristically creates a high number of new tables.
Diverse Outlier
Summary view only. True or False. High volume of different types of activities, for example a DB user performs many more activities than usual, or performs them at an unusual time. A sample of the diverse events does appear in the Details tab; they can be identified by the database user. Although Diverse is not a column in the Details tab, these events may have other reasons assigned to them; otherwise they appear without a reason.
Action: See the Activity table for more details.
Ongoing Outlier
Summary view only. True or False. An event in the last few hours that was not high enough to create an outlier, but does raise suspicions.
Action: There are no specific events to view. See the Activity table, filter by the database in the facet list, and modify the time interval to the time of the suspicious behavior.
Number of Instances
Details view only. Number of times this particular event has been seen in the hour.
Source Program
Details view only. Source program in which the event occurred.
DB User
Details view only. DB user that executed the outlier event.
Privileged User
Summary view only. True or False. Whether the user is privileged or not.
Verb
Details view only. Verb with which the user executed the event.
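The weighting idea in the Anomaly Score example (two extra outliers in a quiet hour outweigh the same two in a busy one) can be sketched as follows. The formula below is an invented illustration, not Guardium's actual scoring algorithm.

```python
# Illustrative sketch of the weighting idea in the Anomaly Score example
# above (assumed formula, not Guardium's real algorithm): an hour's excess
# over the predicted volume matters more than the raw hourly total.
def excess_weight(observed, predicted_max):
    """Score the excess over the predicted volume, scaled by expectation."""
    excess = max(0, observed - predicted_max)
    return excess / (predicted_max + 1)  # +1 so hours predicting 0 still score

# Two extra outliers on top of a predicted 0 at 1am outweigh the same two
# extras on top of a predicted 10 at 1pm, even though 12 > 2 in raw totals.
excess_weight(2, 0)    # → 2.0
excess_weight(12, 10)  # → about 0.18
```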
rare: a seldom seen condition
high volume: an unusually high incidence of a condition
new: a condition seen for the first time
error: an unusually high incidence of error conditions
Outlier reasons are assigned in combinations when needed. For example, an outlier may be flagged as both rare and high volume if a seldom-seen condition suddenly
occurs many times.
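The way reasons combine can be sketched as follows. The thresholds below are invented for illustration only; they are not Guardium's real values or logic.

```python
# Sketch of how outlier reasons might be assigned in combination, per the
# reason definitions above (rare, high volume, new, error). The thresholds
# are invented for illustration, not Guardium's actual values.
def outlier_reasons(count, baseline_count, seen_before, is_error):
    reasons = []
    if baseline_count <= 1 and seen_before:
        reasons.append("rare")          # a seldom seen condition
    if count > 10 * max(baseline_count, 1):
        reasons.append("high volume")   # unusually high incidence
    if not seen_before:
        reasons.append("new")           # seen for the first time
    if is_error and count > 10 * max(baseline_count, 1):
        reasons.append("error")         # high incidence of errors
    return reasons

# A seldom-seen condition that suddenly occurs many times gets both flags:
outlier_reasons(count=50, baseline_count=1, seen_before=True, is_error=False)
# → ["rare", "high volume"]
```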
Note:
If a negative result ("-") appears in the Records Affected result report, re-enable outliers detection to clear the negative result.
Quick Search must be enabled (grdapi enable_quick_search) to see outlier detection data in the investigation dashboard.
View outliers in the Investigation Dashboard Activity Chart and Results Table (investigation dashboard must be enabled), or review the Analytic Outlier List report.
Access the summary chart by selecting Data from the User Interface drop-down and pressing Enter, or by entering quick search in the search field and pressing Enter.
The Activity chart includes red (high) and yellow (medium) indicators that reflect the severity or total outliers score for the selected time interval. Red indicators reflect
highly anomalous events requiring immediate attention. Yellow indicators represent less extreme anomalies that warrant attention as part of other or related
investigations.
Hover over an outlier icon to view detailed information about outliers detected during that time period. To filter the Results Table to activities or outliers that occurred
during the same time period, click Show details.
Summary has one row per source per hour in which an outlier was found, with an anomaly score and one or more reasons. Note that not every outlier presented in
the Summary Tab has further details in the Details tab.
Details is a sample of events that occurred, with one row per event with a reason and other details. For example, for high volume, the sampling presents the events
with the highest score. You can configure the number of samples (rows) that appear in the Details Tab, per each outlier in the Summary tab.
This table describes the columns in both the Summary and Details views:
Anomaly Score
Summary tab: A calculated aggregate value based on the volume of outliers, the severity of individual events, the predicted volume of outliers for a given time of day, and other factors. For example, on a system that typically identifies 0 outliers at 1am and 5-10 outliers at 1pm during weekdays, the presence of two additional outliers (2 outliers at 1am, or 12 outliers at 1pm) is more significant, and weighted more heavily, than the hourly total itself. Details tab: The anomaly score is only relevant for a high volume event.
Action: Right-click the score to open a menu with additional actions you can perform. In the Details tab the score can be 0, indicating that the individual events are not suspicious on their own, but the accumulated events in that hour are suspicious.
High volume Outlier
True or False. High volume of activities of some type, for example on an object, or by a DB user.
New Outlier
True or False. High volume of activities on new objects, for example an admin uncharacteristically creates a high number of new tables.
Ongoing Outlier
Summary view only. True or False. An event in the last few hours that was not high enough to create an outlier, but does raise suspicions.
Action: There are no specific events to view. See the Activity table, and filter by the database in the facet list, at the time of the suspicious behavior.
Number of Instances
Details view only. Number of times this particular event has been seen in the hour.
File Full Name
Name of the file on which the user executed the event.
The outlier mining status page that is viewed in the CM presents details of all managed aggregators and their collectors. All collectors in the CM appear in individual rows
under their aggregators. When viewed in an aggregator this window presents details of the specific aggregator’s collectors. When viewed from a collector, only the one
collector is presented.
The page is located in the Guardium menu Manage > Maintenance > Outlier Mining Status
The following tables describe the page and the recommended user actions.
Opens the list of units that send data to this aggregator. Action: Click to view the list of units.
Outlier Mining Enabled/Disabled
Aggregator: Indicates whether outlier mining on the aggregator is enabled. If disabled, then the rest of the row after this column is empty. Individual row of one collector or standalone unit: Green indicates that outlier mining is enabled locally.
Action: NA
Send data for outlier mining
Collectors only. The collector sends outlier mining data to the aggregator. Data for outlier mining is sent from the collector to the aggregator if the aggregator is enabled for outlier mining and the collector is not running outlier mining locally.
Action: NA
Anomaly Last Found
The local date and time on the CM of the last outlier mining run that found one or more anomalies (outliers). Shows data only for units running version 10.1.2 and up.
Action: NA
Last Analysis
The local date and time on the CM of the last outlier mining run (process end date/time). Shows data only for units running version 10.1.2 and up.
Action: NA
Outlier Mining Status
The status of the last outlier mining run. Green: the process ended successfully. Yellow: the process ended with warnings. Red: the process ended with errors. Shows data only for units running version 10.1.2 and up.
Action: If an error or warning occurred only once, let the process run again (next hour) and check the result. If an error repeats, contact support.
Details
The status can be red (error), yellow (warning), or green.
Action: For processes that ended with warnings (yellow), click to open a pop-up with the warning. For processes that ended with errors (red), click to open a pop-up with the error.
Learning Since
Date and time at which the outlier mining process was enabled. The process learns the resource's behavior since this time.
Action: NA
Quick Search on/off
Indicates whether Quick Search and Solr are enabled on the managed unit. When Quick Search is disabled, this machine's data is not included in the Investigation Dashboard.
Action: See Enabling and disabling the Investigation Dashboard.
Last Info. Update
Last date and time the information in this row was updated. Data is usually updated in intervals of about 5 minutes.
Action: NA
Table 2. Outliers Mining Status Page Buttons
Plus Sign
This button appears only when the unit detailed in the row is an aggregator.
Action: Click to open the list of units that send data to this aggregator.
Parent topic: Outliers Detection
Procedure
1. This task requires that you know the internal group ID to use with the grdapi command. To get the group ID, you can use the following command: grdapi
list_group_by_desc desc=[group name]. For example, if you have a group named BadGuys, you can enter the following command to get its internal group ID:
2. Once you know the desired ID, add it as privileged user group for a boosted score as follows (note that you must also include the default group 1 if you want to
boost scores for that as well). To add a group with the ID 1234: grdapi set_outliers_detection_parameter parameter_name="privUsersGroupIds"
parameter_value=1,1234
3. To add sensitive objects with the IDs 333 and 156: grdapi set_outliers_detection_parameter parameter_name="sensitiveObjectGroupIds" parameter_value=5,333,156
Results
The specified groups or sensitive objects are added to the outlier detection and are given additional weight by the algorithm.
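Assembling these calls can be sketched as follows. build_outliers_param is a hypothetical helper, not part of GuardAPI; it only concatenates the documented command string, keeping the default group ID 1 first when scores for that group should also be boosted.

```python
# Hypothetical helper (not a GuardAPI function): assemble the documented
# grdapi set_outliers_detection_parameter call, keeping the default group
# ID (for example, 1) in front of the additional IDs when requested.
def build_outliers_param(parameter_name, ids, keep_default=None):
    all_ids = ([keep_default] if keep_default is not None else []) + list(ids)
    return ("grdapi set_outliers_detection_parameter "
            f'parameter_name="{parameter_name}" '
            f"parameter_value={','.join(str(i) for i in all_ids)}")

build_outliers_param("privUsersGroupIds", [1234], keep_default=1)
# → grdapi set_outliers_detection_parameter parameter_name="privUsersGroupIds" parameter_value=1,1234
```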
For example, to ignore all activity from server 10.70.144.159, database ON1PARTR, and any database user beginning with GUARD, your dialog looks like:
The Group Builder has options for bulk uploading including the ability to populate from a query on a custom table.
grdapi create_member_to_group_by_desc desc="Analytic Exclude Source Program" member="OMNISERVER%"
The data protection dashboard contains several charts and graphs in addition to compliance and risk statistics designed for continuous display on a large monitor. To open
the dashboard, navigate to Investigate > Guardium Data Protection Dashboard.
CAUTION:
The session does not expire and you are not automatically logged out while viewing the data protection dashboard. Use care when leaving the dashboard open for long
periods of time.
Information:
An Anomalous activities chart displays a summary of outliers in relation to overall activity. On this chart, an outliers summary dot represents an unexpected volume of
outliers.
Information: The y-axis of these charts uses a logarithmic scale, which may distort the chart proportions; the displayed values and counts themselves are not log-transformed.
Monitored datasources shows the number of datasources for which the system is logging activities. This statistic is calculated by looking at the available access domain
data.
Compliance to-do list tasks shows the following summary of audit processes: the number of processes that were closed today, the number of processes that have been
open for less than three days, and the number of processes that have been open for more than three days.
Information:
Reports
A report defines how the data collected by a query is presented.
The default report is a tabular report that reflects the structure of the query, with each attribute displayed in a separate column. All presentation components of a tabular
report (the column headings, for example) can be customized. All graphical reports are defined using the Report Builder. In addition to the start and from date (query to
and query from) parameters, values can now be displayed between the beginning of the page and start of the table in all reports.
Before using the Report Builder, create a query using the Query Builder. See Using the Query Builder.
The fastest way to create and view a report is by using the steps to Create a Report, then select the report from My Dashboard.
Move back and forth between menu screens using the Back and Next buttons. The back arrow in the web browser does not work for navigation between Guardium®
screens.
Refresh
Add a report
Add to favorites
Delete
Clone
To access a report definition, select the Reports lifecycle icon and then click Report builder.
Search for a report by choosing Domain, Query or Report title. The results display in the Report Search Results panel.
To locate a specific report, select that report from the Report Title list. The selected report displays immediately in the Report Search Results panel.
For the remaining types of search, click the Search button after making entries in one or more fields, or just click the Search button to list all reports available for
your Guardium account.
To list all reports that use a specific query, select that query from the Query list.
To list all reports for a specific chart type, select it from the Chart Type list.
If the search locates any reports, they display in the Report Search Results panel. Click any of the following buttons:
Create a Report
1. To access a report definition, select the Reports lifecycle icon and then click Report builder.
2. Click New to open the Create Report panel.
3. From the Query list, select a query value to be used by the report (for example, Guardium Logins)
4. Enter a unique name for the report in the Report Title field.
1. Follow the previous steps in Customize the Report Presentation for Report Column Descriptions, Report Parameter Descriptions, and Report Attributes.
2. In the Report Chart Type panel, select the Chart type and click Next. The choices are Area, Bar, Bar Area, Bar Line, Column, Date Area, Date Column, Date Line,
Distributed Label Line, Individual Bar, Individual Column, Line, Pictogram, Pie, Polar, Speedo, and Stack Bar. Pie, Polar, Speedo, and Stack Bar are recommended.
Choose one and click Next.
3. If the Report Chart Type panel is not displayed, skip this step (all necessary data has been entered). Select the type of chart for the report from the Chart Type list.
4. Click Next to open the Report Presentation Parameters panel.
Review the parameters, which vary for each type of chart.
Optionally override any of the default settings for the chart type selected.
5. Click Next to continue to the Submit Report panel, and continue with the Submit Report Definition procedure.
6. To view your graphical report, go to My Dashboards, and add your graphical report.
Note:
A refresh icon appears in all graphical reports next to the help icon.
Modify a Report
1. Find the report to be modified. Go to the Report Builder finder menu.
2. Click Modify to open the Report Columns panel.
3. Continue with Customize the Report Presentation.
Clone a Report
1. Find the report to be cloned. Go to the Report Builder finder menu.
Remove a Report
Be aware that you cannot remove predefined reports, and you cannot remove reports that are used in Audit Processes.
Limits
The limit for the buttons when viewing a report (generate PDF, generate CSV, and printable) is 30,000 rows. This is non-customizable.
The limit for the Populate From Query in Group and Alias Builder when run via Run Once Now is 5,000 rows. This is non-customizable.
The limit for the Populate From Query in Group and Alias Builder when run via Scheduling is 20,000 rows. This limit is customizable via the CLI command show/store
populate_from_query_maxrecs.
API Assignment
By default, the Guardium application comes with setup data that links many of the API functions to reports; providing users, through the GUI, with prepared calls to APIs
from reporting data. Use API Assignment to link additional API functions to predefined Guardium reports or custom reports.
For more information on using linked API functions, see the documentation on GuardAPI Input Generation.
If there are no fields in the report that are linked to API parameters, it might be irrelevant to link an API function to a report. The mapping of API parameters to
report fields can be accomplished through both the GUI and the Guardium CLI. For additional information on mapping API parameters to report fields, see Mapping
GuardAPI Parameters to Domain Entities and Attributes in the GuardAPI Input Generation section.
4. Click the greater-than sign '>' to add the selected API function to the current list of functions that are assigned to this report.
5. Click Apply to save the changes.
Report parameters
You can use parameters to control the contents and presentation of a report.
Creating dashboards
You can create one or more dashboards, add reports to them, and configure their appearance.
Viewing a report
There are several ways to view a report, including your dashboard and UI search.
Creating a report
If the predefined reports do not meet your needs, you can create your own.
Creating reports for z/OS
Learn how to create Guardium reports for z/OS data sources by customizing built-in reports and example queries.
Data Mart
A Data Mart is a subset of a Data Warehouse. A Data Warehouse aggregates and organizes the data in a generic fashion that can be used later for analysis and
reports. A Data Mart begins with user-defined data analysis and emphasizes meeting the specific demands of the user in terms of content, presentation, and ease-
of-use.
Audit and Report
Guardium organizes the data it collects into a set of domains. Each domain contains a different type of information relating to a specific area of concern: data
access, exceptions, policy violations, and so forth.
Queries
Use one of the many predefined queries that come with Guardium to get information about your data. Use the Query Builder to work with queries.
Domains, Entities, and Attributes
A domain provides a view of the data that Guardium stores.
How to take advantage of predefined reports
Instead of creating custom reports from scratch, take advantage of the predefined content in the Guardium application.
How to ask questions of the data
Use the Query Builder to define and modify questions about the collected data.
How to report on dormant tables and columns
Guardium offers functionality that can help data architects and DBAs discover which tables and which fields are not being used.
How to Generate API Call from Reports
Generate Guard API calls from a report either from a single row within a report or based on the whole report
How to use Constants within API Calls
Create a new entity attribute to be used during an API function call.
How to use API Calls from Custom Reports
Link API functions to reports and map report fields to the API functional parameters.
Report parameters
You can use parameters to control the contents and presentation of a report.
A runtime parameter provides a value to be used in a query condition. There is a default set of runtime parameters for all queries, and any number of runtime
parameters can be defined in the query that is used by the report.
A presentation parameter describes a physical characteristic of the report; for example, whether a graphical report includes a legend or labels, or what colors to use
for an element. All presentation parameters are provided with initial settings when you define a report.
1. Click Configure Report Parameters from the choices within the report. See the icon .
2. In the panel, enter runtime and presentation parameters in the boxes that are provided, as necessary for the task to be performed.
3. Click Save.
4. To view the report, go to My Dashboards.
Runtime Parameters
Enter Period From: None for a new report; varies for default reports. The starting date for the report is always required.
Enter Period To: None for a new report; varies for default reports, though the default is almost always NOW. This is the ending date for the report, and is always required.
Remote Data Source: None. In a Central Manager environment, you can run a report on a managed unit by selecting that Guardium® system from the Remote Data Source list.
Show Aliases: None (meaning the system-wide default is used). Select On to always display aliases, or Off to never display aliases. Select the default button to revert to the system-wide default (controlled by the administrator) after either the On or Off button has been used.
Use the GuardAPI command list_parameter_names_by_report_name. This function takes a report name as an input parameter and returns a list of runtime parameter
names for that report.
Creating dashboards
You can create one or more dashboards, add reports to them, and configure their appearance.
Procedure
1. Click My Dashboards > Create New Dashboard to open a new dashboard.
2. Enter a descriptive name in the Name field. This name is used in the list of dashboards in the menu.
3. Click Add Report to display a list of available reports. If you have designated certain reports as favorites, you can check the My Favorites box to see only a list of
those reports. If you want to see only graphical reports, check the Chart Only box.
4. The Add a Report dialog shows a list of all reports that meet your criteria. You can browse the list of reports, or type a string in the Filter field. The list of reports is
updated as you type.
5. Click the title of a report to add it to your dashboard. Continue adding as many reports as you want. When you are finished adding reports, click Close.
What to do next
Review the appearance of your dashboard. Is it easy to use, and to find the information that you want? If not, you can configure it further.
Parent topic: Reports
Think about how you use your reports. What arrangement makes it easy to achieve your goals? Experiment with these changes.
Procedure
1. Rearrange the reports. To move a report, place your cursor on the report’s title bar, and drag it to a new location.
2. Choose a new number of columns by clicking 1, 2, or 3 in the Number of columns area. By default, your reports are shown in two columns. If you need more space
for each report, click 1 to see how your reports look when they are the full width of the dashboard. If you prefer to see more reports at one time, try three columns.
3. Resize your reports. Drag the resize icon to make a report longer or shorter, narrower or wider. If you adjust the width of a report, all the reports in that column use
the new width. If you change the number of columns, all columns return to their default widths.
Viewing a report
There are several ways to view a report, including your dashboard and UI search.
If you have saved the report to a dashboard, open the dashboard to view the report.
You can add the report to a dashboard. Open the dashboard and click Add Report, then choose the report from the list.
Some reports are listed in categories in the Reports lifecycle.
Some reports are listed under the lifecycle to which they are most relevant.
You can use the user interface (UI) search function to find the report. On the banner, choose User Interface from the drop-down list next to the Search box. Enter
the name of the report into the Search box. Results begin to appear after you type a few characters. Choose the report from the list of results.
The following choices (with icons) permit editing and configuring of the report:
Add to favorites
Refresh
You can hide columns from view. Click the columns icon and clear the check boxes for the columns that you want to hide.
You can sort report data by the contents of any column. Click the title of the column on which you want to sort. To reverse the order, click the title again. Sorting is always
performed on the actual data values, ignoring any aliases that are defined.
Graphical reports can be customized by clicking the Customize Chart icon. The choices include converting the data to a line chart, changing the X-axis and Y-axis
orientation, converting the report to a pie chart or a stacked column chart.
When viewing reports that display Oracle information, a question mark (?) character occasionally indicates that the login information was not available. Similarly, in reports that display Oracle information, the number -1 signifies that an unknown number of records were affected. All Oracle sessions are recorded, even when logins are missed.
The OS user does not appear in reports if a Linux system, or a Windows system using remote connections, did not send the OS user with the login packet. For Linux local connections, the UID chain can be used to identify the user. See Choosing your S-TAP setup to learn which systems support UID chains.
Refreshing reports
Some reports are configured to refresh their data automatically. On other reports, you can refresh the data manually through the UI.
Exporting a report
You can export a report to a PDF file or a file of comma-separated values.
Viewing Drill-Down Reports
Many reports provide access to drill-down reports that provide more granular data.
Refreshing reports
Some reports are configured to refresh their data automatically. On other reports, you can refresh the data manually through the UI.
When you view a report that is configured to refresh automatically, the color of the Circular Arrows Refresh icon for this report is green, indicating that the report is
refreshing itself automatically.
At a certain point, the report stops refreshing if no further changes are made to the report and the color of the refresh icon turns from green to red. The point in time where
the color changes is equal to half of the GUI session timeout (which can be found by running the CLI command, show session timeout).
For example, if the session timeout is the default 900 seconds, the Circular Arrows Refresh icon on the Request Rate report is green for 450 seconds, then turns red.
Customize Reports
When the user edits a report or makes a modification to the report through Report Customization, the user must manually click on Refresh. There is no automatic refresh.
UI Customization - In the "New Life Cycle" and "New Group" dialogs, groups are limited to a maximum of 5 levels deep, so even with longer group names, all levels of group names and node item text are visible on the navigation pane.
UI Customization - When a user enters "<" or ">" in the text box of the "New Life Cycle" or "New Group" dialog, a popup message indicates that "The name cannot contain < or > special characters", and the OK button becomes disabled.
UI Customization - In the "New Life Cycle" and "New Group" dialogs, a user can enter a maximum of 50 characters in the text box.
Exporting a report
You can export a report to a PDF file or a file of comma-separated values.
You can export the contents of a report to a Portable Document Format (PDF) file, and save the file or view it. In the report toolbar, click Export > Download as PDF to
create a PDF copy. Follow the prompt to save or view the file.
When you generate a large PDF file, the process can cause the UI to time out. If you plan to generate large PDF files, consider doing so as part of an audit process, or
increasing the UI timeout value to avoid this problem.
You can also export the contents of a report to a comma-separated value (csv) file. You can export either all the records (the entire report) in the report, or only the display
records (the data currently displayed).
In the report toolbar, click Export > Download all records or Export > Download display records. You can save the results or select an application in which to view them.
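As an illustration of downstream use, here is a minimal sketch (the helper name and workflow are hypothetical, not part of Guardium) that counts the data rows in a downloaded CSV, skipping the header line; it assumes no embedded newlines inside quoted fields:

```shell
# Hypothetical helper: count data rows in an exported report CSV,
# skipping the header line that the export includes.
count_report_rows() {
  awk 'NR > 1 { n++ } END { print n + 0 }' "$1"
}
```

Usage: count_report_rows report.csv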
Note: If you edit a report and remove a column (for example, editing a report with seven columns and removing one column, leaving six), when the report is exported as a PDF file, the report still shows the original seven columns.
If any drill-down actions are available on a tabular report, right-click a row of the grid and a context menu appears with the available drill-down actions.
All of the runtime parameters for the drill-down report must be available from the report that is being viewed.
If security roles have been assigned, you must have access to the drill-down report.
Creating a report
If the predefined reports do not meet your needs, you can create your own.
Procedure
1. Click Reports > Report Configuration Tools > Report Builder to open the Report Builder finder or filter menu. If you click Search at this point without choosing a domain or query, a menu appears with all queries listed. Select a query and use the icons (Add New Report, Modify, Clone, or Delete) to work with the queries.
What to do next
If you want to include this report on a dashboard, open the dashboard, click Add Reports, and select this report from the list.
Parent topic: Reports
While the process of creating reports for z/OS data sources is the same as for other databases, there is not always a direct mapping between mainframe concepts and
Guardium's reporting entities and attributes. To ease communication between auditors and mainframe personnel, this section outlines the mapping of mainframe event
data to Guardium entities and attributes. There are some built-in reports that can be customized, and this information describes additional queries that are useful for
typical auditing scenarios.
Data Mart
A Data Mart is a subset of a Data Warehouse. A Data Warehouse aggregates and organizes the data in a generic fashion that can be used later for analysis and reports. A
Data Mart begins with user-defined data analysis and emphasizes meeting the specific demands of the user in terms of content, presentation, and ease-of-use.
Aggregate summarized and analyzed data from all units to enable high-level/ corporate view in a reasonable response time.
Provide interactive analysis capabilities for finding patterns, trends, and outliers.
A Data Mart is practical and efficient for all the Guardium predefined-reports. It prepares the data in advance to avoid overload, full scans, and poor performance.
The Data Mart Configuration icon is available from any Predefined Report.
Highlights of benefits:
Provide Guardium Analytic capability that supports full lifecycle of data analysis.
The analytic process starts from the Query Builder and Pivot Table Builder, where users define their data analysis needs and then click "Set As Data Mart".
The Data Mart extraction program runs in a batch according to the specified schedule. It summarizes the data to hours, days, weeks, or months according to the granularity requested, and then saves the results in a new table in the Guardium Analytic database.
The data is then accessible to users via the standard Reports and Audit Process utilities, like any other traditional Domain/Entity. The Data Mart extraction data is available under the DM domain, and the Entity name is set according to the new table name specified for the data mart data. Using the standard Query Builder and Report Builder, users can clone the default query, edit the query and report, generate a portlet, and add it to a pane.
The summarization of data shrinks the data volume significantly. It eliminates joins of many tables by storing the data analysis in an un-normalized, pre-calculated table.
The corporate view is supported by using the standard Aggregation utility for the new Guardium Analytic tables. If there is a huge amount of detailed row data at the higher levels of the Aggregation Hierarchy, the Selective Aggregation feature, which enables aggregation of specific modules, can be configured to aggregate analytic data only.
The Data Mart builder is accessible via Query builder, Report Results, and Pivot-Table view.
Select the Set As Data Mart icon. The button is available only after saving.
Access to the screen is enabled for users with the Data Mart Building permission (User Role Permission). The Set As Data Mart button is displayed only for users with the appropriate permission.
Data Mart persistency - changes to the original Query, Report, or Pivot Table do not affect the Data Mart; a snapshot of the originating analysis definition is saved together with the Data Mart upon creation.
If the Data Mart is based on Pivot Table, then the extraction process does not calculate the Total line (sum of columns) and Percent Of Column is not supported.
In addition to the Data Mart definition, the following are created by the Data Mart Definition process:
Default Query
New Data Mart table in the new "DATAMART" database to store the extracted data
The Data Mart definition process creates new Domain, Entity, default Query and Report. The default Query and Report is accessible via the Report Building menu.
Clicking Data Mart opens the Query Finder GUI; the Query, Report, and Entity fields filter only Data Mart domains (domain names start with DatamartDefinition.DOMAIN_PREFIX).
Report Builder GUI: The default Data Marts' reports and all other reports that are related to Data Marts domains are available in the Report Builder GUI.
2. Select New to create a new Data Mart or select from the list of previously created Data Marts.
3. Complete the fields asking for Data Mart name and Table name (Default is DM). Specify a time granularity and select an initial start time from the calendar
icon. Description is optional.
4. Use the Scheduler to schedule when to run this feature (Run Once Now).
5. Use the Roles section to restrict Data Mart only to users with the appropriate permission.
Note: Changes to the originated query/report do not affect the existing Data Mart.
Note: When a data mart extraction runs (scheduled or Run once now) for the first time, it extracts data from the Initial start date to the current time, based on the Time granularity. It saves the next period from value in the DM_EXTRACTION_STATE table. On the next run, it extracts data starting from next period from. If a data mart extraction is requested for a time earlier than next period from, the data mart extraction shows as empty, because the extraction has already processed that time period. To extract data earlier than next period from, restore the old data and then run the data mart again.
In case of multiple Central Managers, the Data Mart definition can be cloned by using the Export/Import capability.
Add Data Mart Extraction schedule to the Central Manager Distribution screen.
Datamart extraction
Data extracted:
1. Export of: Exception Log - details the exceptions and errors captured by Guardium. The log includes the exception/error description, user name, source address, DB protocol, and more.
2. Export of: Session Log - includes details about datasources' sessions (login to logout). The log includes session start and end timestamps, OS and DB user of the session, source program, and more.
3. Export of: Session Log Ended - sessions may extend for a long period. The extraction runs hourly. This log sends the sessions that ended after the hour started.
4. Export of: Access Log - Includes details of the connection information and the activity summary per hour. The log includes the OS and DB user, successful and failed
SQLs, client and server IP and more.
5. Export of: Full SQL - this log includes the executed SQL details. The log includes full SQL, records affected, session ID and more.
6. Export of: Outliers List - this log includes the outliers. The log includes server IP, DB user, Outlier type, DB and more.
7. Export of: Outliers Summary - this log includes an hour summary of outliers. The log includes server IP, DB user, DB and more.
8. Export of: Group Members - Includes a log of all groups members. The log includes Group type, Group description, Group member and Tuple Flag.
9. Export of: Export Extraction Log - includes a log of data relevant to all export or copy files having a name starting with "Export:".
10. Export of: Policy Violations – A policy violation is logged each time that a policy rule is triggered. This log includes the details about the logged violations, such as
DB User, Source Program, Access Rule Description, Full SQL String and more.
11. Export of: Buff Usage Monitor - Provides an extensive set of sniffer buffer usage statistics
12. Export of: VA Results - Provides VA Results
13. Export of: Policy Violations - Detailed - the same as Policy Violations, but with Object/Verb tuples. It is recommended that only one of the two be used.
14. Export of: Access Log - Detailed - the same as Access Log, but also includes the following fields from the Application Event entity: Event User Name, Event Type, Event Value Str, Event Value Num, Event Date. It is recommended that either Access Log or Access Log - Detailed be used, but not both.
15. Export of: Discovered Instances - Provides the result of S-TAP Discovery application, which discovers database instances
16. Export of: Databases Discovered –
17. Export of: Classifier Results
18. Export of: Datasources
19. Export of: S-TAP status
20. Export of: Installed Patches
21. Export of: System Info
22. Export of: User – Role
23. Export:Classification Process Log
24. Export:Outliers List - enhanced
25. Export:Outliers Summary by hour - enhanced
(Table fragment: datamart-to-report mapping; for example, the Export:Outliers Summary by hour - enhanced datamart corresponds to the Analytic Outliers Summary by Date - enhanced report.)
Issue Summary
The DataMart mechanism exports Guardium sniff data periodically based on the Query defined.
The extracted file prefix is the Global ID and the short host name of the source machine.
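Given that naming convention, the prefix components can be recovered with standard shell parameter expansion. This is an illustrative sketch, not a Guardium utility; the full file-name layout assumed here is <globalId>_<host>_EXP_<NAME>_<timestamp>.gz, following the example file names shown later in this section.

```shell
# Illustrative helpers (not part of Guardium): split an extracted file name
# of the assumed form <globalId>_<host>_EXP_<NAME>_<timestamp>.gz.
extract_global_id() { echo "${1%%_*}"; }              # text before the first "_"
extract_host() { f="${1#*_}"; echo "${f%%_*}"; }      # text between the first and second "_"
```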
How to Use
All the examples shown below are for the "Export:Exception Log" DataMart; for other extractions, change to one of the following:
"Export:Access Log"
"Export:Session Log"
"Export:Exception Log"
"Export:Full SQL"
"Export:Outliers List"
"Export:Group Members"
"Export:PolicyViolations"
"Export:VA Results"
"Export:Classifier Results"
"Export:Databases Discovered"
"Export:Discovered Instances"
"Export:Datasources"
"Export:STAP Status"
"Export:Installed Patches"
"Export:System Info"
The export extractions are predefined in the system (via the Datamarts mechanism) and disabled by default. To enable the export extractions (all or specific ones), you need to schedule the DataMarts via the grdapi as shown below. You can also use the GUI for this.
grdapi schedule_job jobType=dataMartExtraction cronString="0 1 0/1 ? * 1,2,3,4,5,6,7" objectName="Export:Exception Log" startTime="YYYY-MM-DD HH:MM:SS"
Note that startTime sets a future start if needed, and can be omitted if you want to start the DataMart immediately.
To delete specific export extractions, you can run the following:
DataMartExtractionJob_43 Export:Datasources
You may enable or disable the extraction by using the following command:
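For example (the function names datamart_set_active and datamart_set_inactive appear in the GuardAPI list at the end of this section, but the parameter name below is an assumption for illustration):

```
grdapi datamart_set_active datamart_name="Export:Exception Log"
```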
You can determine whether to include the header line (column names) in the output CSV file via the following grdapi:
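For example (the function name datamart_include_file_header appears in the GuardAPI list at the end of this section, but the parameter names and values below are assumptions for illustration):

```
grdapi datamart_include_file_header datamartName="Export:Exception Log" includeFileHeader="true"
```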
To set the target host for the export extraction, you need to set the machine host, path, and credentials via the following grdapi:
The withCOMPLETEfile parameter is optional. The default value is true. If set to true, a COMPLETE file is sent after a data file is successfully transferred. See the "COMPLETE file" section for details.
During the execution of this command, a dummy file is sent to the target machine to validate the connection details. You can also use the datamart_validate_copy_file_info grdapi for this purpose.
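A sketch of that target-host setup, using the parameter names listed under grdapi datamart_update_copy_file_info at the end of this section (the host, user, and path values are placeholders):

```
grdapi datamart_update_copy_file_info destinationHost="host.com" destinationUser="admin" destinationPassword="<password>" destinationPath="/local/incoming/" withCOMPLETEfile="true"
```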
You can track the extraction log via the predefined "Datamart Extraction Log" report. The report is available via the Report Builder screen; you can add it to a pane.
Enter the customize option in the "Datamart Extraction Log" report and define the following:
Click Update, and the DataMart Extraction Log report becomes active, showing the latest extractions.
Outlier DataMarts should be scheduled around 10 minutes past the hour, because before that time the data is not ready yet - outlier processing starts at the top of each hour.
It makes sense to schedule Access Log, Exception Log, Full SQL, and Session Log/Ended with some time gaps. To get consistent data on each run, Session Log/Ended must be scheduled last.
Our recommendation
Purge /var/exportdir
If a file transfer fails for any reason (for example, the target machine is down), the transfer is retried on the next run. The backlog is kept in the /var/exportdir directory. The Purge Process cleans up backlog older than 1 day.
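As an illustration only (the real Purge Process is internal to Guardium; this sketch is not its actual implementation), an equivalent cleanup of files older than one day could look like this:

```shell
# Illustrative sketch of the backlog cleanup: delete .gz files older than
# one day from the export directory (passed as an argument, defaulting to
# /var/exportdir).
purge_export_backlog() {
  find "${1:-/var/exportdir}" -name '*.gz' -mtime +1 -delete
}
```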
COMPLETE file
The empty COMPLETE file is sent to notify an external system that a file is ready.
- For each file, in addition to the file a COMPLETE file is also sent. The COMPLETE file name is [file name]_COMPLETE.gz
1762144738_gibm32_EXP_SESSION_LOG_20151028230000_COMPLETE.gz
- The process is synchronous: first the file (for example, the SESSION LOG file) is generated, then it is copied, and only when it has finished copying is the COMPLETE file generated and copied.
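A hedged sketch of how an external consumer might honor that marker (the function and workflow are hypothetical, not part of Guardium; the file-name convention follows the [file name]_COMPLETE.gz pattern above):

```shell
# Hypothetical consumer: process a data mart file only once its COMPLETE
# marker has arrived.
process_when_complete() {
  datafile="$1"
  marker="${datafile%.gz}_COMPLETE.gz"
  if [ -f "$marker" ]; then
    echo "processing $datafile"
    # gunzip -c "$datafile" | loader   # placeholder for real processing
    return 0
  fi
  return 1   # marker not yet present; try again later
}
```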
To change a datamart's initial start time, use the update_datamart grdapi command.
For example, the bundle included "Export:Full SQL", "Export:Exception Log", "Export:Session Log" and "Export:Session Log Ended" as main datamart.
Create a bundle:
Delete a bundle:
Example:
ID=0
=========================================
=========================================
Datamarts:
Export:Full SQL
Export:Exception Log
Export:Session Log
Example,
=========================================
=========================================
Description:
Active: true
---------------------
File Header: "UTC Offset","Name","Period Start","Period End","Run Id","Start Time","End Time","Status","File Status","Records Extracted","Details","Timestamp"
-----------------------------------------
-----------------------------------------
Directory: /local/incoming/
Bundle Name:
-----------------------------------------
-----------------------------------------
State:1
---------------------
---------------------
Extraction Log
---------------------
Extract Status: OK
Records Extracted: 26
Details: SCP to: host.com, User: admin, Path: /local/incoming/, File: DMv2_gibm32_EXP_DM_EXTRACTION_LOG_20170118180000.gz
Bundle Name:
Comments
================
The Full_SQL DataMart works only if a log full details or log masked details rule action is defined and installed.
If the DataMart scheduler has been stopped for some time and you do not want the data to be extracted retroactively, then before you reschedule extractions to run again, set the correct "Initial Start" in the Data Mart Configuration screen.
User-defined DataMarts can also be used to transfer data to a destination host. The DataMart must be of type File, the Data Mart Name must start with "Export:", and the File Path must start with "EXP_".
Dependencies
================
================
http://www-01.ibm.com/support/knowledgecenter/SSMPHH_8.2.0/com.ibm.guardium.using.doc/topics/how_to_install_patches.html?lang=en
grdapi datamart_copy_file_bundle
function parameters :
datamart_name - String
main_datamart_name - String
grdapi datamart_include_file_header
function parameters :
grdapi datamart_set_active
function parameters :
grdapi datamart_set_inactive
function parameters :
grdapi datamart_update_copy_file_info
function parameters :
destinationHost - String
destinationPassword - String
destinationPath - String
destinationUser - String
withCOMPLETEfile - Boolean
grdapi datamart_validate_copy_file_info
function parameters :
grdapi update_datamart
function parameters :
Comment - String
grdapi get_datamart_info
function parameters :
isExtended - Boolean
grdapi add_dm_to_profile
function parameters:
category - String
cron_string - String
api_target_host - String
grdapi remove_dm_from_profile
function parameters:
api_target_host - String
All domains and their contents are described in the Domains, Entities, and Attributes appendix.
There is a separate query builder for each domain, and access to each query builder is controlled by security roles. Regardless of the domain, the same general-purpose query-builder tool is used to create all queries. For detailed instructions on how to build queries, see Queries.
In addition to the standard set of domains, users can define custom domains to contain information that can be uploaded to the Guardium appliance. For example, your
company might have a table relating generic database user names (hr23455 or qa4872, for example) to real persons (Paula Smith, John Doe). Once that table has been
uploaded, the real names can be displayed on Guardium reports, from the custom domain. For more detailed information on how to define and use custom domains, see
External Data Correlation.
Queries
Use one of the many predefined queries that come with Guardium to get information about your data. Use the Query Builder to work with queries.
Use queries to ask questions of your data such as, what are all the clients updating a specific database during weekend hours?
Queries are different from reports. A query describes a set of data, whereas a report describes how the data returned by a query is presented.
Once a query is completed, present the results of the query using reports. Reports usually are presented in tabular form, but you can customize the layout of a report as
you like.
To use queries, open the Query Builder by clicking Comply > Custom Reporting > Custom Query Builder. Choose a domain to query, select a main entity, and then use the
query as needed.
You cannot modify the predefined queries, but you can create a clone of a query and modify the clone.
The level of detail for the report. There is one row of data for each occurrence of the main entity included in the report. The location of the main entity within the
hierarchy of entities is important in terms of what values can be displayed. The attributes for any entities under the main entity can be counted, but not displayed
(since there might be many occurrences for each row). To choose this level of detail, check the Sort by Count check box.
The total count is a count of instances of the main entity included on that row of the report, added as the last column of the report. To add or drop the count column of the report, click the Add Count check box. This can result in a query/report performance boost in some cases.
To add or drop the ability to display one row per value in the report (which can result in a query/report performance boost in some cases), click the Add Distinct check box. This selection yields condensed reports.
Use this selection for two-stage execution for Audit tasks of type report.
This applies to reports on queries on specific tables only. This two-stage mechanism applies to running queries as audit processes with columns and conditions
only on the following entities: Access (client/server), Session, Access Period, Construct (SQL), Object, and Sentence (Command).
This two-stage mechanism is not used if the query contains a condition with the Like Group operator or any alias-related operator (such as In Aliases Group)
or the condition uses Having.
In addition to using the query builder, each query can be set to run in two stages. By default, queries run using the old method. For a query to run in two stages, a flag must be set in the query builder. In addition, this method of running queries can be disabled system-wide, making all audit tasks use the old method, by creating the file /var/log/guard/DontRunInTwoStages. The existence of this file indicates that the new two-stage method should NOT be used.
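For instance, disabling two-stage execution system-wide amounts to creating that flag file on the appliance (this assumes shell-level access to the appliance file system, which may not be available in all configurations):

```
touch /var/log/guard/DontRunInTwoStages
```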
Note: Fields containing tuples (combined fields) are not supported in two-stage execution in this release.
Note: The Main Entity drop-down list includes only primary entities. However, secondary entities (for example, Session Start and Session End) can be accessed through their corresponding primary entity (for example, Session for Session Start and Session End).
Sorting
By default, query data is sorted in ascending order by attribute value, with the sort keys ordered as the attributes appear in the query. Aliases are ignored for sorting
purposes. The actual data values are always used for sorting. Attributes for which values are computed by the query (Count, Min, Max, or Avg) cannot be sorted.
The last column of a tabular report is a count of main entity occurrences. To sort on this count in descending sequence (in other words, listing the greatest number
occurrences first), mark the Sorted by occurrences check box.
Timestamps
A timestamp (lowercase t) is a data type containing a combined date-and-time value, which when printed displays in the format yyyy-mm-dd hh:mm:ss (for example,
2012-07-17 15:40:25). When creating or editing a query, most attributes with a timestamp data type display with a clock icon in the Entity List panel.
A Timestamp (uppercase T) is an attribute defined in many entity types, containing the time that the entity was last updated. For many timestamp attributes, you can print
the date, time, weekday, or year components separately, by referencing additional Timestamp attributes (Date, Time, Weekday, or Year).
1. Open the Query Builder by clicking Comply > Custom Reporting > Custom Query Builder.
2. Determine the domain you want to query. Select an item from the Domain Finder menu and click Search, or click New to create a custom domain.
3. Choose an existing query using the filter menus in the Query Finder, or click New to create a new query.
4. There are three main components to the Query Builder screen:
The Entity List pane identifies all entities and attributes contained in the domain. Entities are represented as folders, and attributes are the items within the
folders. Click on an entity folder to display its attributes, or click again to hide them. For a description of all entities and attributes, see Entities and Attributes
in the Domains, Entities, and Attributes information.
The Query Fields pane lists all fields to be accessed, what is to be displayed for that field (its value, a count, minimum, maximum, or average), and the sort
order. For more information about using this pane, see Query Fields Overview.
The Query Conditions pane specifies any conditions for selecting these fields (for example, where VERB = UPDATE). For more information about using this
pane, see Query Conditions Overview.
Creating a Query
Modifying a Query
You cannot modify the Guardium predefined queries, but you can clone a query and modify the clone as needed.
1. Choose a domain and main entity to open the Query Builder for the query you want to modify.
2. Click Clone, enter a new name for the query (apostrophes are not allowed), and click Save.
3. Refer to the Query Builder Overview topic to modify any component of the query definition.
Removing a Query
You cannot remove a query that is being used by some other component. To delete such a query, you must first delete all components that use it (reports or correlation alerts, for example). When you attempt to delete a query, the reports and correlation alerts that depend on it are listed.
1. Choose a domain and query to open the Query Builder for the query you want to delete.
2. Click Delete.
The Field Mode menus indicate what to print for the field: its Value, Count (number of distinct values), Min, Max, Average (AVG) or Sum for the row. The Value selection is
not available for attributes from entities greater than the main entity in the entity hierarchy for the domain.
There are two ways to add a field to the Query Fields pane:
To move a field up or down in the Query Fields pane, check the field's check box and click the Up or Down icons to move the field up or down one row.
On the other hand, the report may contain no information at all, or many blank columns where you are expecting Full SQL strings. Guardium captures Full SQL only when
directed to do so by policy rules - and the rules may not have been triggered during the reporting period.
Do not confuse the Full SQL attribute with the ability to drill down to the SQL for most queries in the Data Access domain having anything to do with SQL requests.
1. Create a group in the Group Builder by clicking Setup > Tools & Views > Group Builder. Specify a Group Name and choose OBJECTS for Group Type.
2. Create an Access report in the Report Builder by clicking Setup > Reports > Report Builder.
3. Specify a query name and click on the OBJECT folder from the Entity List in order to see more choices.
4. Highlight Object Name and click once to get the ADD CONDITION choice. Click Add Condition so that a line is added to the Query conditions section in the main body of the menu screen.
5. In the drop-down selection next to the attribute Object Name, choose IN GROUP or IN DYNAMIC GROUP in the Operator column. In the second drop-down selection (Run-time Parameter column), choose the group that you created in step 1.
6. Save your work. Click Generate Tabular and then click Add to My New Reports.
7. Go to the My New Reports tab and highlight the report you created.
8. Click Customize next to the report name. This opens a tab called Customize Portlet (Run-time Parameters).
9. Open the drop-down selection; the groups of the type corresponding to the entity being tested appear at the beginning of the list, then a double-dash line, and then the rest of the groups. This is where different groups can be selected.
10. Save your work by clicking Update.
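At run time, the IN GROUP condition built in these steps matches any row whose Object Name is a member of the selected group. A minimal sketch of that semantics (the group and object names here are hypothetical, not part of the product):

```python
# hypothetical OBJECTS group, as created in step 1
sensitive_objects = {"CUSTOMERS", "SALARIES"}

# rows the report would scan; only members of the group match the condition
observed = ["CUSTOMERS", "ORDERS", "SALARIES"]
matched = [name for name in observed if name in sensitive_objects]
print(matched)  # ['CUSTOMERS', 'SALARIES']
```

Switching the runtime parameter in step 9 simply swaps which set of members the condition is checked against.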
Table 1. Buttons
Buttons Steps
Roles Assigning roles to reports while in the Query Builder only assigns the role to the Query, not the report. Assign roles to reports in Report Builder.
See Reports.
Save Click Save when you have finished all the tasks required on the menu screen.
Back Move back between menu screens of a multi-screen Guardium task or function using the Back button. The back arrow in the web browser does
not work for navigation between menu screens.
Set as Data Mart A Data Mart is a subset of a Data Warehouse. A Data Warehouse aggregates and organizes the data in a generic fashion that can be used later for
analysis and reports.
Parent topic: Queries
Related concepts:
Domains, Entities, and Attributes
Query Conditions
Use the AND, OR and HAVING operators with parentheses to create query conditions.
The AND, OR and HAVING operators are located in the Query Conditions title bar in the Query Builder.
Select from the Entity List and use the operators to build query conditions as part of your query.
Add an AND operator or an OR operator to the end or middle of the condition list using the add-condition menu or drag-drop the attribute's icon. Select and remove
conditions by clicking Delete. Save the query. If the generated SQL query is invalid, the query will not save, and an error message results.
Using parentheses:
All conditions are independent. Group conditions together by adding left and right parentheses around the conditions. Use brackets in complicated query
conditions.
When a condition is selected, pressing the left parenthesis button adds one left parenthesis condition before the first selected condition. Pressing the right
parenthesis button will add one right parenthesis condition after the first selected condition. If there is no condition that is selected, pressing the parentheses
buttons has no effect.
When creating a query condition that uses parentheses, the parentheses appear in the UI BEFORE the operator, but are applied AFTER the operator. For example, a
query condition is displayed as, this (AND that OR another). However, the actual logic is, this AND (that OR another).
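The display-versus-logic difference can be written out as a small truth check (a sketch; the condition names are placeholders):

```python
def condition(this: bool, that: bool, another: bool) -> bool:
    # Displayed in the UI as: this (AND that OR another)
    # Actually evaluated as:  this AND (that OR another)
    return this and (that or another)

# 'another' alone cannot satisfy the condition without 'this'
print(condition(False, False, True))  # False
print(condition(True, False, True))   # True
```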
Escaping backslash (\) characters: To correctly escape a backslash character for use in a query condition, use four backslash characters. For example, to specify
domain\user you would enter domain\\\\user.
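A small sketch of why four backslashes are needed, assuming two unescaping passes (one when the condition is parsed, one when the SQL is built); the two-pass structure is an illustration, not a statement about the product's internals:

```python
def unescape(s: str) -> str:
    # each unescaping pass collapses a doubled backslash into a single one
    return s.replace("\\\\", "\\")

entered = r"domain\\\\user"  # the four backslashes typed into the condition
after_two_passes = unescape(unescape(entered))
print(after_two_passes)  # domain\user
```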
There are two parts in the condition display panel: one starts with a WHERE condition and another one starts with a HAVING condition.
In the HAVING part, the aggregate field has options: Count, Min, Max, and AVG. The option SUM also applies to certain entities with ID in name (Session ID, Global ID, Full
SQL ID, Instance ID). If the HAVING button is not checked, the condition is inserted into the WHERE part with the aggregate field as empty string. If the HAVING button
is checked, the condition is inserted into the HAVING part and the aggregate field has options. After adding or removing a condition, the condition option will be updated.
Pressing SAVE generates the SQL. The SQL is validated before it is saved. If validation fails (for example, because of a syntax error), an alert error message is generated
and a more detailed error description is written to the log. If a condition is added to the wrong part (for example, the HAVING button is set but the attribute icon is
dropped on the WHERE part, or vice versa), a not-matched alert message is generated. If the selected condition is in the WHERE part but the HAVING button is set,
adding the condition fails because the setting does not match.
The attributes Total Access, Failed SQLs, and Successful SQLs can be added only under a HAVING clause (not the WHERE clause).
Allowed queries must have one time stamp column and either at least one column with Mode=Count OR the count flag set (or both). The query column to be evaluated by
the query must be one of the columns with Mode=Count OR the total access column (if the count flag is set).
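The WHERE/HAVING split can be seen in plain SQL. In this generic sqlite sketch (the table and column names are made up, not Guardium's internal schema), WHERE filters individual rows before grouping, while HAVING filters on aggregates such as a count; that is why aggregate attributes like Total Access can appear only under HAVING:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE access (db_user TEXT, verb TEXT);
    INSERT INTO access VALUES
        ('joe','SELECT'), ('joe','SELECT'), ('ann','SELECT'), ('ann','INSERT');
""")
rows = con.execute("""
    SELECT db_user, COUNT(*) AS total_access
    FROM access
    WHERE verb = 'SELECT'          -- row-level filter
    GROUP BY db_user
    HAVING COUNT(*) >= 2           -- aggregate-level filter
""").fetchall()
print(rows)  # [('joe', 2)]
```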
To add an AND condition, select the AND radio button in the Query Conditions title bar and do one of the following:
Select an entity from the Entity List pane and select Add Condition from the pop-up menu.
Drag the field icon from the Entity List pane, and drop it in the Query Conditions pane.
To add an OR condition, select the OR radio button in the Query Conditions title bar and do one of the following:
Drag the field icon from the Entity List pane, and release it to the start of the condition for which it is an OR condition.
Mark the check box for the condition to which you want to add the OR condition, click the field in the Entity List pane, and then select Add Condition from the
pop-up menu.
3. Optional: Use the Aggregate drop-down to select an aggregate of the attribute to be used for the query condition: Count, Min (minimum value), Max (maximum
value), or AVG (average value). Restrictions apply, as follows:
= Equal to
CATEGORIZED AS Member of a group belonging to the category selected from the drop-down list, which appears when a group
operator is selected.
CLASSIFIED AS Member of a group belonging to the classification selected from the drop-down list, which appears when a
group operator is selected.
IN DYNAMIC GROUP Member of a group that is selected from the drop-down list in the runtime parameter column, which appears
when a group operator is selected.
IN GROUP Member of the group that is selected from the drop-down list in the runtime parameter column, which appears
when a group operator is selected. IN GROUP or IN ALIASES GROUP cannot both be used at the same time.
IN DYNAMIC ALIASES GROUP The operator works on a group of the same type as IN DYNAMIC GROUP, however assumes the members of
that group are aliases.
IN ALIASES GROUP The operator works on a group of the same type as IN GROUP, however assumes the members of that group
are aliases. Note that the IN GROUP/IN ALIASES GROUP operators expect the group to contain actual values or
aliases respectively. An alias provides a synonym that substitutes for a stored value of a specific attribute type.
It is commonly used to display a meaningful or user-friendly name for a data value. For example, Financial
Server might be defined as an alias for IP address 192.168.2.18.
IN PERIOD For a time stamp only, is within the selected time period
LIKE Matches a like value that is specified in the boxes. A like value uses the percent sign as a wildcard character,
and matches all or part of the value. Alphabetic characters are not case-sensitive. For example, %tea% would
match tea, TeA, tEam, steam. If no percent signs are included, the comparison operation is an equality
operation (=).
LIKE GROUP Like any member of the specified group (see the description of LIKE).
NOT IN DYNAMIC GROUP Not equal to any member of a group, which is selected from the drop-down list in the runtime parameter
column, which appears when a group operator is selected.
NOT IN DYNAMIC ALIASES GROUP The operator works on a group of the same type as NOT IN DYNAMIC GROUP, however assumes the members
of that group are aliases.
NOT IN GROUP Not equal to any member of the specified group, which is selected from the drop-down list in the runtime
parameter column, which appears when a group operator is selected.
NOT IN ALIASES GROUP The operator works on a group of the same type as NOT IN GROUP, however assumes the members of that
group are aliases.
NOT IN PERIOD For a time stamp only, not within the selected time period
NOT LIKE Not like the specified value (see the description of LIKE)
NOT LIKE GROUP Not like the value that is specified in LIKE GROUP
REGEXP Matched by the specified regular expression For detailed information about how to use regular expressions, see
Regular Expressions.
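The LIKE wildcard semantics described in the table can be sketched as a small matcher (an illustration of the rule, not the product's implementation, translating `%` into a regular-expression wildcard):

```python
import re

def like_match(pattern: str, value: str) -> bool:
    # Without a percent sign, LIKE degrades to case-insensitive equality
    if "%" not in pattern:
        return pattern.lower() == value.lower()
    # '%' is a wildcard matching any run of characters
    regex = ".*".join(re.escape(part) for part in pattern.split("%"))
    return re.fullmatch(regex, value, re.IGNORECASE) is not None

# the examples from the table: %tea% matches tea, TeA, tEam, steam
print(all(like_match("%tea%", v) for v in ("tea", "TeA", "tEam", "steam")))  # True
print(like_match("tea", "steam"))  # False: no %, so plain equality applies
```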
Note: There are four special words that are not allowed as the name of a parameter: user; group; role; page.
An error results if an attempt is made to save a query with any of these words in the parameter. There are two types of conditions where this applies:
When creating a query condition with an operator such as =, <, or LIKE, and then selecting Parameter: this field does not allow the special words.
When creating a query condition with a DYNAMIC GROUP type operator (IN, NOT IN, IN ALIAS, etc.): this field does not allow the special words.
5. For a group operator, select a group from the list.
For most other operators, you must supply a value for the condition, or indicate that a runtime parameter value (not containing exclamation points) is supplied later
(when the query is run). In these cases, a drop-down with three options appears. Do one of the following:
Use this feature where the user needs to add a condition that is based not on the entire content of the attribute as is, but on part of the attribute, a function of the
attribute, or a function that combines more than one attribute.
An example is: INSTR(:attribute, '150.1') = 5, which returns all instances of Client IP matching the 5 characters listed. Type the character 5 in the entry
box next to the Add Expression icon. Type the INSTR(:attribute, '150.1') expression in the separate Build Expression window. Test the validity of the
expression in the Build Expression window. Another example is: LENGTH(:attribute) >= 40, which returns the length of any SQL statement greater than 40
characters. The expression might or might not contain references to the actual attribute and can also contain references to other attributes.
6. When you are done adding all conditions, remember to save the definition.
Use this feature where the user needs to add a condition that is based not on the entire content of the attribute as is, but on part of the attribute, a function of the
attribute, or a function that combines more than one attribute.
An example:
Return the location of the string 150.1, from the value 192.150.1.x., where the string 150.1 is at the fifth character of the value. The string 150.1 represents all instances
of Client IP matching the 5 characters listed.
When the function is run in the Expression field, it returns a value, and that value should be in the entry box.
Use the function, INSTR(:attribute, '150.1') with a "5" value in the entry box next to the Add Expression icon to return the records with 150.1 in the fifth location.
If the function is INSTR(:attribute, '150.1') = 5, then it becomes a Boolean phrase, and the only values in the entry box are 0 or 1.
Type the INSTR(:attribute, '150.1') expression in the separate Build Expression window.
Another example: LENGTH(:attribute) >= 40, which returns the length of any SQL statement greater than 40 characters. The expression might or might not contain
references to the actual attribute and can also contain references to other attributes.
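The INSTR behavior in these examples can be checked with a small stand-in (1-based positions, 0 when the substring is absent, matching the SQL convention; the sample SQL string is invented):

```python
def instr(haystack: str, needle: str) -> int:
    # SQL-style INSTR: 1-based position of the first match, 0 if absent
    pos = haystack.find(needle)
    return pos + 1 if pos >= 0 else 0

print(instr("192.150.1.x", "150.1"))  # 5, as in the example above
print(instr("10.20.30.40", "150.1"))  # 0, no match

# LENGTH(:attribute) >= 40 is an ordinary length comparison
print(len("SELECT * FROM orders WHERE customer_id = 12345") >= 40)  # True
```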
Each domain contains a set of data related to a specific purpose or function (data access, exceptions, policy violations, and so forth). For a description of all domains, see
Domains.
Each domain contains one or more entities. An entity is a set of related attributes, and an attribute is basically a field value. For a description of all entities and attributes,
see Entities and Attributes.
A Guardium query returns data from one domain only. When the query is defined, one entity within that domain is designated as the main entity of the query. Each row of
data returned by a query will contain a count of occurrences of the main entity matching the values returned for the selected attributes, for the requested time period. This
allows for the creation of two-dimensional reports from entities that do not have a one-to-one relationship.
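The per-row occurrence count can be pictured as a group-and-count over the selected attributes (the entity and attribute values here are illustrative):

```python
from collections import Counter

# each tuple is one occurrence of the main entity, projected onto the
# selected attributes (Client IP, Object Name)
occurrences = [
    ("10.0.0.1", "CUSTOMERS"),
    ("10.0.0.1", "CUSTOMERS"),
    ("10.0.0.2", "ORDERS"),
]
report_rows = Counter(occurrences)
print(report_rows[("10.0.0.1", "CUSTOMERS")])  # 2: count of matching occurrences
print(len(report_rows))                        # 2 distinct report rows
```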
There is a separate query builder for each domain, and access to each query builder is controlled by security roles. Thus each Guardium role typically has access to a
subset of domains, depending on the function of that role within the company. Guardium admin role users typically have access to all reporting domains.
Some domains are available only when optional components (CAS, or Classification, for example) are installed. Other domains report information pertaining to the
Guardium appliance (archiving activity, for example), and are available by default to Guardium admin role users only.
Some of the attributes described in this appendix are available to users with the admin role only. These are labeled: Reserved for admin role use only.
For users who do not have the admin role, these attributes will not be available from the query builder.
Similarly, not all attributes are available for all database protocols. When using a query builder, if you notice that an entity or attribute described in the documentation is
not listed in the Entities pane, that entity or attribute is not available for the selected database type.
Domains
Entities and Attributes
Building queries
Domains
The following table describes the query builders and associated domains that are provided with your Guardium system. Your company may have defined additional
custom domains.
Custom Domains
Custom domains allow for user defined domains and can define any tables of data uploaded to the appliance.
Entities and Attributes
This topic contains a description of the attributes contained in each entity.
Database Entitlement Reports
You can use database entitlement reports to verify that users have access only to the appropriate data. Your Guardium system includes predefined database
entitlement reports for several database types.
Domains
The following table describes the query builders and associated domains that are provided with your Guardium system. Your company may have defined additional
custom domains.
Access to the query builder for each domain is controlled by security roles, so each user role typically has access to a separate set of domains. Some domains are
available only when optional components are installed (CAS, for example).
On the default admin portal, all query builders can be opened from the menu of the Tools > Report Building tab. On the default user portal, many query builders can be
opened from the Custom Reporting application: Monitor/Audit > Build Reports.
Following a short description of the domain, the Description column lists the default security role assigned for each domain, and indicates how to access the domain from
the default user portal (if available).
Table 1. Domains
Query Builder (Domain) Description
Access Policy Use this domain to track all available policies on the system. Similar to the Installed Policy domain, which is used to track all installed
policies on the system.
(Access Policy)
Roles: all. User portal: Not available
Access All of the client/server, session, SQL, and access periods related data. This is the data collected by the inspection engines
every time a request is sent to a server being monitored.
(LOGGER INFO)
Roles: all User portal: Monitor/Audit > Build Reports > Track data access
Aggregation/Archive Aggregation and archiving activity, including the date, time, and status of each operation (archive, send, purge, etc.).
Alert (ALERT) Roles: all User portal: Monitor/Audit > Build Reports > Track sent alerts
Application Connection, session, and application data recorded for special non-Guardium application (Siebel and SAP, for example).
Audit Process The execution of audit processes and the distribution of results.
(AUDIT TRAIL) Roles: all User portal: Monitor/Audit > Build Reports > Audit Process builder
Auto-discovery Database auto-discovery activity, including all processes that have been run, and the hosts and ports discovered.
(AUTODETECT DB DISCOVERY) Roles: all User portal: Discover > DB Discovery > Auto-discovery Query Builder
CAS Changes All changes detected by CAS, including any changed data recorded.
CAS Config CAS instance configurations, describing the use of templates on specific hosts.
CAS Host History History of CAS changes applied to CAS agent hosts.
CAS Templates Reports on the contents of CAS templates (which define the items to monitor).
Comment (COMMENT) Roles: all User portal: Monitor/Audit > Build Reports > Comment builder
Custom Domain Builder Custom domains have been defined for uploading commonly used tables and products. See Custom Table. A custom
domain contains one or more custom tables. If it contains multiple tables, you define the relationships between tables when
defining the custom domain.
Custom Query Builder User defined domains can define any tables of data uploaded to the Guardium appliance.
Roles: all User portal: Monitor/Audit > Build Reports > Custom query builder
Custom Table Builder A custom table contains one or more attributes that you want to have available on the Guardium appliance. For example, you
may have an existing database table relating encoded user names to real names. In the network traffic, only the encoded
names will be seen. By defining a custom table on the Guardium appliance, and uploading data for that table from the existing
table, you will be able to relate the encoded and real names.
DB Default Users Enabled Non-credential Scan - A process to scan a list of databases and check whether default users are enabled. The default users
as well as the list of servers to scan are provided as parameters to the API. A default group is provided for each database type
with the default users and passwords created by the database on every installation; customers can add to or remove from that list.
The groups are of type DB User/DB Password and the names of the default groups are:
ORACLE Default Users; DB2® Default Users; SYBASE Default Users; MS SQL SERVER Default Users; INFORMIX Default
Users; MYSQL Default Users; TERADATA Default Users; IBM® ISERIES Default Users; POSTGRESQL Default Users;
NETEZZA Default Users
Enterprise Buffer Usage Shows the aggregate of Sniffer Buffer Usage from all managed units.
Exceptions (see note at the end of the table) All of the exceptions and exception-related data. These are SQL exceptions sent from a database server and collected by
inspection engines, as well as exceptions generated by Guardium itself.
(LOGGER EXCEPTIONS) Roles: all User portal: Monitor/Audit > Build Reports > Track exceptions
(Flat Log) Roles: none User portal: Monitor/Audit > Build Reports > Flat Log builder
(Group) Roles: all User portal: Monitor/Audit > Build Reports > Group builder
Guardium Activity All modifications performed by Guardium users to any Guardium entity, such as a report or query definition or modification.
Installed Policy Provides description of policy parameters and rules for the installed policy. The Installed Policy domain supports multiple
policies and multiple actions per rule.
(Installed Policy)
Roles: all User portal: Not available
Policy Violations All policy violation data, for all violations of the policy detected by the Guardium inspection engines or STAPs.
(ACCESS RULES VIOLATIONS) Roles: all User portal: Monitor/Audit > Build Reports > Policy violations builder
Policy Violations Summary All policy violation data, for a summary of all violations of the policy detected by the Guardium inspection engines or STAPs.
(Access Rules Violations) Roles: all User portal: Monitor/Audit > Build Reports > Policy violations summary builder
Replay Results Replays the data stream from one datasource by another different datasource.
Roles: none
Rogue Connections Local database server processes that have circumvented S-TAP® to connect to the database via shared memory, named
pipes, or other non-standard means. Applies to Unix S-TAP only, when the TEE monitoring method is used.
(HUNTER) Roles: all User portal: Monitor/Audit > Build Reports > Rogue connections builder
(Assessment Test Result Monitor) Roles: none User portal: Not available
(Sniffer Buffer Usage Monitor) Roles: none User portal: Not available
User/Role/Application Relates Guardium users, roles and applications (to report on who has access to which Guardium applications).
Value Change All changes tracked by the trigger-based value change application.
Custom Domains
Custom domains allow for user defined domains and can define any tables of data uploaded to the appliance.
These custom entitlement (privilege) domains are used for entitlement reports, which are available when logged in as a user. To see these reports, go to the DB
Entitlements user tab.
[Custom] Access
This domain contains all of the same entities as the standard Data Access domain. It is provided as a custom domain so that additional user-defined domains can be built
from this domain together with any custom tables that the user has uploaded. The [Custom] Access domain is meant to be cloned; it is updated with each version, so it is
not advisable to create reports directly on it. For a description of the entities included in the Access domain, see the Access domain description in the Domains topic.
S-TAP info is a predefined custom domain which contains the S-TAP Info entity and is not modifiable.
When defining a custom query, go to the upload page and click Check/Repair to create the custom table in the CUSTOM database; otherwise, saving the query will not
validate it. This table loads automatically from all remote sources. A user cannot select which remote sources are used - it pulls from all of them.
Based on this custom table and custom domain, there are two reports:
Enterprise S-TAP view shows, from the Central Manager, information on an active S-TAP on a collector and/or managed unit. (If there are duplicates for the same S-TAP
engine, one active and one inactive, the report uses only the active one.)
Detailed Enterprise S-TAP view shows, from the Central Manager, information on all active and passive S-TAPs on all collectors and/or managed units.
If the Enterprise S-TAP view and the Detailed Enterprise S-TAP view look the same, it is because only one S-TAP on one managed unit is being displayed. The Detailed
Enterprise S-TAP view would look different if more S-TAPs and more managed units were involved.
These two reports can be chosen from the TAP Monitor tab of a standalone system, but they will display no information.
DB Entitlement Domains
Along with authenticating users and restricting role-based access privileges to data, even for the most privileged database users, there is a need to periodically perform
entitlement reviews, the process of validating and ensuring that users only have the privileges required to perform their duties. This is also known as database user rights
attestation reporting.
Use Guardium’s predefined database entitlement (privilege) reports (for example) to see who has system privileges and who has granted these privileges to other
users and roles. Database entitlement reports are important for auditors tracking changes to database access and to ensure that security holes do not exist from lingering
accounts or ill-granted privileges.
DB Entitlement Reports use the Custom Domain feature to create links between the external data on the selected database with the internal data of the predefined
entitlement reports. See Database Entitlements Reports for further information on how to use predefined database entitlement reports. To see entitlement reports, log on
the user portal, and go to the DB Entitlements tab.
Note: DB Entitlements Reports are optional components enabled by product key. If these components have not been enabled, the choices will not appear in the Custom
Domain Builder/Custom Domain Query/Custom Table Builder selections.
Oracle DB Entitlements
MYSQL DB Entitlements
DB2® DB Entitlements
SYBASE DB Entitlements
Informix® DB Entitlements
Microsoft SQL Server DB Entitlements
Netezza® DB Entitlements
Teradata DB Entitlements
PostgreSQL DB Entitlements
Oracle DB Entitlements
The following domains are provided to facilitate uploading and reporting on Oracle DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on the user portal, and go to the DB Entitlements tab.
Oracle
ORA Accnts of ALTER SYSTEM - Accounts with ALTER SYSTEM and ALTER SESSION privileges
ORA Accnts with BECOME USER - Accounts with BECOME USER privileges
ORA All Sys Priv and admin opt - Report showing all system privilege and admin option for users and roles
ORA Obj And Columns Priv - Object and columns privileges granted (with or without grant option)
ORA Object Access By PUBLIC - Object access by PUBLIC
ORA Object privileges - Object privileges by database account not in the SYS and not a DBA role
ORA PUBLIC Exec Priv On SYS Proc - Execute privilege on SYS PL/SQL procedures assigned to PUBLIC
ORA Roles Granted - Roles granted to users and roles
ORA Sys Priv Granted - Hierarchical report showing system privileges granted to users, including recursive definitions (i.e., privileges assigned to roles and then these
roles assigned to users)
ORA SYSDBA and SYSOPER Accnts - Accounts with SYSDBA and SYSOPER privileges
For entitlements to be able to upload data from various datasources, the general requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges required, in the database table (or view of the database table), in order for the entitlement to
work.
MYSQL DB Entitlements
The following domains are provided to facilitate uploading and reporting on MYSQL DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on the user portal, and go to the DB Entitlements tab.
MYSQL: The queries ending in _40 use the most basic version of the mysql schema (for MySQL 4.0 and beyond). The information_schema has not changed since it was
introduced in MySQL 5.0, so there is a set of _50 queries, but no _51 queries. The _50 queries work for MySQL 5.0 and 5.1 and for 6.0 when it comes out, since the
information_schema is not expected to change in 6.0. The queries ending in _502 (MYSQL502) use the new information_schema, which contains much more information
and is much more like a true data dictionary.
For entitlements to be able to upload data from various datasources, the general requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).
The following list details the minimal privileges required, in the database table (or view of the database table), in order for the entitlement to work.
Note: In addition to the privileges required, the user should connect to the MYSQL database to upload the data.
The entitlement queries for all MySQL versions through MySQL 5.0.1 use this set of tables: mysql.db mysql.host mysql.tables_priv mysql.user
Beginning with MySQL 5.0.2, and for all later versions, the entitlement queries use this set of tables: information_schema.SCHEMA_PRIVILEGES mysql.host
information_schema.TABLE_PRIVILEGES information_schema.USER_PRIVILEGES
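The version split described here can be summarized as a simple selection (a sketch only; the 5.0.2 cutover and the two table sets come from the text above):

```python
# tables used by the entitlement queries, per the description above
PRE_502_TABLES = ["mysql.db", "mysql.host", "mysql.tables_priv", "mysql.user"]
POST_502_TABLES = [
    "information_schema.SCHEMA_PRIVILEGES",
    "mysql.host",
    "information_schema.TABLE_PRIVILEGES",
    "information_schema.USER_PRIVILEGES",
]

def entitlement_tables(version: tuple) -> list:
    # MySQL 5.0.2 and later use the information_schema-based set
    return POST_502_TABLES if version >= (5, 0, 2) else PRE_502_TABLES

print(entitlement_tables((5, 0, 1)) is PRE_502_TABLES)   # True
print(entitlement_tables((5, 1, 0)) is POST_502_TABLES)  # True
```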
If a datasource has a MYSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the data upload
will loop through all MYSQL databases the user has access to.
DB2 DB Entitlements
The following domains are provided to facilitate uploading and reporting on DB2 DB Entitlements. Each of the following domains has a single entity (with the same name),
and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on the user portal, and go to the DB Entitlements tab.
For entitlements to be able to upload data from various datasources, the general requirement is that the login, used to access the database, be able to read the tables
used in the query (which is hidden for all entitlements).
The following list (with comment line heading) details the minimal privileges required, in the database table (or view of the database table), in order for the entitlement to
work.
SYBASE DB Entitlements
The following domains are provided to facilitate uploading and reporting on SYBASE DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on the user portal, and go to the DB Entitlements tab.
SYBASE System Privilege and Roles Granted to User including Grant option
SYBASE Role Granted to User and System Privileges Granted to user and role including Grant option
SYBASE Object Access by Public
SYBASE Execute Privilege on Procedure, function assigned To Public
SYBASE Accounts with System or Security Admin Roles
SYBASE Object and Columns Privilege Granted with Grant option
SYBASE Role Granted To User
For entitlements to be able to upload data from various datasources, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements).
The following list (with comment line headings) details the minimal privileges required on the database tables (or views of those tables) for the entitlement to work.
If a datasource has a SYBASE database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the upload will loop through all SYBASE databases the user has access to.
Informix DB Entitlements
The following domains are provided to facilitate uploading and reporting on Informix DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
Informix Object Privileges by database account not including system account and roles
Informix database level privileges, roles and language granted to user including grant option
Informix database level privileges, roles and language granted to user and role including grant option
Informix Object Grant to Public
Informix Execute Privilege on Informix procedure and function granted to Public
Informix Account with DBA Privilege Informix Object and columns privileges granted with Grant option
Informix Role Granted To User and Role
For entitlements to be able to upload data from various datasources, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements). The following list (with comment line headings) details the minimal privileges required on the database tables (or views of those tables) for the entitlement to work.
Because all users already have sufficient SELECT privileges on the system catalog, there is no need to grant any privilege to individual users. Informix discourages granting system catalog privileges to users; the grants that would normally be used are not required in this case.
If a datasource has an Informix database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the upload will loop through all Informix databases the user has access to.
MSSQL2000 Object Privilege By database account not including default system user
MSSQL2000 Role/System Privileges Granted to User including grant option
MSSQL2000 Role granted to user and role. System Privileges Granted to User and Role including grant option
MSSQL2000 Object Access by PUBLIC
MSSQL2000 Execute Privilege on System Procedures and functions to PUBLIC
MSSQL2000 Database accounts with db_owner and db_securityadmin role
MSSQL2000 Server account with sysadmin, serveradmin and security admin /* only run this entitlement against MASTER database */
MSSQL2000 Object and columns privileges granted with grant option
MSSQL2000 Role granted to user and role
For entitlements to be able to upload data from various datasources, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements).
The following list (with comment line headings) details the minimal privileges required on the database tables (or views of those tables) for the entitlement to work.
If a datasource has an MSSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the upload will loop through all MSSQL databases the user has access to.
Note: Objects in dynamic query strings will NOT be shown in xxx_DEPENDENCIES. An object in an EXECUTE IMMEDIATE SQL string called by a stored program unit does not show a dependency. This query excludes the schema owners defined in group ID 202, "Dependencies_exclude_schema-MSSQL"; users can add schema names to, or remove them from, this group for the dependencies query.
MSSQL2005/8 Object privileges by database account not including default system user.
MSSQL2005/8 Role/System privileges granted To User
MSSQL2005/8 Role/System Privilege granted to user and role including grant option
MSSQL2005/8 Object access by PUBLIC
MSSQL2005/8 Execute Privilege on System Procedures and functions to PUBLIC
MSSQL2005/8 Database accounts of db_owner and db_securityadmin Role
MSSQL2005/8 Server account of sysadmin, serveradmin and security admin /* only run against MASTER database */
MSSQL2005/8 Object and columns privileges granted with grant option
MSSQL2005/8 Role granted to user and role.
For entitlements to be able to upload data from various datasources, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements).
The following list (with comment line headings) details the minimal privileges required on the database tables (or views of those tables) for the entitlement to work.
If a datasource has an MSSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the upload will loop through all MSSQL databases the user has access to.
Netezza DB Entitlements
The following domains are provided to facilitate uploading and reporting on Netezza DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
Note: There is no DB error text translation for Netezza. The error appears in the exception description. Users can clone/add a report with the exception description for
Netezza as needed.
Netezza Obj Privs by DB Username - Object privileges with or without grant option by database username excluding ADMIN account.
Netezza Obj Privs By Group - Object privileges with or without grant option by GROUP excluding PUBLIC.
Netezza Admin Privs By Group - Admin privileges with or without grant option by GROUP excluding PUBLIC.
Netezza Admin Privs By DB Username, Group - Admin privileges with or without grant option by database username, group excluding ADMIN account and PUBLIC
group.
Netezza Obj Privs Granted - Object privileges granted with or without grant option to PUBLIC.
Netezza Admin Privs Granted - Admin privileges granted with or without grant option to PUBLIC.
Netezza Global Admin Priv To Users and Groups - Global admin privilege granted to users and groups excluding ADMIN account.
Netezza Global Obj Priv To Users and Groups - Global object privilege granted to users and groups excluding ADMIN account.
For entitlements to be able to upload data from various datasources, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements).
The following list (with comment line headings) details the minimal privileges required on the database tables (or views of those tables) for the entitlement to work.
For Netezza entitlement queries, it is recommended to connect to the SYSTEM database, especially when granting the privilege to the user who will run these reports. The grant MUST be issued from the SYSTEM database; otherwise the granted privilege applies only to one particular database. When the grant is issued from the SYSTEM database, a special feature carries the granted privilege through to all databases.
Teradata DB Entitlements
The following domains are provided to facilitate uploading and reporting on Teradata DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
Teradata Object privileges by database account not including default system users.
Teradata System privileges and roles granted to users including grant option.
Teradata Role granted to users and roles. System privileges granted to users and roles including grant option.
Teradata Objects and System privileges granted to public. Note: roles cannot be granted to public in Teradata.
Teradata System admin, Security admin privileges granted to user and role.
Note: There are no such roles as System admin or Security admin in Teradata; users must create their own roles. These are some important system privileges that would normally not be granted to a normal user: ABORT SESSION, CREATE DATABASE, CREATE PROFILE, CREATE ROLE, CREATE USER, DROP DATABASE, DROP PROFILE, DROP ROLE, DROP USER, MONITOR RESOURCE, MONITOR SESSION, REPLICATION OVERRIDE, SET SESSION RATE, SET RESOURCE RATE.
Teradata Object privileges granted with grant option to users. Not including DBC and grantee = 'All'.
For entitlements to be able to upload data from various datasources, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements).
The following list (with comment line headings) details the minimal privileges required on the database tables (or views of those tables) for the entitlement to work.
The following entitlement custom domains/queries/reports are provided for PostgreSQL. Each is listed with its report name, description, and note:
PostgreSQL Priv On Databases Granted To Public User Role With Or Without Granted Option. Privilege on databases granted to public, user and role with or without granted option. Run this on any database, ideally PostgreSQL.
PostgreSQL Priv On Language Granted To Public User Role With Or Without Granted Option. Privilege on Language granted to public, user and role with or without
granted option. Run this per database.
PostgreSQL Priv On Schema Granted To Public User Role With Or Without Granted Option. Privilege on Schema granted to public, user and role with or without
granted option. Run this per database.
PostgreSQL Priv On Tablespace Granted To Public User Role With Or Without Granted Option. Privilege on Tablespace granted to public, user and role with or
without granted option. Run this on any database, ideally PostgreSQL.
PostgreSQL Role Or User Granted To User Or Role. Role or User granted to user or role including grant option. Run this once in any database, ideally PostgreSQL.
PostgreSQL Super User Granted To User Or Role. Super user granted to user or role. Run this once in any database, ideally PostgreSQL.
PostgreSQL Sys Privs Granted To User And Role. System privileges granted to user and role. Run this once in any database, ideally PostgreSQL.
PostgreSQL Table View Sequence and Function privs Granted To Public. Tables, Views, Sequence and Functions privileges granted to public. Run this per database.
PostgreSQL Table View Sequence and Function Privs Granted With Grant Option. Tables, Views, Sequence and Functions privileges granted to user and role with
grant option only. Exclude PostgreSQL account.
PostgreSQL Table View Sequence Function Privs Granted To Roles. Tables, Views, Sequence and Functions privileges granted to roles. Not including public. Run this per database.
PostgreSQL Table Views Sequence and Functions Privs Granted To Login. Tables, Views, Sequence and Functions privileges granted to logins. Not including postgres
system user. Run this per database.
Note: As of version 8.3.6, PostgreSQL does not support granting the admin option to public. There are only functions, not stored procedures. There is no support for column grants, only table grants. Public is a group, not a user, and does not show up in pg_roles. The only privilege needed to run all these queries is: GRANT CONNECT ON DATABASE PostgreSQL TO username;
For entitlements to be able to upload data from various datasources, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements).
The following list (with comment line headings) details the minimal privileges required on the database tables (or views of those tables) for the entitlement to work.
/* These are required on every database, including POSTGRES (by default these are already granted to PUBLIC) */
If a datasource has a PostgreSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), then the upload will loop through all PostgreSQL databases the user has access to.
For an overview of domains, entities, and attributes, see Domains, Entities, and Attributes. For a description of all domains, see Domains.
For z/OS data sources (Db2, Data Sets, and IMS), there are data-source-specific attributes, and the meaning of existing attributes may differ from what is described here.
For more information on entities and attributes specific to z/OS data sources, see the following:
Entity List for Access Policy - Access Policy Entity, Rule Entity, Rule Action Entity, and Alert Notification Entity. See Rule Entity for a list of attributes. See Rule Action Entity for a list of attributes. See Alert Notification Entity for a list of attributes.
Selective Audit Trail Indicates if this is a selective audit trail policy (T/F).
Audit Pattern Test pattern used for a selective audit trail policy.
Timeout values depend on the number of sessions opened by the analyzer thread. For each analyzer thread, the default values are as follows:
If the number of open sessions is >0 and <250, the timeout is 60 minutes.
If the number of open sessions is >=250 and <500, the timeout is 30 minutes.
If the number of open sessions is >=500 and <750, the timeout is 15 minutes.
If the number of open sessions is >=750 and <1200, the timeout is 5 minutes.
If the number of open sessions is >=1200, the timeout is 2 minutes.
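These tiers can be sketched as a small lookup helper (an illustration only; the thresholds come from the text above, while the function name and language are ours):

```python
def analyzer_timeout_minutes(open_sessions: int) -> int:
    """Return the default analyzer-thread timeout, in minutes, for a
    given number of open sessions, per the documented tiers."""
    if open_sessions < 250:
        return 60      # >0 and <250 open sessions
    if open_sessions < 500:
        return 30      # >=250 and <500
    if open_sessions < 750:
        return 15      # >=500 and <750
    if open_sessions < 1200:
        return 5       # >=750 and <1200
    return 2           # >=1200
```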
Construct ID Uniquely identifies a command construct (for example, select a from b).
Total Access Total count of construct instances for this access period.
Period Start Date Date only from the period start attribute.
Period Start Weekday Weekday only from the period start attribute.
Period Start Time Time only from the period start attribute.
Timestamp Initially, the Timestamp value is set the first time that a request is observed on a client-server connection during an access period. By
default, an access period is one hour long, but this can be changed by the Guardium administrator in the Inspection Engine
Configuration - see the Guardium Administrator Guide. Thereafter, for each subsequent request, it is updated when the system updates
the average execution time and the command count for this period.
Period End Date and time for the end of the access period.
Period End Date Date only from the period end attribute.
Period End Weekday Weekday only from the period end attribute.
Period End Time Time only from the period end attribute.
Average Execution Time The average command execution time during the period. This is for SQL statements only. It does not apply to FTP or Windows file share
traffic.
Failed Sqls (2) The number of failed SQL requests. See note at the end of the table.
Successful Sqls (2) The number of successful SQL requests. See note at the end of the table.
Total Records Affected (2) The total number of records affected. See note at the end of the table.
Avg Records Affected (2) The average number of records affected. See note at the end of the table.
Total Records Affected (Desc) (2) If the Total Records Affected attribute is a character string instead of a number, that value appears here (for example, Large Results Set or N/A).
Records affected - The result set size: the number of records affected by each execution of an SQL statement.
Note: The records affected option is a sniffer operation that requires the sniffer to process additional response packets and postpone logging of the impacted data, which increases the buffer size and might have an adverse effect on overall sniffer performance. The most significant impact comes from very large responses. To prevent the large overhead associated with this operation, Guardium uses a set of default thresholds that allow the sniffer to skip the processing operation when they are exceeded.
You can use the store max_results_set_size, store max_result_set_packet_size, and store max_tds_response_packets CLI commands
to set levels of granularity.
Case 1: a positive records-affected value represents the correct size of the result set.
Case 2: a records-affected value of -2 means the number of records exceeded a configurable limit (this can be tuned through the CLI).
Case 3: a records-affected value of -1 indicates a packet configuration not supported by Guardium.
Case 4: a records-affected value of -2 is also reported when the result set is sent in streaming mode.
Case 5: a records-affected value of -2 can be an intermediate result during the record count, updating the user about the current value; it ends up as a positive number of total records.
Show Seconds If the number of accesses per second is being tracked, this contains counts for each second in the access period (usually one hour).
Note that a UTC offset should be set so that times from two collectors in two different time zones aggregate correctly. If the offset is not set, users cannot determine or see a true representation of when things happened in relation to time.
For instance, on an aggregator that aggregates data from different time zones, you can see a session start of 21:00 in one record with original timezone UTC-02:00 and a session start of 21:00 in another record with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
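The aggregation rule in this example can be checked with a short sketch (the date is arbitrary; only the offsets come from the text):

```python
from datetime import datetime, timedelta

def to_utc(local_ts: datetime, utc_offset_hours: float) -> datetime:
    """Convert a collector-local timestamp to UTC using its
    Original Timezone offset (local time = UTC + offset)."""
    return local_ts - timedelta(hours=utc_offset_hours)

# Both sessions start at 21:00 local time, one at UTC-02:00 and one
# at UTC-05:00; once normalized, they are three hours apart.
a = to_utc(datetime(2024, 1, 1, 21, 0), -2.0)
b = to_utc(datetime(2024, 1, 1, 21, 0), -5.0)
```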
Session ID, Instance ID, Construct ID, and Total Access are only available to users with the admin role.
Failed Sqls, Successful Sqls, Application Event ID, Total Records Affected, Avg Records Affected, and Total Records Affected (Desc) are attributes that only appear when
the main entity for the query permits this level of detail. These are not available if either Client/Server or Session is the main entity.
Access Rule Description Description from the access policy rule definition.
Timestamp Updated at the start and end of the activity being logged (prepare for archiving, encrypt, send, etc.).
Period Start Starting time for the data being acted upon. Each archiving or aggregation activity operates on one full day of activity.
Period End Ending time for the activity being acted upon.
File Name Name of file used for the activity. Files created by the archive and export operations are named as follows:
<daysequence>-<scp_host>-w<run_datestamp>-d<data_date>.dbdump.enc
For example:
732423-g1.guardium.com-w20050425.040042-d2005-04-22.dbdump.enc
The date of the data contained in the file, in yyyy-mm-dd format, is data_date, near the end of the file name (just before .dbdump.enc). Take care not to confuse this date with the run date, which appears earlier in the file name and is the date that the data was archived or exported.
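The naming convention above can be unpacked with a simple pattern (the regular expression is our illustration of the documented format, not a Guardium utility):

```python
import re

# <daysequence>-<scp_host>-w<run_datestamp>-d<data_date>.dbdump.enc
ARCHIVE_NAME = re.compile(
    r"^(?P<dayseq>\d+)-(?P<host>.+)-w(?P<run>[\d.]+)"
    r"-d(?P<data_date>\d{4}-\d{2}-\d{2})\.dbdump\.enc$"
)

def parse_archive_name(name: str) -> dict:
    """Split an archive/export file name into its documented parts."""
    m = ARCHIVE_NAME.match(name)
    if m is None:
        raise ValueError(f"not an archive file name: {name}")
    return m.groupdict()

# The example from the text: data date 2005-04-22, run date 2005-04-25.
info = parse_archive_name(
    "732423-g1.guardium.com-w20050425.040042-d2005-04-22.dbdump.enc"
)
```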
Records Purged If the activity type is Purge, the number of records purged. Otherwise, N/A.
Original Timezone The UTC offset. This is set in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.
For instance, on an aggregator that aggregates data from different time zones, you can see a session start of 21:00 in one record with original timezone UTC-02:00 and a session start of 21:00 in another record with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Alert Notification Type Type of alert from the policy rule definition.
ALERT_NOTIFICATION_ID and ALERT_ID are only available to users with the admin role.
Original Timezone The UTC offset. This is set in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.
For instance, on an aggregator that aggregates data from different time zones, you can see a session start of 21:00 in one record with original timezone UTC-02:00 and a session start of 21:00 in another record with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Event Date Datetime value, set by GuardAppEvent:Start. It displays in the format yyyy-mm-dd hh:mm:ss.
Note: If an attempt is made to set the event date using a format other than yyyy-mm-dd, it will contain all zeroes. The time portion
(hh:mm:ss) is optional, and if omitted will be 00:00:00.
Timestamp Created only once, when the event is logged. Do not confuse this attribute with the Event Date attribute, which can be set using an API
call or from a stored procedure parameter. (See the Guardium Administrator Guide for a description of the Application Events API and
Custom Identification Procedures.)
Event Release Date Datetime value, set by GuardAppEvent:Released. It displays in the format yyyy-mm-dd hh:mm:ss.
Original Timezone The UTC offset. This is set in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.
For instance, on an aggregator that aggregates data from different time zones, you can see a session start of 21:00 in one record with original timezone UTC-02:00 and a session start of 21:00 in another record with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
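The Event Date format rule described above (the date must be yyyy-mm-dd, the time portion is optional and defaults to 00:00:00, and any other format becomes an all-zero value) can be mimicked like so; this is an illustrative helper, not a Guardium API:

```python
from datetime import datetime

def parse_event_date(value: str) -> datetime:
    """Parse a GuardAppEvent-style date string. The time portion is
    optional; an unrecognized format yields a stand-in "all zeroes"
    value (datetime.min here, for illustration)."""
    for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%d"):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    return datetime.min
```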
APP User Name Unique identifier for this App User Name entity.
Assessment Log Severity The assessment test severity: Critical, Major, Minor, Cautionary, Informational. This is an ordered list of severity classifications: the highest severity is the first in the list and the lowest is the last.
Assessment Result data source ID and Assessment Result ID are only available to users with the admin role.
Received By All Indicates whether or not these results have been received by all receivers on the distribution list.
Filter Client IP Clients selected: exact IP address, address with wildcards (*), or empty to select all.
Filter Server IP Servers selected: exact IP address, address with wildcards (*), or empty to select all.
Assessment Result ID, Assessment ID, and Task ID are only available to users with the admin role.
Test Type Type of assessment test (Observed, Predefined, Custom, Query based, CVE)
Datasource Type Type of Datasource (DB2, Informix, MYSQL, ORACLE, SYBASE, etc.)
Threshold User-defined threshold, to override the value defined upon the test's creation
Threshold Default Value Default threshold that defines the success/fail criteria
Keep Result Days The number of days the results will be kept by the system.
Keep Results Quantity The number of results sets that will be kept by the system.
Task Type A numeric value that indicates whether the task is a report, security assessment, entity audit trail, privacy set, or classification process.
Aliases are defined for these types, so running reports with Aliases on simplifies reading the report output.
Original Timezone The UTC offset. This is set in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.
For instance, on an aggregator that aggregates data from different time zones, you can see a session start of 21:00 in one record with original timezone UTC-02:00 and a session start of 21:00 in another record with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Original Timezone The UTC offset. This is set in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not seem as if they happened at the same time when imported to the aggregator.
For instance, on an aggregator that aggregates data from different time zones, you can see a session start of 21:00 in one record with original timezone UTC-02:00 and a session start of 21:00 in another record with original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
IBM Guardium customers with Database Activity Monitoring will have access to InfoSphere CDC.
This Guardium feature uses a Java CDC user exit to send value-change information to the Guardium collector.
User exits for InfoSphere CDC let the user define a set of actions that InfoSphere CDC can run before or after a database event occurs on a specified table.
Two files need to be installed on the database server for the Guardium agent that interfaces with IBM's InfoSphere Change Data Capture (InfoSphere CDC) application. They are in the sources/apps/GuardCDC/lib/ directory of the build. These files are protobuf-java-2.4.1.jar and GuardCdc.jar.
Prerequisites - the InfoSphere Change Data Capture (InfoSphere CDC) application must already be installed on the DB Server.
1. Copy these two files to the RepEngine/lib/ directory of the cdchome directory. An example of the full path would be /cdchome/cdc6.5.2/RepEngine/lib/
2. Unzip each file
3. Edit the guard_cdc_user_exit_config.mxl file to add the Guardium_Host name. An example of where this file would be located is
/cdchome/cdc6.5.2/RepEngine/lib/com/guardium/cdc/userexit/
4. Configure InfoSphere CDC to write to the GuardiumAgent. There are multiple steps to set up and configure the CDC application. These steps can be obtained
from the InfoSphere CDC development/support team at IBM.
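Assuming the example paths given in the steps above, the resulting layout on the database server can be sketched as follows (a sketch only; adjust CDC_HOME for your CDC version, and the config file name is reproduced exactly as the text gives it):

```python
from pathlib import Path

# Example install root from step 1; adjust for your environment.
CDC_HOME = Path("/cdchome/cdc6.5.2")
LIB_DIR = CDC_HOME / "RepEngine" / "lib"

# Step 1: both agent jars are copied into LIB_DIR.
JARS = [LIB_DIR / "protobuf-java-2.4.1.jar", LIB_DIR / "GuardCdc.jar"]

# Step 3: after unzipping, the configuration file to edit with the
# Guardium host name sits under the user-exit package directory.
CONFIG = (LIB_DIR / "com" / "guardium" / "cdc" / "userexit"
          / "guard_cdc_user_exit_config.mxl")
```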
Queue DateTime Timestamp when the job was submitted to the classifier/assessment queue.
Client/Server Entity
This entity describes a specific client-server connection. An instance is created each time a unique set of attributes (excluding the Timestamp) is detected.
Timestamp Since all attributes in this entity contain static information, this timestamp is created only once, when Guardium observes a request on
the defined client-server connection for the first time.
Network Protocol Network protocol used (for example, TCP or UDP). Note that for K-TAP on Oracle, this may display as either IPC or BEQ.
DB User Name Database user name. The DB user name is the person who connected to the database, either local or remote.
Service Name Service name for the interaction. In some cases (AIX® shared memory connections, for example), the service name is an alias that is
used until the actual service is connected. In those cases, once the actual service is connected, a new session is started - so what the
user experiences as a single session will be logged as two sessions.
For Teradata, Service name contains the session logical host id value.
Server OS Server operating system.
For Teradata, as there is no direct information about the client/server OS, the data format type is used instead, indicating how integer data are stored during the DB session. This is closely related to the platform being used and may appear as follows:
ClientIP/DBUser Paired attribute value consisting of the client IP address and database user name.
Analyzed Client IP Applies only to encrypted traffic; when set, client IP is set to zeroes.
Analyzed Client IP has a map for CEF source. If the query used for the CEF does NOT contain the Client IP but contains the Analyzed Client IP, the Analyzed Client IP is used for the source. If both are included in the query, then Client IP takes precedence.
Server IP/DB user Paired attribute value consisting of Server IP address and database user name.
Client/Server by session Client/Server by session is also a Main Entity. Access this secondary entity by clicking on the Client/Server primary entity.
Note: For Access Tracking only, Client/Server Entity name will appear in the pulldown menu as two possible entities - Client/Server and Client/Server By Session.
Client/Server By Session will get count from Client/Server and date conditions from Session.
Client/Server will get count from Client/Server and date conditions also from Client/Server.
If the user chooses Client/Server, then the query will be populated with ATTRIBUTE_ID = 1. If the user chooses Client/Server By Session, then the query will be populated
with MAIN_ATTRIBUTE_ID = 0.
Sniffer Connections Used Total number of connections currently being monitored since inspection engine was restarted.
Sniffer Packets Throttled Total number of connections that have been ignored due to throttling since inspection engine was restarted.
Sniffer Connections Ended Total number of connections that were monitored and have ended since inspection engine was restarted.
Mysql Is Up Boolean indicator for internal database restart (1=was restarted, 0=not restarted).
Promiscuous Received Rate of received packets through the sniffing network cards (non-interface ports).
SqlGuard Timestamp The time the record is inserted into the custom table.
Datasource Name The name of the datasource used to upload the record.
Command Entity
For each command, an entity is created for each parent node and position in which the command appears in a command construct.
SQL Verb Main verb in SQL command (e.g., select, insert, delete, etc.).
Command ID and Construct ID are only available to users with the admin role.
Comments Entity
This entity describes a user comment. It is available in the Comments domain only, which is restricted to admin users. This domain includes only sharable comments,
which are all comments except for those that run locally (see the Local Comments entity).
Comment Reference Indicates the element to which the comment is attached - a query, audit process result, or another comment, for example.
Object Description The name of the object from which the comment was defined. For example, a comment defined on a policy has an object description of
ACCESS_RULE_SET.
Database Error Text A database error code followed by a short text description of the error. The error code is taken from the Exception Description attribute
of the Exception entity. Using the error code as a key, the error text is obtained from an internal table on the Guardium appliance, which
contains the most common error messages (about 54,000 of them).
Data source Type Data source type - Oracle, MS-SQL, DB2, Sybase, Informix, etc.
Shared Yes or No
Connection Properties The Connection Property box has information in it only if additional connection properties must be included on the JDBC URL to
establish a JDBC connection with this datasource.
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
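The offset arithmetic can be checked with a short Python sketch (the dates are made up for illustration):

```python
from datetime import datetime, timezone, timedelta

# Two records with the same local session start (21:00) but different
# original timezones, as collected in different regions.
rec_a = datetime(2024, 1, 15, 21, 0, tzinfo=timezone(timedelta(hours=-2)))  # UTC-02:00
rec_b = datetime(2024, 1, 15, 21, 0, tzinfo=timezone(timedelta(hours=-5)))  # UTC-05:00

# Normalized to UTC, the events are actually 3 hours apart.
gap = rec_b - rec_a
print(gap)  # 3:00:00
```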
Timestamp A timestamp value created when Guardium records this instance of the entity (every instance has a unique timestamp).
Probe Attempted Indicates if a probe for a supported database service has been attempted on this port. T=yes, F=no.
DB Type If a probe of the port has found a supported database type, indicates the type (DB2, Informix, MS SQL Server etc.)
Probe Timestamp The date and time that this specific port was probed.
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Exception Entity
This entity is created for each exception encountered.
Exception Timestamp Date and time created when this Exception entity was logged.
For an S-TAP reconnect or timeout exception, this will contain the IP address or DNS name of the database server.
For a database exception, this is an error code from the database management system. For most common messages (about 54,000 of
them), a longer text description is available in the Database Error Text attribute. That text comes from the internal Guardium database
table of error messages, not from the exception itself.
SQL string that caused the exception The SQL string that caused the exception.
User Name Database user name. On encrypted traffic, where correlation is required, this value may not be available, but it is always available from
the DB User Name attribute in the Client/Server entity.
Link to more information about the exception Optional link that is sometimes available, depending on the exception source.
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Exception ID and Exception Type ID are only available to users with the admin role.
Exception Description A text description of the exception type, from the following list. Most of these should never be seen. See the notes in italics for the most common exceptions.
For this message, a database error code will be stored in the Exception Description attribute of the Exception entity, and a text version of
the database error message will be available in the Database Error Text attribute of the Database Error Text entity.
DB Protocol Exception
Login Failed
Security Exception
For this message, a custom class exception has been raised when breaching code execution is blocked, such as when users use the Java™ API to define their own alerts or assessments.
For this message, the IP address or DNS name of the database server will be available in the Exception Description attribute of the
Exception entity
For this message, the IP address or DNS name of the database server will be available in the Exception Description attribute of the
Exception entity
TCP ERROR
For this message, additional information about the error will be included in the Exception Description attribute of the Exception entity
Field Entity
Each time Guardium encounters a new field, it creates a field entity.
Command ID Uniquely identifies the main command from the construct in which it was
referenced.
Object ID Uniquely identifies the object from the construct in which it was referenced.
Order By ORDER BY department
Having FROM table_name GROUP BY column_name1
Group By FROM table_name GROUP BY column_name1
Where FROM Users
Field ID, Construct ID, Command ID, and Object ID are only available to users with the admin role.
SQL: simple, direct SQL command, for example, typed directly into the CLI
RAW: PREPARE of a SQL statement for later execution, for example, conn.prepareStatement (select a from b where c=:value)
Statement type is part of the FULL SQL entity and is audited only if you have configured Log Full Details for this statement within the
policy.
You cannot filter out specific statement types in the policy (for example, to audit only SQL and BIND statements). You can, however, filter these out in reports.
Bind Variables Values For DB2/zOS, contains a comma-separated list of bind variable values.
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Timestamp The timestamp records the time when the SQL is executed in the database server.
Response Time The response time for the request in milliseconds. When requests are monitored in network traffic, the response times are an accurate
reflection of the time taken to respond to the request (Guardium timestamps both the client request and the server response).
Records Affected The number of records affected for each session. On reports using this attribute, we suggest that you turn on aliases to properly display
special cases such as Large Result Set or N/A.
Returned Data Data returned for this request (if any, and if available).
Records Affected (Desc) When the Records Affected is a string value instead of a number, that string is stored here. For example: Large Result Set or N/A.
Returned Data Count Number of rows returned from the SQL statement used in the policy rule.
SQL: simple, direct SQL command, for example, typed directly into the CLI
RAW: PREPARE of a SQL statement for later execution, for example, conn.prepareStatement (select a from b where c=:value)
Statement type is part of the FULL SQL entity and is only audited if you have configured Log Full Details for this statement within the
policy.
You cannot filter out specific statement types in the policy (for example, to audit only SQL and BIND statements). You can, however, filter these out in reports.
Bind Variables Values For DB2/zOS, contains a comma-separated list of bind variable values.
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Full SQL ID, Instance ID, and Succeeded are only available to users with the admin role.
Timestamp Date and time the Full SQL Values entity was created.
Event Generator IP address of the client (i.e. DB-Server) which generated the event.
Group Entity
This entity describes a group that has been defined to Guardium.
Timestamp Date and time the group member was created or updated.
Application The Guardium application listed (for example, Query Builder, Policy Builder, etc.).
Modified Entity The Guardium entity modified (a group definition, for example).
User Name Created when the Guardium user logs in or out (there will be one entity per Guardium session).
Login Date And Time Date and time user logged in.
Logout Date And Time Date and time user logged out.
Host Entity
A CAS Host entity is created the first time that CAS is seen on a database server host. It is updated each time that the online/offline status changes. The Host entity is also
available in the CAS Host History domain.
Audit State Label Id Unique numeric identifier for the configuration item
DB Type Database type: Oracle, MS-SQL, DB2, Sybase, Informix, or N/A if the change is to an operating system instance
OS Script or SQL Script: A change triggered by the OS script contained in the monitored item template definition.
File: A specific file. There is no host configuration entity for a file pattern defined in the template set used by the instance. Instead, there
is a separate host configuration entity for each file that matches the pattern.
Monitored Item The name of the changed item, from the Description (if entered), otherwise a default name depending on the Type (a file name, for
example).
Event Time Date and time that the event was recorded
Failover Off - A server is available (following a disruption), so CAS data is being written to the server
Failover On - The server is not available, so CAS data is being written to the failover file
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Incident Entity
Incident entities are created by incident generation processes, or manually by assigning a policy violation to an incident.
Incident Severity Description The severity code will be one of the following:
Selective Audit Trail Indicates if this is a selective audit trail policy (T/F).
Audit Pattern Test pattern used for a selective audit trail policy.
Sequence Sets the order of evaluation when there are multiple installed policies.
DB Type Database type: Oracle, MS-SQL, DB2, Sybase, Informix; or N/A for an operating system instance
User The user name that CAS uses to log onto the database; or N/A for an operating system instance.
Port The port number CAS uses to connect to the database; or empty for an operating system
instance
DB Home Dir The home directory for the database; or empty for an operating system instance
Join Entity
A join table is a way of implementing many-to-many relationships. Use join entity to join tables in a SELECT SQL statement.
Timestamp Date and Time that the Join Entity was created.
Comment Reference Indicates the element to which the comment is attached - a query, audit process result, or another comment, for example.
Object Description The name of the object from which the comment was defined. For example, a comment defined on an incident has an object description
of INCIDENT.
Record Associations A list of records that this local comment is associated with.
Location View
How to determine what days are not archived
Use a query (Tools tab > Report Building > Report Builder > query Location View) that can be modified to create a report showing the files that are archived. This report
lists all the files with archive dates. Dates not on this report indicate that those dates have not been archived. Run archive for the dates not on the list, if required.
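The gap check can be sketched in Python; the archived-date list below is an illustrative sample, not output of a real Location View report:

```python
from datetime import date, timedelta

# Dates that appear in the Location View archive report (illustrative).
archived = {date(2024, 3, 1), date(2024, 3, 2), date(2024, 3, 4)}

# Every date in the audit window that should have an archive file.
start, end = date(2024, 3, 1), date(2024, 3, 5)
window = {start + timedelta(days=i) for i in range((end - start).days + 1)}

# Dates with no archive file; run archive for these, if required.
missing = sorted(window - archived)
print(missing)  # [datetime.date(2024, 3, 3), datetime.date(2024, 3, 5)]
```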
Aggregator The Guardium system on which the file was generated. Note that this can be a collector, not only an aggregator.
System Type The protocol used for archiving: SCP, FTP, Centera, or TSM.
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Message Originator The module creating the message; for example monitor or GuardiumJetspeedUser.
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Timestamp Date and time the change was recorded on the Guardium appliance. This timestamp is created during the data upload operation. It is
not the time that the change was recorded on the audit database. To obtain that time, use the Audit Timestamp entity.
Database Name DB2, Informix, Sybase, MS SQL Server only. Database name.
Audit PK For Sybase and MS SQL Server only. A primary key used to relate old and new values (which must be logged separately for these
database types).
SQL Text Available only with Oracle 9. The complete SQL statement causing the value change.
Triggered ID Unique ID (on this audit database) generated for the change.
Audit Timestamp Date and time that the trigger was executed.
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Sample Time Timestamp (date and time on host) that sample was taken
Saved Data Id Identifies the Saved Data entity for this change
Audit State Label Id Identifies the Host Configuration entity for this change
Timestamp Date and time this change record was created on the server (Guardium appliance server clock)
MD5 Indicates whether or not the comparison is done by calculating a checksum using the MD5 algorithm and comparing that value with the
value calculated the last time the item was checked. The default is to not use MD5. If MD5 is used but the size of the raw data is greater
than the MD5 Size Limit configured for the CAS host, the MD5 calculation and comparison will be skipped. Regardless of whether or not
MD5 is used, both the current value of the last modified timestamp for the item and the size of the item are compared with the values
saved the last time the item was checked.
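A minimal Python sketch of this comparison logic (the function, field names, and size limit are assumptions for illustration, not the CAS implementation):

```python
import hashlib

MD5_SIZE_LIMIT = 1024 * 1024  # assumed per-host limit, in bytes

def item_changed(data: bytes, mtime: float, prev: dict, use_md5: bool) -> bool:
    """Compare a monitored item against the values saved at the last check."""
    # Last-modified timestamp and size are always compared,
    # regardless of whether MD5 is used.
    if mtime != prev["mtime"] or len(data) != prev["size"]:
        return True
    # The MD5 comparison is skipped when disabled, or when the raw data
    # exceeds the configured size limit.
    if use_md5 and len(data) <= MD5_SIZE_LIMIT:
        return hashlib.md5(data).hexdigest() != prev["md5"]
    return False
```

For example, with `prev` holding the values saved at the last check, an item whose timestamp and size are unchanged is reported as changed only if its MD5 checksum differs.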
Owner Unix only. If the item type is a file, the file owner
Permissions Unix only. If the item type is a file, the file permissions
0 (zero) = File does not exist, but this file name is being monitored (it never existed or may have been deleted)
Last Modified Timestamp for the last modification, taken from the file system at the sample time
Group Unix only. If the item type is a file, the group owner
Monitored Item Depending on the Audit Type, this is the OS or SQL script, environment or registry variable, or file name. For a file pattern defined in an item template, there is a separate monitored item detail entity for each file that matches the pattern, but there is no monitored item details entity for the file pattern itself. If a file pattern is used, it is always available in the Template Content attribute.
Audit Config Set Id Identifies the template set in the host configuration
OS Script or SQL Script: The actual text or the path to an operating system or SQL script, whose output will be compared with the output
produced the next time it runs
In Synch Indicates whether or not the template item definition on the server matches the template item definition on the CAS host
Use MD5 Indicates whether or not the comparison is done by calculating a checksum using the MD5 algorithm and comparing that value with the
value calculated the last time the item was checked. The default is to not use MD5. If MD5 is used but the size of the raw data is greater
than the MD5 Size Limit configured for the CAS host, the MD5 calculation and comparison will be skipped. Regardless of whether or not
MD5 is used, both the current value of the last modified timestamp for the item and the size of the item are compared with the values
saved the last time the item was checked.
Save Data When marked, previous versions of the item can be compared with the current version
Template Content The template entry that is the basis for this monitored item, set from the Template entity Access Name attribute when the instance was
created. Typically this will be the same as the monitored item, but in the case where a file pattern was used in the template, this will be
the file pattern
Object Entity
An instance of this entity is created for each object in a unique schema.
Object Id and Construct Id are available to users with the admin role only.
Application User Name Name of the user creating the policy rule violation.
Full SQL String SQL string causing the policy rule violation.
Timestamp Created when the policy rule violation is logged. Not all policy rule violations are logged - see the description of the rule actions in
Chapter 11: Building Policies.
Message Sent The text of the policy rule violation message that was sent.
Application Event Id Application event ID (if any - these are set using the application events API)
Access Rule Description The description of the rule from its definition.
Severity Severity defined for the rule (the severity of an incident to which this is assigned may be different).
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Violation Log Id is available to users with the admin role only.
Qualified Object Tuple - Server IP, Service name, DB name, DB user, Object
Timestamp A timestamp value created when the Guardium appliance records the rogue connection reported by the Hunter.
IPC Type Type of inter-process communications used for the connection, which may be from the following list:
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Rule Entity
Can be used for Installed policy rule entity or access policy rule entity. There is one for each rule of the installed policy/policies or access policy/policies. Apart from the ID
fields (which uniquely identify components on the internal database), all of these fields are described in the Policies help topic.
Note: GDM_INSTALLED_POLICY_RULES_ID and ACCESS_RULE_ID are available to users with the admin role only.
Timestamp Timestamp for when the saved data entity was recorded in the server database
Change Identifier Identifies the monitored changes entity for this saved data entity
Server IP/Server Port A server IP value combined with a server port value.
Session Entity
This entity is created for each Client/Server database session.
Timestamp Initially, a timestamp created for the first request on a client-server connection where there is not an active session in progress. Later, it
is updated when the session is closed, or when it is marked inactive following an extended period of time with no observed activity.
When tracking Session information, you will probably be more interested in the Session Start and Session End attributes than the
Timestamp attribute.
Session Start Date and time session started. Session Start is also a Main Entity. Access this secondary entity by clicking on the Session primary entity.
Session End Date and time the session ended. Session End is also a Main Entity. Access this secondary entity by clicking on the Session primary
entity.
Database Name Name of database for the session (MSSQL or Sybase only).
Note: For Oracle, Database Name may contain additional and application specific information such as the currently executing module
for a session that has been set in the MODULE column of the V$SESSION view
Session Ignored Indicates whether or not some part of the session was ignored (beginning at some point in time).
Uid Chain For a session reported by Unix S-TAP (K-Tap mode only), this shows the chain of OS users, when users su with a different user name.
The values that appear here vary by OS platform - for example, under AIX the string IBM IBM IBM may appear as a prefix.
Note: For Solaris Zones, user ids may be reported instead of user names in the Uid Chain.
Old Session ID Points to the session from which this session was created. Zero if this is the first session of the connection.
Process ID The process ID of the client that initiated the connection (not always available).
Duration (secs) Indicates the length of time between the Session Start and the Session End (in seconds).
Original Timezone The UTC offset. This is recorded in particular for aggregators that have collectors in different time zones, so that activities that happened hours apart do not appear to have happened at the same time when imported to the aggregator. For instance, on an aggregator that aggregates data from different time zones, you can see one record with a session start of 21:00 and original timezone UTC-02:00, and another record with a session start of 21:00 and original timezone UTC-05:00. This means that these events occurred 3 hours apart, but at the same respective local time (9 PM).
Global ID, Session ID, and Access ID are only available to users with the admin role.
Severity Entity
The incident severity for an incident or policy violation
Mysql Is Up Boolean indicator for internal database restart (1=was restarted, 0=not restarted).
Promiscuous Received Rate of received packets through the sniffing network cards (non-interface ports).
Sniffer Connections Ended Total number of connections that were monitored and have ended since inspection engine was restarted.
Sniffer Connections Used Total number of connections currently being monitored since inspection engine was restarted.
Sniffer Packets Throttled Total number of connections that have been ignored due to throttling since inspection engine was restarted.
Di Rate
Di Queue Length
Di Total
Di Lost Packets
Bind Out Var Optional. Determines whether the entered SQL statement text is a procedural block of code that returns a value, which is bound to an internal Guardium variable and compared against the Compare To value.
Compare To Value Compare value that will be used to compare against the return value from the SQL statement using the compare operator.
External Reference Reference to the Center for Internet Security (CIS) or Common Vulnerabilities and Exposures (CVE).
Recommendation Text Fail The Recommended text for fail that will be displayed when the test fails.
Recommendation Text Pass The Recommended text for pass that will be displayed when the test passes.
Result Text Fail The Result text for fail that will be displayed when the test fails.
Result Text Pass The Result text for pass that will be displayed when the test passes.
Return Type The Return type that will be returned from the SQL statement.
SQL For Details A SQL statement that retrieves a list of strings, used to generate a detail string of the form: detail prefix + list of strings.
SQL The SQL statement that will be executed for the test.
SQL Entity
This entity is created for each unique string of SQL. Values are replaced by question marks - only the format of the string is stored.
Truncated SQL Indicates whether the SQL has been truncated, where:
1 - true/yes, truncated
0 - false/no, not truncated
Template Entity
A CAS template entity is created for each item template within a template set. An item is a specific file or file pattern, an environment or registry variable, the output of an
OS or SQL script, or the list of logged-in users.
Template ID A unique identifier for the item template within the set of all item templates
Access Name Depending on the Audit Type, this is the OS or SQL script, environment or registry value, or a file name or a file name pattern
Audit Frequency (Min) The maximum interval (in minutes) between tests
Use MD5 Indicates whether or not the comparison is done by calculating a checksum using the MD5 algorithm and comparing that value with the
value calculated the last time the item was checked. The default is to not use MD5. If MD5 is used but the size of the raw data is greater
than the MD5 Size Limit configured for the CAS host, the MD5 calculation and comparison will be skipped. Regardless of whether or not
MD5 is used, both the current value of the last modified timestamp for the item and the size of the item are compared with the values
saved the last time the item was checked.
Save Data Indicates if the Keep data checkbox has been marked. If so, previous versions of the item can be compared with the current version
Editable Indicates whether or not this template can be modified. The default Guardium templates cannot be modified. In addition once a
template set has been used in a CAS instance, it cannot be modified. In any case, a template set can always be cloned and the cloned
set can be modified
Template ID and Template Set ID are only available to users with the admin role.
Template Set Id A unique identifier for the template set, numbered sequentially
DB Type Database Type: Oracle, MS-SQL, DB2, Sybase, Informix, or N/A for an operating system template
IsDefault Indicates whether or not this template is the default for the specified OS Type and DB Type combination
Editable Indicates whether or not this template can be modified. The default Guardium templates cannot be modified. In addition once a
template set has been used in a CAS instance, it cannot be modified. In any case, a template set can always be cloned and the cloned
set can be modified
Parameter Modified Flag Indicates if parameters were modified since the last test.
Threshold String The threshold prompt for the test (e.g. Maximum Number of Different IP's Allowed per user)
Exceptions Group Desc Exceptions Group Description. Populated when test is executed.
Test Result ID, Assessment Result ID, and Assessment Test ID are only available to users with the admin role.
Checked From Date The starting date and time checked for by the alert condition.
Checked To Date The ending date and time checked for by the alert condition.
Unit Utilization: Displays the maximum unit utilization level for each unit in the given timeframe. There is a drill-down that displays details for a unit across all
periods within the timeframe of the report.
Unit Utilization Distribution: Per-unit, this report displays the percent of periods in the report timeframe with utilization levels of low, medium, and high.
Utilization Thresholds: This predefined report displays all low and high threshold values for all unit utilization parameters.
Unit Utilization Daily Summary: Provides a daily summary of unit utilization data.
In addition, Units Utilization Levels tracking enables users to create custom queries and reports.
Tip: Enable aliases for all custom and pre-defined reports using unit utilization data to ensure that unit utilization levels are displayed as meaningful strings instead of
numbers. For example, low, medium, and high instead of 1, 2, or 3.
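The alias behavior that the tip describes amounts to mapping the stored numeric level to a display string. A minimal sketch (the numeric codes follow the example in the text; the function itself is hypothetical):

```python
# Alias table mapping stored unit utilization levels to display strings.
UTILIZATION_ALIASES = {1: "low", 2: "medium", 3: "high"}

def display_level(level: int) -> str:
    # Fall back to the raw number when no alias is defined.
    return UTILIZATION_ALIASES.get(level, str(level))

print(display_level(2))  # medium
```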
The list of attributes includes:
Host name
Period start
Number of restarts
Number of restarts level
Sniffer memory
Sniffer memory Level
Percent MySQL memory
Percent MySQL memory level
Free buffer space
Free buffer space level
Analyzer queue
Note: Each parameter has a value and a level which is calculated based on the value and the thresholds.
User Entity
Identifies the Guardium user defined as an audit process results receiver.
Note: DB Entitlements Reports are optional components enabled by product key. If these components have not been enabled, the choices listed below do not appear in
the Custom Domain Builder/Custom Domain Query/Custom Table Builder selections.
The predefined entitlement reports are listed as follows. They appear as domain names in the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections:
Oracle DB Entitlements
The following domains are provided to facilitate uploading and reporting on Oracle DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on the user portal, and go to the DB Entitlements tab.
Oracle
ORA Accnts of ALTER SYSTEM - Accounts with ALTER SYSTEM and ALTER SESSION privileges
ORA Accnts with BECOME USER - Accounts with BECOME USER privileges
ORA All Sys Priv and admin opt - Report showing all system privilege and admin option for users and roles
ORA Obj And Columns Priv - Object and columns privileges granted (with or without grant option)
ORA Object Access By PUBLIC - Object access by PUBLIC
ORA Object privileges - Object privileges by database account not in the SYS and not a DBA role
ORA PUBLIC Exec Priv On SYS Proc - Execute privilege on SYS PL/SQL procedures assigned to PUBLIC
ORA Roles Granted - Roles granted to users and roles
To upload entitlement data from a datasource, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements). The following list (with comment-line headings) details the minimal privileges required on each database table (or view of the table) for the entitlement to work.
MYSQL DB Entitlements
The following domains are provided to facilitate uploading and reporting on MYSQL DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
MYSQL: The queries ending in "_40" use the most basic version of the mysql schema (for MySQL 4.0 and later). The information_schema has not changed since it was introduced in MySQL 5.0, so there is a set of _50 queries but no _51 queries. The _50 queries work for MySQL 5.0 and 5.1, and for 6.0 when it is released, since the information_schema is not expected to change in 6.0. The queries ending in "_502" (MYSQL502) use the new information_schema, which contains much more information and is much more like a true data dictionary.
To upload entitlement data from a datasource, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements). The following list details the minimal privileges required on each database table (or view of the table) for the entitlement to work.
Note: In addition to the privileges required, the user should connect to the MYSQL database to upload the data.
The entitlement queries for all MySQL versions through MySQL 5.0.1 use this set of tables: mysql.db mysql.host mysql.tables_priv mysql.user
Beginning with MySQL 5.0.2, and for all later versions, the entitlement queries use this set of tables: information_schema.SCHEMA_PRIVILEGES mysql.host
information_schema.TABLE_PRIVILEGES information_schema.USER_PRIVILEGES
If a datasource has a MYSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), the upload loops through all MYSQL databases the user has access to.
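The version-to-table mapping described above can be sketched as follows. The function name and the version-tuple comparison are illustrative assumptions; only the cutoff (5.0.2) and the two table sets come from the text.

```python
# Hedged sketch of selecting the entitlement query tables by MySQL version.
def mysql_query_tables(version):
    """Return the tables the entitlement queries read for a MySQL version string."""
    major, minor, patch = (int(p) for p in version.split("."))
    if (major, minor, patch) < (5, 0, 2):
        # Through MySQL 5.0.1: the basic mysql schema tables.
        return ["mysql.db", "mysql.host", "mysql.tables_priv", "mysql.user"]
    # MySQL 5.0.2 and later: the information_schema views (plus mysql.host).
    return ["information_schema.SCHEMA_PRIVILEGES", "mysql.host",
            "information_schema.TABLE_PRIVILEGES",
            "information_schema.USER_PRIVILEGES"]
```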
DB2 DB Entitlements
The following domains are provided to facilitate uploading and reporting on DB2 DB Entitlements. Each of the following domains has a single entity (with the same name),
and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
To upload entitlement data from a datasource, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements). The following list (with comment-line headings) details the minimal privileges required on each database table (or view of the table) for the entitlement to work.
Use the gdmmonitor-db2-IBMi.sql script to detail the minimal privileges required on each database table (or view of the table) for the entitlement to work.
Object privileges granted to grantee (Object type: Schema, Table, View, Package, Routine, sequence, column, global variable, and XML schema)
Executable Objects privileges granted to PUBLIC (Object type: package and Routine)
Object privileges granted to grantee with GRANT OPTION (Object type: Schema, Table, View, Package, Routine, sequence, column, global variable, and XML schema)
All of the object privileges exclude the default system schemas listed in the predefined Guardium group "DB2 for i exclude system schemas - entitlement report". Add to this group any schemas that should be excluded.
SYBASE DB Entitlements
The following domains are provided to facilitate uploading and reporting on SYBASE DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
SYBASE System Privilege and Roles Granted to User including Grant option
SYBASE Role Granted to User and System Privileges Granted to user and role including Grant option
SYBASE Object Access by Public
SYBASE Execute Privilege on Procedure, function assigned To Public
SYBASE Accounts with System or Security Admin Roles
SYBASE Object and Columns Privilege Granted with Grant option
SYBASE Role Granted To User
To upload entitlement data from a datasource, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements). The following list (with comment-line headings) details the minimal privileges required on each database table (or view of the table) for the entitlement to work.
If a datasource has a SYBASE database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), the upload loops through all SYBASE databases the user has access to.
SYBASE IQ Entitlements
Supported version: Sybase IQ 15 and later.
The following custom table definitions are created to upload data (the IDs can be ignored):
142 | SybaseIQ15 System Authority And Group Granted To User And Group
603 | SybaseIQ15 User Group With DBA/Perms Admin/User Admin/Remote DBA database authority
606 | SybaseIQ15 Login Policy For User And Group With Login Option Setting
=========================================================================================
Description of each (some are self-explanatory):
1. Privileges granted to users only, not including groups or membership in groups.
7. Users and groups with DBA, Perms Admin, User Admin, or Remote DBA database authority.
8. Table and view privileges granted with grant option to users and groups. Note: this is the only grant-with-grant-option type allowed in Sybase IQ; routines cannot be granted with grant option.
10. Login policy assigned to users and groups, with login option settings.
See the examples below for how to add a datasource to each of the new reports and then execute each report.
Informix DB Entitlements
The following domains are provided to facilitate uploading and reporting on Informix DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
Informix Object Privileges by database account not including system account and roles
Informix database level privileges, roles and language granted to user including grant option
Informix database level privileges, roles and language granted to user and role including grant option
Informix Object Grant to Public
Informix Execute Privilege on Informix procedure and function granted to Public
Informix Account with DBA Privilege
Informix Object and columns privileges granted with Grant option
Informix Role Granted To User and Role
To upload entitlement data from a datasource, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements). The following list (with comment-line headings) details the minimal privileges required on each database table (or view of the table) for the entitlement to work.
Because all users already have SELECT privileges on the system catalog, there is no need to grant privileges to any user. Informix does not handle grants on the system catalog well; the grants below would normally be used, but in this case they are not required.
If a datasource has an Informix database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), the upload loops through all Informix databases the user has access to.
MSSQL 2000 DB Entitlements
MSSQL2000 Object Privilege By database account not including default system user
MSSQL2000 Role/System Privileges Granted to User including grant option
MSSQL2000 Role granted to user and role. System Privileges Granted to User and Role including grant option
MSSQL2000 Object Access by PUBLIC
MSSQL2000 Execute Privilege on System Procedures and functions to PUBLIC
MSSQL2000 Database accounts with db_owner and db_securityadmin role
To upload entitlement data from a datasource, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements). The following list (with comment-line headings) details the minimal privileges required on each database table (or view of the table) for the entitlement to work.
If a datasource has a MSSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), the upload loops through all MSSQL databases the user has access to.
Note: Objects in dynamic query strings will NOT be shown in xxx_DEPENDENCIES. An object in an EXECUTE IMMEDIATE SQL string called by a stored program unit does not show a dependency. This query excludes the schema owners defined in group ID 202, "Dependencies_exclude_schema-MSSQL". Users can add or remove schema names from this group for the dependencies query.
MSSQL 2005/8 DB Entitlements
MSSQL2005/8 Object privileges by database account not including default system user.
MSSQL2005/8 Role/System privileges granted To User
MSSQL2005/8 Role/System Privilege granted to user and role including grant option
MSSQL2005/8 Object access by PUBLIC
MSSQL2005/8 Execute Privilege on System Procedures and functions to PUBLIC
MSSQL2005/8 Database accounts of db_owner and db_securityadmin Role
MSSQL2005/8 Server account of sysadmin, serveradmin and security admin /* only run against MASTER database */
MSSQL2005/8 Object and columns privileges granted with grant option
MSSQL2005/8 Role granted to user and role.
To upload entitlement data from a datasource, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements). The following list (with comment-line headings) details the minimal privileges required on each database table (or view of the table) for the entitlement to work.
If a datasource has a MSSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), the upload loops through all MSSQL databases the user has access to.
Netezza DB Entitlements
Note: There is no DB error text translation for Netezza. The error appears in the exception description. Users can clone/add a report with the exception description for
Netezza as needed.
Netezza Obj Privs by DB Username - Object privileges with or without grant option by database username excluding ADMIN account.
Netezza Admin Privs by DB Username - Admin privileges with or without grant option by database username excluding ADMIN account.
Netezza Obj Privs By Group - Object privileges with or without grant option by GROUP excluding PUBLIC.
Netezza Admin Privs By Group - Admin privileges with or without grant option by GROUP excluding PUBLIC.
Netezza Admin Privs By DB Username, Group - Admin privileges with or without grant option by database username, group excluding ADMIN account and PUBLIC
group.
Netezza Obj Privs Granted - Object privileges granted with or without grant option to PUBLIC.
Netezza Admin Privis Granted - Admin privileges granted with or without grant option to PUBLIC.
Netezza Global Admin Priv To Users and Groups - Global admin privilege granted to users and groups excluding ADMIN account.
Netezza Global Obj Priv To Users and Groups - Global object privilege granted to users and groups excluding ADMIN account.
To upload entitlement data from a datasource, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements). The following list (with comment-line headings) details the minimal privileges required on each database table (or view of the table) for the entitlement to work.
For Netezza entitlement queries, it is recommended to connect to the SYSTEM database, especially when granting the privilege to the user who will run these reports. The grant MUST be issued from the SYSTEM database; otherwise, the granted privilege applies only to one particular database. When the grant is issued from the SYSTEM database, it carries through to all databases.
Teradata DB Entitlements
The following domains are provided to facilitate uploading and reporting on Teradata DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
Teradata Object privileges by database account not including default system users.
Teradata System privileges and roles granted to users including grant option.
Teradata Role granted to users and roles. System privileges granted to users and roles including grant option.
Teradata Objects and System privileges granted to public. Note role cannot be granted to public in Teradata.
Teradata System admin, Security admin privileges granted to user and role.
Note: There is no such role as System admin or Security admin in Teradata; users must create their own roles. These are some important system privileges that would normally not be granted to a normal user: ABORT SESSION, CREATE DATABASE, CREATE PROFILE, CREATE ROLE, CREATE USER, DROP DATABASE, DROP PROFILE, DROP ROLE, DROP USER, MONITOR RESOURCE, MONITOR SESSION, REPLICATION OVERRIDE, SET SESSION RATE, SET RESOURCE RATE.
Teradata Object privileges granted with grant option to users. Not including DBC and grantee = 'All'.
To upload entitlement data from a datasource, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements). The following list (with comment-line headings) details the minimal privileges required on each database table (or view of the table) for the entitlement to work.
PostgreSQL DB Entitlements
The following domains are provided to facilitate uploading and reporting on PostgreSQL DB Entitlements. Each of the following domains has a single entity (with the same
name), and there is a predefined report for each domain. All of these domains are available from the Custom Domain Builder/Custom Domain Query/ Custom Table Builder
selections. As with other predefined entities and reports, these cannot be modified, but you can clone and then customize your own versions of any of these domains or
reports. To see entitlement reports, log on to the user portal and go to the DB Entitlements tab.
The entitlement custom domains/queries/reports for PostgreSQL are as follows (each is listed with report name, description, and notes):
PostgreSQL Priv On Databases Granted To Public User Role With Or Without Granted Option. Privilege on databases granted to public, user, and role with or without granted option. Run this on any database, ideally PostgreSQL.
PostgreSQL Priv On Language Granted To Public User Role With Or Without Granted Option. Privilege on Language granted to public, user and role with or without
granted option. Run this per database.
PostgreSQL Priv On Schema Granted To Public User Role With Or Without Granted Option. Privilege on Schema granted to public, user and role with or without
granted option. Run this per database.
PostgreSQL Priv On Tablespace Granted To Public User Role With Or Without Granted Option. Privilege on Tablespace granted to public, user and role with or
without granted option. Run this on any database, ideally PostgreSQL.
PostgreSQL Role Or User Granted To User Or Role. Role or user granted to user or role, including grant option. Run this once in any database, ideally PostgreSQL.
PostgreSQL Super User Granted To User Or Role. Super user granted to user or role. Run this once in any database, ideally PostgreSQL.
PostgreSQL Sys Privs Granted To User And Role. System privileges granted to user and role. Run this once in any database, ideally PostgreSQL.
PostgreSQL Table View Sequence and Function privs Granted To Public. Tables, views, sequence, and function privileges granted to public. Run this per database.
PostgreSQL Table View Sequence and Function Privs Granted With Grant Option. Tables, Views, Sequence and Functions privileges granted to user and role with
grant option only. Exclude PostgreSQL account.
PostgreSQL Table View Sequence Function Privs Granted To Roles. Tables, views, sequence, and function privileges granted to roles, not including public. Run this per database.
PostgreSQL Table Views Sequence and Functions Privs Granted To Login. Tables, Views, Sequence and Functions privileges granted to logins. Not including postgres
system user. Run this per database.
Note: As of version 8.3.6, PostgreSQL does not support granting the admin option to public. There are only functions, no stored procedures. There is no support for column grants, only table grants. Public is a group, not a user, and does not show up in pg_roles. The only privilege needed to run all these queries is: GRANT CONNECT ON DATABASE PostgreSQL TO username;
To upload entitlement data from a datasource, the general requirement is that the login used to access the database be able to read the tables used in the query (the query itself is hidden for all entitlements). The following list (with comment-line headings) details the minimal privileges required on each database table (or view of the table) for the entitlement to work.
/*These are required on every database, including POSTGRES (By default these are already granted to PUBLIC) */
If a datasource has a PostgreSQL database type but does not have a DB name (see Datasource Definitions; the database name under Location is blank), the upload loops through all PostgreSQL databases the user has access to.
Get the information that you seek faster by accessing the predefined reports available in the Guardium application. These predefined reports can be cloned and customized to the user's needs.
Using the Guardium predefined reports is a best-practice recommendation, enabling organizations to quickly and easily identify security risks such as inappropriately exposed objects, users with excessive rights, and unauthorized administrative actions. Examples of the many predefined reports include: accounts with system privileges; all system and administrator privileges, shown by user and role; object privileges by user; and all objects with PUBLIC access.
All parameters and values are displayed on all reports. The parameters and values can be edited using Customize in any report screen.
Logins to Guardium
All values for this report are from the Guardium Logins entity. For the reporting period, each row of the report lists the User Name, Login Succeeded (1 = successful, 0 = failed, -1 = password expired, -2 = login from different IP), Login Date And Time, Logout Date And Time (blank if the user has not yet logged out), Host Name, Remote Address (of the user), and count of logins for the row.
Report Location: Reports > Monitoring of Guardium System > Guardium Logins
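The Login Succeeded codes can be decoded as in the following sketch. The dictionary and helper function are illustrative assumptions added for readability; only the four status codes come from the report description.

```python
# Illustrative decoding of the Login Succeeded column of the
# Guardium Logins report (codes taken from the report description).
LOGIN_STATUS = {
    1: "successful",
    0: "failed",
    -1: "password expired",
    -2: "login from different IP",
}

def describe_login(code):
    """Return a human-readable description for a Login Succeeded code."""
    return LOGIN_STATUS.get(code, "unknown status")
```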
Report Location: Reports > Guardium Operational Reports > Enterprise Buffer Usage
Report Location: Reports > Monitoring of Guardium System > Groups Usage Report
Guardium Applications
For each Guardium application, each row lists a security role that is assigned, or the word all, indicating that all roles are assigned.
Report Location: Reports > Real-Time Guardium Operational Reports > All Guardium Applications - Role
Guardium Roles
This menu pane displays two reports: All Roles - Application Access and All Roles - User.
All Roles - Application Access: For each role, this report lists the number of applications to which it is assigned.
To list the applications to which a role is assigned, click the role and drill down to the Record Details report.
Report Location: Reports > Monitoring of Guardium System > All Roles - Application Access
For each role, this report lists the number of users to which it is assigned. To list the users to which a role is assigned, click the role and drill down to the Record Details
report.
Guardium Users
Lists each user, date of last activity, and number of roles assigned. For each user, you can drill down to the Record Details report to see the roles that are assigned to that
user.
Unit Utilization: Displays the maximum unit utilization level for each unit in the given timeframe. A drill-down displays details for a unit across all periods within the timeframe of the report.
Unit Utilization Distribution: Per unit, this report displays the percent of periods in the report timeframe with utilization levels of low, medium, and high.
Utilization Thresholds: This predefined report displays all low and high threshold values for all unit utilization parameters.
Unit Utilization Daily Summary: Provides a daily summary of unit utilization data.
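The distribution figures could be derived from per-period levels as in the following sketch. The helper function is an illustrative assumption, not the report's actual query; it only shows the arithmetic (percent of periods at each level).

```python
# Sketch of computing the Unit Utilization Distribution percentages
# from a list of per-period levels (1=low, 2=medium, 3=high).
def level_distribution(levels):
    """Percent of periods at each utilization level."""
    total = len(levels)
    return {name: 100.0 * sum(1 for l in levels if l == code) / total
            for code, name in ((1, "low"), (2, "medium"), (3, "high"))}
```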
Predefined Reports
At installation time, the Guardium® appliance is configured with a number of predefined reports.
Predefined admin reports
This section provides a short description of all predefined reports on the default administrator layout.
Predefined user Reports
This section provides a short description of all predefined reports on the default user layout.
Predefined Reports Common
This section provides a short description of all predefined reports on both the default user and default administrator layouts.
All parameters and values are displayed on all reports. The parameters and values can be edited from the Customize button in any report screen.
Use the search function of help to go to the specific report directly. Use quotation marks around words or phrases to precisely define search terms.
Predefined admin reports: These are the predefined reports available to the admin user.
Predefined Reports from Accessmgr (see Access Management overview topic): User and Role Reports; Allowed Datasources; Allowed Servers; Databases Not
Associated; Datasources Not Associated.
1 - If it is a new process, one or more email receivers are created from the list (if any), with the content type indicated in the emailContentType parameter. A user receiver is also created for the logged-in user (the one invoking the API) if the includeUserReceiver parameter is true.
2 - If it is an existing process, all email receivers are removed and replaced with the emails from the new list (if any), with the content type defined in the emailContentType parameter. If the list is empty, all email address receivers are removed. An existing receiver for the user is NOT removed even if includeUserReceiver is false; however, if the parameter is true and there is no such receiver, one is added.
Once the audit process is generated, it is automatically executed (similar to a Run Once Now), and users should expect an item on their to-do list for that audit process. The GuardAPI command that creates an ad hoc audit process keeps results for 7 days (instead of 1 day); results are deleted after 7 days.
For further information on parameters, see the GuardAPI command, create_ad_hoc_audit_and_run_once, in the GuardAPI Input Generation help topic.
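The receiver rules described in points 1 and 2 above can be sketched as follows. The data model (plain dicts with "type", "address", "content", and "user" keys) is an assumption made for illustration; only the merge rules come from the text.

```python
# Hedged sketch of the email/user receiver rules for an ad hoc audit process.
def update_receivers(existing, emails, content_type, include_user, user):
    """Replace email receivers with the new list; keep or add the user receiver."""
    # Drop all existing email receivers (they are replaced by the new list).
    receivers = [r for r in existing if r["type"] != "email"]
    # Add one email receiver per address, with the requested content type.
    receivers += [{"type": "email", "address": a, "content": content_type}
                  for a in emails]
    # An existing user receiver is never removed; one is added only if it
    # is missing and includeUserReceiver is true.
    has_user = any(r["type"] == "user" and r.get("user") == user
                   for r in receivers)
    if include_user and not has_user:
        receivers.append({"type": "user", "user": user})
    return receivers
```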
SQL Errors - An increase in SQL errors may indicate a SQL injection attack.
DDL (verify schema changes) - This report displays the client IP from which the DDL was requested, the main SQL verb (a specific DDL command), and the
total objects accessed for that record.
Failed logins - This report indicates attempts to access the database with expired login credentials.
Failed logins - People without proper credentials trying to access the database.
Terminated users - Terminated users trying to access the database.
Policy violations - Users and issues that violate security policies.
Auditors
Note: If data level security at the observed data level has been enabled (see Global Profile settings), then audit process output will be filtered so users will see only the
information of their databases.
Aggregation/Archive Log
This report lists Guardium® aggregation activity by Activity Type. Each row of the report contains the Activity Type, Start Time, File Name, Status, Comment, Guardium
Host Name, Records Purged, Period Start, Period End, and count of log records for the row. You can limit the output by setting the Guardium Host Name run-time
parameter, which is set to % by default (to select all servers). The Records Purged column contains a count of records purged only when the activity type is Purge.
For each role, this report lists the number of applications to which it is assigned. To list the applications to which a role is assigned, click on the role and drill down to the
Record Details report.
For each role, this report lists the number of users to which it is assigned. To list the users to which a role is assigned, click on the role and drill down to the Record Details
report.
Appliance Settings
This report displays configuration settings from a Guardium system. Use the appliance settings report to quickly review and validate Guardium settings.
Note: This report presents metadata and as such is not filtered through the Data Level Security mechanism. This metadata could include database related information
such as Oracle SIDs.
ObjectNameLike % %
ObjectTypeNameLike % %
This report shows a detailed activity log for all tasks, including start and end times. It is available to admin users via the Guardium Monitor tab. Audit tasks show start and end times; however, the start and end of Security Assessments and Classifications (which go to a queue) are the same.
The audit process has been expanded to support sign-off of specific rows, beyond a user signing off on the entire audit process. The report displays a list of what has been signed off and the status of specific rows.
Use this Audit Process Log to stop audit processes. Tasks can be stopped only if they have not run or are still running. Any remaining tasks that have not started will not execute, and partial results will not be delivered. If tasks are complete, stopping the audit process will not stop the sending of the results. Stopping the audit process is done through a GuardAPI command, invoked from the Audit Process Log report. Non-admin users see only the lines belonging to them (without all the details - just the tasks); admin users see all the details and can stop anyone's runs, while other users can stop only their own runs.
Note:
Stopping the audit process does not cancel queries running against a remote source, nor online reports that use a remote source.
Stopping is not supported for Privacy Sets and External Feeds: if the Privacy Set task or the External Feed has already started, it will finish even if the process is stopped (as opposed to a query, which is killed).
Login Name
Run ID
Audit Process ID
Audit Task ID
Event Type
Detail
Available Patches
Displays a list of available patches. There are no run-time parameters, and this reporting domain is system-only.
CAS Deployment
This CAS report details the database type, OS name, host name, and OS type.
DB Type Like %
OS_Name Like %
Hostname Like %
OS_Type Like %
Changes (CAS)
CAS Change Details
For each monitored item, the changes are listed in order by owner.
DB_Type Like %
Host_Name Like %
Instance_Name Like %
Monitored_Item Like %
OS_Type Like %
Type Like %
This report lists the data saved for each change detected. This report is sorted by host name, and then by the most recent modification time.
Host_Name Like %
Monitored_Item Like %
Saved_Data_Id Like %
Configuration (CAS)
CAS Instances
This report lists CAS instance definitions (a CAS instance applies a template set to a specific CAS host). The default sort order for this report is non-standard. The sort keys
are, from major to minor: Host Name (ascending), Instance (ascending) and Last Status Change (descending).
Host_Name Like %
OS_Type Like %
DB_Type Like %
Instance Like %
This report lists CAS instance configuration changes. The default sort order for this report is non-standard. The sort keys are, from major to minor: Host Name (ascending),
Instance (ascending) and Last Status Change (descending). You can limit the output by using any of the following runtime parameters, which select all values by default.
Host_Name Like %
OS_Type Like %
Template_Id Like %
Connections Quarantined
Guardium policies can be used to terminate or quarantine connections in real time; threshold alerts based on queries can also be used. See Quarantine under the Policies topic for configuration instructions.
Server IP LIKE %
DB User LIKE %
CPU Tracker
Lists the Software TAP Host and number of CPUs on machines running S-TAPs.
CPU Usage
By default, displays the CPU usage for the last two hours. This graphical report is intended to display recent activity only. If you alter the From and To run-time parameters
to include a larger timeframe, you may receive a message indicating that there is too much data. Use a tabular report to display a larger time period.
Databases Discovered
For the reporting period, for each Discovered Port entity where the DB Type attribute value is NOT LIKE Unknown, this report lists the Probe Timestamp, Server IP, Server Host Name, DB Type, Port, Port Type, and count of Discovered Ports for the row.
Data Sources
Lists all datasources defined: Data-Source Type, Data-Source Name, Data-Source Description, Host, Port, Service Name, User Name, Database Name, Last Connect,
Shared, and Connection Properties.
You can restrict the output of this report using the Data Source Name run time parameter, which by default is set to "%" to select all datasources.
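The % value is the SQL LIKE wildcard for any sequence of characters, which is why the default selects all datasources. As a purely illustrative sketch of LIKE-pattern semantics (the datasource names below are invented; Guardium evaluates these patterns internally):

```python
import re

def like_to_regex(pattern: str) -> re.Pattern:
    """Translate an SQL LIKE pattern (% = any run, _ = one char) to a regex."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.compile("^" + "".join(parts) + "$", re.IGNORECASE)

datasources = ["OracleProd", "OracleDev", "DB2Payroll"]

# The default parameter, %, matches every datasource name.
assert [d for d in datasources if like_to_regex("%").match(d)] == datasources

# A narrower pattern such as Oracle% selects only matching names.
assert [d for d in datasources
        if like_to_regex("Oracle%").match(d)] == ["OracleProd", "OracleDev"]
```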
Discovered Instances
This S-TAP report details the following information:
Timestamp, Host, Protocol, Port Min, Port Max, KTAP DB Port, Instance Name, Client, Exclude Client, Proc name, Named Pipe, DB Instance Dir, DB2® Shared Mem Adjust,
DB2 Shared Mem Client Position, DB2 Shared Mem Size.
The Data Mart extraction program runs in a batch according to the specified schedule. It summarizes the data to hours, days, weeks, or months according to the
requested granularity, and then saves the results in a new table in the Guardium analytic database.
The data is then accessible to users via the standard Reports and Audit Process utilities, like any other traditional Domain/Entity. The Data Mart extraction data
is available under the DM domain, and the Entity name is set according to the new table name specified for the data mart. Using the standard Query Builder and Report
Builder, users can clone the default query, edit the query and report, generate a portlet, and add it to a pane.
The extraction log consists of the following - Data Mart Name, Collector IP, Server IP, from-time, to-time, ID, run started, run ended, number of records, status, error code.
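Summarizing to the requested granularity amounts to truncating each record's timestamp to its containing hour, day, or month and aggregating within that bucket. A minimal sketch of that idea, not the actual extraction program:

```python
from datetime import datetime
from collections import Counter

def bucket(ts: datetime, granularity: str) -> datetime:
    """Truncate a timestamp to the start of its hour/day/month bucket."""
    if granularity == "hour":
        return ts.replace(minute=0, second=0, microsecond=0)
    if granularity == "day":
        return ts.replace(hour=0, minute=0, second=0, microsecond=0)
    if granularity == "month":
        return ts.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    raise ValueError(granularity)

events = [datetime(2016, 5, 1, 10, 15), datetime(2016, 5, 1, 10, 45),
          datetime(2016, 5, 1, 11, 5)]

summary = Counter(bucket(e, "hour") for e in events)
# Two events fall in the 10:00 bucket, one in the 11:00 bucket.
assert summary[datetime(2016, 5, 1, 10)] == 2
assert summary[datetime(2016, 5, 1, 11)] == 1
```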
Dropped Requests
Tracks requests dropped by an inspection engine (Exception Description = Dropped database request). In extremely rare, high-volume situations, some requests may
be lost. When this happens, the sessions from which the requests were lost are listed in the Dropped Requests report.
A bidirectional interface is provided to transfer the identified sensitive data from Guardium to InfoSphere Discovery and from InfoSphere Discovery to Guardium.
This data will be transferred via CSV files. See External Data Correlation (Bidirectional Interface) for further information.
Internal - not available Export Sensitive Data to Discovery Classification Process Results
Schema LIKE %
Assessments and Classifications run in their own separate process called the job queue. Jobs are queued and have their status maintained while a listener periodically
polls the queue looking for waiting jobs to run.
When you right-click a running job for drill-down, there is an option to stop the running job and cancel it. The job cannot be restarted at this point.
Halting
Running jobs are monitored to reduce the number of hung jobs that might cause the job queue to become overloaded. If a job is inactive for 30 minutes, the listener is
terminated and restarted, effectively stopping the operation of a job. Before the listener is restarted, a process called the cleaner runs and sets the status from
RUNNING to HALTED; then the listener is restarted. A status of HALTED means the job was not able to run to completion.
Resubmitting
Sometimes the listener is restarted for reasons other than a job hanging, for example when the machine is rebooted. When the cleaner halts the running jobs, it checks
whether each job has responded in the past 8 minutes. If it has, the job is copied and that copy is resubmitted onto the job queue. The original halted job still displays
on the queue, and the results it was able to process remain available.
Monitoring
The mechanism by which jobs maintain their active status is by touching the timestamp on the job queue record. It is important to note that the job queue record is used
for the entire job. Each individual classifier rule or assessment test interacts with the timestamp for its parent process, and they do not have individual timestamps that
are monitored.
The classifier will update its timestamp before every rule is tested and after every SQL operation. For example, if the classifier is scanning the data in a database that
supports paging, it will touch the timestamp after each batch of data is brought back from the database. This is because, depending on the state of the target database,
the classifier has the potential to invoke some long-running queries that will be limited to 30 minutes of execution.
Assessments touch the timestamp after each test in the assessment is evaluated. Most assessment tests run in a few seconds or less.
Observed Tests
The exception to the relatively quick-running assessment tests is the category of observed assessment tests. These tests are based on queries and reports that use the
internal sniffing data on the Guardium appliance and can run for longer periods of time and are unable to update the timestamp while they are in process. Therefore,
observed assessment tests have their timestamps set two hours into the future when they are started, essentially giving them two hours and thirty minutes to run to
conclusion. This can be confusing when looking at the job queue and seeing the timestamp set to a time in the future. Just like any other assessment test, when the
observed test ends, the timestamp will be touched. If the next test is an observed test, the timestamp will once again be set two hours into the future. Otherwise, the
timestamp will be set to the current time.
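The arithmetic behind the offset is straightforward: with a 30-minute inactivity limit, a timestamp touched two hours into the future does not look stale for two hours and thirty minutes. A small illustrative sketch (not Guardium code):

```python
from datetime import datetime, timedelta

HANG_LIMIT = timedelta(minutes=30)

def touch(job, now, observed=False):
    """Ordinary tests touch 'now'; observed tests set the touch 2 hours ahead."""
    job["last_touch"] = now + timedelta(hours=2) if observed else now

def looks_hung(job, now):
    return now - job["last_touch"] > HANG_LIMIT

job = {}
start = datetime(2016, 1, 1, 9, 0)

touch(job, start, observed=True)   # timestamp set to 11:00, in the future
# Still alive 2 h 29 min after starting, stale at 2 h 31 min.
assert not looks_hung(job, start + timedelta(hours=2, minutes=29))
assert looks_hung(job, start + timedelta(hours=2, minutes=31))
```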
Client OS % N/A
Guardium Applications
For each Guardium application, each row lists a security role assigned, or the word all, indicating that all roles are assigned.
You can restrict the output of this report using the run-time parameters, both of which are used with the LIKE operator and a default value of %, which selects all values.
Guardium Users
Lists each user, date of last activity, and number of roles assigned. For each user, you can drill down to the Record Details report to see the roles assigned to that user.
This report lists CAS host events. The default sort order for this report is non-standard. The sort keys are, from major to minor: Host Name (ascending), Instance and Event
Time (descending).
Host_Name Like %
OS_Type Like %
Event_Type Like %
Installed Patches
Displays a list of installed patches. There are no run-time parameters, and this reporting domain is system-only.
Logins to Guardium
All values for this report are from the Guardium Logins entity. For the reporting period, each row of the report lists the User Name, Login Succeeded (1= Successful,
0=Failed), Login Date And Time, Logout Date And Time (which will be blank if the user has not yet logged out), Host Name, Remote Address (of the user) and count of
logins for the row.
internal - not available Primary SGuard host change log not available
Use this report to also invoke create_constant_attribute, create_api_parameter_mapping, delete_api_parameter_mapping, or
list_param_mapping_for_function.
Any of Guardium reporting domains Any of the entities for the reporting domain Any of the attributes within the entity
Replay Statistics
This report shows Replay Statistics for Execution Start/End Date; Configuration Name; Schedule Setup Name; Job Status; Statistic Description; Session ID; Successful
Queries; Failed Queries; Total Queries; Type; Active/Waiting/Completed Tasks.
Replay Summary
For the reporting period, a measure of what query failed or succeeded. Checkmark required in Replay Configuration for Query Failed or Query Succeeded.
Restored Data
This report has two columns: RESTORED_DAY and EXPIRATION_DATE. When the user restores data from archive, this table is populated according to the data restored
and the duration specified for keeping this data. The purge process looks at this table to determine what data can be purged and cleans up records that expired.
RESTORED_DAY is the date of the data that was restored and is in the past. EXPIRATION_DATE is the date when this data will be purged and is a date in the
future.
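In other words, a restored day becomes eligible for purging once its EXPIRATION_DATE has passed. The purge check can be sketched as follows (the column names follow the report; the logic and values are illustrative):

```python
from datetime import date

restored_data = [
    {"RESTORED_DAY": date(2015, 3, 1), "EXPIRATION_DATE": date(2016, 1, 10)},
    {"RESTORED_DAY": date(2015, 6, 1), "EXPIRATION_DATE": date(2016, 9, 30)},
]

def purge_candidates(rows, today):
    """Rows whose expiration date has passed may be purged."""
    return [r["RESTORED_DAY"] for r in rows if r["EXPIRATION_DATE"] < today]

# On 2016-02-01 only the first restored day has expired.
assert purge_candidates(restored_data, date(2016, 2, 1)) == [date(2015, 3, 1)]
```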
Request Rate
By default, displays the request rate for the last two hours. This graphical report is intended to display recent activity only. If you alter the run-time parameters to include a
larger timeframe, you may receive a message indicating that there is too much data. Use a tabular report to display a larger time period.
Scheduled Jobs
Displays the list of currently scheduled jobs.
Session Count
For the reporting period, the total number of different sessions open.
SQL Count
For the reporting period, the total number of different SQL commands issued.
S-TAP Status
Displays status information about each inspection engine defined on each S-TAP Host. This report has no From and To date parameters, since it is reporting current status.
Each row of the report lists the S-TAP Host, DB Server Type, Status, Last Response, Primary Host Name, Yes/No indicators for the following attributes: KTAP Installed, TEE
Installed, Shared Memory Driver Installed, DB2 Shared Memory Driver Installed, Named Pipes Driver Installed, and App Server Installed. In addition, it lists the Hunter
DBS.
Note: The DB2 shared memory driver has been superseded by the DB2 Tap feature.
S-TAP Verification
List all results of S-TAP verifications.
S-TAP Events
Use this report for information on the S-TAP (from SOFTWARE_TAP_EVENT table in internal database).
S-TAP info is a predefined custom domain which contains the S-TAP Info entity and is not modifiable like the entitlement domain.
When defining a custom query, go to the upload page and click Check/Repair to create the custom table in the CUSTOM database; otherwise, saving the query will not
validate it. This table loads automatically from all remote sources. A user cannot select which remote sources are used; it pulls from all of them.
Based on this custom table and custom domain, there are two reports:
Enterprise S-TAP view shows, from the Central Manager, information on an active S-TAP on a collector and/or managed unit (If there are duplicates for the same S-TAP
engine, one being active and one being inactive, then the report will only use the active).
Detailed Enterprise S-TAP view shows, from the Central Manager, information on all active and inactive S-TAPs on all collectors and/or managed units.
If the Enterprise S-TAP view and Detailed Enterprise S-TAP view look the same, it is because only one S-TAP on one managed unit is being displayed. The Detailed
Enterprise S-TAP view would look different if more S-TAPs and more managed units were involved.
These two reports can be chosen from the TAP Monitor tab of a standalone system, but they will display no information.
Alert: See Viewing an Audit Process Definition for alert: Inspection Engines and S-TAP - alert on any activity related to inspection engine and S-TAP configuration
The query/report displays All S-TAP Hosts and the last response (heartbeat) sent by each host.
The input parameters are Last Response From and Last Response To.
For example, when executed with Last response From = NOW -5 DAYS and Last Response To = NOW - 3 HOURS, it will display the host name and the last response time
for those hosts that sent the last response in the last 5 days, but had no response in the last 3 hours.
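Put differently, a host is reported when its last response time falls inside the [From, To] window. A sketch of that window test with invented host data:

```python
from datetime import datetime, timedelta

now = datetime(2016, 1, 10, 12, 0)
last_response = {
    "db-host-a": now - timedelta(hours=5),    # silent for 5 h: reported
    "db-host-b": now - timedelta(minutes=30), # responded recently: not reported
    "db-host-c": now - timedelta(days=8),     # silent too long: outside window
}

# Last response From = NOW - 5 DAYS, Last Response To = NOW - 3 HOURS.
frm, to = now - timedelta(days=5), now - timedelta(hours=3)
stale = sorted(h for h, t in last_response.items() if frm <= t <= to)
assert stale == ["db-host-a"]
```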
This report has no run-time parameters, and is based on a system-only query that cannot be modified.
STAP/Z Files
STAP/Z provides files with raw data collected from DB2 (on z/OS®) containing DB2 events, SQL statements, etc. This report lists an Interface ID, UA file name (Un-
normalized Audit Event), UT file name (Un-normalized Audit Event text), UH file name (Un-normalized Audit Event host variables), File Status, Total Number of Events
Processed, Number of Events Failed, and Timestamp.
This report has two run-time parameters, FileName Like % and FileStatus Like %. It is based on a system-only query that cannot be modified.
TCP Exceptions
For the reporting period, for each exception where the Exception Description of the Exception Type entity is TCP/IP Protocol Exception, a row of this report lists the
following attribute values from the Exception entity: Exception Timestamp, Exception Description, Source Address, Destination Address, Source Port, Destination Port, and
count of Exceptions for that row.
Templates (CAS)
CAS Templates
This report lists CAS templates. By default, all template items are listed.
Access_Name Like %
Template_Set_Name Like %
Audit_Type Like %
Tests Exceptions
Indicate pairs of test/datasource that are exempted temporarily. See create_test_exception for more information on the use of Test Exceptions.
Throughput
For each Access Period in the reporting period, each row lists the Period Start time, the count of Server IP addresses, and the total number of accesses (Access Period
entities).
You can restrict the output of this report using the Server IP run time parameter, which by default is set to % to select all IP addresses.
Server IP LIKE %
Throughput (graphical)
This report is a Distributed Label Line chart version of the tabular Throughput report. It plots the total number of accesses over the reporting period, one data point per
Period Start time.
You can restrict the output of this report using the Server IP run time parameter, which by default is set to % to select all IP addresses.
Server IP LIKE %
For the reporting period, for each User Name seen on a Guardium User Activity Audit entity, each row displays the Guardium User Name, an Activity Type Description (from
the Guardium Activity Types entity), a Count of Modified Entity values, the Host Name, and the total number of Guardium Activity Audits entities for that row.
From any row of this report, the Detailed Guardium User Activity report is available as a drill-down report.
Guardium Activity User Activity Audit Trail Guardium User Activity Audit
System/Security Activities
For the reporting period, for each User Name seen on a Guardium User Activity Audit entity, each row displays the Guardium User Name, an Activity Type Description (from
the Guardium Activity Types entity), a Count of Modified Entity values, the Host Name, and the total number of Guardium Activity Audits entities for that row.
From any row of this report, the Detailed Guardium User Activity report is available as a drill-down report.
Guardium Activity User Activity Audit Trail Guardium User Activity Audit
This report is not available from the menu, but can be opened for any row of the User Activity Audit Trail report, or the System/Security Activities report. For the selected
row of the report, based on the User Name and Activity Type Description, this report lists the following attribute values, all of which are from the Guardium User Activity
Audit entity, except for the Activity Type Description, which is from the Guardium Activity Types entity: User Name, Timestamp, Modified Entity, Object Description, All
Values, and a count of Guardium User Activity Audits entities for the row.
Guardium Activity Detailed Guardium User Activity Guardium User Activity Audit
Warning: Users should be aware that activities of the root user, and other sensitive system accounts, are logged. Drilling down into the activity of these users may show
sensitive commands and passwords that have been entered on the command line. Therefore, whenever possible, users should not enter on the command line sensitive
information that they would not want to appear on this drill-down report.
Note: Comments defined for inspection engines, installed policies, or audit process results can be viewed from the individual definitions, but they cannot be displayed on a
report.
Unit Utilization: Displays the maximum unit utilization level for each unit in the given timeframe. There is a drill-down that displays details for a unit across all
periods within the timeframe of the report.
Unit Utilization Distribution: Per-unit, this report displays the percent of periods in the report timeframe with utilization levels of low, medium, and high.
Utilization Thresholds: This predefined report displays all low and high threshold values for all unit utilization parameters.
Unit Utilization Daily Summary: Provides a daily summary of unit utilization data.
Values Changed
For the reporting period, this report provides detailed information about monitored value changes. All attribute values displayed are from the Monitor Values entity. The
query this report is based upon has a non-standard sorting sequence, as follows:
Server IP
DB Type
Audit Timestamp
Audit Table Name
Audit Owner
The query this report is based upon has a number of run-time parameters, all of which use the LIKE operator and default to the value %, meaning all values will be
selected.
DB Type LIKE %
Server IP LIKE %
For a description of the reports on the default administrator layout, see Predefined admin reports.
Note: If data level security at the observed data level has been enabled (see Global Profile settings), then audit process output will be filtered so users will see only the
information of their databases.
Request Rate
By default, displays the request rate for the last two hours. This graphical report is intended to display recent activity only. If you alter the From and To run-time
parameters to include a larger timeframe, you may receive a message indicating that there is too much data. (Use a tabular report to display a larger time period.)
The Sensitive Objects group is empty at installation time. Someone at your company must populate the group with the appropriate set of members.
Activity By Client IP
For each Client IP address seen during the reporting period, a row counts the number of SQL Verbs, Object Names, and the total number of sessions.
Database Servers
For each Server IP address accessed during the reporting period, a row of the report displays the Server Type, Database Name, Service Name, a count of source programs
accessing that server, and the total number of sessions for that row.
Client IP LIKE %
DBUserName LIKE %
ServerIP LIKE %
Policy Violations
For every policy rule violation logged during the reporting period, this report provides the Timestamp from the Policy Rule Violation entity, Access Rule Description, Client
IP, Server IP, DB User Name, Full SQL String from the Policy Rule Violation entity, Severity Description, and a count of violations for that row. You cannot access the query
that this report is based upon (Policy Violations List with Severity), but you can clone the report.
Exceptions Distribution
Each wedge of the pie chart represents the proportion of exceptions for each Exception Description attribute value (from the Exception Type entity) that was logged during
the reporting period.
As with any chart, you can drill down on the pie chart to display the tabular version of the query on which the chart is based. Several exception reports are accessible
from this tabular report (or from drill-downs of it) that are not included on any menu.
Exceptions Monitor
A count of exceptions logged during the reporting period. One datapoint is created each time that you refresh the report on your portal.
SQL Errors
Exception Count
The total number of exceptions (Exception entities) logged during the reporting period.
The Terminated DB Users group is empty at installation time. It must be populated by someone at your location. The query that this report is based upon (Terminated
Users Logins) cannot be accessed from any query builder.
Each row lists a DB User Name, Client IP, Server IP, Server Type, Source Program, last login time (the maximum value of the Session Start attribute), and the count of
sessions for the row.
The Active Users group is empty at installation time. It must be populated by someone at your location. The query that this report is based upon (Active Users Last Logins)
cannot be accessed from any query builder.
The Active Users group is pre-defined, but empty at installation time. It must be populated by someone at your location. The query that this report is based upon (Active
Users with no Activity) cannot be accessed from any query builder.
The Terminated DB Users group is pre-defined, but empty at installation time. It must be populated by someone at your location. The query that this report is based
upon (Terminated Users Failed Login Attempts) cannot be accessed from any query builder.
DDL Commands
All DDL commands sent to the database. The report displays the client IP from which the DDL was requested, the main SQL verb (a specific DDL command), and the total
objects accessed for that record.
For each SQL Verb from the DDL Commands group seen during the reporting period, this report displays the Client IP, Server IP, Server Type, SQL Verb, and Count of
Commands referenced in the row.
For each SQL Verb from the ALTER Commands group seen during the reporting period, this report displays the Client IP, Server IP, Service Name, DB User Name, Source
Program, Database Name, Object Name, SQL Verb, and Count of Objects referenced in the row.
DDL Distribution
This bar graph displays the distribution of commands seen from the DDL Commands group during the reporting period. For each command seen, a single bar represents
the total number of objects affected.
Sessions List
As with most reports, drill-down reports are available. There are a number of session reports that are accessible from this report, but are not included on any menu. This
includes the following reports, with the run time parameters for those reports set by using values from the selected row of the report:
Commands List
This report lists all SQL Verbs seen during the reporting period. At the outermost level, commands are grouped by the Period Start time from the Access Period entity,
which is usually one hour, on the hour. Your Guardium® administrator can modify the access period length by changing the logging granularity, which is one hour by
default. For each Access Period in the reporting period, each row lists the access Period Start time, a SQL Verb, Depth of the verb in the SQL statement, Parent (a pointer to
the owning verb), and a count of occurrences for the row.
Objects List
This report lists all objects seen during the reporting period. At the outermost level, objects are grouped by the Period Start time from the Access Period entity, which is
usually one hour, on the hour. Your SQL Guard administrator can modify the access period length by changing the logging granularity, which is one hour by default. For
each Access Period in the reporting period, each row lists the access Period Start time, an Object Name, and the count of occurrences for that row.
Archive Candidates
This report lists objects (database tables or stored procedures, for example) that have not been accessed for an extended period of time. You cannot access the query this
report is based upon.
DW Dormant Objects
Shows all the members of one group that are not members in a second group, with a focus on dormant tables. For example, this report shows objects that are in the all
objects group, but have not been used in a Select.
Throughput
This report produces a count of all Server IPs seen, and total accesses, during the reporting period. At the outermost level, accesses are grouped by the Period Start time
from the Access Period entity, which is usually one hour, on the hour. Your Guardium administrator can modify the access period length by changing the logging
granularity, which is one hour by default. Each row lists the Period Start time, the count of Server IPs seen, and a total count of accesses for the row.
You can restrict the output of this report using the Server IP run time parameter, which by default is set to "%" to select all IP addresses.
Throughput (Graphical)
This report is a Distributed Label Line chart version of the tabular Throughput report, plotting the total number of accesses over the reporting period, one data point per
Period Start time.
You can restrict the output of this report using the Server IP run time parameter, which by default is set to "%" to select all IP addresses.
Databases Discovered
For the reporting period, for each Discovered Port entity where the DB Type attribute value is NOT LIKE Unknown, this report lists the Probe Timestamp, Server IP, Server
Host Name, DB Type, Port, Port Type, and count of Discovered Ports for the row.
Data Sources
Violations/Incidents
See the Incident Management topic.
Status Monitor
The Status Monitor graphical report displays the current state of the Guardium® appliance: how many packets per second and requests per second it is processing, how
much disk space and memory is being used, and so forth. Each field is described in the following table.
The box displays the output of the Linux VMSTAT command. If you are familiar with that command, these statistics should be familiar to you.
System: The output of the Linux VMSTAT command, shown in the box described above.
(n)pps / (m)rps: In the arrow next to the Analysis Engine, two averages are calculated for the last five seconds: n is the average number of network packets per
second, and m is the average number of network database requests per second.
Analysis Engine (q-d) ------ (p): For the Analysis Engine, the first line lists the total number of messages queued for processing (q), followed by the number of
messages dropped (d) because the buffer was in danger of becoming filled. The second line lists the total number of messages processed (p). The number processed
is reset to zero whenever the inspection engine is restarted.
Server Type: For each server type, the number of messages awaiting processing (q) and the number of messages processed (p) are listed.
Files/Other: The Files/Other portion of Status Monitor represents the data accumulated in the nondb-sql logger. The nondb-sql logger logs close-session events
arriving to the Analyzer from "ignored" sessions that have been internally closed by the Analyzer (INACTIVE_FLAG=-1). The Analyzer has the ability to close
connections by timeout (if a session has been inactive for a long time). If close-session data arrives to the Analyzer from an "ignored" session that has been closed
by timeout, it is recorded in the nondb-sql-logger section. The Analyzer never records data directly to the database. This section also represents the number of DB
requests (like inserts into GDM_SECURE_PARAMS) sent by the Analyzer to the Logger, as well as other supported protocols such as FTP.
Data Sources
Lists all datasources defined: Data-Source Type, Data-Source Name, Data-Source Description, Host, Port, Service Name, User Name, Database Name, Last Connect,
Shared, and Connection Properties.
You can restrict the output of this report using the Data Source Name run time parameter, which by default is set to "%" to select all datasources.
Note: When scheduling this audit process, check that the FROM/TO dates for each report make sense for the process interval being defined (for example, it doesn't
make sense to have a reporting period of one day if the audit process runs only once a week; you will miss six days of activity).
The Appliance Monitoring audit process contains the following reports:
A query describes a set of information to be obtained from the collected data; for example, find all clients updating a specific database during weekend hours, or
find which unauthorized users have attempted to access sensitive data (Social Security numbers or credit card numbers).
A report describes how the data returned by a query is presented.
There is a separate Query Builder for each domain, and it is opened from the Query Finder for that domain (see Open the Query Finder section). Click Reports > Report
Configuration Tools > Query Builder.
The Entity List pane identifies all entities and attributes that are contained in the domain. Entities are represented as folders, and attributes are the items within.
Click an entity folder to display its attributes, or click again to hide them. For a description of all entities and attributes, see Entities and Attributes in the Domains,
Entities, and Attributes help topic.
The Query Field pane lists all fields to be accessed, what is to be displayed for that field (its value, a count, minimum, maximum, or average), and the sort order. For
more information about using this pane, see the Query Fields Overview.
The Query Conditions pane specifies any conditions for selecting the fields that are listed (for example, "where VERB = UPDATE"). For more information
about using this pane, see the Query Conditions Overview in the Queries help topic.
2. From a report that is based on the query, click Edit this Report's Query in the toolbar of the report.
2. Optional. If you know the Main Entity for the query, select it from the list.
3. Click Search.
If there is only one query that is defined for the selected Main Entity, that query opens immediately in the query definition panel.
If there are multiple queries that are defined for the selected Main Entity, or if no Main Entity was selected, a list of queries display in the Query List panel.
If a Main Entity was selected for which no queries have been defined, you will be informed.
To open the Query Builder panel for one of the listed queries, click on it. To define a new query, click New.
Create a Query
1. Open the Query Finder for the appropriate domain (see Open the Query Finder).
2. Click New to open the New Query – Overall Details panel.
3. Type a unique query name in the Query Name box. Do not include apostrophe characters in the query name.
4. Select the main entity for the query from the Main Entity list. The main entity controls the level of detail that is available for the query, and it cannot be changed.
Basically, each row of data that is returned by the query represents a unique instance of the main entity, and a count of occurrences for that instance.
5. Click Next. The new query opens in the Query Builder panel. To complete the definition, see next section on Query Fields.
There are two ways to add a field to the Query Fields pane:
Drag-and-Drop Method:
2. Drag the icon to the Query Fields list and release it.
Regardless of the method that is used, the field is added to the end of the list.
2. Use the following buttons to move the field to the desired location:
Modify a Query
1. Open the Query Finder for the appropriate domain (see Open the Query Finder).
3. Refer to the Query Builder Overview topic to modify any component of the query definition.
Create a Report
Once a query has been defined, there are several options for quickly adding a tabular report based on that query to an existing menu layout. These options apply
only to tabular reports.
1. Open the Query Finder for the appropriate domain (see Open the Query Finder).
2. Use the Query Finder to open the query to use for the report.
To create a report, click Create a Report. To redo an existing tabular report, click Regenerate.
To add a tabular report to the My Custom Reports tab, click Add to My Custom Reports in the panel. (If no tabular report has been generated yet for the query, you
need to click Create a Report first.)
To see meaningful data in the tabular report, access the run-time parameters and adjust the From and To times.
Next, you use a report that uses monitored data to show all object names that have participated in a SELECT statement. There are predefined reports for this in Guardium
8, all starting with the prefix DW (Data Warehouse). Then, use the output to populate one of the predefined groups.
Finally, use a predefined report that shows all members in the first group that are not members in the second group.
There are two sets of such reports and groups: one that focuses on tables, and one that focuses on tables and columns. The only difference is that in the latter case
the groups are of a 2-tuple type (members that are a composite of a pair of attribute values, referred to as a tuple).
1. Upload all table names and/or all table/column combinations from the set of system catalog tables (definitions of the database objects).
2. Use monitored data to determine which tables and/or table/columns have been accessed over a period of time.
3. Create a report of all items of step 1 that are not in step 2.
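Step 3 is effectively a set difference between the two groups; the tuple variant applies the same difference to (object, field) pairs. A minimal sketch with invented group members:

```python
# Step 1: all tables from the system catalog (group DW All Objects).
all_objects = {"EMPLOYEES", "SALARIES", "AUDIT_LOG", "TEMP_SCRATCH"}

# Step 2: tables observed in SELECT traffic (group DW SELECT Accessed Objects).
accessed = {"EMPLOYEES", "AUDIT_LOG"}

# Step 3: members of the first group not in the second are dormant candidates.
dormant = sorted(all_objects - accessed)
assert dormant == ["SALARIES", "TEMP_SCRATCH"]

# Tuple variant: compare (object, field) pairs instead of bare table names.
all_fields = {("EMPLOYEES", "SSN"), ("EMPLOYEES", "NAME")}
accessed_fields = {("EMPLOYEES", "NAME")}
assert sorted(all_fields - accessed_fields) == [("EMPLOYEES", "SSN")]
```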
External Data Correlation for uploading table names and columns names
Populate groups from queries
Reporting
Procedure
1. Upload all the tables from the system catalog. Do this by creating a custom table.
Prerequisites
The following example is available from Comply > Custom Reporting > Custom Table Builder > Upload Definition > Import Table Structure.
Upload the data so that it is in the Guardium system (as a custom table) and if desired, schedule this upload. This data will be used to determine the superset of all
tables defined in the system.
In this example, dormant data based on table names is used. But the analysis can include columns, provided the upload tasks are defined to bring back pairs of
<object,field> and use tuple groups to compare with an observed tuple of object+field.
For instances of Object-Field, replace the DW Dormant Objects report with the DW Dormant Objects-Fields report. For instances of Object-Field, replace the DW
Select Object Access report with the DW Object-Field Access report.
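For the tuple case, the same comparison works on object,field pairs rather than single names; a minimal sketch with illustrative pairs:

```shell
# Tuple comparison: each line is an <object,field> pair.
printf '%s\n' EMP,NAME EMP,SALARY ORDERS,TOTAL | sort > all_pairs.txt
printf '%s\n' EMP,NAME | sort > accessed_pairs.txt

# Pairs defined in the catalog but never observed in SELECT traffic.
comm -23 all_pairs.txt accessed_pairs.txt
```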
Once you complete the upload, define a custom domain based on this single custom table and define a report that retrieves the table names.
Next, populate the group DW All Objects group from this report and schedule this Import from Query action if desired. This creates a group that has all the tables as
defined by the system catalog.
Note: When populating the DW All Objects group, click Run Once Now, click Select All, and then click Import. Do the same for the group DW SELECT Accessed Objects. All scheduled definitions need to be imported.
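Group members can also be added from the command line with GuardAPI rather than through the Group Builder. A hedged sketch (the group and member names are from this example; verify the exact function and parameter names for your release in the GuardAPI Reference guide):

```
grdapi create_member_to_group_by_desc desc="DW All Objects" member="EMP"
```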
Use monitored data to determine which tables and/or table/columns have been accessed over a period of time.
Look at some additional predefined reports. The DW SELECT Object Access report shows all object names that have been accessed through a SELECT statement.
Now, populate the group DW SELECT Accessed Objects group from the report, filling in the filtering attributes that you require.
Note: When populating the DW SELECT Accessed Objects group, click Run Once Now, click Select All, and then click Import so that all scheduled definitions are imported.
The following example is available from Setup > Tools and Views > Group Builder > Choose DW All Objects > Populate from Query > DW Select Object Access.
Use the DW Dormant Objects report to view objects that are in the all objects group, but have not been used in a Select.
Contrast this report with the earlier Report – Table Names. Notice that EMP is not in this report because it was used in a SELECT statement.
Note: Because group members are centrally managed and synchronized between the Central Manager and managed units, the content of this report may be delayed by up to 30 minutes. If you need the most up-to-date information, run this report on the Central Manager or ask your Guardium administrator to synchronize the managed unit from the Central Manager.
In addition to direct SELECT access, tables may be accessed through stored procedures and functions. In this case, you will need to do a bit more mapping to allow
Guardium to calculate such SELECTs.
First, use the report DW EXECUTE Object Access to fill in the group called DW EXECUTE Objects with a set of stored procedure names that are being executed.
Then, use indirect mapping to generate all the objects being used from within these procedures.
In the Group Builder, select DW EXECUTE Objects from the list and click Auto Generate Calling Prox. Select either Using Reverse Dependencies, which is supported only for Oracle in Guardium 8, or Generate Selected Objects.
If you choose to use dependencies, you will need to choose a database that has access to DBA_DEPENDENCIES and decide what type of dependencies to follow.
The following example is available from Setup > Tools and Views > Group Builder > Choose DW EXECUTE Accessed Objects > Auto Generate Calling Prox > Using Reverse Dependencies > Analyze Stored Procedures.
This will add the dependent objects to the group DW EXECUTE Accessed Objects.
Value-added: Through the GUI, by using existing data that is displayed in reports as parameters for API calls, quickly and easily generate and populate API calls without having to run system-level commands or type lengthy API calls, for operations such as creating datasources and defining inspection engines.
For this scenario, we will generate API function calls to populate the Data Security User Hierarchy.
1. To begin, let's show the current Data Security User Hierarchy for the user scott
2. To invoke an API function we must find a report that currently has the desired API functions linked to it. Since creating a user hierarchy is related to users, selection
of a user report should yield good results. For this scenario we've selected the User - Role report.
4. Click on the Invoke... option to display a list of API functions that are mapped to this report
5. Click on the API you'd like to invoke; bringing up the API Call Form for the Report and Invoked API Function
6. Fill in the Required Parameters and any non-Required Parameters for the selected API call. Many of the parameters are pre-filled from the report but may be
changed to build a unique API call. For specific help in filling out required or non-required parameters please see the individual API function calls within the
GuardAPI Reference guide.
a. If Invoke Now is selected the API call will run immediately and display an API Call Output screen showing the status of the API call.
b. If Generate Script is selected: Open the generated script with your favorite editor or optionally save to disk to edit and execute at a later time -- replacing
any of the empty parameter values (denoted by '< >') if contained within the script.
Note: Empty parameters may remain in the script as the API call will ignore them
Example Script
Example Call
$ ssh cli@a1.corp.com < create_user_hierarchy_api_call.txt
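The generated file is a plain list of GuardAPI calls, one per report row. A hypothetical sketch of create_user_hierarchy_api_call.txt for this scenario (the user names are illustrative; empty parameters appear as '< >' and may be left in place, since the API call ignores them):

```
grdapi create_user_hierarchy userName=SCOTT parentUserName=< >
grdapi create_user_hierarchy userName=ADAMS parentUserName=SCOTT
```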
10. Validate. For this scenario it is a redisplay of the Data Security User Hierarchy.
This scenario uses a custom report with mapped parameters to report fields. Please see additional scenarios further in this section for additional information.
1. To begin, let's show the current Data Security User Hierarchy for the user scott
2. Click on the Invoke... icon to display a list of APIs that are mapped to this report
4. Use the check boxes to select / de-select the rows that will be targeted for the API call.
5. Fill in the Required Parameters and any non-Required Parameters for the selected API call. Many of the parameters are pre-filled from the report but may be
changed to build a unique API call. For specific help in filling out required or non-required parameters please see the individual API function calls within the
GuardAPI Reference guide. Additionally, use the set of parameters for the API to enter a value for a parameter and upon clicking the down arrow button populate
that parameter for all records.
6. Use the drop-down list to select the Log level, where the values mean the following: 0 returns ID=identifier and ERR=error_code as defined in Return Codes; 1 additionally displays information on screen; 2 writes information to the Guardium application debug logs; 3 does both.
a. If Invoke Now is selected, the API call will run immediately and display an API Call Output screen showing the status of the API call. In this scenario the last two API calls will fail because we cannot have a cyclical relationship in the hierarchy.
b. If Generate Script is selected: Open the generated script with your favorite editor or optionally save to disk to edit and execute at a later time -- replacing
any of the empty parameter values (denoted by '< >') if contained within the script. With this scenario, we could easily delete the last two lines of the script --
knowing they would create cyclical errors.
Note: Empty parameters may remain in the script as the API call will ignore them.
Example Script
Example Call
$ ssh cli@a1.corp.com < create_user_hierarchy_api_call.txt
Value-added: Through the GUI, create a user-defined constant that can be used for filling in a parameter in an API function call.
1. From our report, we can modify it to have a field that we could use for parameter mappings.
2. Go to the Query Entities & Attributes report for the Client/Server entity within the ACCESS RULES VIOLATIONS domain. Double-click on a row and select the
Invoke... option.
4. Fill in the constant value to use ('SCOTT'), fill in the attributeLabel you would like to name it ('OracleTopParent'), and then click the Invoke now button to create the constant.
5. Clicking the Invoke now button produces an API Call Output status showing that the constant was created.
6. A re-display of the Query Entities & Attributes report will show the new attribute created.
9. Fill in the functionName and the parameterName and click on the Invoke now button.
11. Now when the report is displayed the new attribute is displayed.
12. To validate the new constant's usage, double-click on a row and select the Invoke... option.
14. Now the parentUserName is populated from the newly added constant. Click the Invoke now button.
Value-added: Through a GUI, quickly and easily map API parameters to custom report fields to be used in API function calls.
1. By default, a newly created custom report does not have any API functions linked to it. This can be seen in the preceding custom report, where double-clicking on a row only produces a list of additional drill-down reports to run but lacks the Invoke option.
3. The API Assignment panel shows all the API functions assigned to the selected report. Notice for our scenario the report selected has no API functions assigned to
it.
4. To assign an API function to a report, find an API you would like to link to the report, click the greater-than arrow, and then click the Apply button. For our scenario we selected create_user_hierarchy. When selected, a pop-up window appears that shows the report parameter mappings (which report fields will be used when calling the API function). Notice there are no report fields mapped to parameter names.
6. Double-click on the attribute you'd like to assign to a parameter name and click on the Invoke... option.
9. Now, when we go back to the Report Builder for our report and look at the API Assignment; clicking on the create_user_hierarchy API function displays the API -
Report Parameter Mapping with our mapping of userName to the Report field Client/Server.DB User Name.
10. Click on the greater-than arrow '>' and click the Apply button
13. Notice that the userName is now populated from the report field.
14. Fill in the parentUserName and click the Invoke now button.
15. Verify that the new Data Security User Hierarchy has been added.
Sending reporting data to an external database is useful in several scenarios, for example when combining or correlating Guardium data with non-Guardium data, when
using Guardium data with external reporting tools, or when machine-parsing records in especially large reports.
Map a feed between Guardium and an external database. External feeds currently support relational databases and may not function with other database types.
Create a report defining the data to send via the external feed. Predefined reports will not work with external feeds. If you want to use a predefined report, make a copy of the report and use the copy for the external feed.
Define an audit process that will use the external feed.
The first time that an optional external feed task runs, the necessary internal representation of the audit sources is created. One limitation is that data that is time-stamped with a date earlier than the audit source creation date cannot be stored. This means that the first time the task runs, it only exports data for the current date. On subsequent executions of the task, any data from that date forward can be exported. (In other words, the next day, you will be able to export that day's data plus the prior day's data.)
1. If the Add New Task pane is not open, click Add Audit Task.
2. Click External Feed.
3. Select the feed type from the Feed Type list. (The controls that appear next depend on the feed type selected.) One predefined feed type is Object Last Referenced.
Note: You must map an external feed before attempting to use this feature.
4. Select an event type from the Event Type list.
5. Select a report from the Report list. Depending on the report selected, a variable number of parameters appear in the Task Parameters pane.
6. In the Extract Lag box, enter the number of hours by which the feed is to lag, and mark the Continuous box to include data up to the time that the audit task runs. Extract Lag only works when the Continuous box is marked.
7. In the Datasources pane, identify one or more datasources for the external feed. For instructions on how to define or select datasources, see Datasources.
8. Enter all parameter values in the Task Parameters pane. The parameters will vary depending on the report selected. Count column is not supported in External
Feed.
9. Click Apply.
Identify the external database that will receive data from the feed, and gather the connection information required for that database (IP address, port number, username, password, and so on). External feeds currently support relational databases and may not function with other database types.
Identify the Guardium report that will provide data to the external feed.
Procedure
1. Generate a report with the data you would like to transfer using an external feed. You can do this from a central manager, aggregator, or stand-alone Guardium
instance, provided that system can access the report data you require.
2. From the CLI, run grdapi create_ef_mapping reportName="My report". In addition to establishing the mapping, the create_ef_mapping function also generates a sample create table statement to be used in subsequent steps.
This capability alleviates an issue that can arise in complex enterprise environments when users do not always know the exact managed unit that has the data required for a particular report. This can happen because the link between Guardium collectors and databases can change over time based on configuration options such as load balancing. This is further complicated by considerations such as the time period and the data retention policy on the aggregators and collectors.
It is easy to create a distributed report: define it via the Distributed Report screen, add it to a pane, and it is ready for use.
Furthermore, this feature optionally makes use of data marts on the Central Manager to enable scheduled collection of aggregated data over time. In essence, the
distributed report data is stored on the Central manager as a flat table, so no complex joins are required to create the report you want, which can significantly improve
response time for these enterprise reports.
Distributed report data can be gathered from collectors, aggregators, and even Central Managers. The default distributed versions of the reports include the host name of the unit responsible for that data.
Aggregation/Archive Log
When you define a distributed report, run it immediately or schedule it to run in the background and gather the results to the Central Manager:
Immediate: This mode gathers data on demand (upon execution via the GUI) and displays results while gathering the results from the relevant managed
units. The distributed report includes a status indicator that data is still in transit or that all data has been received from a particular managed unit. In this
mode, data is not saved on the Central Manager. As soon as the report is closed, the data is gone.
Scheduled: This mode gathers data in advance in order to enable instant response. On the time interval you specify in the scheduler, all relevant, aggregated
data from the specified managed units is sent to a designated data mart table on the Central Manager machine and creates a default report against this table.
This table also has its own domain and entity to enable creation of additional queries and reports using the query builder. Those reports can be added to an
audit process in order to run the process periodically and assign the results of the process to a Role, User and/or User Group for review or sign-off.
In a mixed environment where the Central Manager is 32-bit and managed units are 64-bit, the Distributed Report will not show information from the 64-bit
systems. To see information in this situation, the Central Manager needs to be upgraded to 64-bit.
Because of the coordination of data to be sent to the Central Manager, it is critical that the clock time on all managed units is set to the real time in the time zone where the managed units are located. Even a difference of ten minutes between the Central Manager and the managed units impacts the performance and reliability of the distributed reports.
Scheduled distributed report definitions can be exported and imported; immediate distributed report definitions cannot. The schedule itself is not included in the exported or imported definition. It is recommended that you keep a record of the definitions and scheduling if needed to re-create them on another system such as a backup or test Central Manager. System backup does include distributed report configurations.
If you specify that report data is collected from both aggregators and collectors, it is conceivable that the default distributed report includes duplicate data
(although the Guardium host name is different). In this case, it is best to specify only collectors or only aggregators for the distributed report configuration.
Distributed reports are based on existing non-distributed reports. When defining a distributed report in scheduled mode, if the original query includes run-
time parameters, then you will be asked to provide those values (or wildcards, %).
Plan for the fact that now you will have data residing on your Central Manager in a database that you did not before. So you will need to plan for operational
changes for purging, for upgrades, and for backup.
Distributed report building is available only from an appliance that is configured as a Central Manager. To access the distributed report builder when logged in as an
administrator, go to Reports > Report Configuration Tools > Distributed Report Builder.
From the Distributed Report Builder, you can select from a list of existing reports to modify the configuration or add to a pane, or click New to create a new
distributed report. In general, any existing report on the Central Manager can be distributed immediately or run on a schedule (or both).
From the Report Builder, select New, which clears any existing data in the report builder. In the Based on Report pulldown, select one of the existing reports that are available for distribution. Each report from the list can be distributed once as immediate and once as scheduled; a report that is already defined to be distributed in a mode is not available again for that mode.
In the Gather Data From section of the builder, choose All Managed Units (that the Central Manager is managing) or specify certain Groups and/or specific Managed
Units.
Note: You can define managed unit groups from the Central Manager. Examples of groups are: Group of collectors versus aggregators; groups that are based on
application, responsibility, or geography.
In the Operation Mode section of the builder, choose the report operation mode:
Immediate: Run the report when the user requests it. When you select this option there are no additional options to consider. You can click Apply to save the
changes and then optionally click Add to Pane to add the report to the GUI.
Schedule: Run in a batch that prepares and gathers the data in advance.
With the Scheduled Report option, you specify the following additional values:
Time Granularity: Specify the time period for which the Data Mart is captured. The Data Mart extract is done at the next Time Granularity interval boundary
and covers the time interval specified. The Data Mart extract for a DAYS Granularity starts at Midnight and runs every X days. The Data Mart extract for a
HOURS Granularity starts at the next hour boundary and runs every X hours. The Data Mart extract for a MINUTES Granularity starts at the next X minute
boundary and runs every X minutes. For example, if you specify a Time Granularity of 1-hour for the Count Of Failed Logins report, the count is based on an
hourly aggregation of failed logins.
Purge After: Specify how long to keep the report data in the data mart before it is automatically purged.
Runtime parameters: Depending on what report you are basing the distributed report on, you must specify the runtime parameters. To see valid values for
these fields, examine the query for the original report, or specify the wildcard, %.
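The boundary arithmetic for the granularity intervals can be sketched in plain shell (a simplified model for illustration, not Guardium code): an extract for a granularity of X starts at the next multiple of X.

```shell
# Compute the next extract boundary for a MINUTES granularity.
now=1700000130        # current time as epoch seconds (illustrative value)
gran_min=15           # granularity: every 15 minutes
step=$(( gran_min * 60 ))
next=$(( (now / step + 1) * step ))   # next 15-minute boundary
echo "$next"
```

The same idea applies to HOURS and DAYS granularities, with the step scaled accordingly.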
Click Apply. When the system is done saving the distributed report configuration, Modify Schedule and Roles are activated.
To create the schedule, click Modify Schedule, which takes you into the general-purpose scheduler.
The schedule definition is pushed down to the managed units and tells each managed unit when and how often to send the aggregated data to the Central Manager.
To specify which roles can see this distributed report, click Roles.
Change the configuration, including managed units, schedule details, or runtime parameters
Create a scheduled report that is based on an existing immediate report. This option replaces the immediate report. You cannot create an immediate report
from an existing scheduled report.
To select an existing report, use the text search box or scroll through the list of existing reports and select the one you want to modify.
Source: The Guardium system where the data was gathered from.
TZ: Time Zone - because the Guardium system might be located in a different time zone from the Central Manager.
Date: This column shows the Start Period time for scheduled reports and enables grouping results by hour or day. For Immediate mode, this column shows the start time and is not meaningful.
Note: Only a maximum of three date fields are permitted.
For distributed reports, edit and update the base report and update the distributed report based on the updated report structure.
If a user changes the columns in a base report, or adds or removes the where clause in the base report, and then saves and re-generates the report, then to update
the distributed report based on this updated report, the user only needs to click on "Save report changes" on the existing distributed report for changes to take
effect.
Should the user choose to update an existing report parameter, the user should first click "Apply report changes", then update the parameter value, and then click "Save report changes" for the updates to take effect.
When running a report, the report customizer lets you specify an absolute time window for the query (from 3-31-2014 8:00am to 3-31-2014 11:00am) or a relative
time window (NOW -3 HOUR).
For absolute time, each Guardium system will run in its local time. For example, if a distributed report gathers data from Guardium systems in Eastern Standard
Time (EST) and Pacific Standard Time (PST), then each system will execute the query based on local time. In the example (useful for checking morning peak hours,
midnight or any specific absolute time), a system in New York will gather the results from 08:00 - 11:00 EST and a system in California will gather the results from
08:00 – 11:00 PST.
For a relative time specification, each system will run NOW –N according to the current time on that system. This is important for real-time reports. Absolute Time
cannot be used for real-time or near real-time reports. Use the Immediate mode for real-time monitoring.
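The effect of per-system local time can be demonstrated with GNU date (assuming a system with tzdata installed; the zones match the EST/PST example above):

```shell
# The same absolute wall-clock window starts at different UTC instants
# depending on the managed unit's time zone.
ny=$(TZ=America/New_York    date -d '2014-03-31 08:00' +%s)
ca=$(TZ=America/Los_Angeles date -d '2014-03-31 08:00' +%s)
echo $(( ca - ny ))   # the California window starts 3 hours (10800 s) later
```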
Every distributed report is accompanied by a status report that shows which machines succeeded in returning results and which did not. The link to access the status report is highlighted when you navigate to the report in the GUI.
For scheduled reports, clicking a line on the status report enables execution of an API call to rerun the report on the specific unit or units.
If the specific run for Distributed Report in Scheduled mode comes back with an error, you can rerun the report from the status report as follows:
1. Double click on one of the rows in the status report to bring up the Invoke menu. Click on Invoke.
3. This will open up a pop-up screen that lets you choose the specific run to rerun. Any row of the report can be opened, but only rows with ERROR status can
be rerun.
The retry command described in the GUI, for invoking the status report, can also be accessed via a GuardAPI command.
Syntax
grdapi rerun_distributed_report
The Distributed Reports feature distributes the query request to the specified Guardium systems, gathers the data into the Target system, consolidates the results, and provides views of the consolidated results. The results are available via the Query Builder for defining additional queries.
The Distributed Report feature can now set the Target system to any Guardium system. The previous version did not allow setting the Target system; the data always went to the Central Manager (CM).
Requirement justification
In many cases the CM is overloaded (regardless of the Distributed Report feature), and the CM is sometimes also used as an aggregator, which adds load to the CM.
In those cases it is much more efficient to let the user determine the target system.
Solution
A Target system can be set for each distributed report. A CLI command is available to set the optional Target systems. The list set via the CLI is shown in the Distributed Report Builder GUI.
Important note: This change affects the Distributed Report Scheduled mode only. The Immediate mode is not included in this change! This means that the
ad-hoc distributed report result viewer is accessible via the CM only.
GUI Change
A new field "Send Data To" is added to the Distributed Report Builder screen to enable the user to set the target System(s) (either Collector(s) or
Aggregator(s)) for the Distributed Report.
The list of available Target Systems is limited to the Systems that were set via the CLI (see CLI list below).
The Distributed Report definition is editable via the CM and View-Only via the target.
The "Add To Pane" of the report (adding the report viewer to the menu) is available from the definition screen on the Target System and CM.
This option is available on the CM even if the CM is not the Target system for that report. This makes it possible to view the Distributed Report status on the CM, but no data is displayed in the report itself.
If there are still distributed reports with this unit as the target, the command returns an error and the list of such reports.
grdapi get_distributed_report_target_info
For scheduled distributed reports, store or show the maximum number of rows per unit.
show scheduled_distributed
store scheduled_distributed
The Store command has one parameter, maximum_rows_per_unit. If the value of that parameter is greater than 15,000 or equals 0 (no limit), the user will see a
warning message:
"Depending on number of collectors, setting maximum number of rows per unit to a high value might have negative impact on performance".
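A hedged sketch of the CLI usage (the exact argument syntax may differ by release; verify against the CLI reference):

```
show scheduled_distributed maximum_rows_per_unit
store scheduled_distributed maximum_rows_per_unit 5000
```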
About this task - In this example, we see how to get a broader view and correlation insight for Exceptions (for example, SQL Errors) that are recorded on specific
collectors.
Summary of steps
Prerequisites – create a group of managed units via the Central Management screen.
Procedure
1. Click Reports > Report Configuration Tools > Distributed Report Builder.
2. Click New.
3. Select Based on Report from the list (the list shows the user-defined reports). For this example, choose Exceptions Details.
4. Move down the screen to specify the managed units to be included in this distributed report. For this example, choose two groups from the Group list and, in addition, a few managed units from the Managed units list. In this example, leave 'Central Manager' unchecked (if the Central Manager is also an aggregator, it might need to be included).
5. The next screen capture shows the setting for the Operation Mode. The Immediate mode is mainly for online/real-time monitoring, such as viewing recent failed login attempts, recent excessive exceptions, or real-time alerts. The Scheduled mode is an ongoing data gathering that runs periodically based on the defined schedule. This example summarizes the exceptions every hour. Values must be filled in for Exception Description and Destination Address.
8. The next step is to schedule it by clicking Modify Schedule (this is mandatory to activate the process).
11. The data is gathered from all the specified managed units and stored in a new designated entity (table). This entity is now available via the Query Builder and Report Builder so that you can create additional queries and reports against this new table. The option to build additional queries and reports is also available via the Distributed Report result screen. Click Edit the query for this report.
This default report cannot be changed. Click Clone, name it, remove all attributes, and leave Date, User Name, Exception Type Description, and Sum Of Count Of Exceptions.
The following screen capture shows an example of Correlate Total Exceptions By User (Distributed). This view sums the total exceptions per user from all databases that are associated with the Guardium managed units selected for this distributed report. Likewise, you can view the total failed login attempts system wide, or the total exceptions per source program.
Database Vulnerability Assessment is used to scan the database infrastructure for vulnerabilities and provide evaluation of database and data security health, with real
time and historical measurements.
Test
A test checks the database environment for vulnerabilities for a particular threat or area of concern.
Assessment
An assessment is a job that includes a set of tests that are run together.
Data source
The source of data itself, such as a database or XML file, and the connection information necessary for accessing the data.
The Guardium® Vulnerability Assessment application enables organizations to identify and address database vulnerabilities in a consistent and automated fashion.
Guardium’s assessment process evaluates the health of your database environment and recommends improvements by:
Assessing system configuration against best practices and finding vulnerabilities or potential threats to database resources, including configuration and behavioral risks; for example, identifying all default accounts that have not been disabled, and checking public privileges and chosen authentication methods.
Finding any inherent vulnerabilities present in the IT environment, such as missing security patches.
Recommending and prioritizing an action plan based on the discovered areas of most critical risk and vulnerability. The generated reports and recommendations provide guidelines on how to meet compliance requirements and elevate the security of the evaluated database environment.
Guardium’s Database Vulnerability Assessment combines two essential testing methods to guarantee full depth and breadth of coverage. It leverages multiple sources
of information to compile a full picture of the security health of the database and data environment.
1. Agent-based: using software installed on each endpoint (for example, a database server). Agents can determine aspects of the endpoint that cannot be determined remotely, such as an administrator’s access to sensitive data directly from the database console.
2. Scanning: interrogating an endpoint over the network through credentialed access.
Database Auto-Discovery performs a network auto-discovery of the database environment and creates a graphical representation of interactions among database
clients and servers.
Database Content Classifier automatically discovers and classifies sensitive data, such as 16-digit credit card numbers and 9-digit Social Security
numbers—helping organizations quickly identify faulty business or IT processes that store confidential data.
Database Vulnerability Assessment scans the database infrastructure for vulnerabilities and provides evaluation of database and data security health, with real
time and historical measurements.
CAS (Configuration Auditing System) tracks all changes to items such as database structures, security and access controls, critical data values, and database
configuration files.
Compliance Workflow Automation automates the entire compliance process, from assessment and hardening through activity monitoring to audit reporting,
report distribution, and sign-off by key stakeholders.
CAS (Configuration Auditing System) plays an important role in the identification of vulnerabilities and threats. Guardium pre-configured and user-defined CAS templates
can be used in assessment tests, bringing a holistic view of the customer’s database environment. With CAS, Guardium can identify vulnerabilities to the database
at the OS level, such as file permissions, ownership, and environment variables. These tests can be seen through the CAS Template Set Definition panel and have the word
Assessment in their name.
Note: Vulnerability Assessment (VA) and Configuration Auditing System (CAS) are only supported in English.
Common Vulnerabilities and Exposures (CVE®) is a dictionary of common names (CVE Identifiers) for publicly known information security vulnerabilities. CVE’s
common identifiers make it easier to share data across separate network security databases and tools, and provide a baseline for evaluating coverage: if a
report incorporates CVE Identifiers, users can quickly and accurately access fix information in one or more separate CVE-compatible databases to remediate the problem.
Numerous organizations have made their information security products and services CVE compatible by incorporating CVE Identifiers. Guardium constantly monitors the
common vulnerabilities and exposures (CVE) published by the MITRE Corporation and adds tests for the relevant database-related vulnerabilities.
To keep CVEs current within the Guardium solution, Guardium downloads and uses the most current CVE database to populate a database table with all current CVE
entries and candidates. Guardium then programmatically compares the downloaded CVE data with the CVE data already in the Guardium Vulnerability Assessment
repository, producing a list of new CVEs for review. The Guardium Database Security Team then manually reviews these candidates for the Guardium Vulnerability
Knowledgebase, tests them, and adds the relevant ones to the GA Guardium Vulnerability Assessment Knowledgebase. These tests are tagged with the appropriate CVE
number, and once in the GA repository, they can be run automatically using the Guardium Vulnerability Assessment application.
Note:
For both Vulnerability Assessments and Entitlements Reporting, when looking for scripts to grant privileges for entitlement reporting, use scripts in the
gdmmonitor_scripts directory. Do not use the entitlement_monitor_role folder, which is no longer updated.
When using an expiring product license key, or license with a limited number of datasources, the following message may appear: Cannot add datasource. The
maximum number of datasources allowed by license has been reached. The License valid until date and Number of datasources can be seen on the
System Configuration panel of the Administrator Console. A Vulnerability Assessment or Classification process with N datasources is counted as N scans every time it runs.
Guardium Vulnerability Assessment requires access to the databases it evaluates. To provide this access, Guardium supplies a set of SQL scripts (one script for each database
type) that create the users and roles to be used by Guardium.
The template scripts are available on the Guardium system once it is built and can be found and downloaded via fileserver at the following path: /log/debug-
logs/gdmmonitor_scripts/. More information is available in the README.txt file.
Test ID | Test Name | Description | Ref ID | Database Type
115 | DB2 Allowed Grants to Public | No Public Object Privileges | 105 | DB2 LUW
144 | DB2 Allowed Grants to Public non-restrictive | No Public Object Privileges | 105 | DB2 LUW
116 | Teradata Allowed Grants to Public | Object privileges granted to public | 2029 | TERADATA
118 | Netezza Allowed Grants to Public (Netezza) | Object privileges granted to public | 2053 | NETEZZA
65 | MS-SQL Database Administrators Only | DBAs In Fixed Server Roles | 159 | MSSQL
165 | Oracle Only DBA Access To SYS.USER$ | Only DBA Access To SYS.USER$ | 222 | ORACLE
166 | MS-SQL DDL granted to user | DDL granted to user | 321 | MSSQL
170 | Sybase IQ Procedures and functions granted to PUBLIC | Procedures and functions granted to PUBLIC | 2230 | SYBASE IQ
173 | MS-SQL Role granted to role | Role granted to role | 323 | MSSQL
185 | MS-SQL Access to server level permissions granted to non-Database Administrators | Access to server level permissions granted to non-Database Administrators | 2289 | MSSQL
186 | MS-SQL MSDB database Role Members Privilege | MSDB database Role Members Privilege | 2296 | MSSQL
109 | Teradata PDE Version+Patches | Teradata PDE Patch level | 286 | TERADATA
110 | Teradata TDBMS Version+Patches | Teradata TDBMS Patch level | 287 | TERADATA
111 | Teradata TDGSS Version+Patches | Teradata TDGSS Patch Level | 288 | TERADATA
112 | Teradata TGTW Version+Patches | Teradata TGTW Patch Level | 289 | TERADATA
MongoDB
Developed in 2007, MongoDB is a NoSQL, document-oriented database. MongoDB stores JSON documents with dynamic schemas in a binary format called BSON. In
MongoDB, a collection is the equivalent of an RDBMS table, while documents are equivalent to records in an RDBMS table.
MongoDB is the largest and fastest-growing NoSQL database system. It tends to be used as an operational system and as a backend for web applications because of the
ease of programming for non-relationally formatted data, such as the JSON documents often found in web applications.
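The collection/table and document/record analogy above can be illustrated with plain data structures; the sample data is purely illustrative and not from the product.

```python
# An RDBMS row has a fixed schema: every row in the table shares
# the same columns.
row = {"table": "customers", "columns": ("id", "name", "city"),
       "values": (42, "Ada", "London")}

# A MongoDB document is a JSON-like structure with a dynamic schema,
# stored in a collection rather than a table. Fields can be nested,
# and documents in the same collection may differ in shape.
document = {
    "_id": 42,
    "name": "Ada",
    "address": {"city": "London"},      # nested sub-document
    "tags": ["vip", "early-adopter"],   # array field, no join table needed
}

print(sorted(document))  # ['_id', 'address', 'name', 'tags']
```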
MongoDB data sources support SSL server and client/server connections with SSL client certificates.
Guardium's VA solution for MongoDB Clusters can be run on mongos, a primary node and all secondary nodes for replica sets.
Entitlement reports and Query Based Builder are not supported for MongoDB.
For self-signed certificates, Guardium imports the server certificate automatically. Customers can also import their own certificates. Certificates imported on a central
manager are pushed down to collectors.
The Mongo CAS Assessment template allows you to specify multiple paths in the datasource to scan various components of the file system.
Teradata Aster
Aster Data
Aster Data, acquired by Teradata in 2011, is typically used for data warehousing and analytic applications (OLAP). Aster Data created a framework called SQL-MapReduce
that allows the Structured Query Language (SQL) to be used with MapReduce. It is most often associated with clickstream kinds of applications.
A security assessment should be created to execute all tests on the queen node; all database connections for Aster Data go through the queen node only.
Testing on worker and loader nodes is only required when performing CAS tests (file permission and file ownership).
Privilege tests loop through all the databases in a given Aster instance.
SAP HANA
Scripts are provided to support most database types and are designed to be run in the database tool itself. Each script includes detailed instructions in the script header.
The privileges granted for each database type can be seen by examining the grant statements in the script.
Important: Before running any scripts, database administrators should read the instructions in the script headers and review the database actions that will be taken by the
script.
Procedure
1. On a Guardium system, enable the file server using the fileserver CLI command. For example, to enable the file server for one hour and download the scripts to a
system with IP address 10.0.0.1, use the following command:
When successfully initiated, the file server should display output similar to the following:
The upload will only be accessible from the IP you are logged in from: 10.0.0.1
2. On the machine where you will download the scripts, use a web browser to access the file server. For example, for a Guardium system running at
https://guardium.host.com:8445, access the scripts for vulnerability assessment and classification at the following URLs:
https://guardium.host.com:8445/log/debug-logs/gdmmonitor_scripts/
https://guardium.host.com:8445/log/debug-logs/classification_role/
Important: Discovery processes of the Guardium classifier require a higher level of database access than is required for vulnerability assessment tests. It is
recommended to use the scripts in gdmmonitor_scripts for vulnerability assessment and the scripts in classification_role for the classifier. Before running
any scripts, database administrators should read the instructions in the script headers and review the database actions that will be taken by the script.
3. Download the required scripts using the web browser's Right-click > Save link as... action or a similar function. Review the README.txt files to identify the correct
scripts to use for specific database types.
Tip: The following scripts are for Microsoft SQL Server:
gdmmonitor-mss2000-only.sql is for Microsoft SQL Server 2000
gdmmonitor-mss.sql is for Microsoft SQL Server 2005 and newer
gdmmonitor-mss-SA.sql provides administrative privileges required for six of the Microsoft SQL Server vulnerability assessment tests. If you do not allow
these privileges, the tests will return errors indicating inadequate privileges. These six tests represent no more than 5% of the available tests.
What to do next
Once you have downloaded the scripts required for your database servers, closely review and follow the instructions in the script headers.
2. The user runs a Guardium-supplied script against the target database to create a role with the appropriate privileges, and then creates a datasource connection to the
database.
3. Create a security assessment, then select your datasources and desired tests to execute.
4. Once the execution is done, a report is created showing which tests passed or failed, along with detailed hardening recommendations.
Password policies
Security APARs
Entitlement Reports:
Procedure
1. Use the Group Builder to create a group of the users that will use VA. Open the Group Builder by clicking Setup > Tools and Views > Group Builder. The next step
uses a script for a group named gdmmonitor.
2. Run the following script on your DB2 for i system to grant privileges needed for executing VA to the group. This is done outside the Guardium system using a
database native client.
For IBM DB2 for i v7.1 and higher, also include the scripts:
3. Create a JDBC connection to your DB2 for i system. Open the Datasource Finder by clicking Setup > Tools and Views > Datasource Definitions, and then select Security
Assessment from the Application Selection menu.
a. Click New and enter the appropriate information. For Connection Property, enter "property1=com.ibm.as400.access.AS400JDBCDriver;translate
binary=true".
4. Create an assessment using the Assessment Builder. Open the Assessment Builder by clicking Harden > Vulnerability Assessment > Assessment Builder.
a. Enter a description for the assessment.
b. Add the datasource created in the previous step by clicking Add Datasource, selecting the datasource from the Datasource Finder, and clicking Add.
Note: You must click Apply to save the assessment before you can configure tests.
5. Add tests to the assessment by clicking Configure Tests. Click the IBM for i tab, select the tests that you want to add, and click Add Selections.
6. Click Return to go back to the Security Assessment Finder. Run the test by clicking Run Once Now, or schedule the test using Audit Process Builder. Open the Audit
Process Builder by clicking Discover > Classifications > Audit Process Builder.
7. Click View Results to view the details of all the executed tests, including recommendations for improving your score.
Results
What to do when a test fails?
Cloudera Manager
Datasource Setup
The Cloudera Manager datasource uses the Cloudera Manager Java API for a connection. It does not use JDBC.
The Cluster Name must be defined in the datasource GUI. The Cluster name is the Cluster display name in the Cloudera manager GUI on the left-hand side.
To execute Vulnerability Assessment tests for Cloudera Manager, define a datasource user with the Read-Only role; this role is sufficient for most of the Vulnerability
Assessment tests. A small number of Vulnerability Assessment tests require the datasource user to have the Cluster Administrator role as the
minimum privilege to run.
The following Vulnerability Assessment tests require the datasource user to have the Cluster Administrator role:
This information is also available in the Cloudera Manager gdmmonitor script (/log/var-log-guard/gdmmonitor_scripts/gdmmonitor-Cloudera-Manager.sql).
If SSL is enabled, check "Use SSL" and check "Import server ssl certificate".
The Directory will need to be defined as the Cloudera manager install path. For example: installpath=/opt/cloudera
Datasource Setup
Kerberos - The User Name and Password must be a valid Kerberos User ID and Password. It is also used for CAS. Test to make sure your Kerberos User ID and
Password can be used to log in to the Hive beeline command line.
Make sure you have already created a Kerberos Configuration that defines your KDC and Realm for your appliance. On the Guardium GUI, go to Setup > Tools and
Views > Kerberos Configuration. If no Kerberos Configuration has been created, then click on + icon to create a new Kerberos Configuration.
After you have created a Kerberos Configuration, you can select it to configure your datasource setup.
If SSL is enabled, check the "Use SSL" box and check the "Import server ssl certificate" box.
1. The Directory will need to be defined as the Cloudera manager install path. For example: installpath=/opt/cloudera
2. If HDFS is enabled for Kerberos, the Datasource User Name and Password must be a valid Kerberos User ID and Password. CAS scripts use them to obtain a
Kerberos ticket.
3. The Account must be root. For certain parameter tests that require CAS, it is important that the CAS user is root in order to access the real-time configuration
under the Cloudera agent process directory (/var/run/cloudera-scm-agent/process/).
Note: Guardium does not in any way modify or alter your configuration data.
For Hive
For the Privilege tests, the datasource account must be a member of the Sentry Admin group. See the Hive gdmmonitor script for steps to check the Sentry Admin
group.
When setting up Hive datasources, you can only perform a JDBC test connection when the datasource is pointing to your HiveServer2. For all other Hive
datasources, clone this specific datasource using the node name where the Cloudera service is installed. Make sure the cloned datasource has a valid
Username and Password, just like the HiveServer2 datasource. For these datasources, you cannot perform a datasource test connection; instead, Guardium relies
on the accuracy of the Username and Password from the datasource to perform a Kerberos connection using CAS when Kerberos is enabled.
The Hive Privilege tests require Sentry Services to be installed and configured. Without Sentry there is no security: anyone can connect to Hive and access data.
The Vulnerability Assessment CAS tests for HDFS parameters read configuration files under the Cloudera agent process directory (/var/run/cloudera-scm-
agent/process/). The folder names inside these process directories change every time the Cloudera agent services are started.
Some of the HDFS parameter CAS tests require the datasource system to have a specific node configuration (for example, NameNode or DataNode). Some CAS tests
require Yarn, MapReduce, or Hive Server to be installed on the datasource system. Select the tests for your assessment carefully, based on your
datasource system configuration. If the requirements for a test are not met, the test will error with the recommendation to execute it on the
correct Cloudera services. The requirements are also mentioned in the test description.
Regardless of the number of nodes in your cluster, if you have Guardium Hive datasources that cover all of these services, you have properly set up your
environment to run Vulnerability Assessment.
For example
A Vulnerability Assessment may contain one or more of the following types of tests.
Predefined Tests
Predefined tests are designed to identify common vulnerability issues that may be encountered in database environments. Because of the highly variable nature of
database applications and the differences in what is deemed acceptable in various companies or situations, some of these tests may be suitable for certain databases but
totally inappropriate for others (even within the same company). Most of the predefined tests are customizable to meet the requirements of your organization. Additionally,
to keep your assessments current with industry best practices and protect against newly discovered vulnerabilities, Guardium distributes new assessment tests and updates
on a quarterly basis as part of its Database Protection Subscription Service. Refer to the Guardium Administration Guide for more details.
Behavioral Tests
Configuration Tests
Behavioral Tests
This set of tests assesses the security health of the database environment by observing database traffic in real time and discovering vulnerabilities in the way information
is being accessed and manipulated.
Configuration Tests
This set of assessments checks security-related configuration settings of target databases, looking for common mistakes or flaws in configuration that create vulnerabilities.
As an example, the current categories, with some high-level tests, for configuration vulnerabilities include:
Privilege
Object creation / usage rights
Privilege grants to DBA and individual users
System level rights
Authentication
User account usage
Remote login usage
Password regulations
Configuration
Database specific parameter settings
System level parameter settings
Version
Database versions
Database patch levels
Object
Installed sample databases
Query-based Tests
A query-based test is either a predefined or user-defined test that can be quickly and easily created by defining or modifying a SQL query. The query is run against a
database datasource and its results are compared to a predefined test value. See Define a Query-based Test for additional information on building a user-defined
query-based test.
CAS-based Tests
A CAS-based test is either a pre-defined or user-defined test that is based on a CAS template item of type OS Script command and uses CAS collected data.
Users can specify which template item to use and test against the content of the CAS results. See Create a New Template Set Item for assistance on creating an OS Script type
CAS template.
Guardium also comes pre-configured with some CAS template items of type OS Script that can be used for creating a CAS-based test. These tests can be seen through the
CAS Template Set Definition panel and have a name which contains the word Assessment. For instance, the Unix/Oracle set for assessments is named Guardium
Unix/Oracle Assessment. Additionally, any template that is added that involves file permissions will also be used for permission and ownership checking. See Modify a
Template Set Item for viewing these template sets and seeing those items with type OS Script.
Whether using a Guardium pre-configured template or defining your own, once defined, these tests will appear for selection during the creation or modification of CAS-based tests.
See Define a CAS-based Test for additional information.
CVE Tests
Guardium constantly monitors the common vulnerabilities and exposures (CVE) published by the MITRE Corporation and adds tests for the relevant database-related
vulnerabilities.
New
Start from the beginning and define all the fields.
Clone
Clone an existing query-based test.
Modify
Modify an existing query-based test.
Procedure
1. Open the Assessment Builder by clicking Harden > Vulnerability Assessment > Assessment Builder.
2. From the User-defined tests, click Query-based Tests.
3. Click New, Clone or Modify to open the Query-based Test Builder.
4. Enter a unique Test Name.
5. Select a Database Type.
6. Select a Category.
7. Select a Severity.
8. Optional: Enter a Short Description for the test.
9. Optional: Enter an External Reference for the test.
10. Enter the Result text for pass that will be displayed when the test passes.
11. Enter the Result text for fail that will be displayed when the test fails.
12. Enter the SQL statement that will be run for the test.
Use the following convention to add and reference group members within a SQL statement:
For example:
To reference a group of users defined for the group MyUsersGroup and replace it with the actual members of the group use:
Select ... from DBA_GRANTS where ... AND USER in (~~G~MyUsersGroup~~) and ...
This will result in a SQL Statement such as the following where U1, U2, etc are the members of the MyUsersGroup group:
Select ... from DBA_GRANTS where ... AND USER in ('U1','U2','U3',...) and ...
If the group has no members, the database returns an error. In this case the reference is replaced with a single pair of quotation marks, like this:
Select ... from DBA_GRANTS where ... AND USER in ('') and ...
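The group-reference substitution described above could be sketched as follows; Guardium performs this expansion internally, so the helper name and regex here are hypothetical illustrations of the documented behavior, including the empty-group case.

```python
import re

def expand_group_refs(sql, groups):
    """Replace ~~G~GroupName~~ tokens with a quoted, comma-separated
    member list, or '' when the group has no members (hypothetical
    helper; the real expansion happens inside Guardium)."""
    def repl(match):
        members = groups.get(match.group(1), [])
        if not members:
            return "''"  # empty group: a single pair of quotation marks
        return ",".join("'{}'".format(m) for m in members)
    return re.sub(r"~~G~([^~]+)~~", repl, sql)

groups = {"MyUsersGroup": ["U1", "U2", "U3"]}
sql = "SELECT * FROM DBA_GRANTS WHERE USER IN (~~G~MyUsersGroup~~)"
print(expand_group_refs(sql, groups))
# SELECT * FROM DBA_GRANTS WHERE USER IN ('U1','U2','U3')
```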
Use the following convention to replace a reference to a specific alias (of a specific group type) with the actual alias:
If there is an alias to TYPE of group type GroupType, it will replace the string, and the resulting SQL will look like:
13. Optional: Enter a SQL Statement for Detail, a SQL statement that retrieves a list of strings to generate a detail string of Detail prefix + list of strings. See the example
in Detail prefix.
Note: The detail generated is only displayed when the query-based test fails, allowing the user to enter a SQL statement that retrieves the information that
caused the test to fail and helps identify the cause of failure.
Note: The detail string can be seen within the Security Assessment Results by clicking the Assessment Test Name, and can also be queried through the Result Details
attribute of the Test Result Entity.
14. Optional: Enter a Pre-test check SQL statement. This statement is run before running the test. If the statement returns 0, the test is not run. If the statement returns 1
or an error, the test is run.
15. Optional: Enter a Pre-test fail message. This message is inserted into the assessment results if the test is not run due to the SQL statement returning 0.
16. Optional: In Loop databases, enter a list of databases through which the test should loop. The test returns the union or sum of the results returned from all the
specified databases. You can use this function only when the test returns an integer value, and only with these database types: Informix, SQL Server, Sybase SE,
PostgreSQL and MySQL. The looping is performed if the DB loop flag box is checked. One or more of the specified databases might be unavailable when the test is
run. In that case the test will either skip that database and continue, or stop and issue a failure message, depending on whether the Skip on error box is checked.
17. Optional: Enter a Detail prefix that will appear at the beginning of the detail string.
18. Optional: Check the Bind output variable check box if the text entered in SQL statement is a procedural block of code that returns a value; the value is bound
to an internal Guardium® variable and used in the comparison with the Compare to value.
Example (Oracle):
declare
  retval integer := 0;
  strval varchar2(255) := '';
  nver   number;
  sver   varchar2(255) := '';
begin
  -- Read the instance version string, for example '11.2.0.4.0'
  select VERSION
    into sver
    from V$INSTANCE;
  -- Convert the first two version components to a number, for example 11.2
  nver := to_number(substr(sver,1,(instr(sver,'.',1,2) - 1)));
  if nver >= 11.1 then
    -- The sec_case_sensitive_logon parameter exists from 11.1 onward
    select VALUE
      into strval
      from V$PARAMETER
     where NAME = 'sec_case_sensitive_logon';
  end if;
  if (nver < 11.1 or strval = 'TRUE') then
    retval := 0;
  else
    retval := 1;
  end if;
  -- Bind the result to the Guardium output variable
  ? := retval;
end;
19. Select the Return type that will be returned from the SQL statement.
20. Select the operator that will be used for the condition.
21. Enter a Compare to value that will be compared against the return value from the SQL statement using the compare operator. This comparison
determines whether the test has passed or failed. You may also click RE (regex) to define a regular expression for the compare value.
22. Do one of the following:
Click Back to cancel changes and return to the previous screen.
Click Apply to save the query-based test.
Results
You can add this newly created query-based test to an assessment.
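The pass/fail decision described in the procedure above (an optional loop over databases summing integer results, skip-on-error handling, then a comparison against the Compare to value) could be sketched as follows; all names are hypothetical illustrations, and `run_query` stands in for executing the test's SQL against one database.

```python
def run_query_based_test(run_query, databases, operator, compare_to,
                         skip_on_error=True):
    """Illustrative evaluation of a looping query-based test: sum the
    integer result from each database, then compare the total against
    the Compare to value (hypothetical helper, not the product code)."""
    total = 0
    for db in databases:
        try:
            total += run_query(db)  # the test's SQL, run per database
        except Exception:
            if not skip_on_error:
                raise               # stop and report a failure message
            # otherwise skip the unavailable database and continue
    outcome = {"=": total == compare_to,
               "<": total < compare_to,
               "<=": total <= compare_to,
               ">": total > compare_to,
               ">=": total >= compare_to}[operator]
    return "pass" if outcome else "fail"

# Example: a test passes when no offending rows (0) are found anywhere
results = {"db1": 0, "db2": 0}
print(run_query_based_test(results.__getitem__, ["db1", "db2"], "=", 0))
# pass
```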
What to do next
Parent topic: Vulnerability Assessment tests
Procedure
1. Open the Assessment Builder by clicking Harden > Vulnerability Assessment > Assessment Builder.
2. From the User-defined tests, click CAS-based Tests to open the CAS-based Test Finder panel.
3. Click New or Modify to create a new test.
4. Enter a unique Test name.
5. Select a database from the Database Type menu.
6. Select a category from the Category menu.
7. Select a severity from the Severity menu.
8. Optional: Enter a Short Description for the test.
9. Optional: Enter an External reference for the test.
10. Enter a Result text for pass that will be displayed when the test passes.
11. Enter a Result text for fail that will be displayed when the test fails.
12. Enter a Recommendation text for pass that will be displayed when the test passes.
13. Enter a Recommendation text for fail that will be displayed when the test fails. To prevent cross-site scripting, any name from the following list used in the
Recommendation text for fail text box will be rewritten: expression; function; javascript; script; alert; eval; <img; ContentType
14. Select a template to use from the CAS Template menu.
15. Select an operator to use from the operator menu.
16. Enter a Search string that will be used with the operator to compare against what is returned from the CAS template. This comparison determines whether the test
passes or fails. You may also click the RE icon to define a regular expression for the search string.
17. Optional: Check the Fail if match check box if the test should fail when a match is made with the search string.
18. Click Apply to save the CAS-based test.
Results
You can add this newly created CAS-based test to an assessment.
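The CAS-based pass/fail decision described above (search string or regular expression compared against the CAS template output, optionally inverted by Fail if match) could be sketched as follows; the helper name is hypothetical, not the product implementation.

```python
import re

def evaluate_cas_test(cas_output, search, use_regex=False, fail_if_match=False):
    """Sketch of the CAS-based test decision: compare the CAS template
    output against a search string (or regular expression), optionally
    inverting the result with the Fail if match option."""
    if use_regex:
        matched = re.search(search, cas_output) is not None
    else:
        matched = search in cas_output
    passed = not matched if fail_if_match else matched
    return "pass" if passed else "fail"

# A file-permission check might fail when world-writable mode bits appear:
print(evaluate_cas_test("-rw-rw-rw- 1 oracle dba init.ora",
                        "rw-rw-rw-", fail_if_match=True))
# fail
```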
Parent topic: Vulnerability Assessment tests
Assessments
Assessments are a group of tests that scan database infrastructures for vulnerabilities and provide an evaluation of database and data security health with real-time and
historical measurements.
Creating an assessment
Create an assessment, or modify or clone an existing assessment.
How to create a security assessment
Run security assessments against selected datasources to proactively identify and address vulnerabilities, improve configurations, and harden infrastructures.
Running an assessment
To get the results of an assessment, it must be run once it is created.
Viewing assessment results
You can take various actions while you view the results of an assessment.
Creating a VA test exception
Use a test exception to exclude specific members of a group from a security assessment. Run the security assessment against the exception group to see if a
specific member of a group is affecting your assessment results. This is useful if you do not want to or are not authorized to change group settings.
VA summary
The following table lists the information displayed per test and database key in the VA summary table: test result by unique identifier; cumulative failed age;
first failed date; last failed date; last passed date; and last scanned date. This information is tracked, and users can create a report on it.
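The per-test history fields listed above could be derived from a scan history roughly as follows; the field names and the cumulative-failed-age definition here are assumptions for illustration, not the product schema.

```python
from datetime import date

def summarize(history):
    """Derive VA-summary style fields from a list of (scan_date, result)
    pairs, ordered oldest first (illustrative sketch only)."""
    failed = [d for d, r in history if r == "fail"]
    passed = [d for d, r in history if r == "pass"]
    last_scanned = history[-1][0]
    summary = {
        "first failed date": failed[0] if failed else None,
        "last failed date": failed[-1] if failed else None,
        "last passed date": passed[-1] if passed else None,
        "last scanned date": last_scanned,
    }
    # Assumed definition of cumulative failed age: days the test has
    # been failing since it last passed (0 if the latest result passed)
    if history[-1][1] == "fail":
        since = passed[-1] if passed else failed[0]
        summary["cumulative failed age (days)"] = (last_scanned - since).days
    else:
        summary["cumulative failed age (days)"] = 0
    return summary

history = [(date(2016, 1, 1), "pass"),
           (date(2016, 2, 1), "fail"),
           (date(2016, 3, 1), "fail")]
print(summarize(history))
```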
Creating an assessment
Create an assessment, or modify or clone an existing assessment.
Procedure
1. In the Security Assessment Finder panel, click New to create an entirely new assessment. Click Clone or Modify to work with an existing assessment. Clicking any of
these buttons opens the Security Assessment Builder panel. If you are creating an entirely new assessment, complete all of the following steps. If you are cloning
or modifying an existing assessment, enter a new Description and then modify only the fields that you want to change.
2. Enter a unique Description for the assessment.
3. Add a datasource by clicking Add Datasource, entering the required information, and clicking Add.
4. Add tests to the assessment by clicking Configure Tests.
a. From the Tests available for addition pane, select the appropriate tab for the datasource you added previously.
b. Select the tests you want, and click Add Selections to add them to the assessment. Once added, your selections will appear in the Assessment Test
Selections pane.
c. Use the Assessment Test Selections to manage tests for your assessment. Delete any selected test, or click Adjust this test's tuning for any test to customize
the test's parameters.
5. Add Roles to the Assessment.
Note: You cannot assign roles to an assessment until you have assigned roles to the datasources it is based on.
6. Click Apply to save the assessment.
You can also Add Comments to any assessment to document or log what changes were made to assessments and why.
Results
Your new assessment is ready to be run.
Parent topic: Assessments
Procedure
1. Create or modify an assessment by opening the Assessment Builder. Open the Assessment Builder by clicking Harden > Vulnerability Assessment > Assessment
Builder.
4. Add a datasource to the assessment by clicking Add Datasource. Select a datasource from the Datasource Finder and click Add. Alternatively, add a new datasource by
clicking the new datasource icon, filling in the information in the Database Definition window, and clicking Apply. See Datasources for assistance.
Running an assessment
To get the results of an assessment, it must be run once it is created.
Assessments run in a serialized mode, one after the other. If more than one assessment is scheduled to run, they are queued. This queue can be viewed through
the Guardium Job Queue report.
Clicking the Run Once Now button enters the assessment into the queue and immediately runs it. A short period of time is required for the job to be executed and its
results to become viewable. See Viewing assessment results for more information on the results of an assessment.
You can optionally define and schedule an automated process for running an assessment definition. The Audit Process Finder panel is the starting point for creating or
modifying an audit process schedule that automatically runs your assessments. See Compliance Workflow Automation for assistance in defining an audit process.
Assessment Identity
The assessment results identify:
Assessment Selection
Use the drop-down menu to select and display past results for an assessment. The latest result is displayed by default.
View log
When clicked, the Execution Log is displayed in a new window that shows the runtime execution of the assessment tests. A timestamp, along with events and
messages, can aid in debugging issues that might have caused certain tests to fail.
Results Summary
A tabular graph summarizes all the tests that were executed within this assessment. The X-axis represents the test’s severity (CRITICAL, MAJOR, MINOR, CAUTION, or
INFO). The Y-axis represents the type of test (Privilege, Authentication, Configuration, Version, or Other). Within the grid is the representation of the number of tests that
have either Passed, Failed, or had an Error when trying to execute. These numbers are directly related to the detail for the assessment tests that is given under the
Assessment Test Results section.
Reset Filtering - Removes all filtering options selected through the Filter / Sort Controls options.
Filter / Sort Controls - Use this to open a filter/sort options for the report. Options allow you to filter by Severities, Datasource Severity Classification (DS sev. class), Scores
(pass, fail, or error), and Test Types (Observed/Database type). The sort option allows you to sort across combinations of severity, score, and datasource. Click Apply when
you would like the chosen filter/sort options to take effect.
The assessment results include a count of the number of tests and the number of passed tests in each of these categories:
CIS tests
CVE tests
STIG tests
These values are displayed in the assessment result viewer and available for reporting as part of the VA results domain.
Datasource Details
When expanded, the Datasource Details section shows all of the datasources that were referenced within this assessment, including each datasource's specific
environmental information.
The reference links are clickable (each opens in a new window). Either section is absent when there is no corresponding record for a result.
Use the Download XML button to open two menu choices: Download as SCAP xml and Download as AXIS xml. Choose one of these selections in order to download to your
workstation an XML file representing the displayed assessment results. The file can be formatted for Security Content Automation Protocol (SCAP) XML or Apache
EXtensible Interaction System (AXIS) XML, which is used by QRadar.
Procedure
1. Open the Group Builder by clicking Setup > Tools and Views > Group Builder.
2. Select VA Tests Exception from the Group Type menu to view the list of predefined exception groups.
3. Select a group from the Modify Existing Groups menu and click Modify.
4. Add the group members that you want to exclude from the VA test.
5. Open the Assessment Builder by clicking Harden > Vulnerability Assessment > Assessment Builder. Select an assessment from the Security Assessment Finder and
click Configure Tests.
6. Find the test you want to add the exception to, and click the test's Adjust this test's tuning button in the Tuning column.
7. Select your exception group from the menu, and click Save. Run your assessment again to see if the exception group affects the outcome of the test.
Note: By default, Guardium includes an exception group called IBM iSeries Profile User Exclusions. You can clone and modify this group to suit your needs.
All the Database Objects privilege tests exclude default system schemas from Guardium groups.
VA summary
The following table lists the information displayed per test and database key in the VA summary table: test result by unique identifier; cumulative failed age; first failed date;
last failed date; last passed date; and last scanned date. This information is tracked, and users can create a report on it.
VA Summary
The key may include, in addition to the three original elements, the datasource name. The default is host, port, and instance name.
This table can be exported/imported. Import Data will override existing data on the Guardium system (per key).
Table 1. VA Summary
Table Column Type Description
SERVICE_NAME Varchar Database instance Name (if part of the key, "N/A" otherwise)
DB_PORT Varchar Database Port (if part of the key, "N/A" otherwise)
The CLI commands are: store va_test_show_query and show va_test_show_query. Use export va_summary to export this information.
The GuardAPI commands to change or display the key are: grdapi modify_va_summary_key and grdapi reset_va_summary_by_key. The GuardAPI command to reset
cumulative ages, both pass and fail, is grdapi reset_va_summary_by_id. Use grdapi export_va_summary to export this information.
An additional parameter, datasourceName, has been added to grdapi reset_va_summary_by_key and grdapi modify_va_summary_key.
The VA Summary entity has an additional attribute, Datasource Name, that is populated ONLY if the datasource name is part of the key.
Note: The GrdAPI command modify_va_summary_key allows the key to be empty by calling the GrdAPI with all four parameters (useHost, usePort, useServiceName,
useDatasourceName) equal to false. In this case, when the key is empty, VA Summary calculation is disabled (no summary data is calculated, updated, or saved).
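As an illustration, the key-related GuardAPI calls described above might be invoked as follows. The parameter values are illustrative only; the parameter names useHost, usePort, useServiceName, and useDatasourceName are those documented here, and other commands may require additional parameters not shown.

```
grdapi modify_va_summary_key useHost=true usePort=true useServiceName=false useDatasourceName=true
grdapi export_va_summary
```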
Parent topic: Assessments
GDMMONITOR.OS_GROUP
GDMMONITOR.OS_USER
CKADBVA.CKA_OS_GROUP
CKADBVA.CKA_OS_USER
Procedure
1. Install Guardium 10.x
2. Copy create_CKADBVA-schema_tables_zOS.sql from the /var/log/guard/gdmmonitor_scripts directory on your Guardium system to your database server. Run the
fileserver command on your Guardium system to make the file available for retrieval.
3. The script contains instructions that describe steps to be performed before and after running the script. Read these instructions and run the script.
4. Populate the new tables with data similar to the data that was stored in the old tables.
Results
Your system is now configured to use current vulnerability assessment tests.
What to do next
Parent topic: Assess and harden
In order to use these tests, you must obtain and install IBM Security zSecure Audit, Version 2.1. This product enables the commands that are used in these tests to
interact with RACF.
Tests that examine entitlements do not return a pass/fail grade; they return a list of entitled users. Examples of these reports include table and view privileges granted to
grantees and package privileges granted to grantees. In a large environment that includes very large numbers of users and applications, these reports generate an
overwhelming amount of data. When you run these reports in such a large environment, the process can run for a long time and consume large amounts of resources, and
it might eventually time out.
Procedure
1. Upgrade the database schema used to support vulnerability assessment on your database server.
2. Install zSecure Audit on your database server. Use the instructions and tools that are provided with zSecure Audit to learn how to populate approximately 24 tables
in the CKADBVA schema to support the new zSecure tests.
3. The zSecure team will issue a PTF that enables zSecure Audit to work with Guardium vulnerability assessment. Obtain this PTF and apply it according to the
accompanying instructions.
Results
Your system is now configured to take advantage of the new zSecure tests.
What to do next
Choose the new tests that you want to run to assess your RACF vulnerabilities. Configure and run the tests.
CAS Agent
CAS is an agent that is installed on the database server; it reports to the Guardium system whenever a monitored entity has changed, whether in content, ownership, or
permissions. You install a CAS client on the database server system using the same utility that is used to install S-TAP®. CAS shares configuration information with S-TAP,
though each component runs independently of the other. Once the CAS client has been installed on the host, you configure the actual change auditing functions from the
Guardium® portal.
CAS Server
The CAS server is a component of Guardium and runs on the Guardium system. It runs as a standalone process, independent of the Tomcat application server. It is
controlled through the inittab file.
The CAS server is configured to use only a few of the available processors on the Guardium system. The number of processors that CAS uses is determined by using the
parameter divide_num_of_processors_by. This parameter is stored in the cas.server.config.properties file and its default value is 2. The number of available processors on
the Guardium system is divided by this value. This ensures that even when CAS uses 100% of the CPU on the allocated processors, the rest of the processors are available
for use by other applications.
When configured, the CAS server at startup loads a signed certificate and a private key and assigns them to the server socket on which it accepts connections.
On the database server side the CAS client will support the following connection modes:
ca.cert.pem - a file containing root Certificate Authority certificates (which are self-signed). In browser terms, these would be trusted CA certificates, such
as VeriSign's.
All gmachine certificates are issued and signed by the root authority - that is how they are validated and how the chain of trust is established.
You can set guardium_ca_path with either the full path including the actual public key file name, or just the directory name
(<install_dir>/etc/pki/certs/trusted), in which case all the public keys within that directory are used to authenticate the server. If guardium_ca_path is set
to a file or directory that does not contain the public key, the connection attempt fails.
4. Secure connection with server authentication and common name verification. This mode adds a check in which the certificate CN from the server is
compared with the one set in the parameter sqlguard_cert_cn. If sqlguard_cert_cn is NULL or empty, this check is disabled. Otherwise, it must be set to
the same CN that Guardium's self-signed certificate has ('gmachine').
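The CN verification rule described in mode 4 can be sketched as follows. This is an illustration of the documented logic only, not Guardium's actual implementation:

```python
def cn_check_passes(server_cn, sqlguard_cert_cn):
    """Sketch of the documented CN check: if sqlguard_cert_cn is
    NULL/empty the check is disabled (always passes); otherwise the
    server certificate's CN must equal the configured value."""
    if not sqlguard_cert_cn:
        return True  # check disabled
    return server_cn == sqlguard_cert_cn
```

For example, with sqlguard_cert_cn unset, any server CN is accepted; with it set to 'gmachine', only a certificate whose CN is 'gmachine' passes.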
If you attempt to use an older CAS agent to communicate with the updated CAS server using SSL, you will see this message in the log file on the CAS agent system:
You might also see this message in the CAS log file on the Guardium system
If you want to use a non-SSL connection between the CAS agents and the CAS server, you can continue to use your existing CAS agents.
Template Set
A CAS template set contains a list of item templates, bundled together, that share a common purpose, such as monitoring a particular type of database (Oracle on Unix, for
example). A template set is one of two types:
A database template set is always specific to both the database type and the operating system type.
A template item is a specific file or file pattern, an environment or registry variable, the output of an OS or SQL script, or the list of logged-in users. The state of any of
these items is reflected by raw data, i.e. the contents of a file or the value of a registry variable. CAS detects changes by checking the size of the raw data, or computing a
checksum of the raw data. For files, CAS can also check for system level changes such as ownership, access permission, and path for a file.
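The size/checksum change detection described above can be sketched as follows. This is an illustration only; the checksum algorithm (SHA-256 here) is an assumption, not necessarily what CAS uses:

```python
import hashlib

def fingerprint(raw, use_checksum=True):
    """Fingerprint an item's raw data the way the text describes CAS
    change detection: by a checksum of the contents, or by size alone."""
    if use_checksum:
        return hashlib.sha256(raw).hexdigest()
    return len(raw)

def changed(old_fingerprint, raw, use_checksum=True):
    """True if the item's raw data no longer matches the stored fingerprint."""
    return fingerprint(raw, use_checksum) != old_fingerprint
```

Note that size-only detection misses same-length edits, which is why a checksum is the stronger option.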
In a federated environment where all units (collectors and aggregators) are managed by one manager, all templates are shared by collectors and aggregators, and
CAS data can be used in reporting or vulnerability assessments. When the collector and aggregator (or the host where archived data is restored) are not part of the same
management cluster, the templates are not shared, and CAS data therefore cannot be used by vulnerability assessments even when the data is present. To remedy this, use
export/import of definitions to copy the templates from the collector to the aggregator (or restore target).
Note: CAS should not be asked to monitor more than 10,000 files per client.
Note: It is recommended to configure CAS to handle no more than 1,000 monitored files per hour.
Monitored Entity
The actual entity being monitored. This can be a file (its content and properties), the value of an environment variable or Windows Registry key, or the output of an OS
command, script, or SQL statement.
CAS Instance
The application of a CAS template set to a specific host (creating an instance of that template set and applying it to that host).
CAS Configuration
A CAS configuration defines one or more CAS instances, each of which identifies a template set to be used to monitor a set of items on that host.
You cannot modify a Guardium default template set, but you can clone it and modify the cloned version. Each of the Guardium default template sets defines a set of items
to be monitored. Make sure that you understand the function and use of each of the items monitored by that default template set and use the ones that are relevant to
your environment. After defining a template set of your own, you can designate that template set as the default template set for that template-set type. After that, any
new template sets defined for that operating system and database type will be defined using your new default template set as a starting point. The Guardium default
template set for that type will not be removed; it will remain defined, but will not be marked as the default.
For example, the predefined CAS template set for Oracle contains these templates, among others:
As you can see, these file-pattern templates all start with the same root, $ORACLE_HOME. (Note: This is not necessarily the $ORACLE_HOME environment variable
defined on your database server; by preference, CAS uses the datasource field "Database Instance Directory" as the value for $ORACLE_HOME.)
It is possible that in a production environment your Oracle data files will not be in the same directory tree, or even on the same device, as your log files, and your Oracle
configuration files might be in still another location.
You might create additional CAS templates using absolute paths to allow CAS to find and monitor all of your Oracle files, for example:
/u01/oradata/mydb/*.dbf
/u02/oradata/mydb/*.dbf
/u03/oradata/mydb/*.dbf
/u01/oradata/mydb/*.ctl
/u02/oradata/mydb/*.ctl
/u03/oradata/mydb/*.ctl
/home/oracle11/admin/mydb/bdump/*.log
/home/oracle11/product/11.1/db_1/dbs/init*.ora
You can even use additional environment variables that are defined in your Oracle instance account. As an example, if you have variables defined as $ORA_DATA1,
$ORA_DATA2 and $ORA_SOFT you can use:
$ORA_DATA1/mydb/*.dbf
$ORA_DATA2/mydb/*.dbf
$ORA_DATA1/mydb/*.ctl
$ORA_DATA2/mydb/*.ctl
$ORA_SOFT/admin/mydb/bdump/*.log
$ORA_SOFT/product/11.1/db_1/dbs/init*.ora
For example, suppose that you want to find .profile files in any DB2 user's home directory. For this example, we assume that the names of all of these home directories
include the string "db2." Add this line to the properties file:
user_profile_files=.*db2.*=.profile
If you need to specify more than one pattern, use the bar symbol (|) to separate patterns. If you want to add the profiles of your mysql users to the previous entry, replace
the previous example with this:
user_profile_files=.*db2.*=.profile|.*mysql.*=.profile
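To illustrate how such a properties value could be interpreted, here is a sketch. The parsing and matching semantics below are inferred from the description ('|'-separated pairs of directory-regex=filename); the real CAS behavior may differ in details:

```python
import re

def parse_user_profile_files(value):
    """Parse a user_profile_files-style value such as
    '.*db2.*=.profile|.*mysql.*=.profile' into
    (compiled directory regex, filename) pairs."""
    pairs = []
    for entry in value.split("|"):
        dir_pattern, filename = entry.split("=", 1)
        pairs.append((re.compile(dir_pattern), filename))
    return pairs

def matching_profiles(value, home_dirs):
    """Return (home_dir, filename) pairs for directories whose full
    path matches one of the configured directory patterns."""
    pairs = parse_user_profile_files(value)
    hits = []
    for home in home_dirs:
        for pattern, filename in pairs:
            if pattern.fullmatch(home):
                hits.append((home, filename))
    return hits
```

With the value from the example, a home directory such as /home/db2inst1 matches the .*db2.* pattern, so its .profile file would be monitored.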
Prerequisites on Windows
Table 1. Disk Space requirements for Windows servers
Disk Space Description
Install CAS
Use the Windows installer, which is self-explanatory.
Microsoft's solution to the problem is to partition the registry. A special key, labelled WOW6432Node, is added to the Registry tree within the key
HKEY_LOCAL_MACHINE\SOFTWARE. When a 32-bit application tries to access the Registry through a path within the key HKEY_LOCAL_MACHINE\SOFTWARE, Windows
inserts the special key WOW6432Node into the path. This way the 32-bit application deals with the Windows Registry just as it would on a 32-bit machine, and Windows
takes care of redirecting to the correct partition.
CAS is a 32-bit Java application, so it would not normally have access to the 64-bit software configuration parameters. CAS has been enhanced to detect a 64-bit
environment and handle the partitioned Registry. CAS's interest in the Registry is to retrieve the values of Registry keys, either to detect changes or to compare against
recommended values.
As an example, suppose that CAS is to retrieve the value of HKEY_LOCAL_MACHINE\SOFTWARE\MyApp\Parameter1. That value could be in either, both, or neither
partition. If it is in neither partition, CAS will retrieve null. Otherwise, it returns a string which is the concatenation of the two values separated by the string
WOW6432Node. If the value is in the 64 but not the 32-bit partition, the string retrieved would look like Value64WOW6432Nodenull. Conversely, if the value is in the 32
but not the 64-bit partition, the string is nullWOW6432NodeValue32. Finally, if the value is in both partitions, the string returned is Value64WOW6432NodeValue32. This
new Registry value pattern search will search both Registry partitions when appropriate.
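The concatenation behavior just described can be sketched as follows. This is an illustration of the documented value format, not the actual CAS code:

```python
WOW_SEPARATOR = "WOW6432Node"

def combined_registry_value(value64, value32):
    """Reproduce the value format described for a partitioned 64-bit
    Registry: None if the key is in neither partition; otherwise the
    64-bit and 32-bit values joined by 'WOW6432Node', with the literal
    string 'null' standing in for a missing side."""
    if value64 is None and value32 is None:
        return None  # key present in neither partition
    left = value64 if value64 is not None else "null"
    right = value32 if value32 is not None else "null"
    return left + WOW_SEPARATOR + right
```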
If for any reason you need to change the location of JAVA_HOME (for example, you install a new Java version after installing the Guardium® CAS product), use the
following procedure.
1. Locate and open the CAS configuration file for editing. Its full path name is: <installation directory>/case/conf/wrapper.conf
2. Locate the following entry:wrapper.java.command=<value>
3. Replace value with the JAVA_HOME directory
4. Save the file.
CAS program files, including Java™ - AIX®: 309 MB; HP-UX: 630 MB; Linux: 405 MB; Solaris: 390 MB
Java is required for CAS. You must obtain and install Java yourself (due to licensing
constraints).
cas:<nnnn>::respawn:/usr/local/guardium/guard_stap/cas/bin/run_wrapper.sh /usr/local/guardium/guard_stap/cas/bin
If for any reason you need to change the location of JAVA_HOME (for example, you install a new Java version after installing the Guardium® CAS product), use the
following procedure.
1. Locate and open the CAS configuration file for editing. Its full path name is: <installation directory>/case/conf/wrapper.conf
2. Locate the following entry:wrapper.java.command=<value>
3. Replace value with the JAVA_HOME directory
4. Save the file.
Identify the JAVA_HOME directory. You will be prompted for its location during the CAS installation.
Verify that a supported version of Java™ is installed. If a supported version is not installed, you must install it before installing CAS.
Note: To use CAS over SSL in a FIPS-compliant environment, you must install IBM Java on the server where the CAS agent runs.
Procedure
1. Enter the which java command. For example:
4. To check the version number, from the java directory, run the java -version command. For example:
5. Note the Java version that is returned. You will not be prompted for this information, but if an issue arises later, you will be able to rule out the
possibility of an unsupported Java version.
When connectivity is lost between the CAS client and the Guardium system, it can take up to five minutes (the maximum time a CAS client waits for a message from the
Guardium system) for the client to discover that it has lost contact with the primary Guardium system; discovery can happen sooner if a communication error
is detected.
If the CAS client loses its connection to the Guardium system or cannot make an initial connection, it opens a failover file and begins writing to it the messages that it would
have sent to the Guardium system. The path to this failover file is stored in guard_tap.ini under the name cas_fail_over_file. When communication is
reestablished, the CAS client shuts down and restarts, sends all messages stored in the failover file to the Guardium system, and deletes the file. If the CAS client was
unable to make the initial connection, it uses the checkpoint file to determine what to monitor, and continues doing what it was doing before communication failed.
When communication is lost, the client also starts a thread which periodically tries to reconnect with the primary Guardium system. The number of times CAS will attempt
to reconnect, and the average time interval between reconnect attempts, are configurable parameters. It will try to reconnect for a period of time set in guard_tap.ini with
the name cas_server_failover_delay. After that time has passed, the client will also try to connect to any secondary servers identified in guard_tap.ini. The secondaries will
be tried in the order of the value of the primary attribute listed in the SQL_Guard sections of guard_tap.ini; any section whose primary value is not 1 identifies a
secondary. While the client is connected to a secondary server, it continues to try to reconnect to the primary server.
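As an illustrative sketch only (the section names and addresses below are hypothetical; consult your own guard_tap.ini for the exact layout on your system), primary and secondary entries might look like:

```
; Illustrative sketch - not taken from a real guard_tap.ini
[SQL_Guard_0]
sqlguard_ip=10.10.9.240
primary=1        ; primary Guardium system
[SQL_Guard_1]
sqlguard_ip=10.10.9.241
primary=2        ; secondary; tried in ascending order of primary
```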
If the reconnect attempt limit is met, the CAS client stops trying to reconnect, but continues to write data to a failover file. To cap disk space requirements on the database
server, there are actually two failover files. CAS writes to one file until it reaches its maximum failover file size (which is configurable), and then switches to the other,
overwriting any previous data on that file. The default failover file size is 50MB (for each of the files).
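The two-file rotation can be sketched as follows. This is a simplified illustration that tracks only sizes in memory; the file names and size accounting are assumptions, not the actual CAS implementation:

```python
class FailoverWriter:
    """Sketch of the described two-file failover scheme: write to one
    file until it would exceed max_size, then switch to the other file,
    overwriting its previous contents. Disk use is thus capped at
    roughly 2 * max_size (CAS default: 50 MB per file)."""

    def __init__(self, paths, max_size):
        self.paths = paths        # the two failover file paths
        self.max_size = max_size  # cap per file, in bytes
        self.active = 0           # index of the file currently written
        self.sizes = [0, 0]       # bytes written to each file

    def write(self, message):
        """Record a message; return the path it was written to."""
        if self.sizes[self.active] + len(message) > self.max_size:
            # Switch files, discarding (overwriting) the other file's data.
            self.active = 1 - self.active
            self.sizes[self.active] = 0
        self.sizes[self.active] += len(message)
        return self.paths[self.active]
```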
You can specify one or more secondary Guardium systems when configuring the CAS client. In failover mode, CAS only tries to reconnect to its primary server until the
time specified by cas_server_failover_delay in guard_tap.ini is exceeded. At that time, CAS begins trying to connect to any of the secondary servers, as well as its primary
server (which is always the first server it tries to connect with during any reconnect attempt). While it is connected to a secondary server, CAS continues to try to reconnect
to its primary server.
Changes to the CAS client configuration can only be made from the primary server and only while the host is online. Whenever the configuration of the CAS client is
changed on the primary server and Guardium system is in standalone configuration, an export file is saved on the host. If the CAS client connects to a secondary server,
the saved export file is imported from the host to the secondary server.
There is no need to separately maintain configurations on both primary and secondary servers. However, if on the primary server, the parameters for an individual
monitored item have been changed from those defined in the template, then these changes will not be transferred to the secondary server. For example, even if the test
interval on a particular file was changed from the template default of 1 hr to 10 min, the test interval on the secondary server will again be 1 hr. Essentially, monitored
items are regenerated from the templates of the imported configuration. The delay before searching for secondary servers is based directly on time rather than failover file
size. The delay is set with the cas_server_failover_delay parameter in guard_tap.ini and has a default of 60 minutes.
Various failover and connect parameters can be modified through S-TAP Control Change Auditing.
As with S-TAP, CAS connectivity outages create exceptions on the Guardium system, so alerts can be issued within moments of detecting the outage.
Rules of Failover
Be sure to perform this procedure only while the selected CAS host is connected to its primary server.
1. Export the definition of the CAS host (see the previous section).
2. On each secondary server:
Delete the old CAS host definition that you want to replace.
Import the definitions that were exported from the primary server (see Importing CAS Hosts, previous).
The CAS client agent will now look for a new parameter ignore_change_alerts in the CAS client agent's cas.client.config.properties configuration file.
If the parameter is not found or not set, the CAS client works without any changes and the Ignore change alerts functionality is not enabled (that is, the CAS
client alerts on any file change).
If the new parameter is set, the CAS client agent does not send change notifications for the change types specified in the parameter value.
Multiple change types can be ignored by concatenating them with a + delimiter.
For example:
In order to avoid sending change notification on OWNER and GROUP changes, set up the parameter as follows:
ignore_change_alerts=OWNER+GROUP
Note: On initial installation, or when a new template is defined, the first scan of the files is performed and those files appear in the CAS changes report
regardless of the Ignore change alerts settings.
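A sketch of how the '+'-delimited value might be interpreted (an illustration of the documented format, not the agent's actual parser):

```python
def parse_ignore_change_alerts(value):
    """Split a '+'-delimited change-type list, as described for the
    ignore_change_alerts parameter (e.g. 'OWNER+GROUP')."""
    return set(value.split("+")) if value else set()

def should_alert(change_type, ignored_types):
    """True if a change notification should still be sent for this type."""
    return change_type not in ignored_types
```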
If this scenario occurs, delete the datasource and change the tap_ip parameter to the correct database server hostname/IP.
CAS Templates
Guardium provides a set of CAS templates, one for each type of data repository.
Designates an OS script to be executed. The entry must begin with the variable $SCRIPTS, which refers to the scripts directory beneath the CAS home directory, and must
identify the script to be executed, for example, $SCRIPTS/db2_spm_log_path_group_test.sh. The script itself must reside in the CAS $SCRIPTS directory. Output from the
script is stored in the Guardium® database to be used by security assessments. This can be either a shell/batch script to be run, or a set of commands that could be
entered on the command line. Because of the quirks of Java's command parsing, it is suggested that all but the simplest commands be put into a script rather than run
directly. On Unix the script is run in the environment of the OS user entered. Three environment variables are defined for the run environment that can be used
when writing scripts: $UCAS is the DB user name, $PCAS is the DB password, and $ICAS is the DB instance name. For Windows, these three values are appended as the last
three arguments to the batch file execution. For example, if you had an OS Script template %SCRIPTS%\MyScript.bat my-arg1 my-arg2, then %3, %4, and %5 would
be the DB user name, password, and instance name respectively.
File
Designates a file to be tracked and monitored by security assessments. The path to the file can be absolute, or relative to the $INSTHOME variable. Set the value of the
$INSTHOME variable in Database Instance Directory on the Datasource Definition panel. This is assumed to name a single file. Environment variables from the OS user
environment can be used in the file name and will be expanded. For example, $HOME/START.sh will name the startup script in the DB2® user's home directory.
File Pattern
Designates a group of files to be tracked and monitored by security assessments. The path to the files can be absolute, or relative to the $INSTHOME variable. Set the
value of the $INSTHOME variable in Database Instance Directory on the Datasource Definition panel. A .. in the path indicates one or more directories between the portion
of the path before it and the portion of the path after it. A .+ in the path indicates exactly one directory between the portion of the path before it and the portion of the path
after it. For example, $INSTHOME/sqllib/../db2.* is a short-hand for creating many single-file identifications from a single identification string: a file pattern that will
match all such files. A file pattern can be viewed as a series of regular expressions separated by /'s. A file is matched if each element of its full path can be
matched by one of the regular expressions in the pattern.
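The matching rules just described can be sketched as follows. This is an illustration built from the stated rules only; the actual CAS matcher may differ in details, for example in how '.' inside an element regex interacts with '/':

```python
import re

def cas_pattern_to_regex(pattern):
    """Translate a CAS-style file pattern into a Python regex under the
    documented rules: each '/'-separated element is itself a regular
    expression; a lone '..' element matches one or more intermediate
    directories; a lone '.+' element matches exactly one directory."""
    parts = []
    for element in pattern.split("/"):
        if element == "..":
            parts.append(r"[^/]+(?:/[^/]+)*")  # one or more directories
        elif element == ".+":
            parts.append(r"[^/]+")             # exactly one directory
        else:
            parts.append(element)              # element used as a regex
    return re.compile("/".join(parts) + r"\Z")

def matches(pattern, path):
    """True if the (variable-expanded) path matches the file pattern."""
    return cas_pattern_to_regex(pattern).match(path) is not None
```

For example, with $INSTHOME expanded to /home/db2inst1, the pattern /home/db2inst1/sqllib/../db2.* matches files named db2* anywhere below sqllib, while a .+ element constrains the match to exactly one intermediate directory.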
Additionally, the Guardium Unix/DB2 Assessment: UNIX - DB2 for Unix set includes the following templates:
This test monitors that the SETUID bit on DB2GOVD has been disabled
This test monitors that the SETUID bit on DB2START has been disabled
This test monitors that the SETUID bit on DB2STOP has been disabled
File ownership
This test monitors file ownership, and changes thereto, of DB2 files.
File permissions
This test monitors file permissions, and changes thereto, of DB2 files.
Designates an OS script to be executed. The entry must begin with the variable $SCRIPTS, which refers to the scripts directory beneath the CAS home directory, and must
identify the script to be executed, for example, $SCRIPTS/informix_rootpath_owner.sh. The script itself must reside in the CAS $SCRIPTS directory. Output from the script is
stored in the Guardium database to be used by security assessments. This can be either a shell/batch script to be run, or a set of commands that could be entered on the
command line. Because of the quirks of Java's command parsing, it is suggested that all but the simplest commands be put into a script rather than run directly. On Unix the
script is run in the environment of the OS user entered. Three environment variables are defined for the run environment that can be used when writing scripts:
$UCAS is the DB user name, $PCAS is the DB password, and $ICAS is the DB instance name. For Windows, these three values are appended as the last three arguments
to the batch file execution. For example, if you had an OS Script template %SCRIPTS%\MyScript.bat my-arg1 my-arg2, then %3, %4, and %5 would be the DB
user name, password, and instance name respectively.
File
Designates a file to be tracked and monitored by security assessments. The path to the file can be absolute, or relative to the $INFORMIXDIR variable. Set the value of the
$INFORMIXDIR variable in Database Instance Directory on the Datasource Definition panel. This is assumed to name a single file. Environment variables from the OS user
environment can be used in the file name and will be expanded. For example, $HOME/START.sh names the startup script in the Informix® user's home directory.
Additionally, the Guardium Unix/Informix Assessment for Unix set includes the following templates:
File ownership
This test monitors file ownership, and changes thereto, of Informix files.
File permissions
This test monitors file permissions, and changes thereto, of Informix files.
Designates an OS script to be executed. The entry must begin with the variable $SCRIPTS, which refers to the scripts directory beneath the CAS home directory, and must
identify the script to be executed, for example, $SCRIPTS/oracle_user.sh. The script itself must reside in the CAS $SCRIPTS directory. Output from the script is stored in the
Guardium database to be used by security assessments. This can be either a shell/batch script to be run, or a set of commands that could be entered on the command
line. Because of the quirks of Java's command parsing, it is suggested that all but the simplest commands be put into a script rather than run directly. On Unix the script is run
in the environment of the OS user entered. Three environment variables are defined for the run environment that can be used when writing scripts: $UCAS is the
DB user name, $PCAS is the DB password, and $ICAS is the DB instance name. For Windows, these three values are appended as the last three arguments to the batch
file execution. For example, if you had an OS Script template %SCRIPTS%\MyScript.bat my-arg1 my-arg2, then %3, %4, and %5 would be the DB user name, password,
and instance name respectively.
File
Designates a file to be tracked and monitored. The path to the file can be absolute, or relative to the $ORACLE_HOME variable. The value of the $ORACLE_HOME variable
is the value you set in the Database Instance Directory field of the Datasource Definition panel. (This is assumed to name a single file. Environment variables from the OS
user environment can be used in the file name and will be expanded. For example, $HOME/START.sh will name the startup script in the Oracle user's home directory.)
File Pattern
Designates a group of files to be tracked and monitored. The path to the files can be absolute, or relative to the $ORACLE_HOME variable. Set the value of the
$ORACLE_HOME variable in Database Instance Directory on the Datasource Definition panel. A .. in the path indicates one or more directories between the portion of the
path before it and the portion of the path after it. A .+ in the path indicates exactly one directory between the portion of the path before it and the portion of the path after
it. For example, $ORACLE_HOME/oradata/../*.dbf is a short-hand for creating many single-file identifications from a single identification string: a file pattern. A
file pattern can be viewed as a series of regular expressions separated by /'s. A file is matched if each element of its full path can be matched by one of the regular
expressions in the pattern.
Additionally, the default Guardium Unix/Oracle template set includes the following templates:
ADMIN_RESTRICTIONS Is On
This test monitors that the listener.ora parameter ADMIN_RESTRICTIONS is set properly.
File ownership
This test monitors file ownership, and changes thereto, of the Oracle data files, logs, executables, etc.
File permissions
This test monitors file permissions, and changes thereto, on the Oracle data files, logs, executables, etc.
This test scans the Oracle log files for occurrences of error strings.
Use the Unix/MongoDB template to specify multiple paths and multiple directories in the datasource to scan various components as specified in the MongoDB datasource
definition.
If the template item is not specified as part of the Database Instance Directory in the MongoDB datasource definition, the item will be skipped over and not scanned.
Note: For CAS scripts to work, you must enable login for the MongoDB account on the MongoDB server. To enable login, log in as root, run the command chsh mongod,
and when prompted for the new shell, enter /bin/bash.
Note: You can create your own template with multiple file paths for any type of datasource. When creating your own template, we recommend that you use the
Unix/MongoDB as a reference. To create a new template for a MongoDB datasource, you can clone and modify the Unix/MongoDB template.
Note: MongoDB datasources support SSL server and client/server connections with SSL client certificates. MongoDB connections use a Java driver, instead of a JDBC
database connection.
Note: The VA solution for MongoDB clusters can be run on mongos, a primary node and all secondary nodes for replica sets.
File Ownership
This test checks whether the files are owned by the correct user and belong to the correct group according to the definition within the CAS template.
File Permission
This test checks whether the file permission is properly set according to the definition within the CAS template.
This test checks for these events (FATAL, ERROR, DEBUG, ABORT, and PANIC) in these two log files: /nz/kit/log/postgres/pg.log and /nz/kit/log/startupsvr/startupsvr.log.
File Ownership
This test checks whether the files are owned by the correct user and belong to the correct group according to the definition within the CAS template.
File Permission
This test checks whether the file permission is properly set according to the definition within the CAS template.
This test checks whether the $PostgreSQL_BIN environment variable is defined on your database server. This variable needs to be defined under the root account for Unix/Linux (you
can add it to .profile for the root login). For Windows, it needs to be defined for the Administrator login. For Red Hat Linux, the PostgreSQL BIN folder is usually /usr/bin.
For Solaris, it is usually something like /data/postgres/postgres/8.3-community/bin/64. Setting this environment variable is important because other assessment tests
rely on the location of this folder.
This test checks whether the $PostgreSQL_DATA environment variable is defined on your database server. This variable needs to be defined under the root account for Unix/Linux (you
can add it to .profile for the root login). For Windows, it needs to be defined for the Administrator login. For Red Hat Linux, the default DATA folder is usually
/var/lib/pgsql/data. For Solaris, there is no consistent location. Setting this environment variable is important because other assessment tests rely on the location of this
folder to find the correct configuration files.
OS Script
Designates an OS script to be executed. Output from the script is stored in the Guardium database. This can be either a shell/batch script to be run, or a set of commands
that could be entered on the command line.
Registry Variable
Searches the Windows registry for a specific key value that is required by security assessment tests.
Designates an OS script to be executed. The entry must begin with the variable $SCRIPTS, which refers to the scripts directory beneath the CAS home directory, and identify the
script to be executed, e.g., $SCRIPTS/sybase_sysdevice_type_test.sh. The script itself must reside in the CAS $SCRIPTS directory. Output from the script is stored
in the Guardium database to be used by security assessments. This can be either a shell/batch script to be run, or a set of commands that could be entered on the
command line. Because of the quirks of Java's command parsing, anything but the simplest commands should be put into a script rather than run directly. On Unix the
script is run in the environment of the OS user entered. Three environment variables will be defined for the run environment which the user could use in writing scripts:
$UCAS is the DB username, $PCAS is the DB password, and $ICAS is the DB instance name. For Windows these three values will be appended as the last three arguments
to the batch file execution. For example, if you had an OS Script template %SCRIPTS%\MyScript.bat my-arg1 my-arg2, then %3, %4 and %5 would be the DB username,
password, and instance name respectively.
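As a sketch of how a Unix CAS OS-script item might consume the $UCAS, $PCAS, and $ICAS variables, consider the following. The script body is hypothetical, not a Guardium-supplied script; it only shows reading the variables CAS defines in the run environment:

```python
import os

def read_cas_environment():
    """Read the three variables CAS defines for Unix OS-script items.

    $UCAS = DB username, $PCAS = DB password, $ICAS = DB instance name.
    Returns them as a dict; missing variables come back as None.
    """
    return {
        "username": os.environ.get("UCAS"),
        "password": os.environ.get("PCAS"),
        "instance": os.environ.get("ICAS"),
    }

if __name__ == "__main__":
    env = read_cas_environment()
    # A real CAS script would use these values to connect to the database;
    # here we only echo the non-secret ones, since stdout is what CAS stores.
    print("instance={0} user={1}".format(env["instance"], env["username"]))
```

Whatever the script prints to standard output is the raw data that CAS compares between runs.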
File
Designates a file to be tracked and monitored by security assessments. The path to the file can be absolute, or relative to the $SYBASE variable. The value of the $SYBASE
variable is the value you set in the Database Instance Directory field of the Datasource Definition panel. This is assumed to name a single file. Environment variables from
the OS user environment can be used in the file name and will be expanded. For example, $HOME/START.sh will name the startup script in the Sybase user's home
directory.
File Pattern
Designates a group of files to be tracked and monitored by security assessments. The path to the files can be absolute, or relative to the $SYBASE variable. The value of
the $SYBASE variable is the value you set in the Database Instance Directory field of the Datasource Definition panel. A .. in the path indicates one or more directories
between the portion of the path before it and the portion of the path after it. A .+ in the path indicates exactly one directory between the portion of the path before it and
the portion of the path after it. For example: $SYBASE/../.*dat. This is just a short-hand for creating many single file identifications from a single identification string, a file
pattern. A file pattern can be viewed as a series of regular expressions separated by /'s. A file is matched if each element of its full path can be matched by one of the
regular expressions in order. If an element of the pattern is an environment variable, it is expanded before the match begins. If .. is one of the elements of the pattern, it
will match zero or more directory levels. For example, /usr/local/../foo will match /usr/local/foo and /usr/local/gunk/junk/bunk/foo. Using more than one .. element in a file
pattern should not be necessary and is discouraged because it makes the pattern very slow to expand. Because of the confusion with its use in regular expressions,
\ cannot be used as a separator as it might be in Windows.
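The matching rule described above can be sketched in Python. This is a simplified model, assuming /-separated paths and skipping environment-variable expansion; it is not the CAS implementation:

```python
import re

def matches_pattern(path, pattern):
    """Return True if 'path' matches a CAS-style file pattern.

    Each '/'-separated element of the pattern is a regular expression that
    must match the corresponding path element, in order. The special
    element '..' matches zero or more directory levels.
    """
    def match(p_elems, f_elems):
        if not p_elems:
            return not f_elems
        head, rest = p_elems[0], p_elems[1:]
        if head == "..":
            # '..' absorbs zero or more path elements before matching resumes.
            return any(match(rest, f_elems[i:]) for i in range(len(f_elems) + 1))
        if f_elems and re.fullmatch(head, f_elems[0]):
            return match(rest, f_elems[1:])
        return False

    return match(pattern.strip("/").split("/"), path.strip("/").split("/"))
```

With the example from the text, /usr/local/../foo matches both /usr/local/foo ('..' absorbs zero levels) and /usr/local/gunk/junk/bunk/foo ('..' absorbs three levels).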
Additionally, the Guardium Unix/Sybase Assessment: UNX - SYBASE set includes the following templates:
File ownership
This test monitors file ownership, and changes thereto, of Sybase files.
File permissions
This test monitors file permissions, and changes thereto, of Sybase files.
File ownership
This test checks whether the files are owned by the correct user and belong to the correct group according to the definition within the CAS template.
File permission
This test checks whether the file permission is properly set according to the definition within the CAS template.
Aster Data
Aster Data, acquired by Teradata in 2011, is typically used for data warehousing and analytic applications (OLAP). Aster Data created a framework called SQL-MapReduce
that allows the Structured Query Language (SQL) to be used with MapReduce. Aster Data is most often associated with clickstream kinds of
applications.
An Aster nCluster includes a Queen Node Group, a Worker Node Group, and a Loader Node Group. A CAS agent is installed on all three node groups.
A security assessment should be created to execute all tests on the queen node. All database connections for Aster Data go through the queen node only.
Testing on worker and loader nodes is only required when performing CAS tests (File permission and File ownership).
When running VA tests that require CAS access, and filling in the CAS datasource configuration choices, specify the username that Aster is installed under for
Database Instance Account. This username is typically called beehive.
For Database Instance Directory, specify the home directory of the beehive user. The default is typically /home/beehive.
When running VA tests that do not use CAS, the customer should create their datasource pointing to the queen node within the cluster.
When running VA tests that are CAS dependent, if the node you are testing is one of the worker nodes, you must set up a Custom URL in the datasource to
point to the queen node, because that is where the listener runs.
Example
Host Name/IP = Worker.guard.xxx.xxx..com or 1xx.1xx.111.111 (This is the actual worker host even though worker is not listening to this. CAS needs this so it can
send and receive data from the Worker's node)
Database = beehive
Custom URL= jdbc:ncluster://aster6q:2406/beehive (This JDBC example shows that we are actually connecting to the aster6q which is the queen node on port
2406 and beehive database)
Click Harden. The list of CAS functions is listed within the Configuration Change Control (CAS Application) header.
1. Open the CAS Configuration Navigator panel by clicking Harden > Configuration Change Control (CAS Application) > CAS Template Set Configuration.
2. Filter the template set list by OS Type or DB Type.
3. Select the Template Set that you want to modify and click Modify to open the CAS Template Set Definition panel.
4. Make your desired changes and click Apply to save them.
Note: Predefined templates cannot be edited. They have the same restrictions as templates that are in use by a CAS host. To make changes, clone the template, then edit
the cloned copy.
Component Description
OS Type The operating system type: Windows or Unix. You can change this selection when the template set is empty, but you cannot change it if the template set contains one or more items.
DB Type The database type (Oracle, MS-SQL, DB2®, Sybase, Informix®, etc.) or N/A for an operating system template set. You can change this selection when the template set is empty, but you cannot change it if the template set contains one or more items.
Description An optional name for the item used in reports and to identify the item in other CAS panels (the CAS Template Set Definition, for example). If omitted, the item name defaults to the file name or pattern, variable name, or script (as appropriate for the type).
Type One of the following: SQL Query, OS Script, Environment Variable, Registry Variable, Registry Variable Pattern, File, or File Pattern.
Note: If being used with CAS-based assessment tests, this must be of type OS Script.
Note: For an OS script, CAS waits for the script to complete. To limit the time allowed for an OS script to run, and to allow CAS to terminate the script, use the cas_command_wait guard_tap.ini parameter. The default wait time is 300 seconds (5 minutes). There is no need to restart CAS when changing this parameter.
File Owner For File and File Pattern types only. The owner of the file(s).
File Group For File and File Pattern types only. The group owner of the file(s).
Period The maximum interval between tests, specified as a number of minutes (m), hours (h), or days (d). Data becomes available after the initial period elapses and before the next period begins.
Keep Data If selected, a copy of the actual data is saved with each change. For example, for a file item, a copy of the file is saved. If selected, but the size of the raw data for the item is greater than the Raw Data Limit configured for this CAS host, no data will be saved.
Use MD5 Indicates whether or not an additional comparison is done by calculating a checksum of the raw data using the MD5 algorithm. Computing the MD5 checksum is time-consuming for large character objects, but it is a better indicator of change than the size alone. The default is not to use MD5. If MD5 is used, but the size of the raw data is greater than the MD5 Size Limit configured for the CAS host, the MD5 calculation and comparison will be skipped.
Enabled Selected by default; indicates whether or not the item will be checked for changes.
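The interaction between the size comparison, the Use MD5 option, and the MD5 Size Limit can be sketched as follows. This is a simplified model; the real CAS agent also compares file owner, group, permissions, and last modified time:

```python
import hashlib

def detect_change(old_state, new_data, use_md5=False, md5_size_limit=None):
    """Compare a new sample of raw data against the previously saved state.

    old_state is a dict like {"size": int, "md5": str or None}.
    Returns (changed, new_state). Mirroring the MD5 Size Limit behavior,
    the MD5 calculation is skipped when the data exceeds the limit.
    """
    size = len(new_data)
    md5 = None
    if use_md5 and (md5_size_limit is None or size <= md5_size_limit):
        md5 = hashlib.md5(new_data).hexdigest()
    changed = size != old_state.get("size")
    if not changed and md5 is not None and old_state.get("md5") is not None:
        # Same size: only the checksum can reveal an in-place edit.
        changed = md5 != old_state["md5"]
    return changed, {"size": size, "md5": md5}
```

This illustrates why MD5 is a better indicator of change than size alone: an edit that leaves the size unchanged is caught only by the checksum comparison.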
Type Description
SQL Query The content should be a valid SQL statement. The result returned by the statement will be compared to the result returned the last time the query was run. The query will be run with the parameters specified in the datasource that is being used: username, password, DB port, and so forth. Take care when filling out these parameters in the datasource, or the query will fail to return a result.
OS Script The content can be a valid command line entry, or the name of a file containing an OS executable script. The script is executed in the environment of the OS user specified in the Database Instance Account field of the datasource definition.
Environment Variable The content should name an environment variable that is defined in the context of the OS user specified in the Database Instance Account field of the datasource definition.
Registry Variable The content is interpreted as the path to a variable in the Windows Registry of the host. The value found on that path is compared to the value found the last time the path was traced.
Registry Variable Pattern The content is a sequence of regular expressions that is used to match the components of paths in the Windows Registry. The pattern is used to develop Registry Variable type monitored items, which are treated as described previously.
The regular expressions are joined by / so that the pattern resembles a registry path. The more familiar \ character cannot be used, since that is a special character in the syntax of Java™ regular expressions. If a / is needed in one of the regular expressions, it must be escaped with a \ (e.g., U\/235 would be used to match U/235).
The pattern .. can be used to match zero or more components within a path. For example, HKLM/Software/../buzz will match HKLM\Software\buzz, or HKLM\Software\one\two\three\buzz. This type of pattern can lead to a computationally expensive registry search, so use it carefully.
Other than these exceptions, the regular expressions follow the syntax of Java regular expressions.
File The content is interpreted as an absolute file path on the host. The characteristics of the file found on the path will be compared to the characteristics found the last time the path was traced. The path may include environment variables, which will be expanded in the context of the OS user specified in the datasource. The path may also begin with a substitution variable, like "$SYBASE_HOME", which will be replaced by the value entered in the Database Instance Directory field of the datasource definition.
File Pattern The content is a sequence of regular expressions that is used to match the components of file paths and to generate File type monitored items. The regular expressions are joined by / so that the pattern resembles an actual file path. As with registry patterns, \ cannot be used for Windows files because of the regular expression syntax. If the pattern begins with ?: on a Windows machine, the pattern match will be started on each of the drives of a multi-drive machine. The .. construction described with registry patterns can also be used, carefully, in a file pattern. Environment variables from the context of the OS user can be used in a file pattern and will be expanded before the expansion of the regular expressions.
GuardAPI commands
create_cas_template_set
create_datasource
create_cas_host_instance
CAS Hosts
A Configuration Auditing System (CAS) host configuration defines one or more CAS instances.
Once you have defined one or more CAS template sets, and have installed CAS on a database server, you are ready to configure CAS on that host. A CAS host configuration
defines one or more CAS instances. Each CAS instance specifies a CAS template set, and defines any parameters needed to connect to the database. For each database
server on which CAS is installed, there is a single CAS host configuration, which typically contains multiple CAS instances - for example, one CAS instance to monitor
operating system items, and additional CAS instances to monitor individual database instances.
The menu lists all database servers where CAS has been installed and that have connected to the Guardium system.
2. Use list filtering to filter by OS Type or DB Type and find the host you would like to work with.
3. Highlight the host you want to modify and click Modify.
4. Select a Template Set from the menu.
Note: A CAS instance cannot be defined if the host is offline or if this is a secondary Guardium system for the host.
5. Click Add Datasource to open the Datasource Finder panel.
Note: If no compatible datasource is available for this template set on this host you may click New to open the Datasource Definition panel and add a datasource.
6. Select the datasource that you want to add to the template set, and click Add to add it to the template set.
Click Harden. All the CAS functions are listed within the Configuration Change Control (CAS Application) header.
Open the CAS Configuration Navigator panel by clicking Harden > Configuration Change Control (CAS Application) > CAS Host Configuration.
A list of defined CAS instances associated with the selected host will be displayed with the following information and editing options:
Disable/Enable Instance Icon Click the Disable Instance icon to disable/enable the CAS instance
Delete Instance Icon Click the Delete Instance icon to delete the CAS instance
Datasource Identifies the datasource used by the instance. Click the Datasource to open the Datasource Definition panel to edit
the datasource definition
Template Set Identifies the CAS template set used by the instance. Click this link to open the Monitored Item Template Definitions
panel to view or modify the template set definition.
Monitored Items A count of items currently monitored by the instance. Click this link to open the Monitored Items Definitions panel
which displays the list of all items currently monitored.
Note: By default, a maximum of 10,000 monitored items are viewable in reports, regardless of the number of
monitored items defined. It is suggested that multiple instances be defined when the number of monitored items
approaches this limit.
Each monitored item refers to raw data: a character object on the host, the result of an SQL query, the output of an OS script, or the contents of a file. The size of that
character object is computed. If the item is a file, then the permissions, owner, group, and last modified time are also checked. If any of these have changed since the last
time the item was checked, the change will be noted.
Table 2. View Monitored Item Lists
Component Description
Select Box Check the Select Box if you'd like to edit a monitored item individually or as a group.
Type One of the following: OS Script, SQL Query, File, Environment Variable, or Registry Variable
OS Script or SQL Script: The actual text or the path to an operating system or SQL script, whose output will be compared
with the output produced the next time it runs
Period The average interval between tests, specified as a number of seconds(s), minutes(m), hours(h), or days(d).
Keep Data If marked, a copy of the actual data is saved with each change. For example, for a file item, a copy of the file is saved. If
marked, but the size of the raw data for the item is greater than the Raw Data Limit configured for this CAS host, no data
will be saved.
Use MD5 Indicates whether or not the comparison is done by calculating a checksum of the raw data using the MD5 algorithm.
Computing the MD5 checksum is time consuming for large character objects. However, it is a better indicator of change
than just the size. The default is not to use MD5. If MD5 is used but the size of the raw data is greater than the MD5 Size
Limit configured for the CAS host, the MD5 calculation and comparison will be skipped.
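The Period notation in the table above (a number followed by s, m, h, or d) can be parsed as in this sketch; the helper name is illustrative, not a Guardium API:

```python
import re

# Seconds per unit: seconds(s), minutes(m), hours(h), days(d).
UNIT_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def period_to_seconds(period):
    """Convert a CAS period string such as '30m' or '2d' to seconds."""
    m = re.fullmatch(r"(\d+)([smhd])", period.strip())
    if not m:
        raise ValueError("invalid period: {0!r}".format(period))
    return int(m.group(1)) * UNIT_SECONDS[m.group(2)]
```

For example, "30m" is 1800 seconds and "2d" is 172800 seconds.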
GuardAPI Commands
delete_cas_host
list_cas_hosts
create_cas_host_instance
delete_cas_host_instance
list_cas_host_instances
update_cas_host_instance
CAS Reporting
This section describes Configuration Auditing System (CAS) reporting.
The admin user has access to all query builders and default reports. The admin role allows access to the default CAS reports, but not to the CAS query builders. The CAS
role allows access to both the default CAS reports and the query builders.
Domain Description
CAS Templates Tracks CAS template definitions. Templates identify items to be monitored for changes. Monitored items can be files, environment or registry variables, OS or SQL script output sets, or the set of logged-on users.
CAS Config Tracks CAS host configurations, where a configuration is the application of one or more template sets to a specific database server host. From configuration instances you can see which items within template sets are enabled or disabled, or exactly which files are selected and monitored (or not) by file name pattern templates.
CAS Host History Tracks CAS host events, including servers or clients going in or out of service.
Entity Description
Attribute Description
DB Type Database Type (Oracle, MS-SQL, DB2®, Sybase, Informix®, etc.) or N/A for an operating system template
IsDefault Indicates whether or not this template is the default for the specified OS type and DB type combination
Editable Indicates whether or not this template can be modified. The default Guardium® templates cannot be modified. In addition, once a template set has been
used in a CAS instance, it cannot be modified. However, a template set can always be cloned and the cloned set can be modified.
Template Entity
Attribute Description
Access Name Depending on the Audit Type, this is the OS or SQL script, environment or registry value, or a file name or a file name pattern.
Use MD5 Indicates whether or not the comparison is done by calculating a checksum using the MD5 algorithm and comparing that value with the value calculated the last time the item was checked. The default is not to use MD5. If MD5 is used but the size of the raw data is greater than the MD5 Size Limit configured for the CAS host, the MD5 calculation and comparison will be skipped. Regardless of whether or not MD5 is used, both the current value of the last modified timestamp for the item and the size of the item are compared with the values saved the last time the item was checked.
Save Data Indicates if the Keep Data checkbox has been marked. If so, previous versions of the item can be compared with the current version.
Entity Description
Host Identifies a CAS host (a database server) and the current status of CAS (online/offline). This entity is also available in the CAS Host History domain
Instance Config For each host, an Instance Config entry describes a CAS instance, which contains database connection parameters (if needed) and identifies the template set used by the instance. It provides the current status of the instance (in use, enabled, or disabled) and the date of the last revision.
Monitored Item Details Identifies an item (a file or an environment variable, for example) monitored by a CAS instance. It contains the item definition and indicates whether or not the item is enabled.
Host Entity
Entity Description
Attribute Description
DB Type Database Type (Oracle, MS-SQL, DB2, Sybase, Informix, etc.) or N/A for an operating system instance
User The user name that CAS uses to log onto the database; or N/A for an operating system instance.
Port The port number CAS uses to connect to the database; this can be empty for an operating system instance
DB Home Dir The home directory for the database; this can be empty for an operating system instance
Attribute Description
Template ID Identifies the template entry on which this monitored item is based.
Audit Type The type of the monitored item: OS Script, SQL Query, File, Environment Variable, or Registry Variable.
Enabled Indicates whether or not the item is enabled (checked for changes).
In Synch Indicates whether or not the item configuration on the monitored system is synchronized with the configuration on the Guardium system.
Use MD5 Indicates whether or not the comparison is done by calculating a checksum using the MD5 algorithm and comparing that value with the value calculated the last time the item was checked. The default is not to use MD5. If MD5 is used but the size of the raw data is greater than the MD5 Size Limit configured for the CAS host, the MD5 calculation and comparison will be skipped. Regardless of whether or not MD5 is used, both the current value of the last modified timestamp for the item and the size of the item are compared with the values saved the last time the item was checked.
Save Data When marked, previous versions of the item can be compared with the current version.
Template Content The template entry that is the basis for this monitored item, set from the Template entity Access Name attribute when the instance was created. Typically this will be the same as the monitored item, but in the case where a file pattern was used in the template, this will be the file pattern.
Drill-Down Reports
Report Description
Report Details Displays the monitored items included in the count of monitored item column
Host Identifies a CAS host (a database server) and the current status of CAS (online/offline). This entity is also available in the CAS Config domain.
Host Event Date and time of an event in the CAS client/server relationship, details a client or server going in and out of service.
Host Entity
Attribute Description
Host Event
Attribute Description
"Failover Off": A server is available (following a disruption), so CAS data is being written to the server
"Failover On": The server is not available, so CAS data is being written to the failover file
CAS Host History Report Lists CAS events for each CAS host
Entity Description
Attribute Description
Sample Time Timestamp (date and time on host) that sample was taken
Saved Data ID Identifies the Saved Data entity for this change
Audit State Label ID Identifies the Host Configuration entity for this change
Timestamp Date and time this change record was created on the server (Guardium appliance server clock)
Owner UNIX only. If the item type is a file, the file owner
Permissions UNIX only. If the item type is a file, the file permissions
0, File does not exist, but this file name is being monitored (it never existed or may have been deleted)
Last Modified Timestamp for the last modification, taken from the file system at the sample time
Last Modified Weekday Day of the week for the last modification
Group UNIX only. If the item type is a file, the group owner
Attribute Description
DB Type Database Type (Oracle, MS-SQL, DB2, Sybase, Informix, etc.) or N/A if the change is to an operating system instance
OS Script or SQL Script: A change triggered by the OS script contained in the monitored item template definition.
File: A specific file. There is no host configuration entity for a file pattern defined in the template set used by the instance. Instead, there is a separate
host configuration entity for each file that matches the pattern.
Monitored Item The name of the changed item, from the Description (if entered), otherwise a default name depending on the Type (a file name, for example).
Attribute Description
Saved Data ID Unique numeric identifier for the saved data item
Timestamp Timestamp for when the saved data entity was recorded in the server database
Change Identifier Identifies the monitored changes entity for this saved data entity
Default Reports
Report Description
CAS Change Details For each monitored item, lists changes by owner. This report lists changes to the properties of the file, such as the owner or access permissions. It does not list changes to the contents of the file.
CAS Saved Data For monitored items with the optional Keep Data box checked, lists the data for each change detected. This report lists changes to the contents of the file, not to its properties.
Drill-Down Reports
Report Description
Record Details Displays the saved data included in the Count of Saved Data column
Drill-Down Reports
Report Description
View Difference Displays the difference between the selected data and prior version
Parent topic: Configuration Auditing System
CAS Status
Open the Configuration Auditing System Status panel by clicking Manage > Change Monitoring > CAS Status.
For each database server where CAS is installed and running, and where this Guardium system is configured as the active Guardium® host, this panel displays the CAS
status, and the status of each CAS instance configured for that database server.
If you have trouble distinguishing the colors of the status indicator lights, hover your mouse over status lights, and a text box will display the current status.
CAS System Status indicator light The light found on this panel indicates whether CAS is actively running on the Guardium system.
Green: CAS is running on this Guardium system.
Red: CAS is not running on this Guardium system.
CAS agent status indicator lights These status lights indicate whether the individual CAS agent is connected to a Guardium system. Identify each CAS agent by referencing the IP address that appears before the row of status indicator lights.
Reset Reset the CAS agent on this monitored system. This stops and restarts the CAS agent on the database server.
Note: This also resets checkpoint files, allowing a fresh start and a re-scan of files from scratch.
Delete (X) Removes this monitored system from CAS and also deletes the data on the Guardium system that was associated with the CAS client.
This button is disabled if the CAS agent is running on this system. You must stop the CAS agent to be able to delete. See Stopping and Starting the
CAS Agent for more information.
Red/Yellow/Green light Each set of lights indicates the status of a CAS instance on the monitored system. If the owning monitored system status is red (indicating that the CAS agent is offline), ignore this set of status lights.
Green: The instance is enabled and online, and its configuration is synchronized with the Guardium system configuration.
Yellow: The instance is enabled, but the instance configuration on the Guardium system does not match the instance configuration on the monitored
system (it has been updated on the Guardium system, but that update has not been applied on the monitored system).
Refresh Click Refresh to re-check the status of all servers in the list. This button does not stop and/or restart CAS on a database server – it only checks the
connection between CAS on the Guardium system and CAS on each database server.
Note: The TAP_IP entry in the guard_tap.ini file is required. If TAP_IP is missing, CAS will not start and an error message is logged in the log file on the CAS client.
cas:2345:respawn:/usr/local/guardium/guard_stap/cas/bin/run_wrapper.sh /usr/local/guardium/guard_stap/cas/bin
Use this procedure to restart the CAS agent only when it has been stopped by editing the /etc/inittab file as described previously.
#cas:2345:respawn:/usr/local/guardium/guard_stap/cas/bin/run_wrapper.sh /usr/local/guardium/guard_stap/cas/bin
3. Uncomment the line shown in step 2 by removing the # in the first character position. Depending on the operating system, the comment character may be
something else.
4. Save the file.
5. Enter the following command to restart the CAS agent: init -q
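The uncomment step could also be scripted. This is a hypothetical Python helper, not a Guardium tool, that removes the leading comment character from the CAS respawn line:

```python
def uncomment_cas_line(inittab_text):
    """Return /etc/inittab content with the CAS respawn line uncommented.

    Looks for a line of the form '#cas:...:respawn:...' and strips the
    leading '#' so init will respawn the CAS agent again.
    """
    lines = []
    for line in inittab_text.splitlines():
        if line.startswith("#cas:") and "respawn" in line:
            line = line[1:]  # drop the leading comment character
        lines.append(line)
    return "\n".join(lines)
```

After writing the modified content back to /etc/inittab, running init -q (as in step 5) tells init to re-read the file.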
1. In the Services panel, highlight the Configuration Auditing System Client item.
2. Select either Start or Stop from the Action menu.
System Configuration
Most of the information on the System Configuration panel is set by using the CLI at installation time.
For instructions on how to configure the system, or to modify any other System Configuration settings, see Modify the System Configuration.
A valid license is required to use various functions within the appliance. If a license is entered after the system starts, the GUI must be restarted.
The system shared secret is used:
To encrypt files that are exported from the appliance by archive/export activities
To establish secure communications between Central Managers and managed units
If you are using Central Management and/or aggregation, you must set the System Shared Secret for all related systems to the same value.
The system shared secret value is null at installation time. Depending on a company's security practices, it may be necessary to change the system shared secret on a
periodic basis. Each appliance maintains a shared secret keys file, containing a historical record of all shared secrets defined on that appliance. The same system thus
has no problem decrypting, at a later date, information that was encrypted on that system.
When information is exported or archived from one system, and imported or restored on another, the latter must have access to the shared secret used by the former. For
these cases, there are CLI commands that can be used to export the system shared secrets from one Guardium system, and import them on another.
Note: The applied changes do not take effect until the Guardium system is restarted. After you apply configuration changes, click Restart to stop and restart the system.
Table 1. System Configuration Panel Reference
Field or Control Description
Unique Global Identifier This value is used for collation and aggregation of data. The default value is a unique value that is derived from the MAC address
of the machine. Do not change this value after the system begins monitoring operations.
System Shared Secret Any value that you enter here is not displayed. Each character you type is masked.
The system shared secret is used for archive/restore operations, and for Central Management and aggregation operations. When
used, its value must be the same for all units that will communicate. This value is null at installation time, and can change over
time.
The shared secret is used in the following cases:
When secure connections are being established between a Central Manager and a managed unit.
When an aggregated unit signs and encrypts data for export to the aggregator.
When any unit signs and encrypts data for archiving.
When an aggregator imports data from an aggregated unit.
When any unit restores archived data.
Depending on your company’s security practices, you might be required to change the system shared secret from time to
time. Because the shared secret can change, each system maintains a shared secret keys file, containing a historical record of all
shared secrets defined on that system. This allows an exported (or archived) file from a system with an older shared secret to be
imported (or restored) by a system on which that same shared secret has been replaced with a newer one.
Caution: Be sure to save the shared secret value in a safe location. If you lose the value, you will not be able to
access archived data.
Retype Secret When you enter or change the system shared secret, retype the new value a second time. Any value that you enter here is not
displayed. Each character you type is displayed as an asterisk.
License Key The license key is inserted in the configuration during installation. Do not modify this field unless you are instructed to do so by
Technical Support. You might need to paste a new product key here if optional components are being added.
If you install a new product key on the central management unit, when you click Apply, you will receive a warning message that
reads: Warning: changing the license on a Central Management Unit requires refreshing all managed
units. After you click OK to close the message window, you must click Apply a second time to install the new product key. You
will know that the new license has been installed when you receive the message: Data successfully saved.
If you install a new product key on a Central Management Unit, you might get a warning that states the license applied to the CM
must be refreshed on the managed units. To refresh, click the refresh icon on the Central Manager for each of the collectors listed.
Number of Datasources If a limited license is applied, the maximum number of datasources permitted per datasource license is displayed.
Metered Scans Left If a limited license is applied, the number of vulnerability assessment scans permitted (datasource metering) per metering
license is displayed. Each time a vulnerability assessment is triggered, the scan counter decreases by one.
License valid until If a limited license is applied, a fixed date when the license will be disabled is displayed.
Configure Network Address, Secondary Management Interface, and Routing settings using the CLI
Note: These settings cannot be configured through the GUI and appear grayed out on the System Configuration user interface.
System Hostname The resolvable host name for the Guardium system. This name must match the DNS host name for the primary System IP
Address.
Domain The name of the DNS domain on which the Guardium system resides.
System IP Address The primary IP address that users and S-TAP® or CAS agents use to connect to the Guardium system. It is assigned to the
network interface labeled ETH0.
SubNet Mask The subnet mask for the primary System IP Address.
Hardware (MAC) Address The MAC address for the primary network interface.
System IP Address (Secondary) Optional: A port can also be configured to team with the primary interface in order to provide high-availability failover IP teaming.
Alternatively, a port on the device can be configured as a secondary management interface with a different IP address, network
mask, and gateway from the primary.
There are two different, and mutually exclusive, kinds of secondary management connections, both controlled by options to the
same CLI command:
Bonding or teaming
Turns eth0 and another specified network interface card (NIC) into a bonded pair with standby failover. To implement this
option, use the CLI command store network interface high-availability on <nic>, where nic is an available NIC.
Secondary interface
Allows the GUI and CLI to be accessible from another NIC in the Guardium system. To implement this option, use the CLI
command store network interface secondary on <nic> <ip> <mask> <gateway> to specify the secondary NIC, its IP
address and network mask, and optionally a gateway.
Both physical and VM systems have the same capabilities, depending on the number of NICs installed on the Guardium system
or VM.
To display the network interfaces installed on the unit, use the show network interface inventory CLI command.
Note: The "Member of" column shows which NICs are in a bonded pair, if a bond exists.
To locate the eth connectors on your appliance, use the show network interface port CLI command, which blinks the orange
light on that port 20 times.
Note: The secondary IP address and its associated port are NOT related to the high availability feature, which provides fail-over
support via IP teaming for the primary connection. For more information about the high-availability option, see the store
network interface commands in the CLI Appendix.
SubNet Mask (Secondary) Optional. The subnet mask for the secondary System IP Address.
Default Route The IP address of the default router for the system.
Secondary Route The IP address of the secondary router.
Primary Resolver, Secondary Resolver, Tertiary Resolver The IP address for the Primary Resolver (DNS) is required. The secondary and tertiary resolvers are optional.
Test Connection Click Test Connection to test the connection to the corresponding DNS (Domain Name System) server. This only tests that there is
access to port 53 (DNS) on the specified host. It does not verify that this is a working DNS server. You will receive a message box
indicating if the DNS server responded.
Restart Click Restart to stop and then restart the system. You will be prompted to confirm the action.
Apply Click Apply to save the changes. The changes will be applied the next time the system restarts.
The inspection engine extracts SQL from network packets; compiles parse trees that identify sentences, requests, commands, objects, and fields; and logs detailed
information about that traffic to an internal database.
You can configure and start or stop multiple inspection engines on the Guardium® appliance.
Inspection engines cannot be defined or run on a Central Manager unit. However, you can start and stop inspection engines on managed units from the Central Manager
control panel.
Inspection engines are also defined on S-TAPs. If S-TAPs report to this Guardium appliance, be sure the appliance does not monitor the same traffic as the S-TAP®. If
that happens, the analysis engine will receive duplicate packets, will be unable to reconstruct messages, and will ignore that traffic.
Selecting IP addresses
Each inspection engine monitors traffic between one or more client and server IP addresses. In an inspection engine definition these are defined using an IP address and
a mask. You can think of an IP address as a single location and a mask as a wild-card mechanism that allows you to define a range of IP addresses.
IP addresses have the format: n.n.n.n, where each n is an eight-bit number (called an octet) in the range 0-255.
The mask is specified in the same format as the IP address: n.n.n.n. A zero in any bit position of the mask serves as a wildcard. Thus, the mask 255.255.255.240
combined with the IP address 192.168.1.3 matches all values from 0-15 in the last octet, since the value 240 in binary is 11110000. But it only matches the values
192.168.1 in the first three octets, since 255 is all 1s in binary (in other words, no wildcards apply for the first three octets).
Specifying binary masks can be a little confusing. However, for the sake of convenience, IP addresses are usually grouped in a hierarchical fashion, with all of the
addresses in one category (desktop computers, for example) grouped together in one of the last two octets. Therefore, in practice, the numbers you see most often in
masks are either 255 (no wildcard) or 0 (all).
Thus a mask 255.255.255.255 (which has no zero bits) identifies only the single address specified by IP address (192.168.1.3 in the example).
Alternatively, the mask 255.255.255.0, combined with the same IP address matches all IP addresses beginning with 192.168.1.
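The matching rule described above (zero mask bits are wildcards, one bits must match) can be checked with a short sketch; the function name is illustrative:

```python
import ipaddress

def matches(candidate: str, address: str, mask: str) -> bool:
    """True if candidate falls in the range defined by address + mask.
    Zero bits in the mask are wildcards; one bits must match."""
    c = int(ipaddress.IPv4Address(candidate))
    a = int(ipaddress.IPv4Address(address))
    m = int(ipaddress.IPv4Address(mask))
    return (c & m) == (a & m)

# 255.255.255.240 wildcards the low 4 bits of the last octet, so
# 192.168.1.3 matches 192.168.1.0 through 192.168.1.15:
print(matches("192.168.1.15", "192.168.1.3", "255.255.255.240"))  # True
print(matches("192.168.1.16", "192.168.1.3", "255.255.255.240"))  # False
# 255.255.255.255 matches only the single address:
print(matches("192.168.1.3", "192.168.1.3", "255.255.255.255"))   # True
```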
Note: The applied changes do not take effect until the inspection engines are restarted. After applying inspection engine configuration changes, click the Restart button to
stop and restart the system (using the new configuration settings).
Note: For HTTP support, there are Inspection Engine configuration limitations. The following Inspection Engine settings are not supported for HTTP: Default Capture
Value; Default Mark Auto Commit; Log Sequencing; Log Exception Sql String; Log Records Affected; Compute Avg. Response Time; Inspect Returned Data; Record Empty
Sessions.
Table 1. Settings that Apply to All Inspection Engines
Control Description
Default Capture Value Default value is false. Used by the Replay function to distinguish between transactions and captured values: if you have a
prepared statement, assigned values are captured and replayed. If you want to replay your captured prepared statements as
prepared statements, mark the check box for the captured data.
Default Mark Auto Commit Default value is true. Due to the various auto-commit models of different databases, this value is used by the Replay function to explicitly
mark the transactions and auto-commit after each command.
Note: If the check box is checked, commits and rollbacks are ignored. Databases currently supported include DB2®,
Informix®, and Oracle.
Log Sequencing If marked, a record is made of the immediately previous SQL statement, as well as the current SQL statement, provided that the
previous construct occurs within a short enough time period.
Log Exception Sql String If marked, when exceptions are logged, the entire SQL statement is logged.
Log Records Affected If marked, the number of records affected by each execution of an SQL statement is recorded (when applicable). The default value
for log records affected is FALSE (0).
Note: When using JDBC, this must be marked to properly log Oracle bind variable traffic.
Note: The records affected option requires the sniffer to process additional response packets and to postpone logging of the
affected data, which increases the buffer size and might adversely affect overall sniffer performance.
The most significant impact comes from very large responses. To prevent the large overhead associated with this operation, Guardium
uses a set of default thresholds that allow the sniffer to skip the operation when they are exceeded.
Note: Usually, Records Affected is set correctly when the user turns on Log Records Affected via Inspection Engines > Log Records
Affected. However, using MS-SQL via a stored procedure sets Records Affected to -1.
Refer to Configuration and Control CLI Commands store max_results_set_size, store max_result_set_packet_size and store
max_tds_response_packets, to set levels of granularity.
Case 1: a positive records-affected value represents the correct size of the result set.
Case 2: a value of -2 means that the number of records exceeded a configurable limit (this can be tuned through CLI
commands).
Case 3: a value of -1 indicates a packet configuration that is not supported by Guardium.
Case 4: a value of -2 is also reported if the result set is sent in streaming mode.
Case 5: a value of -2 can appear as an intermediate result during the record count, to update the user about the current value;
it ends with the positive total number of records.
Note: The Records Affected feature is not supported in DB2 when streaming is used to send the results.
Compute Avg Response Time When marked, for each SQL construct logged, the average response time will be computed.
Inspect Returned Data Mark to inspect data returned by SQL requests  as well as update the ingress and egress counts.
If rules will be used in the security policy, this checkbox must be marked.
Record Empty Sessions When marked, sessions containing no SQL statements will be logged. When cleared, these sessions will be ignored.
Parse XML The Inspection Engine will not normally parse XML traffic. Mark this checkbox to parse XML traffic.
Logging Granularity The number of minutes (1, 2, 5, 10, 15, 30, or 60) in a logging unit. If requested in a report, Guardium summarizes request data at
this granularity. For example, if the logging granularity is 60, a certain request occurred n times in a given hour. If the check box is
not marked, exactly when the command occurred within the hour is not recorded. But, if a rule in a policy is triggered by a request, a
real time alert can indicate the exact time. When you define exception rules for a policy, those rules can also apply to the logging
unit. For example, you might want to ignore 5 login failures per hour, but send an alert on the sixth login failure.
Max. Hits per Returned Data When returned data is being inspected, indicate how many hits (policy rule violations) are to be recorded.
Ignored Ports List A list of ports to be ignored. Add values to this list if you know your database servers are processing non-database protocols, and
you want Guardium to not waste cycles analyzing non-database traffic. For example, if you know the host on which your database
resides also runs an HTTP server on port 80, you can add 80 to the ignored ports list, ensuring that Guardium will not process these
streams. Separate multiple values with commas, and use a hyphen to specify an inclusive range of ports. For example:
101,105,110-223
Buffer Free: n % Display only. n is the percent of free buffer space available for the inspection engine process. This value is updated each time the
window is refreshed. There is a single inspection engine process that drives all inspection engines. This is the buffer used by that
process.
Restart Inspection Engines Click Restart Inspection Engines to stop and restart all inspection engines.
Add Comments Click Comment to add comments to the Inspection Engine Configuration.
Note: Any global changes made (and saved by using Apply) do not take effect until you restart the inspection engines. However,
individual inspection engine attributes, such as exclude, sequence order, etc., take effect immediately.
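The Ignored Ports List format described in the table (comma-separated values, with a hyphen for an inclusive range, such as 101,105,110-223) can be sketched as a parser; the function name is illustrative, not part of the product:

```python
def parse_ignored_ports(spec: str) -> set[int]:
    """Expand a list such as '101,105,110-223' into a set of port numbers."""
    ports: set[int] = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            ports.update(range(int(lo), int(hi) + 1))  # hyphen range is inclusive
        else:
            ports.add(int(part))
    return ports

ignored = parse_ignored_ports("101,105,110-223")
print(80 in ignored, 110 in ignored, 223 in ignored)  # False True True
```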
Click the plus sign to add additional IP address and subnet mask. Click the minus sign to remove the last IP address and subnet mask.
6. In the DB Server IP/Mask boxes, enter a list of database servers (where a database sits) to be monitored. The servers are identified by IP addresses and subnet
masks. There are detailed instructions on how to use these fields in the overview.
Click the plus sign to add additional IP address and subnet mask. Click the minus sign to remove the last IP address and subnet mask.
7. In the Port box, enter a single port or a range of ports over which traffic between the specified clients and database servers will be monitored. Most often, this
should be a single port.
Warning: Do not enter a wide range of ports, just to be certain that you have included the correct one! You may cause the inspection engine to bog down attempting
to analyze traffic on ports that carry no database traffic or traffic that is of no interest for your environment.
8. Mark the Active on startup box if this inspection engine should be started automatically on start-up.
9. Mark the Exclude DB Client IP box if you want the inspection engine to monitor traffic from all clients except for those listed in the DB Client IP/Mask list. Be sure
that you understand the difference between this and the Ignore protocol selection: this option includes all traffic except traffic from the listed IP addresses. To ignore a specific set
of clients without including all other clients, define a separate inspection engine for those clients and use the Ignore protocol.
10. Click Add to save the definition.
11. Optionally reposition the inspection engine in the list of inspection engines. Filtering mechanisms defined in the inspection engines are executed in the order listed. If
necessary, reposition the new inspection engine configuration, or any existing configurations, using the Up and/or Down buttons in the border of the definition.
12. Optionally click Start to start the inspection engine just configured. The Start button will be replaced by a Stop button, once the engine has been started.
Note: If you provide a value for TAP_IDENTIFIER and the value contains spaces, Guardium will automatically replace the spaces with hyphens. For example, the
value "Sample description" will become "Sample-description".
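The space-to-hyphen substitution in the note above amounts to a simple replacement; this one-line sketch (function name illustrative) mirrors the documented behavior:

```python
def normalize_tap_identifier(value: str) -> str:
    """Guardium replaces spaces in TAP_IDENTIFIER values with hyphens."""
    return value.replace(" ", "-")

print(normalize_tap_identifier("Sample description"))  # Sample-description
```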
1. Click Manage > Activity Monitoring > Inspection Engines to open the Inspection Engines.
2. If the inspection engine to be removed has not been stopped, click Stop.
3. To remove an inspection engine, click Delete.
Portal Configuration
You can keep the Guardium® appliance Web server on its default port (8443) or reset the port. We strongly recommend that you use the default port.
1. Click Setup > Tools and Views > Portal to open the Portal.
2. If it is not marked, mark the Active on Startup checkbox (this should never be disabled).
3. Set the HTTPS Port to an integer value between 1025 and 65535.
4. Click Apply to save the value. (The Guardium security portal will not start listening on this port until it is restarted.) Or click Revert to restore the value stored by the
last Apply operation.
5. Click Restart to restart the Guardium Web server if you have made and saved any changes. You can now connect to the unit on the newly assigned port.
Note: To re-connect to the unit after it has restarted with the new port number, you must change the URL used to open the Guardium Login Page on your browser.
The Guardium Portal Configuration is used to define the way user passwords are authenticated when logging into the Guardium appliance. There are three choices.
The Portal configuration screen under Setup > Tools and Views > Portal is used for the following:
The Local connection works when a password for a given user is defined for a login. Logins are defined by a user with the accessmgr role; by default, log in to the accessmgr
account, which has the accessmgr role. This role gives a user the ability to add or upload user accounts and create passwords.
When usernames and passwords are defined using the accessmgr role, each user's defined password is used when logging into the Guardium appliance.
The RADIUS connection allows login authentication through a RADIUS server. The RADIUS/RSA server can be defined using both a password and a SecurID token number.
The SecurID token numeric password is displayed via a hardware token.
The RADIUS/RSA server is defined on a Windows server. The RSA SecurID token is also defined and stored on the RADIUS server and does not have to be
downloaded in order for the RADIUS portal to work.
In addition, a RADIUS server connection can be defined on a UNIX platform (FreeRADIUS). User accounts and passwords are defined on the
RADIUS servers and do not have to be downloaded. To use FreeRADIUS, the client (the Guardium server), usernames, and passwords are defined on the FreeRADIUS UNIX
servers and used when the RADIUS portal connection is defined.
The LDAP connection will work when the password is defined and stored on a given LDAP server. In order for a user to use the LDAP portal and to login, a user account
name must be imported from the LDAP server first. Use the User LDAP Import function available from the accessmgr account to define the LDAP location and then import
the LDAP users. The password does not have to be uploaded.
To increase the security of the Guardium system, from Guardium release v10.1.4, communications protocols TLS 1.0/1.1 can be optionally disabled. Disabling TLS 1.0/1.1
results in only the TLS 1.2 protocol being enabled. Communications may be less secure when using TLS 1.0/1.1.
You must disable TLS 1.0/1.1 from the Central Manager and/or standalone unit using the CLI. Your Guardium appliances, S-TAP agents, CAS and GIM clients must be at
specific versions to enable this feature.
The disablement of TLS 1.0/1.1 automatically checks that managed units and S-TAPs are at the required versions, but it cannot check CAS client versions. Customers using
CAS must make sure their CAS clients are at version 10.1.4 and their database servers have Java 7 enabled. Failing to do so results in the inability to see CAS
connections to database servers.
You must also make sure all managed units have version 10.1.4 installed, and GIM Clients and S-TAPs are at a minimum version of 10.1.2. Failure to meet all requirements
will mean that TLS 1.0/1.1 will not be disabled.
To get information about, and to disable TLS1.0/1.1 on all units in a managed environment, (Central Manager, Aggregator, Managed units), the following commands should
be run on the Central Manager.
Procedure
1. Access the CLI as admin.
2. Enter the following command to display the current protocol configuration:
grdapi get_secured_protocols_info
3. Enter the following command to disable the deprecated protocols:
grdapi disable_deprecated_protocols
Run this command from a Central Manager to propagate the change down to all managed units. The command first runs the version checks described above. If the requirements
for disablement are met, it changes the configuration settings for each service on the Central Manager as well as on all managed units. If the requirements
for disablement are not met, the system indicates that the deprecated protocols are enabled and must be kept enabled until all managed units and/or components
are upgraded.
4. For any managed unit that was offline during the disablement of deprecated protocols, Guardium users with the admin role must manually start a CLI session on the
managed unit and execute local_disable_deprecated_protocols to make the configuration changes.
grdapi local_disable_deprecated_protocols
This GuardAPI command is a fallback that changes the configuration settings back and restarts services on the Central Manager and all managed units to enable the
deprecated protocols. It can be run with the all=true argument from a Central Manager to enable deprecated protocols on the Central
Manager and all managed units. Without the all=true parameter, it enables deprecated protocols only on the appliance running the GuardAPI command.
6. Guardium users with admin role should check that communications between Central Managers and managed units are stable and working properly.
Note: Default .psml structures for user and role can be defined, via the GUI, by the admin user. See Portlet Editor for further information.
Use the generate-role-layout CLI command to generate a new layout for an existing role, based on the layout for the specified user. Once the new role layout has been
defined, any users who are assigned that role before they log in for the first time, will receive the layout for that role.
generate-role-layout
Parameters
If either of the following parameters contains spaces (for example, the user John Doe or the role DBA Managers), replace the space characters with underscore
characters (John_Doe, DBA_Managers).
user - The name of the user whose layout will be used as a model for the role layout. If the user does not exist, you will receive the following error message: No such
user '<user>'.
Configure Authentication
By default, Guardium® user logins are authenticated by Guardium, independent of any other application.
For the Guardium admin user account, login is always authenticated by Guardium alone. For all other Guardium user accounts, authentication can be configured to use
either RADIUS or LDAP. In the latter cases, additional configuration information for connecting with the authentication server is required.
Note: FreeRadius client software is supported.
When an alternative authentication method is used, all Guardium users must still be defined as users on the Guardium appliance. It is only the authentication that is
performed by another application.
While user accounts and roles are managed by the accessmgr user, the authentication method used is managed by the admin user. This is a standard separation-of-duties
best practice.
This attribute identifies a user for LDAP authentication. The Access Manager should be made aware of what attribute is used here, since the Access Manager
performs the LDAP User Import operation. Click on this help link LDAP User Import for further information on Importing LDAP Users.
If a user is using SamAccountName as the RDN value, the user must use either a =search or =[domain name] in the full name.
Global Profile
The Global Profile panel defines defaults that apply to all users.
An alias provides a synonym that substitutes for a stored value of a specific attribute type. It is commonly used to display a meaningful or user-friendly name for a data
value. For example, Financial Server might be defined as an alias for IP address 192.168.2.18.
If you want to see aliases by default, you can change the default aliases setting for all reports, as follows:
Click Setup > Tools and Views > Global Profile to open the Global Profile.
Mark the Use Aliases in Reports unless otherwise specified check box.
Click Apply.
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. In the PDF Footer Text field, enter the text to be printed at the foot of each page.
Note: PDF footer text is not distributed from the Central Manager/ Aggregator to the Managed Units.
3. Click Apply.
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. In the Message Template text box, edit the alert template text.
You can mark the no wrap check box to see where the line breaks appear in the message.
Attention: The Baseline Builder and related functionality is deprecated starting with Guardium V10.1.4.
%%lastError Last error description; available only when a SQL error request triggering an exception rule contains a last error description field
%%netProtocol Network protocol, for K-TAP on Oracle, this may display as either IPC or BEQ
%%receiptTimeMills Numeric representing the time when the alert occurred, in milliseconds since the fixed date of Jan 1 1900
%%sessionStartMills Numeric representing the start of the session where the alert occurred, in milliseconds since the fixed date of Jan 1 1900
%%SQLNoValue SQL string with masked values. The value of SQL will be replaced by ? in the syslog.
%%Subject[ ] If this variable is used in the message template, all that appears between [ ] (for example, file name, email sender, description)
will be the subject line of the email sent to user.
%%violationID Numeric representing the POLICY_VIOLATION_LOG_ID of this alert in GDM_POLICY_VIOLATION_LOG (this is the same as the
Violation Log ID in the Policy Violations / Incident Management report)
Named Template
Message templates are used to generate alerts.
This feature lets you define multiple message templates and use different templates on different rules. In the past, only a single message template was available
for all rules, all receiver types, and so on.
To add, modify and delete named message templates, click Edit. When creating a new named template, the starting value of the string is a copy of whatever is currently in
the Message template of the Global Profile. "R/T Alert" is the only level of severity permitted.
Predefined message templates have been created for the SIEM solutions ArcSight, EnVision, and QRadar. The Guardium system comes preloaded with certified
(agreed upon) templates to integrate with these SIEM solutions.
The Named Template builder can select from two template types - Real-time Alerts and Audit Process Report.
Click Edit Named Templates. Choose a SIEM and then click Modify. Select Real-time Alerts or Audit Process Report.
After editing, the multiple message templates can be selected from within the Policy Builder menu. See Policies.
Adding the QRadar template allows sending real-time alerts or Audit Process Report to QRadar using the LEEF Format (this is QRadar's format).
Click Harden > Vulnerability Assessment > Audit Process Builder to open the Audit Process Builder.
For example, here is the default LEEF template for the Databases Discovered report:
Here are the report columns that are mapped to the template:
Time Probed, Server IP, Server Host Name, DB Type, Port, Port Type
This will now push all records from the audit process to the supplied IP address.
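LEEF records in general consist of a pipe-separated header followed by delimiter-separated key=value attributes. The sketch below shows that general shape only; the exact header values and attribute keys that Guardium's QRadar template emits are defined by the built-in template and are assumptions here, as is the function name:

```python
# Rough sketch of the general LEEF 1.0 record shape: a '|'-separated
# header, then tab-separated key=value attributes. The event ID and
# attribute keys below are illustrative, not the Guardium template's.

def to_leef(event_id: str, attrs: dict) -> str:
    header = f"LEEF:1.0|IBM|Guardium|10.1|{event_id}|"
    body = "\t".join(f"{k}={v}" for k, v in attrs.items())
    return header + body

record = to_leef("DatabaseDiscovered", {
    "serverIP": "192.168.2.18",
    "dbType": "ORACLE",
    "port": 1521,
})
print(record)
```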
Sender Encoding
To encode outgoing messages (email and SNMP traps) in an encoding scheme other than UTF8, use the CLI command, store sender_encoding.
As in real-time alerts, you can choose a template for the message that is sent when the threshold is reached. The template uses a predefined list of variables that
are replaced with the appropriate value for the specific alert.
The default template for threshold alerts is as follows (can be cloned and edited):
Threshold: %%alertThreshold
Category: %%category
Severity: %%severity
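Template expansion of this kind, replacing each %%variable token with its per-alert value, can be sketched as follows (the function name is illustrative; Guardium's own substitution logic is internal):

```python
import re

def expand_template(template: str, values: dict) -> str:
    """Replace each %%name token with its value; unknown tokens are left as-is."""
    def sub(match: re.Match) -> str:
        name = match.group(1)
        return str(values.get(name, match.group(0)))
    return re.sub(r"%%(\w+)", sub, template)

template = "Threshold: %%alertThreshold\nCategory: %%category\nSeverity: %%severity"
print(expand_template(template, {
    "alertThreshold": 5,
    "category": "Security",
    "severity": "HIGH",
}))
```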
CSV Separator
To define a separator to be used in the audit process:
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. Choose Comma, Semicolon, Tab, or define your own in Other box to define the CSV Separator that is used.
3. Click Apply.
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. In the HTML - Left and HTML - Right text boxes, enter the HTML for the text or any other items you want to include on the window.
3. Optionally click the preview button to verify that your HTML is displayed as you expect.
4. Click Apply.
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. In the Login Message text box, enter the text that you want to display when each user logs in.
3. Mark the show login message box to enable the display of the login message (or clear the box to disable the display).
4. Click Apply.
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. Locate the field Concurrent login from different IP.
3. Click Enable or Disable, depending on the current status, to change the setting.
Note: When the feature is disabled, an Unlock button appears next to the Enable button. You can click Unlock to allow a second user to log in with this user account,
from a different IP address. This is provided for support purposes.
Restriction: Data Level Security and the Investigation Dashboard cannot be enabled concurrently.
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. Click the Enable or Disable button for the Data level security filtering option.
Note: The datasec-exempt role is activated when data level security is enabled and the datasec-exempt role has been assigned to a user.
3. Additional choices include:
Note: If data level security at the observed data level is enabled, then audit process escalation is allowed only to users at a higher level in the user hierarchy.
Default Filtering
This is the default setting for the online viewer and for audit process results distribution.
The Global Profile menu also includes a Current Usage button. Click Current Usage to display values for INNODB, MYISAM, and Total.
Note: The custom size limit is checked before data is imported, so a single import can exceed the maximum size limit. After the limit has been exceeded, the next import is prevented.
The ports used for Global Profile export and patch backup can be changed. The default port for SSH/SCP/SFTP is 22. The default port for FTP is 21.
Note: A port value of 0 in the Guardium GUI indicates that the default port is being used and does not need to be changed.
1. Click Setup > Tools and Views > Global Profile to open the Global Profile.
2. In Upload Logo Image, if you want to include a logo image in the portal window, enter an image file name or click Browse to select a file to upload to the Guardium
appliance, and then click Upload.
3. Refresh your browser window. The new logo appears.
Note: The name of the uploaded logo file cannot contain a single quotation mark, double quotation mark, less than sign, or greater than sign.
The corresponding GrdAPI command to update this value is: grdapi update_datasource_connection_timeout timeoutInSecond=80
Alerter Configuration
No e-mail messages, SNMP traps, or alert-related syslog messages are sent until the Alerter is configured and activated.
Other components create and queue messages for the Alerter. The Alerter checks for and sends messages based on the polling interval that has been configured for it.
To configure, enable or disable individual correlation alerts, see Correlation Alerts. For correlation alerts and appliance alerts to be produced, Anomaly Detection must
also be started. For real-time alerts to be produced, a security policy must be installed.
Anomaly Detection
The Anomaly Detection process runs every polling interval to create and save, but not send, correlation alert notifications that are based on each alert's query.
Each notification is generated according to the schedule defined for its alert. See Alerter Configuration for more information about sending notifications.
The Anomaly Detection process uses the results of a correlation alert's query, which looks back over a specified period of time, and the correlation alert's threshold, to
determine whether a condition is satisfied (an excessive number of failed logins, for example). See Correlation Alerts for more information.
In a Central Manager environment, the Anomaly Detection panel for each Guardium system can be used to turn off correlation alerts that are not appropriate for that
particular Guardium system. Under Central Management, all correlation alerts are defined on the Central Manager, regardless of the Guardium system on which they were
created or updated. These correlation alerts are the same for all Guardium systems and, when activated, are activated on all Guardium systems by default.
Note: The Alerter component must be configured and started to send a saved alert message to SYSLOG, email, or an SNMP trap.
Note: Anomaly Detection does not play a role in the production of real-time alerts, which are produced by security policies.
Set the frequency that Anomaly Detection checks for appliance issues
1. Click Setup > Tools and Views > Anomaly Detection to open Anomaly Detection.
2. Enter the Polling Interval in minutes.
3. Click Apply.
To enable or disable an alert on a single Guardium system in a Central Management environment, follow these steps:
1. Log in to the UI of the Guardium system on which you want to disable one or more alerts.
2. Click Setup > Tools and Views > Anomaly Detection to open Anomaly Detection.
3. To disable an alert, select it from the Active Alerts box, and click Disable.
4. To enable an alert, select it from the Locally Disabled Alerts box, and click Enable.
Session Inference
Session Inference checks for open sessions that have not been active for a specified period of time, and marks them as closed.
To stop Session Inference, open the Session Inference panel and click Stop.
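The session-inference idea described above can be sketched as follows: any open session idle longer than a specified period is marked closed. The data structure and the 30-minute threshold are assumptions for illustration, not the product's internals.

```python
# Hedged sketch of session inference: mark sessions closed once they have been
# inactive longer than the configured idle limit.
from datetime import datetime, timedelta

def infer_closed(sessions, now, idle_limit=timedelta(minutes=30)):
    for s in sessions:
        if s["open"] and now - s["last_activity"] > idle_limit:
            s["open"] = False  # closed by inference, not by the client
    return sessions

now = datetime(2016, 1, 15, 12, 0)
sessions = [
    {"id": 1, "open": True, "last_activity": now - timedelta(hours=2)},
    {"id": 2, "open": True, "last_activity": now - timedelta(minutes=5)},
]
infer_closed(sessions, now)
```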
You can also control this feature with the CLI command store stap approval or with the GuardAPI command, grdapi store_stap_approval.
If you use the CLI command store stap approval, the new configuration takes effect after you run the command restart inspection-core.
View approved STAPs in Manage > Reports > Change Monitoring > Approved Tap Clients or Reports > Real-Time Guardium Operational Reports > Approved Tap Clients.
Procedure
1. Access Manage > Activity Monitoring > S-TAP Certification.
2. Select S-TAP Approval Needed.
3. Specify the approved S-TAP client host IP addresses (not host name) in the Approved S-TAP Clients section, and click Add.
4. Repeat for each S-TAP client.
Results
Note: In a Central Management environment, after you add the IP addresses of approved S-TAPs, synchronization might take up to an hour. After
synchronization is complete, the status of the approved S-TAPs appears green in Manage > Activity Monitoring > S-TAP Control
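Because the procedure above requires IP addresses rather than host names, a validation step of the following kind can distinguish the two. This is purely illustrative; the entries and helper name are invented.

```python
# Hedged sketch: accept only IP addresses (not host names) for the
# Approved S-TAP Clients list, as the procedure above requires.
import ipaddress

def is_ip_address(value):
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return False

print(is_ip_address("10.10.9.240"))   # acceptable entry
print(is_ip_address("db01.example"))  # host names are not accepted
```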
Parent topic: Configuring your Guardium system
IP to Hostname Aliasing
The IP-to-Hostname Aliasing function accesses the Domain Name System (DNS) server to define hostname aliases for client and server IP addresses.
There are two separate sets of IP addresses: one for clients, and one for servers. When IP-to-Hostname Aliasing is enabled, alias names will replace IP addresses within
Guardium® where appropriate.
1. Click Protect > Database Intrusion Detection > IP-to-Hostname Aliasing to open IP-to-Hostname Aliasing.
2. Mark the check box for Generate Hostname Aliases for Client and Server IPs (when available) to enable hostname aliasing.
A second check box, Update existing Hostname Aliases if rediscovered, becomes available.
3. Mark the check box to update a previously defined alias that does not match the current DNS hostname (usually indicating that the hostname for that IP address
has changed). You may not want to do this if you have assigned some aliases manually. For example, assume that the DNS hostname for a given IP address is
dbserver204.guardium.com, but that server is commonly known as the QA Sybase Server. If QA Sybase Server has been defined manually as an alias for that IP
address, and the check box for Update existing Hostname Aliases if rediscovered is marked, that alias will be overwritten by the DNS hostname.
4. Click Apply to save the IP-to-Hostname Aliasing configuration.
5. Do one of the following:
Click Run Once Now to generate the aliases immediately.
Click Define Schedule to define a schedule for running this task. See Scheduling for more information.
System Backup
Use the System Backup function to define a backup operation that can be run on demand or on a scheduled basis. Use the Patch Backup function to create the backup
profile settings.
System Backup
All configuration information and data is written to a single encrypted file and sent to the specified destination, using the transfer method configured for backups on this
appliance.
To restore backed-up system information, use the restore system CLI command. The CLI command diag can also be used, provided that the diag role is assigned to the given
user.
SCP - defined by default and accessible via CLI and the GUI
FTP - defined by default and accessible via CLI and the GUI
Centera - can be added to the GUI by logging into CLI and running the following command, store storage centera backup on
TSM - can be added by logging into CLI and running the following command, store storage tsm backup on
AMAZON S3 - is defined by default and accessible via CLI and GUI. It is accessible from CLI as long as it is defined in the GUI.
Softlayer - Softlayer cloud backup
Cleversafe - stores backups in a fashion similar to Amazon S3. The GUI retrieves the list of available buckets directly; the first name listed is the bucket
that was saved to the database. Note: You cannot create or delete buckets from the Guardium UI or CLI.
Note: A system restore must be done to the same patch level as the system backup. For example, if a customer backed up the appliance when it was on Version 7.0, Patch
7 and then wants to restore this backup onto a newly built appliance, Version 7.0, Patches 1 to 7 must first be installed on the appliance, and only then can the file be
restored.
1. Click Manage > Data Management > System Backup to open System Backup.
2. Select a storage method radio button from the list. Depending on how the Guardium system has been configured, one or more of these buttons may not be available.
For a description of how to configure the archive and backup storage methods, see the description of the show storage-system and store storage-system
commands in Configuration and Control CLI Commands.
EMC CENTERA
TSM
SCP
FTP
AMAZON S3
Softlayer
Cleversafe
3. Perform the appropriate procedure depending on the storage method selected:
Configure SCP or FTP Archive or Backup
Configure EMC Centera Archive or Backup
Configure TSM Archive or Backup
Configure AMAZON S3 Archive or Backup
Configure Softlayer object storage cloud backup
Configure Cleversafe - enter a valid Endpoint, Bucket name, Access Key, and Secret Key
4. Mark one or both of the Backup check boxes:
Mark the Configuration check box to back up all definitions.
Mark the Data check box to back up all data. (If you are archiving data on a regular basis, this is unnecessary.)
5. Use the Scheduling section to define a schedule for running this operation on a regular basis.
6. Click Save to verify and save the configuration changes. The system will attempt to verify the configuration by sending a test data file to that location.
If the operation fails, an error message will be displayed and the configuration will not be saved.
If the operation succeeds, the configuration will be saved.
7. Click Run Once Now to run the operation once.
Note: During a SCP/FTP/TSM/Centera/AMAZON S3/Softlayer file transfer, if the backup file transfer fails, the last file of each set of backup/archive files (system backup,
configuration backup, archive, CSV archive, etc.) will be saved in the diag/current folder. Then when the backup file destination is again online, a manual transfer of the
backup files can be made from the diag/current folder to the destination. The set of backup/archive files will only be saved in the diag/current folder if the file transfer is
unsuccessful. If during another backup file transfer there is a file transfer failure, the set of backup/archive files will again be saved in the diag/current folder. However, in
order to avoid saving too many files and running out of disk space, ONLY the latest file of each type will be saved. The earlier backup files will be overwritten.
Note: When performing a system backup and restore from one server that has GIM defined to another server, the user must configure GIM failover to the restore
server. This GIM configuration applies to both a Backup Central Manager and a system backup and restore.
For System Backup or Patch Backup - Set the protocol (SCP or FTP) and specify Host, Directory and Port. The default port for ssh/scp/sftp is 22. The default port for FTP is
21.
The archive process will check the size of the static tables and make sure there is room in /var to create the archive.
An error is logged in the log file and displayed in the GUI if /var usage exceeds 50%. For example:
ERROR: /var backup space is at 60% used. Insufficient disk space for backup.
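The pre-archive space check described above can be sketched as follows. The 50% limit and error text mirror the example; the function names are inventions for illustration.

```python
# Hedged sketch: check /var usage before an archive/backup and produce an
# error message like the one shown above when usage exceeds the limit.
import shutil

def var_usage_percent(path="/var"):
    usage = shutil.disk_usage(path)
    return 100 * usage.used / usage.total

def check_backup_space(used_pct, limit=50):
    if used_pct > limit:
        return (f"ERROR: /var backup space is at {used_pct:.0f}% used. "
                "Insufficient disk space for backup.")
    return None  # enough room to create the archive

print(check_backup_space(60))
```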
Amazon S3 (Amazon Simple Storage Service) provides a simple web service interface that can be used to store and retrieve any amount of data, at any time, from
anywhere on the web. It gives any developer access to the same highly scalable, reliable, secure, inexpensive infrastructure that Amazon uses to run its own websites.
1. An Amazon account.
2. Amazon S3 credentials are required in order to access Amazon S3. These credentials are:
Access Key ID - identifies the user as the party responsible for service requests and must be included in each request. It is not confidential and does not
need to be encrypted (a 20-character alphanumeric sequence).
Secret Access Key - associated with the Access Key ID and used to calculate a digital signature that is included in the request. The Secret Access Key is a secret,
and only the user and AWS should have it (a 40-character sequence). This key is just a long string of characters (not a file) that is used to calculate the
digital signature that needs to be included in the request.
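Based on the credential shapes described above, a sanity check of the following kind is possible. The sample key is AWS's published documentation example; the check itself is an illustration, not an AWS-defined validation rule.

```python
# Hedged sketch: sanity-check the shapes described above - a 20-character
# alphanumeric Access Key ID and a 40-character Secret Access Key.
import re

def looks_like_access_key_id(value):
    return bool(re.fullmatch(r"[A-Z0-9]{20}", value))

def looks_like_secret_access_key(value):
    return len(value) == 40

print(looks_like_access_key_id("AKIAIOSFODNN7EXAMPLE"))  # AWS doc sample key
print(looks_like_secret_access_key("x" * 40))
```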
There are two archive operations available on the Administration Console, in the Data Management section of the menu:
Data Archive backs up the data that has been captured by the appliance, for a given time period.
Results Archive backs up audit task results (reports, assessment tests, entity audit trails, privacy sets, and classification processes) as well as the view and sign-off
trails and the accompanying comments from workflow processes.
When Guardium data is archived, there is a separate file for each day of data.
<time>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc
The archive function creates signed, encrypted files that cannot be tampered with. The names of the generated archive files should not be changed. The archive operation
depends on the file names created during the archiving process.
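Since the archive operation depends on these generated names, the naming pattern above can be parsed as follows. The sample file name is invented for illustration; the real field formats may differ.

```python
# Hedged sketch: parse the daily archive file name pattern described above,
# <time>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc
import re

PATTERN = re.compile(
    r"^(?P<time>[^-]+)-(?P<host>.+)-w(?P<run>[^-]+)-d(?P<date>.+)\.dbdump\.enc$"
)

# Invented example name following the documented pattern:
name = "040142-g1.guardium.com-w20160115.040042-d2016-01-14.dbdump.enc"
m = PATTERN.match(name)
print(m.group("host"), m.group("date"))
```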
System backups are used to back up and store all the data and configuration values necessary to restore a server in case of hardware corruption.
All configuration information and data is written to a single encrypted file and sent to the specified destination, using the transfer method configured for backups on this
appliance.
<data_date>-<time>-<hostname.domain>-SQLGUARD_CONFIG-9.0.tgz
<data_date>-<time>-<hostname.domain>-SQLGUARD_DATA-9.0.tgz
The Aggregation/Archive Log report can be used to verify that the operation completes successfully. There should be multiple activities listed for each Archive operation,
and the status of each activity should be Succeeded.
Regardless of the destination for the archived data, the Guardium catalog tracks where every archive file is sent, so that it can be retrieved and restored on the system
with minimal effort, at any point in the future.
A separate catalog is maintained on each appliance, and a new record is added to the catalog whenever the appliance archives data or results.
Catalog entries can be transferred between appliances by one of the following methods:
Aggregation - Catalog tables are aggregated, which means that the aggregator will have the merged catalog of all of its collectors
Export/Import Catalog - These functions can be used to transfer catalog entries between collectors, or to backup a catalog for later restoration, etc.
Data Restore - Each data restore operation contains the data of the archived day, including the catalog of that day. So, when restoring data, the catalog is also being
updated.
When catalog entries are imported from another system, those entries will point to files that have been encrypted by that system. Before restoring or importing any such
file, the system shared secret of the system that encrypted the file must be available on the importing system.
Amazon S3 archive and backup option is enabled by default in the Guardium GUI. To enable Amazon S3 via Guardium CLI, run the following CLI commands:
Amazon S3 requires that the clock time of the Guardium system be correct (within 15 minutes); otherwise, requests result in an Amazon error. If the difference
between the request time and the current time is too large, the request is not accepted.
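The 15-minute tolerance above can be modeled as a simple skew check. This is a sketch of the rule as described, not Amazon's implementation.

```python
# Hedged sketch of the clock-skew rule: S3 rejects requests whose timestamp
# differs from the current time by more than the 15-minute tolerance.
from datetime import datetime, timedelta

MAX_SKEW = timedelta(minutes=15)

def request_accepted(request_time, server_time):
    return abs(server_time - request_time) <= MAX_SKEW

now = datetime(2016, 1, 15, 12, 0, 0)
print(request_accepted(now - timedelta(minutes=5), now))   # within tolerance
print(request_accepted(now - timedelta(minutes=20), now))  # rejected
```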
If the Guardium system time is not correct, set the correct time using the following CLI commands:
User Interface
Use the System Backup screen (Manage > Data Management > System Backup) to configure the backup. After enabling Amazon S3 through the CLI commands, Amazon
S3 will appear in the list of protocols.
S3 Bucket Name (Every object stored in Amazon S3 is contained in a bucket. Buckets partition the namespace of objects stored in Amazon S3. Within a bucket, you
can use any names for your objects, but bucket names must be unique across all of Amazon S3.)
Access Key ID
1. Log on to the AWS Management Console (http://aws.amazon.com/console/) using your email address and password.
2. Click S3.
Authentication Endpoints - Authentication requests should be sent to the endpoint associated with the location of your Object Storage account.
https://dal05.objectstorage.softlayer.net/auth/v1.0
Container - The basic storage unit for all the data within Object Storage is a container. It stores data/files and must be associated with an Object Storage account.
1. Click Manage > Data Management > System Backup, Manage > Data Management > Data Archive, or Manage > Data Management > Results Archive.
Access CLI.
1. DATA
2. CONFIGURATION
1. SCP
2. CONFIGURED DESTINATION
Make sure destination is configured in the GUI under the <System Backup> option
Access CLI.
1. SCP
2. FTP
3. TSM
4. CENTERA
5. AMAZONS3
7. SOFTLAYER
Enter X-Auth-Key:
Authenticate success!
Select your recovery type; in most cases, use the normal option:
1. normal
2. upgrade
Prerequisite
The Guardium server must be set to the correct local time. Use an NTP server to correct the time if necessary.
Bucket name
Procedure
1. Click Setup > Patch Backup to open the Patch Backup panel.
2. Choose the method of file transfer.
3. Enter the name of the host and the directory where the information is to be stored.
4. Enter a user name and password to own the file on the destination host.
5. Click Apply when you are finished.
Follow this procedure to configure permissions for all socket connections that are used by custom classes.
1. Click Setup > Evaluations > Communication Permissions to open the Communication Permissions.
2. Click Add permission To Socket Connection to expand that pane.
3. Enter the IP address or Host name for the host.
4. Enter a Port number for the socket connection.
5. Enter a description.
6. Click Save.
There are two predefined users on a Guardium® appliance: accessmgr and admin.
accessmgr is the user name assigned to the access manager. By default, the access manager is the only user authorized to manage user accounts and security
roles.
Note:
The admin and accessmgr roles cannot be assigned to the same user. A user may hold both of these roles as a result of a legacy situation or an upgrade.
However, current use does not allow the two roles to be assigned to the same user.
In the past, when a unit was upgraded, the accessmgr role was assigned to the admin user, and the accessmgr user was disabled. In this upgrade situation, it was
necessary to first log in as admin and enable the accessmgr user, then log in as accessmgr (with initial password "accessmgr", the system prompted the user to change it),
and remove the accessmgr role from the admin user.
User - Role -- a report that shows, by user, the number of roles that user belongs to.
All Roles - User -- a report that shows, by role, the number of users that belong to that role.
Note: The admin and access manager roles are pre-existing; other roles are created by the access manager.
The following reports are available on a Central Manager or a standalone unit. If you try to use them on a managed unit, an error message appears. On Central Manager
systems, Servers Not Associated shows servers from ALL managed units.
Datasources Associated
This report identifies Datasource Name, Host, Service Name, Login Name and Association Type. This information comes from the choices made in the User-Database
Associations activity. See the Data User Security - Hierarchy and Associations help topic.
Servers Associated
This report identifies Server IP, Service Name, Login Name and Association Type. This information comes from the choices made in the User-Database Associations
activity. See the Data User Security - Hierarchy and Associations help topic.
Understanding Roles
Assign a role to a Guardium user to grant them specific access privileges. Some examples of roles are: CLI, admin, accessmgr, CAS, and user.
Managing roles and permissions
Roles and permissions provide different levels of access to users based on their job duties.
How to create a role with minimal access
This topic explains how to create a new role with minimal access permissions, for example an auditor role that can only access the Audit Process To-Do List and
view specific reports.
Manage Users
Use the access manager, assigned the user name accessmgr, to add user accounts, enable or disable user accounts, import members from LDAP, or edit user
permissions. Open the User Browser and browse the user accounts by clicking Access > Access Management > User Browser
How to create a user with the proper entitlements to login to CLI
Use this task to create a user who has the proper roles and entitlements to use CLI to run GuardAPI commands.
Importing Users from LDAP
You can import Guardium user definitions from an LDAP server by configuring an import operation to obtain the appropriate set of users.
Understanding Roles
Assign a role to a Guardium user to grant them specific access privileges. Some examples of roles are: CLI, admin, accessmgr, CAS, and user.
The access manager defines roles and assigns them to users and applications. When a role is assigned to an application or the definition of an item (a specific query, for
example), only those Guardium users who are also assigned that role can access that component.
When user definitions are imported from an LDAP server, the groups to which they belong can optionally be defined as roles. For more information, see Importing Users
from LDAP.
Note: When assigning roles to a user, the admin and access manager role cannot be assigned to the same user.
Note: Custom-created roles cannot be combined with default-provided roles (examples are user, admin, accessmgr, cli, inv, datasec-exempt, review-only).
Note: Admin role and object owner have access to all objects by default.
Note: If you take a base role, customize it (with additional navigation items), and then copy the customized role, the customization is lost if the
customized or copied role is reset to default.
Default Roles
The Guardium system is pre-configured to support users who fall into four broadly defined default roles: admin, user, access manager, and investigations. The Guardium
access manager can create new roles as well.
Note: If data level security at the observed data level is enabled (see Global Profile settings), then audit process escalation is allowed only to users at a higher level
in the Data Hierarchy (see Access Manager). The datasec-exempt user can escalate, without restrictions, to anyone.
Table 1. Default Roles
Default Role Description
user Provides the default layout and access for all common users. This role can not be deleted.
admin Provides the default layout and access for Guardium administrators. Do not confuse the admin role with the admin user, which is a special
user account having the admin role, but also having additional powers that are reserved for the admin user account only. This role can not
be deleted.
accessmgr Provides the default layout and access for the access manager. This role can not be deleted.
cli Provides access to CLI. The admin user has default access to CLI; everyone else must be given permission when users are created by the
access manager and roles are specified. The access manager can define as many users as needed and give them the CLI role. These users
have access to the CLI, and all activities of their CLI sessions are associated with this user.
To run GrdAPI or CLI commands without admin rights, click the role CLI for Admin Console in the User Role Permissions selection.
See the topic, diag CLI Command, on how to manage the diag role.
inv Provides the default layout and access for investigation users. An investigation user must have the restore-to database name of INV_1,
INV_2, or INV_3 as the Last Name in their user definition. This is not enforced by the GUI, but is required for the application to function
properly. When this role is assigned, the user role must also be assigned. This role can not be deleted.
Note: The Ad-Hoc Process for run once now button is available on all report screens for all users except investigation (INV) user.
datasec-exempt Data Security - Exempt. This role is activated when Data level security is enabled (see Global Profile in Administration Console) and the
datasec-exempt role has been assigned. If the user has this role, a Show all check box appears in all reports. If checked, all sniffed data
records are shown (no filter is applied). This role cannot be deleted in the Role Browser.
review-only A user that is specified by this role can view only results (Audit, Assessment, Classifier), Audit Results, and the To Do List. This role cannot
be deleted in the Role Browser.
Users with this role are allowed to enter comments in the audit process viewer (not workflow or comments/data per row, but comments at the
process/result level).
Users with this role cannot perform any changes or actions on any workflow automation result (escalate, reassign, etc.).
Sample Roles
In addition to the default roles, a set of sample roles is also defined.
dba Users who have a database-centric view of security, allowing access to database-related reports and tracking of database objects
infosec Users who have an information security focus, including tracking access to the database, and handling network requests, audits, and
forensics
netadm Users who have a network-centric view, including IP sources for database requests
appdev Application developers, architects, and QA personnel who have an application-centric focus and want to track and report on SQL streams
generated by an application
Note: If you try to copy this role, an embedded message appears explaining that not all aspects of this role can be copied. The message
is: "Create a new role using the layout and permission from the "audit" role. Special privileges and actions associated with the "audit" role
will not be copied."
audit-delete This role is used to track or log when an audit process result has been deleted. Users with the audit-delete role can delete reports. Admin
users can also delete reports. Tracking is done through the User Activity Audit Trail report.
admin-console-only A user that is specified by this role can only access the admin console tab.
vulnerability-assess A user that is specified by this role can view only vulnerability results.
diag A user that is specified by this role can access and run the diag commands in CLI.
workload-replay-admin A user that is specified by this role can define and modify the workload-replay functions.
workload-replay-user A user that is specified by this role can run the workload-replay functions.
fam A user that is specified by this role can define and modify the File Activity Monitor functions.
Basel II Part 2 Sections 4 and 5 require that banking institutions must define a Securitization Framework around financial information and
estimate the associated operational risk.
DataPrivacy Accelerator - DataPrivacy. This role can not be deleted.
The Data Privacy Accelerator delivers a portfolio of pre-configured policies, real-time alerts, and audit reports that are specifically tailored
to the challenges of identity theft and based on industry best practices. With the Data Privacy Accelerator, security managers, privacy
officers, and database administrators begin by defining combinations of data elements – called "privacy sets" – whose access may
indicate hacking or inappropriate activities by internal users.
GDPR Accelerator - GDPR. This role can not be deleted.
The Guardium GDPR accelerator provides predefined reports based on GDPR groups and policies. To begin working with the GDPR
accelerator, assign the GDPR role to a Guardium user, then navigate to Accelerators > GDPR with that user account.
pci Accelerator - PCI. This role can not be deleted.
The PCI DSS is a set of technical and operational requirements designed to protect cardholder data and applies to all organizations that
store, process, use, or transmit cardholder data. Failure to comply can mean loss of privileges, stiff fines, and, in the case of a data breach,
severe loss of consumer confidence in your brand or services. The IBM Guardium accelerator helps guide you through the process of
complying with parts of the standard using predefined policies, reports, group definitions, and more.
sox Accelerator - SOX. This role can not be deleted.
SOX Section 404 requires that companies must establish and maintain an adequate internal control structure and procedures for financial
reporting.
Create a Role
1. Login as accessmgr, and open the User Role Browser by clicking Access > Access Management > Role Browser.
2. Click Add Role to open the Role Form panel.
3. Enter a unique name for Role Name and click Add Role.
Remove a Role
1. Open the User Role Browser by clicking Access > Access Management > Role Browser.
2. Click Delete for any role (some roles cannot be removed, and do not have the Delete option). This opens the Role Form for the role.
3. Click Confirm Deletion. A message displays informing you that all references to the role are removed, and you will be asked to confirm the action.
4. Click OK to confirm the deletion, or Cancel to abort the operation.
Examples of roles include user, admin, and audit. Using roles allows you to easily define permissions for an entire group of users. Only access managers can create new
roles and assign users to that role. As part of role creation, access managers can also customize the navigation menu and permissions for that role.
Limit access from the application by deselecting the All Roles check box on the Role Permissions > Edit Application Role Permissions screen. Next, select the
individual roles that should have access to the application.
The process is the same if you find that the All Roles check box is already deselected: simply select or deselect the individual roles to grant or revoke access to the
application.
When All Roles is selected for a particular application, every currently-defined role will have access to that application.
Limit access from the role by navigating to the Role Browser > Manage Permissions screen and move individual applications from the Accessible applications list to
the Inaccessible applications list.
When managing permissions or customizing the navigation menu for a new role, the defaults shown in the Accessible applications list reflect any application with
the All Roles check box selected on the Role Permissions > Edit Application Role Permissions screen.
When working with roles and permissions, removing permissions for an application also changes the default permissions for new roles. That is, removing permissions for
an application means that any subsequent roles you create will also lack permissions for that application. If you want a new role to have permissions for an application
that no longer appears in the Accessible applications list by default, you will need to move the desired application from the Inaccessible applications list to the Accessible
applications list for the new role.
It is also possible to restrict access to specific tools by hiding menu items using the Role Browser > Customize Navigation Menu tool. This approach limits access without
altering the default application permissions, but it may be less secure than a permissions-based approach.
Best Practices:
After editing permissions for a role, review the navigation layout for that role as shown on the Role Browser > Customize Navigation Menu screen. Add or remove
items from the Navigation Menu list as needed to create a layout appropriate for the role.
Copy and edit predefined roles to establish the desired permissions and navigation menu. This approach allows you to revert to the original role if needed.
Procedure
1. Create a new role.
a. Log in as accessmgr, navigate to Access > Access Management, and select the Role Browser.
b. Click the Add Role button, give the role a name, and click the Add Role button to create the new role.
2. Manage permissions so the new role can only access the Audit Process To-Do List and the Report Builder (which is required for viewing reports).
a. From the Role Browser, click the Manage Permissions link for the new role.
b. Select the checkbox in the header of the Accessible Items list and use the arrow to move all items to the Inaccessible Items list. When creating a highly
restricted role, it is easier to begin by removing permissions.
c. In the Inaccessible items list, select the Audit Process To-Do List and the Report Builder, and use the arrow to move them back to the Accessible items list.
The new role now has access to only these two specific applications.
d. Click the OK button to commit your changes.
3. Customize the menus and navigation by defining which reports and applications are available to the new role.
a. From the Role Browser, click the Customize Navigation Menu link for the new role.
b. In the Navigation Menu list, select the Reports group so it is highlighted. The selected group acts as the destination for menu items added in subsequent
steps.
c. In the Available Tools and Reports list, expand the Reports section or use the Filter to identify specific reports, select the check box next to each item that
should be available to the new role, and use the arrow to add the items to the Navigation Menu list. Items moved into the Navigation Menu list will become
visible to users assigned to this role.
d. In the Navigation Menu list, remove access to the Report Builder by clicking the icons next to the Reports > Report Configuration Tools and Investigate
groups. This further simplifies the menu structure for this role and removes access to the Report Builder tool without also removing application permissions
that are required to access reports.
e. Click the OK button to commit your changes. You have now created a new role with very minimal privileges that can be assigned to users.
4. Optionally specify a custom home page for the new role.
a. From the Role Browser, click the Customize Navigation Menu link for the new role.
b. In the Navigation Menu list, specify a new default home page by selecting Comply > Tools and Views > Audit Process To-Do List and clicking the icon in
the toolbar. Users assigned to this role will now see the Audit Process To-Do List as the default screen after logging in.
c. Click the OK button to commit your changes.
5. Create a new user and add that user to the new role.
a. Navigate to Access > Access Management and select User Browser.
b. Click Add User, provide the required information, and click Add User to create the new user. You will now see the user you created listed in the User Browser.
c. From the User Browser, click the Roles link for the new user to view a list of available roles.
d. Select the Assign check box next to the custom role you created earlier. This will assign the user to the new role.
e. Deselect the Assign check box next to the user role. Deselecting the user role prevents the new user from inheriting the default user access and permissions.
f. Click Save to commit your changes.
Manage Users
Use the access manager, assigned the user name accessmgr, to add user accounts, enable or disable user accounts, import members from LDAP, or edit user permissions.
Open the User Browser and browse the user accounts by clicking Access > Access Management > User Browser
Defining and modifying users involves deciding both who will be using the Guardium® system and to what roles they will be assigned. A group of users can all have the
same role and the same access privileges if you so choose. For more information on roles, see Understanding Roles.
Note: A default layout can be defined for a role, so that any new user assigned that role will have that layout. See Generate New Layout in the CLI Reference.
Regardless of how users are defined to the Guardium system, the Guardium administrator can configure the system to authenticate users via Guardium, LDAP, or Radius.
When getting started with your Guardium system, an important early task is to identify which groups of users will use the system, and what their function will be. For
example, an information security group might use Guardium for alerting and troubleshooting purposes while a database administrator group might use Guardium for
reporting and monitoring. When deciding who will access the Guardium system, keep in mind that sensitive company data can be picked up by the system. Therefore, be
very aware of who will be able to access that data.
Once you decide which groups of users will use the Guardium system (and for what purpose), collect the following information for each user:
By default, password validation is enabled. This means that a minimum of eight characters is required, and the password must contain at least one character from
each of the following categories:
Uppercase letters: A-Z
Lowercase letters: a-z
Digits: 0-9
Special characters: @#$%^&.;!-+=_
Note: If password validation is disabled, any characters are allowed.
By default, password expiration is enabled. Passwords can be configured to expire after a designated number of days.
By default, account lockout following a specified number of failed login attempts is enabled. Lockout can be configured to occur after a fixed number of attempts in
a given time, or after a total number of attempts for the life of the account.
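The default password rules described above can be sketched as a simple check. This is an illustration of the stated policy only, not Guardium's own validation code:

```python
# Illustrative check of the default password rules described above
# (minimum of eight characters, at least one character from each category).
# This is a sketch of the stated policy, not Guardium's implementation.
SPECIAL_CHARACTERS = "@#$%^&.;!-+=_"

def is_valid_password(pw):
    return (
        len(pw) >= 8
        and any(c.isupper() for c in pw)
        and any(c.islower() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in SPECIAL_CHARACTERS for c in pw)
    )

print(is_valid_password("Guard1um!"))  # True
print(is_valid_password("guardium1"))  # False: no uppercase or special character
```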
Locked Accounts
1. Open the User Browser by clicking Access > Access Management to view the list of users.
2. Click Edit for any user, clear the Disabled check box, and click Update User to save changes.
Note: If the admin user account becomes locked, use the unlock admin CLI command to unlock it (see Configuration and Control CLI Commands in the CLI
Reference).
6. (Caution) The Disabled check box is checked by default. We suggest that you defer clearing the check box and enabling the account until after the correct set of
roles has been assigned for the user.
7. Click Add User to save the new user account definition and close the panel.
This completes the user definition. We suggest that you add the appropriate roles for the user before informing them of their password for the initial login. See
Understanding Roles for more information.
Note: Changing a user's password will require the user to change it following their next login.
Note: Alerts that were sent to a deleted user will now be sent to the admin; however, this will not take effect until the access policy is reinstalled.
This can create a measure of data-level security, by permitting the parent of a hierarchy to look at specified servers and databases, but not the children of the
hierarchy. Depending on the configuration, inheritance can also take place in that the parent inherits the data-level security of the child.
Note: Many-to-many relationships are permitted where a user may have more than one parent and a parent may have more than one user.
Unlink User from parent - will sever the descendent from the parent
Remove all descendents - will sever all descendents from the parent
4. Click Refresh Cached Hierarchy to apply the recent changes to the user hierarchy map.
5. Click Full Update Active User-DB Map to fully apply all recent changes to the active User-DB association map.
Note: Best practices dictate a Full Update Active User-DB Map after changing the User Hierarchy.
When you make a change to a hierarchy or to a database association (via UI or GuardAPI), this change DOES NOT take effect automatically. The Periodic Update will
NOT pick up this change, unless it is the FIRST time the Periodic Update has run. Otherwise, the user MUST click Full Update or run the Full Update GuardAPI
command for their changes to take effect.
A periodic update of the user hierarchy is run every 10 minutes automatically. This cannot be run manually. This is an incremental update, meaning that it is only
looking at new server IPs or Service Names that have been sniffed since the last time the periodic update was run. It compares the existing hierarchy and
associations against the new IPs/Service Names and determines what users should have access to these IPs/Service Names.
A full update of the user hierarchy is NOT run automatically. It is only run when the user executes it, either via the UI or GuardAPI function. This compares ALL
IPs/Service Names to the existing hierarchy and associations to determine who has access to what.
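The difference between the two update modes can be sketched in Python. The data structures and helper function here are hypothetical simplifications, not Guardium internals:

```python
# Conceptual sketch (not Guardium code) contrasting the periodic
# (incremental) update with the full update described above.

def users_with_access(hierarchy, server):
    # Hypothetical helper: collect every user whose associations
    # cover this server IP or service name.
    return {user for user, servers in hierarchy.items() if server in servers}

def periodic_update(user_db_map, hierarchy, newly_sniffed):
    """Incremental: looks only at IPs/service names sniffed since the last run."""
    for server in newly_sniffed:
        user_db_map[server] = users_with_access(hierarchy, server)
    return user_db_map

def full_update(hierarchy, all_known_servers):
    """Full: recomputes access for ALL known IPs/service names."""
    return {server: users_with_access(hierarchy, server)
            for server in all_known_servers}
```

A hierarchy change is picked up by `full_update` but not by `periodic_update`, which only visits newly sniffed servers; this mirrors why the Full Update button must be clicked after hierarchy changes.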
1. Open the User-DB Association panel by clicking Data Security > User-DB Association.
2. Select the check boxes of the Server & Service Name Suggestion to find databases and service names to associate to users. Choices include:
Observed Accesses - Observed traffic from Guardium internal database table GDM_Access
Datasource Definitions - Existing datasource definition information such as name, database type, authentication information, and location of datasource.
S-TAP® Definitions - Existing S-TAP definition information such as the IP address of the database server and the IP address of the Guardium host that will
receive data from S-TAP.
Auto-Discovered Hosts - Hosts discovered by the Guardium Auto-discovery process that were not previously known. Guardium's Auto-discovery application
can be configured to probe the network, searching for and reporting on all databases discovered.
Guardium Install Manager (GIM)-Discovered Systems - Hosts discovered by the GIM that were not previously known.
3. Click Go to find and display available servers, service names, and currently associated users.
Note: When traversing the node tree, numerical indicators are displayed next to each server and service name to provide a count of direct and descendant users
that have been associated. The indicators take the format of [nn] for direct association and (mm) for descendant association (a server or service name within the
A full update of the user hierarchy is NOT run automatically. It is only run when the user executes it, either via the Full Update Active User-DB Map button or the
GuardAPI function. This compares ALL IPs/Service Names to the existing hierarchy and associations to determine who has access to what.
A periodic update of the user hierarchy is run every 10 minutes automatically (cannot be run manually). This update is only looking at new server IPs or Service
Names that have been sniffed since the last time the periodic update was run. It compares the existing hierarchy and associations against the new IPs/Service
Names and determines what users should have access to these IPs/Service Names.
When you make a change to a database association (via UI or GuardAPI), this change DOES NOT take effect automatically. The periodic update will NOT pick up this
change, unless it is the FIRST time the periodic update has run. Otherwise, the user MUST click the Full Update Active User-DB Map button, or run the full update
GuardAPI command for the changes to take effect.
Procedure
1. Login as the accessmgr and open the User Browser by clicking Access > Access Management > User Browser.
2. Click Add User from the User Browser panel
3. Fill in the User Form, clear the Disabled check box to enable the user upon creation, and click Add User.
4. From the User Browser panel, click Roles for any user to bring up the User Role Form panel.
5. Check the CLI check box, and click Save to grant the user CLI access.
Now when the user logs in to one of the CLI accounts (guardcli1,...,guardcli5) using the newly created user's credentials, they are asked for a password and granted
access to the CLI.
6. Grant any additional roles, if desired, to allow the user to execute GuardAPI functions.
For example, if the user johnsmith were to issue the following GuardAPI command, he would find out he does not have any API commands to execute:
But if we were to grant johnsmith the accessmgr role (previously in step 5) the same GuardAPI command would result in the following API commands being
available:
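The command and output referenced above are not shown. As an illustrative sketch only: in the Guardium CLI, a user typically associates the session with a GUI user via set guiuser and then enters grdapi with no arguments to list the API functions available to that user. The host name, prompts, and output below are assumptions and vary by release:

```
guard.example.com> set guiuser johnsmith
ok
guard.example.com> grdapi
ID=0
...
ok
```

Before the accessmgr role is granted, the list of available functions is empty; after the role is granted, the same command lists the functions that role permits.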
You can run the import operation on demand, or schedule it to run on a periodic basis. You can elect to have only new users imported, or you can have existing user
definitions replaced. In either case, LDAP groups can be imported as Guardium roles.
The Guardium admin user definition will not be changed in any way.
Existing users will not be deleted (in other words, the entire set of users is not replaced by the set imported from LDAP).
Guardium passwords will not be changed.
New users being added to Guardium:
Will be marked inactive by default
Will have blank passwords
Will be assigned the user role
Note:
When adding a user via Access Management (either manually from Add User or through LDAP user import), if there is no first name and/or last name, the login name is used.
This LDAP configuration menu screen has tool tips for certain menu choices. Move the cursor over a menu choice (such as Object Class for user), and a short description
will appear.
Guardium CLI users cannot authenticate in the LDAP environment, as there is no privilege separation for the CLI users.
Note: In order to configure LDAP user import, the accessmgr user must have the privilege to run Group Builder. In certain situations, when changes are made to role
privileges, accessmgr's privilege to Group Builder can be taken away. This results in an inability to save or successfully run the LDAP user import. Go to the access
management portal and select Role Permissions from the choices. Choose the Group Builder application and make sure that there is a check mark in the All Roles box or in
the accessmgr box.
1. Open the LDAP User Import panel by clicking Access > Access Management > LDAP User Import.
See Example of Tivoli® LDAP Configuration at the end of this help topic for reference in filling out the required information.
2. For LDAP Host Name, enter the IP address or host name for the LDAP server to be accessed.
3. For Port, enter the port number for connecting to the LDAP server.
4. Select the LDAP server type from the Server Type menu.
5. Check the Use SSL Connection check box if Guardium is to connect to your LDAP server using an SSL (secure socket layer) connection.
6. For Base DN, specify the node in the tree at which to begin the search. For example, a company tree might begin like: DC=encore,DC=corp,DC=root
7. For Attribute to Import, enter the attribute that will be used to import users (for example: cn). Each attribute has a name and belongs to an objectClass.
8. Check the Clear existing group members before importing check box if you want to delete all existing group members before importing.
9. For Log In As and Password, enter the user account information that will connect to the Guardium server.
10. For Search Filter Scope, select One-Level to apply the search to the base level only, or select Sub-Tree to apply the search to levels beneath the base level.
11. For Limit, enter the maximum number of items to be returned. We recommend that you use this field to test new queries or modifications to existing queries, so that
you do not inadvertently load an excessive number of members.
12. Optional: For Search Filter, define a base DN, scope, and search filter. Typically, imports are based on membership in an LDAP group, so you would use the
memberOf keyword. For example: memberOf=CN=syyTestGroup,DC=encore,DC=corp,DC=root
13. Click Apply to save the configuration settings.
Note: The Status indicator in the Configuration - General section will change to LDAP import currently set up for this group as follows and the Modify Schedule and
Run Once Now buttons will be enabled. You can now import from your LDAP server.
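The search the import performs can be sketched as the parameters a generic LDAP client would use. The function and field mapping below are illustrative, not Guardium internals:

```python
# Illustrative mapping of the LDAP User Import panel fields to generic
# LDAP search parameters; this is not Guardium's implementation.

def build_ldap_search(base_dn, scope, member_of=None, limit=0):
    # Search Filter Scope: "One-Level" searches the base level only,
    # "Sub-Tree" searches the levels beneath it.
    search_filter = "(memberOf={})".format(member_of) if member_of else "(objectClass=*)"
    return {
        "base": base_dn,          # Base DN, e.g. DC=encore,DC=corp,DC=root
        "scope": scope,           # "one" (One-Level) or "sub" (Sub-Tree)
        "filter": search_filter,  # Search Filter, typically memberOf-based
        "sizelimit": limit,       # Limit: maximum number of items returned
    }

params = build_ldap_search("DC=encore,DC=corp,DC=root", "sub",
                           member_of="CN=syyTestGroup,DC=encore,DC=corp,DC=root",
                           limit=100)
print(params["filter"])
# (memberOf=CN=syyTestGroup,DC=encore,DC=corp,DC=root)
```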
1. Open the LDAP User Import panel by clicking Access > Access Management > LDAP User Import.
2. Click Run Once Now. After the task completes, the set of members satisfying your selection criteria will be displayed in the LDAP Query Results panel.
3. In the LDAP Query Results panel, mark the check box for each user you want added, and click Import (or click Cancel to return without importing any users).
4. To view the added users, open the User Browser by clicking Access > Access Management > User Browser. Verify that the correct user accounts have been added.
Example of Tivoli LDAP Configuration (partial; fields left blank in the example are shown empty):
Port: 389
Log in as: cn=root
Password:
Limit:
Search filter:
Role filter:
Follow these steps to enable and use Guardium data security features:
When data security features are used with the Classification feature (which discovers and classifies sensitive data found in multiple places of the database), the Data Level
Security prevents a specified user from seeing classifier results from a specified datasource (datasource definition). Using Data Level Security can also prevent a specified
user from seeing Audit Task results when the task type is Classifier.
1. Log in as the admin user and open the Global Profile by clicking Setup > Global Profile.
2. Click Enable for Data level security filtering.
Note: The status indicator icon for Data level security filtering now shows that the feature is enabled.
You can verify that Data level security filtering is enabled by referencing the Services Status panel (Setup > Services Status).
With data level security filtering enabled, log in as the accessmgr to use the User Hierarchy and User-DB Association features.
Log in as accessmgr and open the User Hierarchy by clicking Data Security > User Hierarchy.
Click Full Update Active User-DB Map to view the full hierarchy of users.
Use the Roles and Users filters to view the hierarchy for a specific user or role. Right-click a node in the hierarchy to expand or collapse the tree, or add a user to a
specific hierarchy.
Click Refresh Cached Hierarchy to update the hierarchy.
Note: Depending on the configuration, inheritance can also take place where the parent inherits the data-level security of the child.
Log in as accessmgr and open the User-DB Association by clicking Data Security > User-DB Association.
1. View the current mapping of users to databases by clicking Full Update Active User-DB Map.
2. Create a new User-DB association map by selecting options from the Server & Service Name Suggestion list and clicking Go.
Note: Once the map is fully updated, you will see a tree listing all your servers. Click any node in the tree to view which users are currently associated with that
node.
If you are using a dual-stack configuration, there is a root node and two trees of addresses to choose from. One tree is for the IPv4 addresses, and the longer tree is for
the IPv6 addresses.
Add a user or group to a node by selecting the node and clicking Add user or Add group.
Central Management
On a Central Management appliance, the User-Database Associations screen also includes a box that allows a user to create database associations based on data from a
managed node. Select a remote source from this box, which appears only on Central Management appliances. There is also a check box to get data from ALL managed nodes.
Filter Results
Data level security at the observed data level requires the filtering of data for specific users and the specific databases they are responsible for.
Filtering at the system level is based on the User Hierarchy and User-DB Association so that users will see only information from their assigned databases for the various
reports, audit processes, security assessments, and so on, within the Guardium system.
Log in as the admin user and use the Global Profile to filter results. Open the Global Profile by clicking Setup > Global Profile.
Default filtering:
Show all - This option is available only if the user logged in has the special role datasec-exempt defined, which allows the user to see all data as if there was
no data level security.
Include indirect records - This check box shows the viewer not only the rows that belong to the user logged in, but also all the rows that belong to other users
within that hierarchy.
Audit Process Escalation: Escalation of tasks of this type is allowed only to users who have the datasec-exempt role. Users without the datasec-exempt role are not
shown in the escalation list.
Escalate results to all users - A check mark in this check box escalates audit process results (and PDF versions) to all users, even if data level security at the
observed data level is enabled. The default setting is enabled. If the check box is cleared, then audit process escalation is allowed only to users at a higher level in
the user hierarchy and to users with the datasec-exempt role. If the check box is cleared and there is no user hierarchy, then no escalation is permitted.
PDF and CSV files generated for results distribution (attached to email) will use the default global profile values set in the Administration Console parameters.
PDF and CSV generated from the viewer will use the same filtering as in the screen.
Note:
The Data Security User to Database Association filters reports only from the following domains: Access; Exception; and, Policy Violations (as well as custom domains using
these domains or tables from these domains). All other domains (reports) are not filtered by the Data Security User to Database Association.
Users with the admin role will be able to see event types for all roles (the information is still filtered based on observed data level security parameters).
If Data Level Security is turned on, predefined entities added to a custom domain need to be in the same domain(s) for the data level security filtering to work properly.
If Data Level Security is on, and two predefined entity subjects are trying to send data from two domains (not Custom Domains) that are using a filtering policy, then the
sending of the two predefined entity subjects will not be permitted. Data Level Security can only enforce one kind of filtering policy (for example, there can be only one
policy depending on server_ip/service_name and one policy depending on datasource).
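The observed-data filtering behavior, including the Include indirect records option, can be sketched conceptually. The data structures here are hypothetical, not Guardium code:

```python
# Conceptual sketch of observed-data filtering: a user sees only rows from
# databases assigned to them; with include_indirect=True, rows belonging to
# that user's descendants in the hierarchy are also shown.

def visible_rows(rows, user, assignments, children, include_indirect=False):
    allowed = set(assignments.get(user, ()))
    if include_indirect:
        # Direct descendants only, for brevity; a real hierarchy walk
        # would recurse through all levels.
        for child in children.get(user, ()):
            allowed |= set(assignments.get(child, ()))
    return [row for row in rows if row["server_ip"] in allowed]

rows = [{"server_ip": "10.0.0.1"}, {"server_ip": "10.0.0.2"}]
assignments = {"parent": {"10.0.0.1"}, "child": {"10.0.0.2"}}
children = {"parent": ["child"]}
print(len(visible_rows(rows, "parent", assignments, children)))        # 1
print(len(visible_rows(rows, "parent", assignments, children, True)))  # 2
```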
Procedure
1. Login as accessmgr and click Data Security > User Hierarchy.
2. Select a user from the Users drop-down menu to display it in the Data Security User Hierarchy pane. This example uses john smith as a user.
3. Right-click the user's node in the Data Security User Hierarchy pane and select Add user from the drop-down list.
4. After clicking Add user from the drop down list, the Add user dialog appears. Select one or more users that you would like to add to the user's hierarchy, and then
click Add.
5. After adding the users to a hierarchy, the Data Security User Hierarchy panel is refreshed, allowing you to drill down and see the new hierarchy.
Personal Identity Verification (PIV) cards are used by civilian federal agencies, while Common Access Cards (CAC) are used by the Department of Defense. PIV and CAC cards have
different certificate authorities, but the cards are otherwise the same.
Guardium Smart card support meets the HIGH confidence PIV assurance level described in the PIV Cardholder Authentication (6) section of the Personal Identity
Verification (PIV) of Federal Employees and Contractors (FIPS Publication 201-2) document. FIPS 201-2 is available through this NIST web site: https://www.nist.gov .
Prerequisites
Access to the Guardium UI via a web browser that can access the Smart card certificate
A Smart card reader
A valid PIV/CAC card
Create Guardium users to associate with Smart cards. If you want to associate existing users with Smart cards, you do not need to create any new users. For more
information about user creation and access management, see Access Management Overview.
Example
Create Users
The Guardium application provides various ways to create users. It does not matter how your users are created; once you configure your web server to use the
Smart card for authentication, only the Smart card credential is used to establish the SSL/TLS connection (the Guardium site uses HTTPS).
1. Log in as accessmgr on the Central Manager.
2. Select Access > Access Management > User Browser.
3. Click Add.
4. Enter the user name Test Cardholder X.
5. Enter the password twice.
6. Enter a first name and last name matching the user name.
7. Click Add.
Now configure the mapping so that when a Smart card is present, the information on the Smart card is correctly mapped to a user in the system.
Use a regular expression in the Regex Match Pattern field to match the user information on the Smart card. Here is an example of a Regex Match Pattern:
CN ?= ?(.*?), ?OU ?= ?Test Agency, ?OU ?= ?Test Department, ?O ?= ?Test Government, ?C ?= ?US
The pattern is matched against the client certificate on the Smart card, which is the certificate presented to the web server to establish HTTPS. When this feature is
enabled, the server requests the client certificate and the Smart card provides it. A client certificate includes details such as: Version, Serial number, Signature
algorithm, Signature hash algorithm, Issuer, Valid from, Valid to, and Subject.
In this example you can use one of the following patterns; both will match the mapping. Pattern 1 is more exact; Pattern 2 is looser. Depending on your purpose, you can
write your own pattern to match your needs. Work with someone who is familiar with the data on the Smart card to write efficient mapping patterns.
Pattern 1:
CN ?= ?(.*?), ?OU ?= ?Test Agency, ?OU ?= ?Test Department, ?O ?= ?Test Government, ?C ?= ?US
Pattern 2:
CN ?= ?(.*?)
Both of the examples extract the value of the CN attribute in the certificate subject, which you can see by examining the details of the certificate in the browser. In this
case it is Test Cardholder X. Configuring this pattern correctly is probably the most important step in making Smart card authentication succeed.
Note that the regex validation tool currently available for other modules is not available for this purpose (see the Troubleshooting section, items 2 and 3).
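You can instead test the patterns against a sample subject string with a standard regex engine such as Python's re module. The subject line below is illustrative; note that Pattern 2's lazy group needs a following delimiter (a comma is added here) to capture anything in this engine:

```python
import re

# Sample certificate subject as displayed in the certificate details
# (illustrative data, matching the example in the text).
subject = ("CN = Test Cardholder X, OU = Test Agency, "
           "OU = Test Department, O = Test Government, C = US")

# Pattern 1, exactly as given above.
pattern1 = (r"CN ?= ?(.*?), ?OU ?= ?Test Agency, ?OU ?= ?Test Department, "
            r"?O ?= ?Test Government, ?C ?= ?US")
m1 = re.search(pattern1, subject)
print(m1.group(1))  # Test Cardholder X

# Pattern 2 from the text is just CN ?= ?(.*?); in Python's re, a lazy
# group with nothing after it captures an empty string, so a trailing
# comma is added here to delimit the capture.
m2 = re.search(r"CN ?= ?(.*?),", subject)
print(m2.group(1))  # Test Cardholder X
```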
Now save the configuration. Note that you are not done yet: you must enable the feature from the CLI, because part of the enablement can only be done while the server
is shut down, during which there is no GUI.
Before you leave GUI for the CLI part, you need to upload the root CA certificate to the trust store.
This part describes how you upload the root CA’s certificate into the trust store used by the GUI. Use the Import Certificate selection from the Guardium Portal and
Authentication Configuration screens.
If you do not have the root certificate of the CA that signed the certificates on the Smart cards, you can export a root certificate from a CA-signed user certificate or a
Smart card that contains one.
We assume you obtained the certificate either from the customer or by exporting it from a Smart card using certificate management tools such as certmgr.exe or OpenSSL.
The public root certificate of a trusted CA. This is the most common source of a root certificate in environments that already have a Smart card infrastructure and a
standardized approach to Smart card distribution and authentication.
Select a certificate to use for Smart card authentication. The signing chain lists a series of signing authorities. The best certificate to select is usually the intermediate
authority above the user certificate.
When the feature is turned off, the GUI is automatically restarted with the system using local authentication. This is also useful when you first deploy the system and the
regular expression you set is not quite right and you see errors.
Note: While Smart card authentication is used to authenticate, access control (for example, which modules a user can access and which navigation the user sees) is still
handled the same way as without Smart card authentication.
Once the feature is enabled, you can only access the site with a valid Smart card (PIV, CAC etc.)
Now when you visit the GUI site, you’ll see an authentication prompt, asking you to choose a certificate.
The above details are for an administrator setting up the feature. For an end user, if everything is set up correctly, the user simply inserts the card and goes straight to
the site content.
For a user with a valid Smart card, when the user loads the website, the browser prompts for the Smart card PIN. The PIN allows the client certificate on the card to be
accessed when requested.
After the PIN is provided, the regular Guardium login page displays with the user field pre-filled with the login extracted from the Smart card. Note that no password is
used here; the user field contains only the extracted user placeholder for mapping.
For example, if the certificate is valid and the root CA of the Smart card issuer for Test Cardholder X is loaded in the Guardium web server (see the section Upload the root
CA's certificate for how to do this), the user field is pre-filled with Test Cardholder X and you are prompted for the Smart card PIN. The PIN is needed to access the client
certificate on the Smart card; the client certificate stays on the Smart card and cannot be exported to a file. You may see the prompt twice; just provide the PIN each time.
What to do next
Troubleshooting or recovery scenarios
Diagnostic: Most likely, your configuration of the matching regular expression is not right or you don’t have a valid certificate on the card.
You created a matching Regex and it does not seem to be working. You remember that Guardium has a regex validation tool and used it, thinking that if the pattern works in
the tool, it is a good Regex. Unfortunately, while the test succeeds in that tool, the Regex pattern does not work for Smart Card Configuration.
Diagnostic: That tool is to find if an expression can be found inside a text paragraph. So it won't work in this case. This configuration is to extract a piece of text from the
certificate text as displayed in the subject as shown in certificate details.
You didn’t get prompt from the browser to select a certificate at all.
Diagnostic: The PC or laptop is able to install the card reader and the Smart card, and a copy of the certificate on the Smart card is copied to certmgr in the Windows OS.
However, when accessing the site, the browser (IE, Firefox, or Chrome) does not read the certificate; none of the three browsers reads the certificate, and there is no
prompt to choose one.
This has been noted with all browsers on some laptops we tested. If this is the case, it is not specific to the Guardium site; other sites that require Smart cards to
operate will also be affected. This is rare.
Aggregation
Collect and merge information from multiple Guardium® units into a single Guardium Aggregation appliance to facilitate an enterprise view of database usage.
Central Management
In a central management configuration, one Guardium unit is designated as the Central Manager. That unit can be used to monitor and control other Guardium
units, which are referred to as managed units. Un-managed units are referred to as stand-alone units.
Investigation Center
Investigation Center is an extension of the Aggregation Servers. Investigation Users (once defined) can restore data and results of selected historic dates and
perform forensic investigation. Once the days (dates) are restored, the investigation users can define and view reports using the standard Guardium UI, only in the
scope of the investigated dates.
Aggregation
Collect and merge information from multiple Guardium® units into a single Guardium Aggregation appliance to facilitate an enterprise view of database usage.
Aggregation Process
Accomplished by exporting data on a daily basis from the source appliances to the Aggregator (copying daily export files to the aggregator).
The Aggregator then goes over the uploaded files, extracts each file, and merges it into its internal repository.
For example, if you are running Guardium in an enterprise deployment, you may have multiple Guardium servers monitoring different environments (different geographic
locations or business units, for example). It may be useful to collect all data in a central location to facilitate an enterprise view of database usage. You can accomplish this
by exporting data from a number of servers to another server that has been configured (during the initial installation procedures) as an aggregation appliance. In such a
deployment, you typically run all reports, assessments, audit processes, and so forth, on the aggregation appliance to achieve a wider, though not necessarily enterprise-wide, view.
Note: The Aggregator does not collect data, but it is used to present the data from the collectors.
Pre-defined aggregation reports can be located on the Guardium Monitor tab, Enterprise Buffer Usage Monitor, and the Daily Monitor tab, Logging Collectors.
Appliance Types
Collector
Used to collect database activity, analyze it in real time and log it in the internal repository for further analysis and/or reacting in real-time (alerting, blocking, etc.).
Use this unit for the real-time capture and analysis of the database activity.
Note:
In many environments, the Central Manager is also the Aggregator. Central Manager and Aggregator can be installed on the same appliance.
A Guardium appliance needs to be configured as an Aggregator at install time in order to be promotable to a Central Manager.
Solution: Once the Aggregator unit has been upgraded, if you do not want the aggregator unit to show as down in the search tooltip, run
the two commands below:
2. restart network
Note: If the environment was, and will remain, in cm_only or local_only mode, this step does not enable search from the aggregator; it only prevents the aggregator from
showing as down.
Terminology
Table 1. Terminology
Term - Description
Guardium Appliance - The physical or virtual Guardium box; can be either a "collector" or an "aggregator" (with or without central management).
Purge - For the best performance, purge all data that is not needed. Purge to free disk space.
Archive - Compress the data of a single day into an encrypted file and send it to the aggregator.
Hierarchical Aggregation
Guardium also supports hierarchical aggregation, where multiple aggregation appliances merge upwards to a higher-level, central aggregation appliance. This is useful for
multi-level views. For example, you may need to deploy one aggregation appliance for North America aggregating multiple units, another aggregation appliance for Asia
aggregating multiple units, and a central, global aggregation appliance merging the contents of the North America and Asia aggregation appliances into a single corporate
view. To consolidate data, all aggregated Guardium servers export data to the aggregation appliance on a scheduled basis. The aggregation appliance imports that data
into a single database on the aggregation appliance, so that reports run on the aggregation appliance are based on the data consolidated from all of the aggregated
Guardium servers.
The system shared secret is used in the following cases:
When secure connections are being established between a Central Manager and a managed unit.
When an aggregated unit signs and encrypts data for export to the aggregator.
When any unit signs and encrypts data for archiving.
When an aggregator imports data from an aggregated unit.
When any unit restores archived data.
Depending on your company’s security practices, you may be required to change the system shared secret from time to time. Because the shared secret can change,
each system maintains a shared secret keys file, containing an historical record of all shared secrets defined on that system. This allows an exported (or archived) file from
a system with an older shared secret to be imported (or restored) by a system on which that same shared secret has been replaced with a newer one. Shared secrets
(current and historic ones) can be exported from one appliance and imported to another through the CLI.
For aggregation to work, the shared secret must be set and be the same for aggregator and all aggregated collectors.
Note:
When setting the schedule of import on an aggregator, it should be planned to run after export is completed on all collectors.
Exporting Data
Function: Compress the data of a single day (midnight to midnight; typically yesterday) into an encrypted file and send it to the aggregator (or to
an external repository on Archive).
Load the relevant data (the last day's activity) to the tmp db.
Copy the export file to the aggregator (or to an external repository on Archive).
To export data to an aggregation appliance, follow the procedure. You can define a single export configuration for each Guardium unit.
1. Click Manage > Data Management > Data Export to open Data Export.
2. Check the Export box as this will open additional options for exporting data.
3. In the boxes following Export data older than, specify a starting day for the export operation as a number of days, weeks, or months prior to the current day, which
is day zero. These are calendar measurements, so if today is April 24, all data captured on April 23 is one day old, regardless of the time when the operation is
performed. To archive data starting with yesterday’s data, enter the value 1.
4. Optionally, use the boxes following Ignore data older than to control how many days of data will be archived. Any value specified here must be greater than the
Export data older than value, so you always export at least two days of data. If you leave Ignore data older than blank, you export data for all days older than the
value specified in the Export data older than row. It is recommended to always set the Ignore data older than value; otherwise you will export the exact same days
over and over again, overloading the network and the aggregator with redundant data (that will be ignored).
5. The Export Values box is checked by default. In some cases, where the collector resides in a country that prohibits the export of data, and the aggregation
appliance resides in another country, you would want to clear the Export Values check box, which would mask all fields containing database values.
6. In the Host box, enter the IP address or DNS host name of the aggregation appliance to which this system’s encrypted data files will be sent. There is also an
option to enable a secondary aggregator for exporting data to more than one aggregator. Two Host boxes are available; the first one is required, while the
Secondary Host is an option. This unit and the aggregation appliance to which it is sending data must have the same System Shared Secret. If not, the export
operation works, but the aggregation appliance that receives the data is not able to decrypt the exported file and the Import will fail. See System Shared Secret in
System Configuration for more information. The Shared Secret must be identical on the exporting system and the receiving system. Unless they share
the same secret, the configuration on the exporting system will not be saved, and a message reports that a test file cannot be sent to the
receiving system.
7. Use the Scheduling section to define a schedule for running this operation on a regular basis.
8. Click the Save button to save the export and purge configuration for this unit. When you click Save, the system attempts to verify that the specified
aggregator host will accept data from this unit. If the operation fails, the following message is displayed and the configuration is not saved: A test data file
could not be sent to this host. Please confirm the hostname or IP address is entered correctly and the host is online.
9. Click Run Once Now to run the operation one time.
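One reading of the two age settings, matching the "older than one day, ignoring older than two days" example used later for Aggregator Data Archive, can be sketched as follows (an illustrative interpretation, not a definitive specification; the function name is hypothetical):

```python
from datetime import date, timedelta

def export_window(today, export_older_than, ignore_older_than=None):
    """Return (oldest, newest) calendar days covered by a Data Export run.

    Day zero is the current day, so data captured yesterday is one day old.
    ignore_older_than must be greater than export_older_than when set.
    """
    if ignore_older_than is not None and ignore_older_than <= export_older_than:
        raise ValueError("Ignore data older than must exceed Export data older than")
    newest = today - timedelta(days=export_older_than)
    # With no "ignore" bound, every day older than the threshold is exported.
    oldest = (today - timedelta(days=ignore_older_than - 1)
              if ignore_older_than is not None else None)
    return oldest, newest

# Export data older than 1, ignore data older than 2: only yesterday is sent.
oldest, newest = export_window(date(2024, 4, 24), 1, 2)
```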
Stopping Export
To stop the export of data to an aggregation appliance:
1. Click Manage > Data Management > Data Export to open Data Export.
2. Clear the Export checkbox.
3. Click Save.
Note: Stopping an export after the Run Once Now button has been clicked is impossible.
Importing Data
The Guardium collector units export encrypted data files to another Guardium appliance configured as an aggregation appliance. The encrypted data files reside in a
special location on the aggregation appliance until the aggregation appliance executes an import operation to decrypt and merge all data to its own internal database.
Note: To avoid the possibility of importing files that have not completely arrived, the aggregation appliance will not import files that have changed in the last two minutes.
Table 3. Importing Data
Topic Description
Function Import and merge the imported data into the internal databases of the Aggregator.
Schedule Executed on a daily basis. Do not run more than once a day.
High Level Process (for each purged day): Construct the delete command for each purged table (the tables and the purge conditions are defined in AGG_TABLES),
then execute the delete commands for each of the tables.
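The per-day purge step can be sketched in a few lines (a hedged illustration: the table name, column name, and the shape of the configuration rows here are hypothetical stand-ins for the AGG_TABLES definitions, not the actual internal schema):

```python
def build_purge_statements(day, agg_tables):
    """Construct one DELETE per purged table for a single day.

    agg_tables: iterable of (table_name, purge_condition_column) pairs,
    standing in for the tables and purge conditions defined in AGG_TABLES.
    """
    return [
        f"DELETE FROM {table} WHERE {date_column} = '{day}'"
        for table, date_column in agg_tables
    ]

# Hypothetical table/column names for illustration only.
stmts = build_purge_statements("2023-05-14", [("SESSION_LOG", "PERIOD_START")])
print(stmts[0])  # DELETE FROM SESSION_LOG WHERE PERIOD_START = '2023-05-14'
```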
Stopping Import
To stop importing data sent from other Guardium units:
Note: Stopping an import once the RUN ONCE NOW button is clicked is impossible.
The archive and purge process frees space and preserves information for future use. You should periodically archive and purge data from standalone units and from
aggregation units. The Guardium archive function creates signed, encrypted files that cannot be tampered with. Archive files are transferred and stored on external
systems such as file servers or storage systems.
Note:
If both Archive and Purge are scheduled, Purge will run after Archive.
Data that was archived on a collector can be restored either on another collector or an aggregator server. Restoring of data that was archived on an aggregator to a
collector machine is not supported.
Archiving data on aggregator system - on the first day of the month, all static tables are archived. On all other days, only additional data added to archived data will be
archived. This methodology is the same as used by collectors. Adding the static tables to the normal purge process eliminates the existence of orphans, freeing up disk
space and improving report performance.
Archive and export of static tables on an aggregator includes full static data only on the first day of the month (archive) or when the export configuration changes (export).
Use the CLI commands, store archive_table_by_date [enable | disable] or show archive_table_by_date. Other relevant CLI commands are store aggregator clean orphans
or show aggregator clean orphans.
Scheduling Data Management tasks - Default schedule times are supplied when the unit is built, and these can be amended accordingly. The Data Management tasks
should be scheduled at less busy times, for example, overnight. They should also be spaced out so that they do not overlap (for example, one task should finish before the
next one starts).
Aggregator Data Archive, when dealing with an Aggregator/Central Manager that performs Data Imports and Data Archives. A default or common setting is to have the
Data Archive archive data older than one day, ignoring data older than two days. If the Data Archive is scheduled to run BEFORE the Data
Imports from other Collector(s)/Aggregator(s), then the Archive will NOT contain the Imports meant for that day's Archive. Imagine the following schedule: Data Archive
runs at 30 minutes past midnight; Data Imports run at 6:00 AM for data older than 1 day, ignoring data older than 2 days. When the Archive happens, it will not archive any
relevant yesterday data, because no Imports for that day's data have yet occurred. In this example, the Data Archive should be re-scheduled to occur AFTER the Data Import(s)
have finished. This way the Archive correctly contains data for yesterday.
Purge Function Delete old records from appliance (typically - older than 60 days) to free up space and speed up access operation to the internal
database.
Purging is based on dates (deleting whole days' worth of data), but will not delete records that are still "in use" (for example,
open sessions).
Schedule The default purge activity is scheduled every day at 5:00 AM.
High Level Process (for each purged day): The Purge configuration is used by both Data Archive and Data Export.
Use the Purge data older than field to specify a starting day for the purge operation as a number of days, weeks, or months prior to the
current day, which is day zero.
For a new install, a default purge schedule is installed that is based on the default value and activity.
When a unit type is changed (to managed, or back to standalone), the default purge schedule is applied. The purge
schedule is not affected during an upgrade.
It may be necessary to run reports or investigations on this data at some point. For example, some regulatory environments may require that you keep this information for
three, five, or even seven years in a form that can be queried within 24 hours. This functionality is supported by the Guardium restore capability, which allows you to
restore archived data to the unit.
The following sections describe how to define and schedule archiving and how to restore from an archive.
Note: The archive and restore operations depend on the file names generated during the archiving process. DO NOT change the names of archived files.
Archive data files can be sent to an SCP or FTP host on the network, or to an EMC Centera or TSM storage system (if configured). You can define a single archiving
configuration for each unit. To archive data to another host on the network and optionally purge data from the unit, follow the procedure.
1. Click Manage > Data Management > Data Archive to open Data Archive.
2. Check the Archive checkbox to expose additional fields for the archive process.
3. In the boxes following Archive data older than, specify a starting day for the archive operation as a number of days, weeks, or months prior to the current day, which
is day zero. These are calendar measurements, so if today is April 24, all data captured on April 23 is one day old, regardless of the time when the operation is
performed. To archive data starting with yesterday’s data, enter the value 1.
4. Optionally, use the boxes following Ignore data older than to control how many days of data will be archived. Any value specified here must be greater than the
value in the Archive data older than field. If you leave the Ignore data older than row blank, you archive data for all days older than the value specified in the Archive
data older than row. This means that if you archive daily and purge data older than 30 days, you archive each day of data 30 times (before it is purged on the 31st
day). Depending on the archive options configured for your system (using the store storage-system CLI command), you may have EMC Centera or TSM options on
your panel. If you select one of those archive destinations, see the appropriate topic.
a. EMC Centera Archive and Backup
b. TSM Archive and Backup
5. Enter the IP address or DNS host name of the host to receive the archived data.
6. In the Directory box, identify the directory in which the data is to be stored. How you specify this depends on whether the file transfer method used is FTP or SCP.
For FTP, specify the directory relative to the FTP account home directory. For SCP, specify the directory as an absolute path.
7. In the Username box, enter the user name to use for logging onto the host machine. This user must have write/execute permissions for the directory specified in the
Directory box.
8. In the Password box, enter the password for the user, then enter it again in the Re-enter Password box.
9. Data Purge. The remaining steps configure the purge operation.
10. Check the Purge checkbox to purge data, whether or not it is archived. When this box is marked, the Purge data older than fields display. It is important to note that
the Purge configuration is used by both Data Archive and Data Export. Changes made here will apply to any executions of Data Export and vice-versa. In the event
that purging is activated and both Data Export and Data Archive run on the same day, the first operation that runs will likely purge any old data before the second
operation's execution. For this reason, any time that Data Export and Data Archive are both configured, the purge age must be greater than both the age at which to
export and the age at which to archive.
11. If purging data, use the Purge data older than fields to specify a starting day for the purge operation as a number of days, weeks, or months prior to the current day,
which is day zero. All data from the specified day and all older days will be purged, except as noted otherwise. Any value specified for the starting purge date must
be greater than the value specified for the Archive data older than value. In addition, if data exporting is active (see Exporting Data to an aggregation appliance), the
starting purge date specified here must be greater than the Export data older than value. There is no warning when you purge data that has not been archived or
exported by a previous operation. The purge operation does not purge restored data whose age is within the do not purge restored data timeframe specified on a
restore operation. For more information, see Restoring Archived Data.
12. Use the Scheduling section to define a schedule for running this operation on a regular basis.
13. Click Save to verify and save the configuration changes. When you click the Save button, the system attempts to verify the specified Host, Directory, Username, and
Password by sending a test data file to that location.
14. Click Run Once Now to run the operation once.
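The constraint in step 11 (the purge age must exceed the archive age and, when export is active, the export age) can be expressed as a small check (an illustrative sketch; the function name is hypothetical):

```python
def validate_purge_age(purge_days, archive_days=None, export_days=None):
    """Raise if purging could delete data before it is archived or exported."""
    for name, age in (("Archive", archive_days), ("Export", export_days)):
        if age is not None and purge_days <= age:
            raise ValueError(
                f"Purge data older than ({purge_days}) must be greater than "
                f"the '{name} data older than' value ({age})"
            )
    return True

# Archive/export yesterday's data, purge anything older than 30 days: valid.
print(validate_purge_age(30, archive_days=1, export_days=1))  # True
```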
If any changes are made through GuardAPI commands related to the expiration date, they do not affect the date on which restored data becomes available for Orphans cleanup.
For example: The user restores data and wants to keep it for 7 days. The expiration date of this data is then 7 days from today, and the data becomes
available for orphan cleanup after 7 days.
If the expiration date is changed (set to keep the data for a shorter or longer period), it does not affect the date the data becomes available for
orphan cleanup; the rest of the data on the machine remains available for orphan cleanup as first designed. Customers should pay attention to this,
especially when lengthening the expiration period, in order not to lose data.
1. Click Manage > Data Management > Data Archive to open Data Archive.
2. Click Data Archive or System Backup in the Data Management section. Initially, the Network radio button is selected by default, and the Network backup
parameters are displayed.
3. Select the EMC Centera radio button. The EMC Centera parameters will be displayed on the panel.
4. In the Retention box, enter the number of days to retain the data. The maximum is 24855 (68 years). If you want to save it for longer, you can restore the data later
and save it again.
5. In the Centera Pool Address box, enter the Centera Pool Connection String; for example: 10.2.3.4,10.6.7.8/var/centera/profile1_rwe.pea
The TSM (or Spectrum Protect client) lifecycle is defined by the Spectrum Protect product terms.
To use TSM:
1. Click Manage > Data Management > Data Archive to open Data Archive.
2. Select the TSM radio button. The TSM parameters will be displayed on the panel.
3. In the Password box, enter the TSM password that this Guardium unit uses to request TSM services, and re-enter it in the Re-enter Password box.
4. Optionally enter a Server name matching a servername entry in your dsm.sys file.
5. Optionally enter an As Host name.
6. Click Save to save the configuration. When you click Save, the system attempts to verify the TSM destination by sending a test file to the server using the
dsmc archive command. If the operation fails, you are informed and the configuration is not saved.
Restoring
As described previously, archives are written to a SCP or FTP host, or to a Centera or TSM storage system. To restore archives, you must copy the appropriate file(s) back to
the Guardium system on which the data is to be restored. There is a separate file for each day of data. Depending on how your archive/purge operation is configured, you
may have multiple copies of data archived for the same day. Archive and export data file names have the same format:
<daysequence>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump (a TAR file). To restore archived data (as opposed to a system backup), use the GUI screen called Catalog Archive. The archive
and restore operations depend on the file names generated during the archiving process. DO NOT change the names of archived files. If a generated file name is changed,
the restore operation will not work.
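Assuming the field layout reconstructed above, an archive file name can be parsed like this (the sample name and the exact delimiters are assumptions for illustration; treat the pattern as a sketch, not the definitive format):

```python
import re

# Assumed layout: <daysequence>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump
ARCHIVE_NAME = re.compile(
    r"^(?P<daysequence>\d+)"
    r"-(?P<host>[\w.-]+?)"
    r"-w(?P<run_datestamp>[\d.]+)"
    r"-d(?P<data_date>\d{4}-\d{2}-\d{2})"
    r"\.dbdump"
)

# Hypothetical file name for illustration only.
name = "708724-g1.example.com-w20230516.040142-d2023-05-14.dbdump"
m = ARCHIVE_NAME.match(name)
print(m.group("host"), m.group("data_date"))  # g1.example.com 2023-05-14
```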
Unless you are restoring data from the first archive created during the month, you will need to restore multiple days of data. That is because when restoring data,
Guardium needs to have all of the information that it had when the data being restored was archived. After the archive was created, some of that information may have
been purged due to a lack of use. All information needed for a restore operation is archived automatically, the first time that data is archived each month. So, when
restoring data, you can either restore from the first day of the month through the desired day, or restore the desired day plus the first day of the following
month.
For example, to restore June 28th, either restore June 1st through June 28th, or restore June 28th and July 1st.
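The two valid restore sets described above can be computed with a short helper (a sketch; the function name is hypothetical):

```python
from datetime import date, timedelta

def restore_options(target):
    """Return the two valid sets of days to restore for a target day.

    Full supporting information is archived the first time data is archived
    each month, so restore either the 1st of the month through the target
    day, or the target day plus the 1st of the following month.
    """
    first_of_month = target.replace(day=1)
    option_a = [first_of_month + timedelta(days=i)
                for i in range((target - first_of_month).days + 1)]
    first_of_next = (date(target.year + 1, 1, 1) if target.month == 12
                     else date(target.year, target.month + 1, 1))
    option_b = [target, first_of_next]
    return option_a, option_b

# June 28th: either June 1st-28th, or June 28th plus July 1st.
a, b = restore_options(date(2023, 6, 28))
```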
1. Click Manage > Data Management > Data Restore to open Data Restore.
2. Enter a date in the From box, to specify the earliest date for which you want data.
3. Enter a date in the To box, to specify the latest date for which you want data.
4. In the Host Name box, optionally enter the name of the Guardium appliance from which the archive originated.
5. Click Search.
6. In the Search Results panel, mark the Select box for each archive you want to restore.
7. In the Don't purge restored data for at least box, enter the number of days that you want to retain the restored data on the appliance.
8. Click Restore.
9. Click Done when you are finished.
Troubleshooting
On an escalation to technical support, please supply a detailed log from the time when the problem occurred. Navigate to Manage > Reports > Data Management >
Aggregation/Archive Log and define a report for the time period in question.
When a customer upgrades the Guardium system, the system calculates the maximum number of collectors using the following logic:
1. Count the collectors currently exporting to the aggregator.
2. If the result of step 1 is 0 (no collectors are found), the system sets this value to 10.
3. The system adds 20 percent to the number determined in step 2.
For example, if step 1 did not find any collectors, step 2 sets the value to 10, and step 3 adds 20% to make it 12.
As another example, if step 1 finds five collectors exporting to an aggregator, the value is set to 5. Step 2 is not relevant because the result was 5 and
not 0. Step 3 adds 20% to 5 and sets the value to 6.
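That sizing logic can be sketched in a few lines (an illustration of the rule as described; the function name is hypothetical):

```python
import math

def max_collectors_after_upgrade(collectors_found):
    """Default to 10 when no collectors are found, then add 20% headroom."""
    base = 10 if collectors_found == 0 else collectors_found
    return math.ceil(base * 1.2)

print(max_collectors_after_upgrade(0), max_collectors_after_upgrade(5))  # 12 6
```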
Central Management
In a central management configuration, one Guardium® unit is designated as the Central Manager. That unit can be used to monitor and control other Guardium units,
which are referred to as managed units. Un-managed units are referred to as stand-alone units.
The concept of a local machine can refer to any machine in the Central Management system. There are some applications (Audit Processes, Queries, Portlets, etc.) which
can be run on both the Managed Units and the Central Manager. In both cases, the definitions come from the Central Manager and the data comes from the local machine
(which might also be the Central Manager).
Once a Central Management system is set up, customers can use either the Central Manager or a managed unit to create or modify most definitions. Keep in mind that
most of the definitions reside on the Central Manager, regardless of which machine does the actual editing.
Note:
Using the Remote Source function, a user on the Manager can run any report on the managed unit (the user must have the correct role privileges) and view data and
information of that managed unit.
CAS template definitions are shared between all units of a federated environment just like all other definitions (reports, policies, alerts, etc.)
It is recommended that a user run CAS Reports on a manager, especially CAS Reports relating to CAS configurations, hosts, and templates.
If you use the Custom Domain Builder to create a report that uses some or all remote tables (tables that live on the manager in a Central Manager environment,
such as Datasource or Comments), this report does not work on a managed node. No data will be returned.
The Central Management page of a manager will no longer automatically refresh itself based on a certain interval. This page will timeout based on the GUI timeout
of the system.
After some time of inactivity, the system will log you out automatically and ask you to sign in again. The length of the GUI timeout can be set via the CLI command
show/store session timeout (default is 900 seconds). Status lights will refresh every five minutes when the session is active.
If a user is attempting to synchronize or upload any data from the Central Manager to managed nodes, all nodes that are involved in this type of activity MUST be on
the SAME version of Guardium.
During the Central Management Redundancy Transition, it can take up to five minutes for the Unit type Sync to occur depending on how many units are defined in
the Central Management environment.
Users, Roles and Permissions Central Manager controls the definition of users, roles, groups and datamart tables for all managed systems. The Central Manager
exports the complete set of user, security role, group, and datamart tables definitions on a scheduled basis or on demand. The
managed units update their internal databases on an hourly basis. As a result, there might be a delay of up to an hour between the
time users, roles, permissions or datamart tables are added or modified on the Central manager and the time that the managed unit
applies those updates.
Note: If you have Guardium® users or security roles that are defined on an existing stand-alone unit that is about to be registered for
central management, those definitions will not be available after the system is registered, unless those users and security roles have
also been defined on the Central Manager. You cannot administer users or security roles on a managed unit. Those definitions can be
administered only when logged on to the Central Manager. When a unit is unregistered for central management, all added users and
security roles are removed leaving only the default users (admin, accessmgr). When installing an Accelerator add-in product (PCI,
SOX, etc.), in a Central Manager environment, install it first on the Central Manager and then on the managed unit. Add any roles and
users as required for the Accelerator on the Central Manager (and those will be synchronized with the managed unit from there).
Accelerator documentation is contained within the Accelerator module. See an overview of PCI Accelerator at the end of this
Component Services table.
Aliases and Groups On all processes that automatically generate aliases or groups (for example, importing user groups from LDAP, group generation from
queries, alias generation from queries, classifier, and so on), if the same group or alias is automatically generated on more than one managed
machine (managed by the same manager), it might conflict with an existing group or alias, which will not be replaced.
Audit Processes The definitions of the Audit Process itself and all of its corresponding tasks are saved to the Central Manager and available to all
managed units. However, Schedules, Results, and To-Do lists are saved on the local machine. This means that the same Audit
Process tasks can be run on all Managed Units, plus the Central Manager. But it can be run at different times on different machines,
which can be useful if the Managed Units have different peak load periods. Each machine has its own set of results, which are based
on the data that the machine has collected; and each machine has its own set of To-Do lists for all users. Audit Process definitions are
exported from the Central Manager to the managed units as part of the user synchronization process (see Synchronizing Portal User
Accounts). When audit process results have been produced, the results are available to users, but on managed units, there might be a
delay of up to an hour before reports or monitors such as Outstanding Audit Process Reviews are updated.
Queries Each query can get database information only from a single machine. Queries that require information from both Central
Manager definitions and Managed Unit data show no data, or missing data.
Policies Policy definitions are saved on the Central Manager. However, when you install a policy on a Managed Unit, a local copy is made and
saved on the Managed Unit. This is because the Managed Unit needs to keep monitoring the database activity and
using the policy even when the Central Manager is unavailable for any reason.
Note: Installing a policy on a managed node does not upload the policy to the Central Manager until Refresh is clicked on the Central
Manager. Versions must be the same between the Central Manager and the Managed Unit when installing policies; otherwise, policies will
not install and errors are generated.
When regenerate portlet is called on a Central Manager, it also sends a management (https) request to all managed units to
regenerate the portlet (with the report ID). When regenerate is called on a managed unit from the screen (not from a
management request), it sends a management request to the manager to refresh the portlet (which would also send it to all
units). There is a persistence mechanism for management requests for the case where a unit is down; see the sections within this topic on
registration and policy installation.
From the Central Manager, reports and audit processes can use data from a managed unit but not managed aggregators. The managed
unit is selected as a run-time parameter, is referred to as a remote datasource, and presented as a filtered drop-down selection list
containing only managed units. When an audit process references a remote datasource, that audit process can be run from the Central
Manager only, so it will not appear in a list of audit processes that are displayed on a managed unit.
Note: On a Central Manager, certain reports in the Sniffer Buffer Usage domain (for example, Request Rate, CPU Usage, Buffer Usage
Monitor) do NOT display any data. These reports are empty.
Security Assessment Like the Audit Process, the definition of the Security Assessment itself is saved to the Central Manager. But the results are saved on
the local machine. This means that the same Security Assessment can be run on all Managed Units, plus the Central Manager.
Baselines Baselines are always saved on the Central Manager. However, baselines are GENERATED using the logged data that is local to the
machine on which it is generated. Therefore, if you want to include constructs from all Managed Units, you must regenerate the
baseline on ALL Managed Units and merge the new results into the existing baseline.
Attention: The Baseline Builder and related functionality is deprecated starting with Guardium V10.1.4.
Comments Comments can be saved on either the local machine or the Central Manager, depending on what the comment is associated with. If
the Comment is associated with a definition that resides on the Central Manager, then it is also saved on the Central Manager. If the
Comment is associated with a Result on the local machine, OR something specific to a Managed Unit (like an Inspection Engine), the
Comment is also saved on the local machine.
Schedules Schedules are always saved on the local machine, even when the definition is saved on the Central Manager.
Non-Central Manager Tasks When a server is configured as a Central Manager, you must be aware of the tasks that cannot be performed on that unit, but rather
must be performed on other (non-Central Manager) units. Inspection engines cannot be defined on the Central Manager and can be
created only on the Managed Units. But Inspection engines can be viewed from the Central Manager.
Upgrade Considerations It is recommended to keep your Central Manager and managed units on the same version. Upgrade the Central Manager
first, then the managed units. Running a manager at a different version than its managed units should be only a temporary
state; it is highly recommended to upgrade all managed units to the same version as the manager as soon as possible. After upgrading, run Sync (Refresh) on all
managed nodes so that they report the correct software version.
PCI Accelerator for Compliance The PCI Data Security Standard consists of twelve basic requirements. Many of these requirements focus on protecting physical
infrastructure (for instance, Requirement 1: Install and maintain a firewall configuration to protect data) or implementing procedural
best practices (for instance, Requirement 5: Use and regularly update anti-virus software). However, extra emphasis is placed on
real-time monitoring and tracking of access to cardholder data and continuous assessment of database security health status (for
instance, Requirement 10: Track and monitor all access to network resources and cardholder data).
Guardium's PCI Accelerator for Database Compliance is tailored to simplify organizational processes that are needed to support these
monitoring and tracking mandates and to help secure cardholder data. The Accelerator report templates can be customized to
directly reflect specific organizational and regulatory requirements; you can access these templates using the tabs that are provided.
Other tools in the Guardium family of solutions that help you meet regulations include the following:
PCI Compliance Report Card - A detailed view of cardholder databases access security health that is used to automate the
compliance processes with continuous real-time snapshots customized for user-defined tests, weights, and assessments. The
Report Card can be generated using security assessment.
Full Audit Trail - The non-intrusive generation of a full audit trail for data usage and modifications that are required by
regulatory compliance.
Automated Scheduling - Automated scheduling of PCI work flows, audit tasks, and dissemination of information to responsible
parties across the organization.
The following table can help identify which components are taken from which location in a central management environment.
Users, Security Roles, Audit Process Definitions, and Groups are exported from the Central Manager to all managed units on a scheduled basis, as described later.
Note: Application Role Permissions can also be changed by the administrator from any managed unit. When this happens, the permissions are changed for all managed
units.
Parent topic: Central Management
1. Log in to the CLI of the Machine that you want to make the Central Manager.
2. Enter store unit type manager. This step makes the machine a Central Manager; however, it is not yet managing anything.
1. Click Setup > Tools and Views > System to open System.
2. Set the shared secret to the same string on all systems.
Registering Units
Register managed units to communicate with the Central Manager.
Unregistering a Managed Unit
When you unregister a unit, always unregister it from the Central Manager. This method is the only way that the Central Manager decrements its count of managed
units.
Synchronizing Portal User Accounts
Manage portal user synchronization by using the Central Manager.
Registering Units
Register managed units to communicate with the Central Manager.
You can register Guardium units for central management either from the Central Manager or from the unit itself. Regardless of how the registration is done, the Central
Manager and all managed units must have the same system shared secret. If the unit to be managed is already registered for central management with another manager,
unregister the unit from that central manager before you register it with the new manager. Be sure to understand exactly what happens to that unit when it is registered
and unregistered for central management.
Note: If the user that is logged in to a managed unit does not exist on the Central Manager, the session is invalidated. It remains invalidated until the unit is registered with
a Central Manager.
After registration all definitions of reports, queries, groups, policies, audits, and more are retrieved from the Central manager.
Note: If the unit is offline when you register it, the registration request persists. It is resent to the specified IP address and port at a set interval until the unit registers. A registration
request that does not succeed expires after seven days.
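The retry-and-expiry behavior described in this note can be sketched as follows. This is an illustrative model only, not Guardium code; the function name and arguments are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the persistence behavior: a pending registration
# request is resent on an interval and expires after seven days.
REQUEST_TTL = timedelta(days=7)

def should_retry(created_at: datetime, now: datetime, registered: bool) -> bool:
    """Return True if a pending registration request should be resent."""
    if registered:
        return False                       # unit registered; stop retrying
    return now - created_at < REQUEST_TTL  # expire after seven days

# Example: a six-day-old request is still retried...
created = datetime(2024, 1, 1)
assert should_retry(created, created + timedelta(days=6), registered=False)
# ...but an eight-day-old request has expired.
assert not should_retry(created, created + timedelta(days=8), registered=False)
```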
1. Click Setup > Central Management > Registration and Load Balance to open Central Management Registration.
2. For Host IP, enter the IP address of the Central Manager.
3. For Port, enter the https port for the Central Manager (usually 8443).
4. Click Register.
After you register on the managed unit, it initiates communication with the Central Manager, and nothing more needs to be done.
Note: The central management unit must be online and accessible by this unit when you register for central management. In contrast, when you register units for
management from the central management unit, you can register units that are not currently accessible.
1. Navigate to Manage > Central Management > Central Management to open Central Management.
2. Click Register New. The unit Registration page opens.
3. Enter the Unit IP and port, and click Save. The Central Management page refreshes with the new unit.
Unregistering from the managed unit does NOT unregister the unit on the Central Manager. The Central Manager still counts that unit as a managed unit for licensing
purposes and treats the unit as managed. It might not allow another unit to be registered with the Central Manager. The unregister function on the managed unit is
included for emergency use ONLY. If a manager is no longer in service, then you must unregister the unit before you can register it to another manager.
If you unregister a unit from the managed unit, it still appears on the Central Manager screen. Clicking refresh for that unit reregisters it. Clicking any other operation for
that unit produces a message that the unit is no longer managed and removes it from the manager.
On a managed unit, you can use the GUI to unregister the unit with the Central Manager. Also, you can use the CLI unregister command as described in Unregistering a
Managed Unit with the CLI.
After unregistration, all definitions of reports, queries, groups, policies, audits, and more are retrieved from the local database; the definitions that are stored on the Central
Manager are no longer accessible.
If you are unsure about how to verify, contact Guardium Support before you unregister the unit.
Unregistering a managed unit from the Central Manager screen removes it from the managed unit list and sets the unit to be a stand-alone unit.
Note: The product key of the unit is removed; unless the unit is registered to another manager, the product key must be entered manually.
To unregister a Managed Unit by using the CLI, complete the following steps.
After you have unregistered from the Managed Unit, it severs communication with the Central Manager, and nothing more needs to be done.
A full user synchronization cycle occurs on registration or by pressing Refresh from the Central management screen. In both cases, the synchronized information is sent
from the manager and loaded on the managed units immediately.
Note: Use caution when setting the schedule so that it does not interfere with other scheduled jobs, such as Import, which can fail to start.
Procedure
Click Manage > Central Management > Portal User Sync to manage portal user synchronization.
a. Click Modify Schedule to change the user synchronization task schedule by using the standard task scheduler.
b. If the task is actively scheduled, click Pause to stop further scheduled executions.
c. If the task is paused, click Resume to start running the task again (according to the defined schedule).
d. Click Run Once Now to run the synchronization task immediately.
Note: The scheduled task (or Run Once Now) covers only the collection of data and its transmission to the managed units. The managed units might not use
that data to update their user tables until up to 1 hour after it is received.
In an existing Guardium environment, refer to the procedure outlined to develop a plan for implementing central management. If you are converting an existing Guardium
unit to a Central Manager, keep in mind that a Central Manager cannot monitor network traffic. For example, inspection engines cannot be defined on a Central Manager.
1. Select a system shared secret to be used by the Central Manager and all managed units. For more information, see the system shared secret in System
Configuration.
2. Install the Central Manager unit or designate one of the existing systems as the Central Manager. In either case, use the store unit type command to set the
manager attribute for the Central Manager.
3. Any definitions from the stand-alone unit that you want to have available in the central management environment must be exported before the stand-alone unit is
registered for management. Later, those definitions are imported on the Central Manager. BEFORE exporting or importing any definitions, follow the procedure that
is outlined for each stand-alone unit that is to become a managed unit. Read through the introductory information under Export/Import Definitions.
Decide which definitions from the standalone system you want to have available after the system becomes a managed unit. Ignore any components on the
stand-alone system you do not want to have available.
Compare the security roles and groups that are defined on the stand-alone unit with those defined on the Central Manager. Under central management, a
single version of these definitions applies to all units. If a security role with the same name exists on both systems and it is used for different purposes, add a
new role on the Central Manager and assign the new role to the appropriate definitions after they are imported.
If the same group name exists on the stand-alone unit and the Central Manager but it has different members, create a new duplicate group on the stand-
alone system, taking care to select a group name that does not exist on the Central Manager. In all of the definitions to be exported, change the old group
name references to new group name references.
Note all security roles that are assigned to the definitions exported from the stand-alone system. When the definitions are imported, they are imported
WITHOUT roles, so you must reassign the roles manually.
Check the application role permissions on each system. If any security roles assigned to an application on the stand-alone unit are missing from the Central
Manager, add them to the Central Manager.
After these steps are performed, the CAS collector has the same instances and monitors the same files that it did as a stand-alone unit.
Note: The CAS data that was collected when it was a standalone is deleted. There is no collected CAS data unless a file changes.
Parent topic: Implementing Central Management
The deployment health views help you investigate system-utilization trends and quickly identify ailing or down systems. These views decrease reaction times and reduce
risks from problems in your Guardium deployment. The deployment health views are designed to work together by consolidating several different sources of information
into unique but related views.
The deployment health topology and table views show the data flow relationships between systems in your environment. These views make it easy to identify
problematic systems and investigate the underlying issues.
Access the topology view by navigating to Manage > System View > Deployment Health Topology. Access the table view by navigating to Manage > System View >
Deployment Health Table.
The deployment health dashboard provides an at-a-glance summary of issues that are found across a Guardium deployment. The dashboard is especially useful for
identifying patterns and trends in the health data before investigating individual systems where problems are identified.
Access the dashboard by navigating to Manage > System View > Deployment Health Dashboard.
The following table summarizes the types of data available to each of the deployment health views.
Data categories: Unit utilization, Correlation alerts, Self-monitoring, System requirements, Aggregation, Connectivity, and S-TAP connectivity
Attention: The deployment health views present data gathered from an entire Guardium environment and are only available from a central manager.
It is likely that your deployment is already configured to support the deployment health views. Verify the configuration steps that are described in this procedure if you
notice any of the following issues on any of the deployment health views:
Procedure
1. Configure the collection and processing of unit utilization data from the central manager. For more information, see Configuring unit utilization data processing.
2. Enable correlation alerts for inclusion on the deployment health dashboard.
a. Open Protect > Database Intrusion Protection > Alert Builder.
b. Select an existing alert to modify, or create a new alert.
c. Provide a Category for the alert. Alerts without a specified category are displayed as Uncategorized.
d. Select the View in deployment health dashboard check box to include the alert on the dashboard.
Attention: Alerts must have the Severity set to LOW, MED, or HIGH to be included on the deployment health dashboard.
For more information about defining alerts, see Building alerts.
3. Configure data import and export from the central manager. For more information, see, Aggregation.
Tip: Use the distribute configuration profiles tool to simplify the process of configuring data import and export for a Guardium deployment. For more information,
see Working with configuration profiles.
4. Configure S-TAP verification for all supported S-TAPs. For more information, see Windows Inspection engine verification and UNIX Inspection engine verification.
Results
After you complete the configuration procedures and allow the data to update, the deployment health topology and deployment health table views predominantly
show a healthy status, except for systems with preexisting health issues. The deployment health dashboard includes any preexisting unit utilization issues and begins showing
new correlation alert conditions.
The deployment health topology view is accessible from any central manager and provides an at-a-glance visualization of the entire Guardium environment that is
connected to that central manager. In addition to showing relationships between nodes in the environment, the deployment health topology view also provides health
information about all connected aggregators, collectors, and S-TAPs. Several investigation and resolution actions are available directly from the deployment health
topology view to help quickly address health issues that are discovered in your environment.
The default deployment health topology view is a data flow view that shows the data import and export relationships between aggregators and managed units. Open the
deployment health topology view at Manage > System View > Deployment Health Topology.
A sortable table view of the deployment health data is also available at Manage > System View > Deployment Health Table.
Data availability
Several factors influence the availability of system data and how that data is displayed on the deployment health topology and table views. For information about
configuring your system to use the deployment health views, see Configuring a central manager for the deployment health views.
Types of data
When correctly configured, the deployment health topology and table views display data that is collected from several different sources. The specific types of data
that are displayed depend on the unit type, as summarized in the following sections.
Connectivity
The connectivity category indicates whether systems in a Guardium environment are able to communicate.
Unit utilization
The unit utilization category provides information about how heavily Guardium systems are being loaded.
Aggregation
The aggregation category provides information about data import and export flow between Guardium systems.
Inspection engines
The inspection engines category applies to S-TAPs. Examples include S-TAP verification failed.
For more information, see Configuring the S-TAP verification schedule, and Viewing S-TAP verification results.
Click the settings icon to open the Customize Settings dialog and define the types of data shown on the deployment health topology and table views.
Data latency
Several preset and user-defined schedules determine the latency of data that is displayed on the deployment health topology view. These schedules are
summarized in the following table.
Unit utilization (central manager, aggregator, or collector): 1 - 2 hours, based on the recommended configuration. For more information, see Configuring unit utilization
data processing.
Observe the following latencies for specific environment and configuration changes:
Newly registered aggregators or collectors become available to the deployment health views within 15 minutes.
Deleting the data export schedule or data export configuration from a collector are reflected on the deployment health views within 2 hours.
Data presentation
Health status
The deployment health topology view displays three categories of health information for Guardium systems: connectivity, unit utilization, and aggregation. Metrics
under these categories are assigned one of the following health statuses: status unavailable (least severe), no health issues, low severity, medium severity, and high
severity (most severe). The overall status is determined by the most severe status of any individual metric included under any of the health categories being
displayed. Data that has been excluded using the Customize Settings dialog is not used for determining the overall status of a system.
For example, if the Restarts metric under the Unit utilization category is assigned a High severity status, but no health issues exist under another category, the
Overall status for that system is High severity. This behavior ensures that the most severe condition is always visible at-a-glance as the overall status of a system.
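The overall-status rule can be sketched as follows. This is an illustrative model only, not Guardium code; the metric names and severity labels are hypothetical stand-ins for the categories described above:

```python
# Illustrative sketch: the overall status of a system is the most severe
# status among its displayed metrics. Metrics excluded via the Customize
# Settings dialog do not contribute to the overall status.
SEVERITY_ORDER = ["status_unavailable", "no_issues", "low", "medium", "high"]

def overall_status(metrics: dict, excluded: frozenset = frozenset()) -> str:
    statuses = [s for name, s in metrics.items() if name not in excluded]
    if not statuses:
        return "status_unavailable"
    return max(statuses, key=SEVERITY_ORDER.index)

metrics = {"restarts": "high", "analyzer_queue": "no_issues", "var_disk": "low"}
assert overall_status(metrics) == "high"
# Excluding the Restarts metric changes the overall status:
assert overall_status(metrics, excluded=frozenset({"restarts"})) == "low"
```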
At the Manage > System View > Deployment Health Topology view, detailed statuses for the available health categories are only displayed when at least one low,
medium, or high severity issue is found.
At the Manage > System View > Deployment Health Table view, detailed statuses for the available health categories are always displayed.
The deployment health topology view implements a health status roll-up strategy to efficiently display health information for an entire Guardium environment.
Using this strategy, child nodes are collapsed under their parent nodes, and the child's health status is rolled-up to the parent. The rolled-up status is expressed as
a small icon attached to the parent node.
Attention: Health status roll-up is only supported for S-TAP nodes rolling-up status to their parent collector.
For example, a collector icon can show no health issues while a small red circle attached to it indicates that one or more S-TAPs that are associated with that collector have
high severity issues. Clicking the collector expands the node and reveals the associated S-TAPs and their health status: for example, four S-TAPs
associated with the collector, where two S-TAPs have high severity health issues and two S-TAPs have low severity health issues.
Only the most severe status is rolled-up from the child to the parent node when the child nodes are collapsed. In the previous example, the parent node shows a
small red circle because one or more of its children has high severity issues. However, if one or more child nodes contain low severity issues but all the other child
nodes have no health issues, the parent node would display a small yellow circle.
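The roll-up rule described above can be sketched as follows. This is an illustrative model only, not Guardium code:

```python
# Illustrative sketch: when child S-TAP nodes are collapsed, the parent
# collector shows a small icon carrying the most severe status among its
# children.
SEVERITY_ORDER = ["no_issues", "low", "medium", "high"]

def rolled_up_status(child_statuses):
    if not child_statuses:
        return None  # no children, so no roll-up icon
    return max(child_statuses, key=SEVERITY_ORDER.index)

# Two high- and two low-severity S-TAPs roll up as high (small red circle):
assert rolled_up_status(["high", "low", "high", "low"]) == "high"
# Low-severity children among healthy ones roll up as low (small yellow circle):
assert rolled_up_status(["no_issues", "low", "no_issues"]) == "low"
```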
Deployment presentation
Some deployment configurations display unexpectedly on the deployment health topology view. Several of these configuration scenarios are described in the following
sections.
Best practice: In a managed environment, it is recommended that all units operate at the same Guardium version level.
Managed units before Guardium V10.1
Managed units before Guardium V10.1 display Status unavailable under the Aggregation health section when viewed from either the Deployment Health Topology
page or the Deployment Health Table.
Unsupported S-TAPs
The deployment health topology view displays any S-TAPs that are configured for S-TAP verification or that participate in enterprise load balancing. If an S-TAP
cannot be configured for S-TAP verification and cannot participate in enterprise load balancing, that S-TAP is not displayed.
If S-TAP load balancing is configured with the participate_in_load_balancing parameter and an S-TAP is configured to balance traffic across multiple collectors, the
deployment health topology view displays that S-TAP as a child node of each collector. For example, if S-TAP 1 is load balancing with Collector A and Collector B,
both Collector A and Collector B display S-TAP 1 as a child in the deployment health topology view.
If a collector exports data to a central manager or to an aggregator that is configured as a central manager, but that collector is not designated as a managed unit of
that central management cluster, the Overall status of the collector in the deployment health topology view is shown as Health status unavailable. No additional
information about the collector is made available through the deployment health topology view unless the collector is designated as a managed unit of the central
manager.
When a collector is configured to export data to both primary and secondary hosts, only the primary host is used for the deployment health topology view.
Data availability
Several factors influence the availability and latency of health data and how that data is displayed on the deployment health dashboard. The following table summarizes
the data included on the dashboard, trigger criteria, and data latency and purge information.
Table 1. Summary of deployment health dashboard data

System resources
    Information type: System configuration, such as CPU cores, system memory, and /var disk capacity
    Trigger criteria: System does not meet minimum requirements
    Data latency: Updated whenever the user-interface server is started or restarted
    Data purge interval: Not applicable

Unit utilization
    Information type: Unit utilization data such as sniffer restarts, MySQL disk usage, and CPU load
    Trigger criteria: Value exceeds unit utilization thresholds
    Data latency: Updated within 1 - 2 hours, based on the recommended configuration. For more information, see Configuring unit utilization data processing.
    Data purge interval: Unit utilization data is purged after 60 days; sniffer buffer usage data is purged after 14 days

System self-monitoring
    Information type: MySQL disk usage and system disk usage
    Trigger criteria: Usage meets or exceeds default thresholds (75% for high severity, 90% for critical severity)
    Data latency: Updated every 5 - 10 minutes. For high severity, if the same event occurs multiple times in a 15-minute period, the timestamp is updated to reflect the most recent instance. If the same event occurs after a 15-minute interval, a new entry is created with the most recent timestamp.
    Data purge interval: High-severity issues are purged after 7 days; critical issues are never purged

Correlation alerts
    Information type: Triggered correlation alerts
    Trigger criteria: An alert threshold is reached
    Data latency: Updated based on the alert notification frequency. For more information, see Correlation Alerts.
    Data purge interval: Data is purged after 7 days
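The high-severity deduplication rule for self-monitoring data can be sketched as follows. This is an illustrative model only, not Guardium code; the event key and data structure are hypothetical:

```python
from datetime import datetime, timedelta

# Illustrative sketch of the 15-minute deduplication rule: if the same event
# recurs within 15 minutes, only its timestamp is refreshed; after 15 minutes,
# a new entry is created.
WINDOW = timedelta(minutes=15)

def record_event(entries: dict, event_key: str, ts: datetime) -> dict:
    timestamps = entries.get(event_key)
    if timestamps and ts - timestamps[-1] < WINDOW:
        timestamps[-1] = ts  # same event within the window: refresh timestamp
    else:
        entries.setdefault(event_key, []).append(ts)  # new entry
    return entries

entries = {}
t0 = datetime(2024, 1, 1, 12, 0)
record_event(entries, "mysql_disk_75", t0)
record_event(entries, "mysql_disk_75", t0 + timedelta(minutes=10))  # refreshed
assert len(entries["mysql_disk_75"]) == 1
record_event(entries, "mysql_disk_75", t0 + timedelta(minutes=40))  # new entry
assert len(entries["mysql_disk_75"]) == 2
```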
Important:
Only data from systems that are running Guardium V10.1.2 and later are included on the deployment health dashboard.
When you change the host name of a system, preexisting data that is associated with the original host name is no longer displayed on the deployment health
dashboard.
When a primary central manager transfers data to a backup central manager during a failover scenario, up to 30 minutes of data is unavailable to the deployment
health dashboard.
Data presentation
The deployment health dashboard formats and presents data through various tiles or small window-like containers. The following table summarizes the data that is
presented on each dashboard tile.
Table 2. Summary of deployment health dashboard tiles

Tile names: Resource requirements, Unit utilization issues, Unit utilization timecharts, Alerts (by category, name, severity, or system), Events, High severity, and Critical
Data sources: System resources, Unit utilization, System self-monitoring, and Correlation alerts
The following tiles are displayed by default: alerts by name, critical issues, events timeline, high severity issues, and unit utilization issues.
The Guardium systems filter allows filtering the dashboard by unit type or by groups defined at Manage > Central Management > Managed Unit Groups.
By default, the dashboard displays all available issues: low, medium, high, and critical. Use the Severity menu to filter data on the dashboard by severity. Selecting high
filters the entire dashboard to display only high-severity issues. Selecting critical filters the entire dashboard to display only critical issues. It is possible to select both high
and critical issues to filter out all lower-severity data.
Notes:
Outstanding or unresolved critical issues are displayed on the dashboard regardless of the Severity filter setting.
For the unit utilization issues tile, the dashboard Severity filter is based on the overall unit utilization severity. For more information about how unit utilization
severity is assigned, see Unit utilization issues.
The time filter determines the range of data that is displayed on the dashboard. Default settings allow time periods from 1 hour to 3 weeks, but custom time periods are
also supported. The time filter does not apply to critical issues: critical issues are always displayed, regardless of the time filter setting.
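The interaction of the severity and time filters can be sketched as follows. This is an illustrative model only, not Guardium code; the data layout is hypothetical:

```python
from datetime import datetime, timedelta

# Illustrative sketch of the dashboard filter rules: the severity and time
# filters apply to ordinary issues, but critical issues are always displayed.
def filter_issues(issues, severities, start, end):
    shown = []
    for sev, ts in issues:
        if sev == "critical":
            shown.append((sev, ts))  # critical issues ignore both filters
        elif sev in severities and start <= ts <= end:
            shown.append((sev, ts))
    return shown

now = datetime(2024, 1, 1, 12, 0)
issues = [("high", now), ("low", now), ("critical", now - timedelta(weeks=5))]
# Filtering for high-severity issues in the last hour still shows the
# five-week-old critical issue:
shown = filter_issues(issues, {"high"}, now - timedelta(hours=1), now)
assert ("critical", now - timedelta(weeks=5)) in shown
assert ("low", now) not in shown
```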
Use the Add chart menu to add tiles to the dashboard or replace default tiles that you previously removed.
Dashboard summary
The dashboard summary provides overall counts of health issues that are detected in your Guardium deployment. The Collectors with issues and Aggregators with issues
counts indicate the number of systems (collectors and aggregators) that are detected with health issues. The Critical and High counts indicate the number of issues
detected from all systems that are included on the dashboard.
Note:
The Critical and High counts are not affected by adding or removing tiles from the dashboard.
The counts on the dashboard summary bar reflect the dashboard filter settings.
Correlation alerts must be explicitly configured for inclusion on the deployment health dashboard. For information about configuring alerts for the dashboard, see
Configuring a central manager for the deployment health views.
Resource requirements
The resource requirements tile indicates whether systems in a Guardium deployment meet the minimum hardware requirements for CPU, memory, and /var disk capacity.
Any system resource that does not meet the minimum requirement is designated as a high-severity issue and displayed on both the resource requirements tile and the
high severity issues tile.
Use the Include healthy systems check box on the details view of the tile to include all available data for the systems and time frame that are indicated on the dashboard
filter bar. By including all available data, the Include healthy systems check box overrides the Severity setting of the overall dashboard filter. Systems without any detected
health issues are excluded by default.
A table that displays all met and unmet resource requirements in your Guardium deployment is also available at Manage > Central Management > System Resources.
Note:
System resource issues are not displayed in the Events timeline because they are not associated with a specific time stamp.
The details view of the unit utilization issues tile includes both a Period start time and a Timestamp:
The Period start time indicates that the CM buffer usage monitor data is rolled-up into hourly periods, for example periods starting at 13:00, 12:00, and 11:00.
The Timestamp indicates when the unit utilization levels data is added to the deployment health dashboard, either based on the unit utilization levels schedule or
by using run once now.
Period start time  Timestamp
13:00              14:40
12:00              13:40
11:00              12:40
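The hourly roll-up of period start times can be sketched as a truncation of the sample timestamp to the hour. This is an illustrative model only, not Guardium code:

```python
from datetime import datetime

# Illustrative sketch: CM buffer usage monitor data is rolled up into hourly
# periods, so a sample's period start is its timestamp truncated to the hour.
def period_start(ts: datetime) -> datetime:
    return ts.replace(minute=0, second=0, microsecond=0)

# A sample taken at 13:25 belongs to the 13:00 period:
assert period_start(datetime(2024, 1, 1, 13, 25)) == datetime(2024, 1, 1, 13, 0)
```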
Note: Systems are not included on Timechart settings > Host name menu when unit utilization data does not exist for that system in the time frame that is specified on the
dashboard filter bar.
Parent topic: Deployment health views
Related tasks:
Configuring a central manager for the deployment health views
Configuring unit utilization data processing
Scenario: Troubleshooting overloaded systems using the deployment health topology view
This topic describes using the deployment health topology view to identify and fix an overloaded system in your environment.
Procedure
1. On a central manager, navigate to Manage > System View > Deployment Health Topology.
2. Review the deployment topology and assess the overall health of systems in the environment. At a high level, distinct status icons indicate healthy systems and
systems with health issues.
3. If you notice systems with warning or error status icons, click the node to view an overlay with additional health information.
4. Use the information presented on the node overlay to begin diagnosing any health problems. For example, a collector with high or medium severity statuses for /var
disk usage, Restarts, Analyzer queue, and Logger queue indicates that the collector is overloaded.
5. After initially assessing health issues from the deployment health topology view, try to correlate your findings with additional data. For example, if you suspect that
a system is overloaded, begin monitoring the traffic for that system.
6. When you are confident that you have diagnosed the underlying health issues, take corrective actions. In the example of an overloaded system, you could establish
Enterprise load balancing or reassign S-TAPs to another collector. Typically, this set of symptoms would not occur if enterprise load balancing was already
configured and in use.
7. After taking corrective actions, the status of the node on the deployment health topology view will be updated following the next refresh of unit utilization and
central manager buffer usage monitor data. This refresh interval depends on your schedule for processing unit utilization data.
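As a rough illustration of the reasoning in steps 4 and 5, the overload check might be sketched like this (Python; the status names mirror the node overlay text above, and the looks_overloaded helper is purely illustrative, not a Guardium API):

```python
# Hypothetical severities read from a node overlay on the topology view.
statuses = {
    "/var disk usage": "high",
    "Restarts": "medium",
    "Analyzer queue": "high",
    "Logger queue": "medium",
}

def looks_overloaded(statuses: dict) -> bool:
    """A collector showing high or medium severities across disk usage,
    restarts, and the sniffer queues is likely overloaded."""
    indicators = ("/var disk usage", "Restarts", "Analyzer queue", "Logger queue")
    return all(statuses.get(i) in ("high", "medium") for i in indicators)

print(looks_overloaded(statuses))  # True
```

A result of True would be the cue to correlate with traffic monitoring before taking corrective action such as enterprise load balancing.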
Overview
Load balancing automatically allocates managed units to S-TAP agents when new S-TAPs are installed and during fail-over when a managed unit is unavailable. The load
balancing application also dynamically re-balances loaded or busy managed units by relocating S-TAP agents to less-loaded managed units.
It removes the need to manually evaluate the load of managed units before assigning those managed units to an S-TAP agent.
It eliminates the need to define fail-over managed units as part of post-installation S-TAP configuration because the load balancer dynamically manages fail-over
scenarios.
It removes the need to manually relocate S-TAP agents from loaded managed units to less loaded managed units.
Important: When using the enterprise load balancing application, the Guardium system assumes control over the allocation of managed units to S-TAP agents. This is an
automated and dynamic process: the S-TAPs change their associations based on the relative load of available managed units. Use the Load Balancer Events report to
review all load balancing activity.
Note: When the S-TAP is configured to use enterprise load balancing, F5-based load balancing cannot be used.
Load balancing is disabled by default on Guardium systems. For information about enabling S-TAPs to participate in load balancing, see Windows General parameters and
UNIX General Parameters.
How it works
The enterprise load balancing application works by collecting and maintaining up-to-date load information from all its managed units.
It uses the load information from managed units to create a load map. This load map provides the data that directs load balancing and managed unit allocation activities.
Use the GuardAPI command grdapi get_load_balancer_load_map to view the current load map at any time.
Load information is only collected from managed units that are online and configured with the parameter LOAD_BALANCER_ENABLED=1. Setting
LOAD_BALANCER_ENABLED=0 disables load balancing and prevents that managed unit from being dynamically allocated to S-TAP agents during load balancing activities.
Load collection errors from specific managed units are recorded in the Load Balancer Events report but do not interfere with the overall load collection and load balancing
processes. However, failure to collect load information from a managed unit excludes that managed unit from participation in load balancing processes.
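A minimal sketch of this collection behavior (Python; the dictionary-based managed-unit records and collect_load callables are assumptions for illustration, not actual Guardium interfaces):

```python
def build_load_map(managed_units, events):
    """Collect load only from online units with LOAD_BALANCER_ENABLED=1.
    Collection errors are recorded as events (as in the Load Balancer
    Events report) but do not stop the overall collection; a unit that
    fails collection is simply left out of the load map."""
    load_map = {}
    for mu in managed_units:
        if not mu["online"] or mu["LOAD_BALANCER_ENABLED"] != 1:
            continue  # disabled or offline units never participate
        try:
            load_map[mu["host"]] = mu["collect_load"]()
        except RuntimeError as err:
            events.append((mu["host"], str(err)))
    return load_map

def failing_collection():
    raise RuntimeError("connection timed out")

events = []
units = [
    {"host": "mu1", "online": True, "LOAD_BALANCER_ENABLED": 1,
     "collect_load": lambda: {"max_queue_usage": 0.2}},
    {"host": "mu2", "online": True, "LOAD_BALANCER_ENABLED": 0,
     "collect_load": lambda: {}},
    {"host": "mu3", "online": True, "LOAD_BALANCER_ENABLED": 1,
     "collect_load": failing_collection},
]
load_map = build_load_map(units, events)
print(sorted(load_map))  # ['mu1']
print(events)            # [('mu3', 'connection timed out')]
```

The real load map on an appliance is visible at any time with grdapi get_load_balancer_load_map.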
Procedure
1. On a Central Manager, navigate to Manage > Central Management > Enterprise Load Balancer > Associate S-TAPs and Managed Units.
2. If an S-TAP group has not already been created or a new one is required, create a new S-TAP group.
a. Click the icon to open the Create New S-TAP Group dialog.
b. Provide a name in the Group Name field. For example, North_American_S-TAPs.
Recommendation: To ensure compatibility with other Guardium components, do not use spaces or special characters in group names.
c. Add group members by selecting from existing host names or by adding new members using the Group Member field. S-TAPs indicated with an icon are
included in the new S-TAP group.
d. Click Create New Group to create the S-TAP group.
3. Associate the S-TAP group with a group of managed units.
a. Select the S-TAP group you want to associate. For example, North_American_S-TAPS.
b. Click Associate Managed Units to open the Associate Managed Unit Group dialog.
c. If necessary, create a new group of managed units.
i. Navigate to Manage > Central Management > Managed Unit Groups.
ii. Click the icon to open the Create New Managed Unit Group dialog.
iii. Provide a name in the Group Name field. For example, North_American_MUs.
Recommendation: To ensure compatibility with other Guardium components, do not use spaces or special characters in group names.
iv. Add group members by selecting from existing Managed Unit IP addresses.
v. Click Create New Group to create the new group of managed units.
d. Select the group(s) of managed units to associate with the S-TAP group. For example, North_American_MUs.
e. Click Apply.
4. Click Save to complete the association between an S-TAP group and a group of managed units.
5. (Optional) Associate the S-TAP group with a failover group of managed units.
a. Select the S-TAP group that you want to associate; it must already be associated with a managed unit group. For example, North_American_S-TAPS.
b. Click Associate Failover Groups to open the Associate Failover Group dialog.
c. If necessary, create a new group of managed units as described above. Regular managed unit groups and failover groups are identical until they are
designated during association with an S-TAP group.
d. Select the group(s) of managed units to associate with the S-TAP group. For example, North_American_MUs_failover.
e. Click Apply.
6. Click Save to complete the association between an S-TAP group and a group of managed units.
Procedure
1. To view the current load map as a report in the Guardium UI, navigate to Manage > Reports > Unit Utilization > Load Balancer.
2. It is also possible to view the current load map using the Guardium API. Issue the following GuardAPI command: grdapi get_load_balancer_load_map.
ID=0
Procedure
To view the report, navigate to Manage > Reports > Activity Monitoring > Enterprise Load Balancer Events.
Parent topic: Enterprise load balancing
Each parameter is listed with its default value and valid values, followed by its description.

LOAD_BALANCER_ENABLED (0 or 1)
If disabled on the managed unit, the load balancer (running on the central manager) does not collect load information from that managed unit, and none of the S-TAPs connected to that managed unit participate in load balancing.
On the CM, enabling this parameter (after it was disabled) triggers an immediate full load collection from all the managed units enabled for load balancing.
When this parameter is enabled (set to 1), the collection interval is proportional to the number of managed units (1 hour per 10 connected managed units). Changes to this parameter trigger an immediate recalculation of the next full load collection time.

USE_APPLIANCE_HW_PROFILE_FACTOR - default 1 (0 or 1)
The load balancer can use managed units' hardware profile indicators (specified by the parameter APPLIANCE_HW_PROFILE_INDICATORS) when evaluating vacant managed units for relocating S-TAPs.

MAX_RELOCATIONS_BETWEEN_FULL_LOAD_COLLECTIONS - default 3 (≥ -1)
Defines the maximum number of S-TAP relocations (between managed units) allowed after a full load collection.

ALLOW_POLICY_MISMATCH_BETWEEN_APPLIANCES - default 1 (0 or 1)
The load balancer can take into account managed units' installed policies.
0: does not allow S-TAP relocation to an MU that has a different policy.
1: allows S-TAP relocation to an MU that has a different policy.

TIME_TO_IGNORE_STAP_CONNECTION_RELATED_LOAD - default 10 (≥5)
When collecting the load statistics for the S-TAPs of each managed unit, data that represents the initial S-TAP connection to the managed unit should be avoided, because it can indicate traffic spikes that create a false positive for the load balancer. This parameter tells the load balancer to ignore S-TAP load for the specified number of minutes after the S-TAP has connected to the managed unit.

ENABLE_RELOCATION - default 1 (0 or 1)
Relocation of resources (rebalancing) is a process that the load balancer executes after full load collection. Relocation here means transferring S-TAPs from loaded managed units to vacant managed units.

LOADED_SNIFFER_QUEUE_USAGE_THRESHOLD - default 0.6 (0.1 to 1 in increments of 0.1)
A managed unit is considered loaded if its sniffer has at least one queue whose size reaches the LOADED_SNIFFER_QUEUE_USAGE_THRESHOLD.

DEFAULT_STAP_MAX_QUEUE_USAGE - default 0.15 (0.10 to 1 in increments of 0.10)
When an S-TAP is initially assigned to a managed unit, the load balancer does not have load information about it. The value of this parameter defines the temporary sniffer max used queue until the real load is collected from the managed unit (after the interval defined by the TIME_TO_IGNORE_STAP_CONNECTION_RELATED_LOAD parameter).

DEFAULT_STAP_MAX_CONTRIBUTION_TO_MAX_QUEUE_USAGE - default 0.1 (0.1 to 1 in increments of 0.1)
When an S-TAP is initially assigned to a managed unit, the load balancer does not have load information about it. The value of this parameter defines the temporary max S-TAP load contribution to the temporary max used queue until the real load is collected from the managed unit (after the interval defined by the TIME_TO_IGNORE_STAP_CONNECTION_RELATED_LOAD parameter).

REBALANCE_IF_MU_CLASSIFIED_AS_LOADED_N_TIMES_IN_M_HOURS - default 1:168 (≥0 : ≥0)
Loaded managed units can be rebalanced only if they have been classified as loaded a specified number of times over a specified period of hours. For example, a value of 1:168 requires that a managed unit be classified as loaded at least 1 time during a period of 168 hours.

APPLIANCE_HW_PROFILE_INDICATORS - default NUM_PROCESSORS:CPU_SPEED:CPU_CACHE:CPU_CORES:MEMORY_SIZE (column names from the table APPLIANCE_RESOURCE_INFO)
The load balancer can take into account managed units' hardware profile indicators. A colon-delimited list of indicators (column names from the table APPLIANCE_RESOURCE_INFO) is used by the load balancer to evaluate the hardware profile. This parameter should not be changed under normal circumstances.

MAX_CONCURRENT_LOAD_COLLECTIONS - default 10 (≥1)
The maximum number of concurrent load collection processes the load balancer runs at any given point in time; that is, the number of concurrent, non-persistent, remote SQL connections from the Central Manager to the managed units.

MAX_RELOCATIONS_PER_MU_BETWEEN_FULL_LOAD_COLLECTIONS - default 3 (≥ -1)
The maximum number of S-TAP relocations allowed from a specific managed unit during any one period of full load. For example, if you have 2 loaded S-TAPs and the value is set to 1, then only one of these S-TAPs can be moved from a specific MU. If the value is set to 0, no S-TAPs are relocated.

ENABLE_FAILOVER_GROUPS_REBALANCE - default 0 (0 or 1)
Controls automatic relocation of S-TAPs from the failover group back to the main MU group once an MU is available again in the main MU group.
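Several of these parameters interact during rebalancing. The following sketch (Python; illustrative helper functions, using the default values documented above) shows how a managed unit might be classified as loaded and how the relocation caps and the policy-mismatch setting gate a relocation:

```python
# Defaults from the parameter table above.
LOADED_SNIFFER_QUEUE_USAGE_THRESHOLD = 0.6
MAX_RELOCATIONS_BETWEEN_FULL_LOAD_COLLECTIONS = 3
MAX_RELOCATIONS_PER_MU_BETWEEN_FULL_LOAD_COLLECTIONS = 3
ALLOW_POLICY_MISMATCH_BETWEEN_APPLIANCES = 1

def is_loaded(sniffer_queue_usages):
    """A managed unit is loaded if any sniffer queue reaches the threshold."""
    return any(u >= LOADED_SNIFFER_QUEUE_USAGE_THRESHOLD
               for u in sniffer_queue_usages)

def may_relocate(total_moves, moves_from_mu, src_policy, dst_policy):
    """An S-TAP relocation must respect the global cap, the per-MU cap,
    and the policy-mismatch setting."""
    if total_moves >= MAX_RELOCATIONS_BETWEEN_FULL_LOAD_COLLECTIONS:
        return False
    if moves_from_mu >= MAX_RELOCATIONS_PER_MU_BETWEEN_FULL_LOAD_COLLECTIONS:
        return False
    if src_policy != dst_policy and not ALLOW_POLICY_MISMATCH_BETWEEN_APPLIANCES:
        return False
    return True

print(is_loaded([0.2, 0.7]))           # True: one queue is at or above 0.6
print(may_relocate(3, 0, "P1", "P1"))  # False: global cap reached
print(may_relocate(0, 0, "P1", "P2"))  # True: mismatch allowed by default
```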
Deployment inventory
The inventory view provides a centralized view of all database servers and any installed S-TAPs or GIM clients.
Procedure
1. Navigate to Manage > Central Management > Managed Unit Groups.
2. From the Managed Unit Groups page, click to create a new managed unit group or to edit an existing group.
3. From the Create new managed unit group dialog, type a name for the group in the Group name field.
Recommendation: To ensure compatibility with other Guardium components, do not use spaces or special characters in group names.
4. Use the icons to select managed units to include in the group.
5. When you have finished selecting managed units to include in the group, click the Save button. The new managed unit group is saved and appears on the
Managed Unit Groups page.
6. Optionally, from the Managed Unit Groups page, click the icon to expand a group and view its managed units.
Results
Once defined, a managed unit group is available from the Manage > Central Management > Central Management page, the Manage > Central Management > Distribute
Configuration Profiles page, as a managed unit group within the Manage > Central Management > Enterprise Load Balance > Associate S-TAPs and Managed Units tool,
and in other locations where managed unit groups are used.
Parent topic: Using Central Management Functions
1. Log in to the Guardium® GUI of the unit to be managed as the admin user.
2. Click Reports > Guardium Operational Reports > Managed Units to open Managed Units.
Select all check box Mark this box in the shaded area of column one to select all managed units.
Check box Mark this box to select the unit for wanted operation.
Refresh unit information Refreshes all information that is displayed in the expanded view of that unit and issues new requests to that unit. This
action also causes a full user synchronization cycle.
Reboot unit Reboots the unit at the operating system level. By default, the Guardium portal is started at startup.
Restart unit portal Restarts the Guardium application portal on the managed unit. You can then log in to that unit to do Guardium tasks
(defining or removing inspection engines, for example).
View unit SNMP attributes Opens the SNMP Viewer pane in a separate window. Clicking the refresh icon in the SNMP Viewer pane refreshes the
data in the window.
View unit syslog Opens the Syslog Viewer in a separate window, displaying the last 64 KB of syslog messages. Clicking the Refresh icon
in the Syslog Viewer pane refreshes the data in the window.
Shortcut to unit portal Opens the Guardium login page for the managed unit, in a separate browser window.
Unit Name The host name of the managed unit. If you hold the mouse pointer over the unit name, its IP address displays as a
tooltip. If the host name changes on the unit, the Central Manager no longer sees that unit when automatically
refreshing the Online status. If you suspect the host name was changed, use Refresh on the toolbar to obtain the
changed host name and update the displayed Online status and other information for that unit.
Online Indicates whether the unit is online. If the green indicator is lit, the unit is online; if the red indicator is lit, the unit is
offline. The Central Manager refreshes this status at the refresh interval that is specified in the central management
configuration (1 minute by default). If an error occurred connecting to a unit, the error description can be viewed as a
tooltip. Hover the mouse indicator over that unit's record in the management table.
Inspection Engines Click the icon to expand the list of inspection engines; click the icon to hide the list of inspection engines.
From here, depending on status, you might stop or start the inspection engine.
The information that is displayed for each inspection engine is as follows (this information is fetched from the
managed unit when Refresh is pressed, not on every ping):
Protocol - The protocol that is monitored by the inspection engine: Oracle, MSSQL, Sybase, Informix®, or DB2®
Exclude From IP - Indicates if the list of from-IP addresses is to be excluded (not examined).
From-IP/Mask - A list of the IP addresses and subnet masks of the clients whose database traffic to the To-IP/Mask
addresses the inspection engine monitors.
Ports - The ports on which database clients and servers communicate; can be a single port, a list of ports, or a range of
ports
To-IP/Mask - A list of IP addresses and subnet masks of servers whose traffic from the corresponding client machine
(From-IP/Mask) is monitored.
Installed Security Policy The name of the security policy that is installed on the managed unit. This field is updated on every ping.
Last Ping Time The last time that the unit was pinged by the Central Manager to determine the managed unit's online/offline status.
Selected Units
Group Setup Group Setup opens a new window that allows the user to maintain groups; creating new groups, removing groups, and
associating managed units with groups.
Restarting
Restart Inspection Engines Restart the inspection engines of the selected units.
Distribution
Install Policy The policy name is a link that opens a new window with the policy's detail.
Patch Distribution Patch Distribution opens a new screen, displays an available patch list with dependencies, and allows selecting a
patch and installing it on all selected units. A patch can be scheduled up to one year in the future.
Distribute Uploaded JAR files Click Harden > Vulnerability Assessment > Customer Uploads. Then, enter the name of the file to be uploaded,
or click Browse to locate and select the file. Upload one driver at a time.
Click Upload. You are notified when the operation completes, and the file that is uploaded is displayed. This action
brings the uploaded file to the Central Manager.
Select a check box of the managed unit or units where these JAR files are to be distributed. Click Distribute Uploaded
JAR files.
Distribute Patch Backup Settings This setting distributes the following to selected units:
Distribute Authentication Config Select the managed units that receive the distribution of the Central Management authentication.
Click Distribute Authentication Config to distribute the authentication configuration to all managed units selected.
Distribute Configurations The following configurations are distributed to sync parameters between the Central Manager and the managed units:
Some of these configurations do not take effect until the portal is restarted (Anomaly Detection, Session Inference).
Other processes, such as the Alerter, need to be restarted, either directly through the admin portal of the managed
unit, or by rebooting all relevant managed units from the manager.
The Distribute Configurations action does not restart the managed units. There is a separate icon for each managed unit to be
restarted.
After distribution, a message displays stating that the managed units must be restarted for all the configurations to take
effect.
Each parameter that has scheduling has a second check box. When this second box is checked, this parameter's
scheduling is distributed.
Alerter
Active on Startup check box. Each time the appliance restarts, the Alerter is activated automatically.
Distributing this configuration from the Central Manager to managed units requires a reboot of the managed units to take full effect.
The Alerter must be restarted manually on the managed units through the admin portal (Admin Console/Alerter). Because this
restart cannot be done from the Central Manager, restart the managed units from the Admin Console to get the same
effect.
Anomaly Detection
Active on Startup check box. Each time the appliance restarts, Anomaly Detection is activated automatically.
Distributing this configuration from the Central Manager to managed units requires a portal restart on the managed units to take full
effect.
Session Inference
Active On Startup check box to start Session Inference on startup of the Guardium appliance.
Distributing this configuration from the Central Manager to managed units requires a portal restart on the managed units to take full
effect.
Distributing this configuration from the Central Manager to managed units takes effect without a portal restart on the managed
units.
Global profile
Distributing this configuration from the Central Manager to managed units takes effect without a portal restart on the managed
units (though using a differently named template applies only when the policy is installed).
Register New Opens the Unit Registration pane to register a new unit for management.
Patch Installation Status The Patch Installation Status screen displays, for each unit, failed installations and discrepancies, for example, a
patch that is installed on only some of the units, regardless of whether it failed on the other units or was never installed there.
The central manager can assign correlation alerts to individual managed units or managed unit groups. You can either assign an alert to a unit or group, or exclude
it from a unit or group. You must also specify whether to run it on the Central Manager itself. The groups used are managed unit groups, the same types of groups that are
used on the Central Manager page.
In the managed environment, on the Central Manager, the alert builder has a new section for "Managed Units". In this section, you specify either single units or groups of
managed units to include in or exclude from an alert. You also specify with a check box whether the Central Manager itself is included or excluded. The default
behavior matches the existing behavior: alerts run everywhere. If you specify that alerts should not run everywhere, verify that the alerts run where you specify. The UI
includes four options for including or excluding single units or groups, and dialogs for selecting from the list of managed unit groups and, if desired, creating new
managed unit groups or editing existing ones.
On the individual managed units, the alert builder does not show any section on managed units, only the Central Manager can assign alerts to units and groups.
If there are entries in the alert table on a given managed unit, a system-generated group is automatically created to exclude that unit from each alert it is excluded
from. This occurs when the alerts are started on that managed unit.
The alert panes on the anomaly detection page under admin console were used to enable/disable alerts locally. For this feature, the alert panes appear only on the Central
Manager.
On the managed units, there is now a table showing active alerts and whether they are enabled.
Procedure
1. Click Setup > Tools and Views > Policy Installation to open Currently Installed Policies and the Policy Installer.
2. From the Policy list, select the policy that you want to install.
3. From the list, select an installation action. The available installation actions include the following items:
a. Install and Override - deletes all installed policies and installs the selected policy instead.
b. Install last - installs the selected policy as the last one in the sequence, after all currently installed policies and with the lowest
priority.
c. Install first - installs the selected policy as the first one in the sequence, before all currently installed policies.
After you select an installation action, you are informed of the success (or failure) of each policy installation. If a selected
unit is not available (it might be offline or a link might be down), the Central Manager informs you of that fact. It continues attempting to install the new policy for a
maximum of seven days (on the condition that the unit remains registered for central management).
Note: If you install a policy from the Central Manager, the selection of Run Once Now (and scheduler) updates existing groups within the installed policies.
To load changes to rules, including addition and subtraction of groups, you must either:
When you install a patch, a date and time request can be specified to indicate when the patch is installed. If no date and time is entered or if now is entered, the
installation request time is immediate.
Note: A patch that is installed successfully can be installed again. This fact is important for batched patches. A warning informs you if the patch is already installed.
Log in to the Guardium® GUI of the unit to be managed as the admin user:
Procedure
1. Click Manage > Central Management > Central Management.
2. Select the units that need the patch, and click Patch Distribution.
3. From the Patch Distribution screen, select the patch you want to distribute and click Install Patch Now or Schedule Patch.
4. To see the status of the installation, click Manage > Central Management > Central Management, then select the units and click Patch Installation Status. The
Patch Installation Status screen displays, for each unit, failed installations and discrepancies, for example, a patch that is installed on only some of the units,
regardless of whether it failed on the other units or was never installed there. To remove patches from the Patch Distribution screen, click the delete icon (red x) next to the patch. This
does not delete the patch from the patch distribution directory on the appliance, but removes it from the display.
Allow communication over port 8447 between the central manager and its managed units.
The central manager and the managed units that receive configurations must be at Guardium V10.1 or later.
Configuration profiles are defined independently of the local settings on the central manager. This allows you to quickly define configuration settings and deploy those
settings to managed unit groups without disrupting the configuration of your central manager or configuring each managed unit individually.
This task describes how to create, distribute, and save a configuration profile.
Procedure
1. Navigate to Manage > Central Management > Distribute Configuration Profiles.
4. From the What to distribute panel, click to define a new configuration, or select an existing configuration and click to edit.
a. From the Configuration type menu, select a configuration type to add to the profile.
b. Specify configuration and scheduling details for the selected configuration type. For more information about configuration settings, see the product
documentation for the configuration type you are defining.
Restriction: Distributing data export configuration settings to an aggregator will not distribute any purge settings. The existing purge settings on an
aggregator will be retained. Purge settings, including retention periods, will be distributed to and replace existing purge settings on collectors.
c. Click Save to finish editing the configuration details.
Continue adding or editing configurations as needed. Click Next to continue.
5. From the Where to distribute panel, select groups from the Managed unit groups table and use the icon to add the groups to the Selected groups table. Click
Next to continue.
Note: click to create a new managed unit group or to edit an existing group. Managed unit groups can also be defined and edited at Manage > Central
Management > Managed Unit Groups.
6. From the Distribute configurations panel, click Run Now to distribute the configuration profile to the selected groups. When the status indicates that distribution is
complete, click Next to continue.
7. From the Review results panel, review a summary of the distribution process and its results.
Optional: click Run Log to view a detailed log of the distribution process.
8. Click Save to save the configuration profile for reuse.
What to do next
If you need to move configuration profiles between central managers, use Manage > Data Management > Definitions Export and Manage > Data Management > Definitions
Import and select Configuration profile from the Type menu.
Parent topic: Using Central Management Functions
Related concepts:
Aggregation
Alerter Configuration
Export/Import Definitions
IP to Hostname Aliasing
Scheduling
Distribute Configuration
Configurations and their schedules can be distributed, either all at once or individually, from the Central Manager to the managed units.
Procedure
1. Select the managed units that receive the configurations.
2. Click Distribute Configurations to display the Distribute Configurations window.
3. Check the appropriate boxes for those Configurations that you would like distributed. Use the check box in the header to select all configurations.
4. Check the appropriate boxes for those Schedules that you would like distributed. Use the check box in the header to select all schedules. If a configuration is not
scheduled, there is no check box for it and 'n/a' is displayed instead.
5. Click Distribute to distribute the configurations and schedules.
6. Optional: Click Cancel to abort the distribution.
Results
ACTIVATE_ALIASES
CUSTOM_DB_MAX_SIZE
CHECK_CONCURRENT_LOGIN
HTML_BOTTOM_RIGHT
HTML_BOTTOM_LEFT
DISPLAY_LOGIN_MESSAGE
LOGIN_MESSAGE
CSV_DELIMETER
FILTERING_ENABLED
INCLUDE_CHILDREN_ON_FILTER
SHOW_ALL_RECORDS
ACCORDION_DISABLED
SCHEDULER_RESTART_INTERVAL
SCHEDULER_RESTART_WAIT_SHUTDOWN
ESCALATE_TO_ALL
MESSAGE_TEMPLATE
Procedure
1. Ensure that authentication is configured (Configure Authentication) on both the central manager and the managed unit. For example, if LDAP authentication is
used, ensure that LDAP is configured on both the central manager and the managed unit.
2. Select the managed units to receive the distribution of the central management authentication.
3. Click Distribute Authentication Config to distribute the authentication configuration to all managed units selected.
1. Backup Central Manager - the Make Primary CM link becomes available after the Primary Central Manager loses connection.
3. Users and roles are in the sync backup and do not rely on Portal User Sync.
5. A GuardAPI function, make_primary_cm, has been added to allow switching to the Central Manager from the CLI.
6. Data from Audit Process Builder processes is retained after switching from the Primary Central Manager to the Backup Central Manager.
7. The Central Management backup includes all the definitions (reports, queries, alerts, policies, audit processes, and so on), users, and roles, as it did before.
8. It includes the schedules for enterprise reports, distributed reports, and LDAP.
9. It includes schedules for all audit processes, and schedules and settings for data management processes such as archive, export, backup, and import.
11. Users' GUI customizations, custom classes, and uploaded JDBC drivers are included.
Note: Data (collected data, audit results, and custom table data) is not included.
Note:
To list status of cm_sync_file(s) on Backup CM, use the CLI command, show local_cm_sync_file. To list the value of Backup CM IP for each managed unit, use the
GuardAPI command, grdapi show_backup_cm_ip (this API command can only run on a Central Manager).
Note: Failover with Central Manager load balancing - After failover, if the new Managed Units connect and then disconnect right away, the correct DB_USER will not be sent
until the failover message is received.
Perform these steps on your development or secondary servers and test them. If successful, perform these steps on your primary or live Guardium servers.
2. Install patches with the following CLI command: store system patch install scp
3. This CLI command copies the files to your Guardium server and gives you the ability to install them.
4. Watch these patches being installed with the following CLI command: show system patch install
2. Select the Setup > Tools and Views and then choose Central Manager.
3. Click check boxes for the Backup CM managed unit ONLY on the Central Manager.
4. Click Patch Distribution and install all of the patches that you just installed onto the Primary CM.
3. Wait approximately 15 minutes to be sure the patch is installed on all managed servers.
4. To verify, log in to the CLI on the Backup CM and run the CLI command show system patch install from the Backup CM server.
2. Verify that all patches have been installed before going to the next procedure.
After all Patches have been installed on the CM and managed servers
2. Select Setup > Tools and Views and then choose Central Manager. Click Designate Backup CM.
3. Select Backup CM server from the returned list of eligible Backup CM candidates.
4. Click Apply.
5. Wait approximately two minutes for the Backup CM to sync and for the new Backup CM file to be created and copied to the Backup CM.
6. Wait for two complete rounds of backups to complete (approximately 1 hour), producing two Backup CM sync files that are copied to the Backup CM and can be
viewed from the Guardium Monitor tab - Aggregation Archive Log Report.
7. Select Guardium Monitor and select Aggregation/Archive Log Report to view the progress of the creation of the Backup CM sync file.
8. Verify the Activity Backup has started and the cm_sync_file.tgz file has been created from the Aggregation/Archive Log Report.
9. When complete:
c. Option: Verify that the patches have been installed on all other managed units.
d. Two Backup CM Sync files have been completed (see Aggregation/Archive Log file under Guardium Monitor Tab).
e. The following steps outline the process to convert the now Primary CM and its managed nodes to the Backup CM.
Note:
IMPORTANT: Wait approximately one hour to be sure that at least TWO of the Backup CM sync files supporting the Backup CM have completed.
The backup schedule for Backup CM sync files runs approximately every 30 minutes.
The process runs on the CM to create a backup CM file and copy that file to the directory on the Backup CM.
Start the Backup CM Process after two sync file process have completed
If you have no access to shut down the Primary CM, then go directly to the Backup CM and log in as admin (select Setup > Tools and Views, then choose Central
Management) and click Make Primary CM. Skip to the section "Steps to start the Backup CM configuration to become the Primary CM" in this document.
1. Wait approximately five minutes and log in again as admin in the GUI of the Backup CM.
2. Once the Primary CM has shut down completely, you can continue to the next step.
Note:
If you are logged into the Primary CM and it goes down, you get a message indicating that the connection has timed out.
1. When the Primary Server goes down, you will get a message on the Backup CM: "Unable to connect to Remote Manager, consider switching to (the name
of the backup CM)".
a. Log in as admin.
c. Click Make Primary CM. Do not click the Make Primary CM link more than once, and stay on this screen without selecting anything else while this
process runs. A log file is created that you can view to see the progress and completion of this process. Be patient, as this process will take a while to
complete. As a safeguard, clicking this button more than once will not change the current running process.
d. Within seconds you should get the message "Are you sure you want to make this unit the primary CM?" Click OK.
e. Within a few more seconds you will get a message stating "This may take a few minutes". The time it takes for the Backup CM to become the
primary CM depends on the amount of data backed up from the Backup CM sync file and the number of managed nodes that switch to the Backup CM,
which will become the Primary CM. Click OK.
As soon as you click OK, a log file named load_secondary_cm_sync_file.log is created that lets you view the progress of the switch through the
completion of the Backup CM switch process. This file can be viewed from your GUI. The following steps indicate how to view this log file.
f. The last message will take a while to appear on the screen. It is the last message before the Backup CM switch has completed: "GUI will restart now.
Try to login again in a few minutes and the Backup CM will now become the Primary CM". Click OK.
Wait a few minutes for the Backup CM to become Primary and for all the managed nodes to complete switching over to the new Primary CM.
While the CM Backup Process is running – viewing the progress log file
From the Backup CM while the Make Primary CM process is running, you can do the following to view the progress of the Backup CM becoming the Primary CM.
Prerequisite: You will need the IP of the server you are connected to in order to view the log files.
2. From the CLI, run fileserver <IP> 3600, where <IP> is your IP address. For example: fileserver 9.70.32.122 3600
3. From the GUI, enter the URL that is displayed on the CLI screen after entering the command, for example:
http://joe.server.guardium.com (the server name will be the Backup CM server).
The fileserver window opens in the UI to select a file. Select Sqlguard logs.
4. Select the file: load_secondary_cm_sync_file.log. (The file will display in a list of files from Step #3.) This will allow you to view the progress of the Backup
CM becoming the Primary CM.
CM Backup Process is complete when you see this line in the load_secondary_cm_sync_file.log
5. Wait approximately 10 minutes for all the Managed units to become available to the New Primary CM.
After the Backup CM becomes the Primary and all Managed nodes are now managed by the Backup CM server
You can now bring up the old CM server. Once it is up and running, perform the following steps to add it as the Backup CM server.
3. Delete the manager unit type by entering delete unit type manager.
5. VERY IMPORTANT: Wait approximately five minutes for the GUI to completely restart, even after the deleted unit type displays a success message and the
GUI restart message.
6. After five minutes, log into the New Primary CM to register Old CM as a managed unit.
12. Click Save. (IMPORTANT: Be patient, do not click this button twice).
19. Refresh Central Management screen to see the New Unit type Backup CM defined.
The following data is missing after the Backup CM process is completed. This applies only to the first switch from the Primary to the Secondary CM.
Missing Data:
4. VA Results
5. Classifier Results
6. DSD Results
7. CAS results
8. Datamart Data
9. Collected Data
The reports will be populated again once you run them on the New Primary CM. If you switch back to the old Primary CM, the data for these reports
will be presented again.
Investigation Center
Investigation Center is an extension of the Aggregation Servers. Investigation Users (once defined) can restore data and results of selected historic dates and perform
forensic investigation. Once the days (dates) are restored, the investigation users can define and view reports using the standard Guardium® UI, only in the scope of the
investigated dates.
Each Guardium appliance maintains a Catalog of all the data and results archived. The Catalog contains information about the archive, its location and credentials to
access them. The Catalog is exported from the collectors and merged into a complete Catalog on the Aggregation Server as part of the aggregation process. With the
Catalog in place, investigation users can now select the desired dates for restoration and these dates will automatically be uploaded to the Investigation Center and
merged into that investigation user’s view. In addition to merging collectors’ Catalogs through the Aggregation Server, it is also possible to Export and Import
Catalogs from Setup > Tools and Views.
An investigation user for the most part utilizes the same query and report definitions as any other user would. The biggest difference is that the investigation user sees
only data selected for his investigation database (multiple investigators can be configured to share an INV database). Selected data can be restored from archive or
viewed from the current database in the case of data that was not purged yet. An investigation user can also restore archived audit process results and view them.
Caution: Role inv is a special role that causes the user to be connected to a separate, investigation-only internal database. It should be combined with the role user,
and in general it is incompatible with all other roles.
Note: To correctly configure an investigation user, the user's Last Name must be set to the name of one of the three investigation databases, INV_1, INV_2, or INV_3
(case-sensitive).
When creating an investigation user, it is suggested that the user's name indicate which investigation database will be used. For instance, if a user will be using the
INV_1 database, the user's name could be john1 or inv1.
Note: The Run an Ad-Hoc Audit Process button is available on all report screens for all users except investigation (INV) user.
If the user is INV, then the audit process definition menu screen will permit the following:
Only Investigation users and/or specific email addresses are allowed as receivers (no regular users, no groups, no roles other than INV are permitted as receivers).
The Events and Additional Columns button within a saved Report Audit Task is always disabled. No API automation can be specified.
No schedule can be specified. An audit process on INV data can be run only manually, using the Run Now button.
Only audit tasks of type Report are allowed.
Active is disabled, Keep Days and Keep Runs fields are disabled.
If the user is not INV, the audit process finder will not display any audit process owned by an investigation user (regardless of the roles assigned).
A comment is attached to the results specifying the dates and source hosts of the data mounted on the Investigation database at execution time.
The results can be viewed either from the Audit Process Builder or from the results navigation list.
Results of audits run on Investigation center cannot be archived and the results are discarded when investigation data is discarded.
Investigation Context
Guardium's Investigation Center supports one to three concurrent investigation periods, dubbed INV_1, INV_2, and INV_3; each can hold separate historic data and
provides a means of forensic investigation for that period. When creating an investigation user, the user's last name must be either INV_1, INV_2, or INV_3 to associate
that user with one of the investigation databases. When logged into the Investigation Center (using one of the investigation users), a label specifies the selected
investigation period.
GUI
A user with the investigation role will see two additional tabs that are particular to the Investigation Center.
1. Click Manage > Data Management > Data Restore to open the Data Restore Search Criteria.
2. C
3. Click Data Restore to open the Restored Data panel. If a prior restore was performed, this panel will display the currently mounted data periods being used. At this
point, you may click Discard Data to un-mount all previously mounted data periods.
4. Click Re-Select Investigation Period to open the Data Restore Search Criteria panel.
5. Enter the start date in the From: box for the beginning of the time period you wish to search.
6. Enter the end date in the To: box for the end of the time period you wish to search.
7. Optionally, enter a host name to filter the result set by host name.
8. Click Search to view the result set - this will search the catalog for all archives matching the search criteria.
9. From the result set produced, check the Select box(es) of those periods you wish to restore. You may also click Select All or Unselect All to speed the selection
process.
10. Click Restore to restore the selected periods. Depending on the number of periods to restore, and whether the datasets are local to the system, the restore process
could take a long time.
11. You can monitor the progress of the restore process in the View Restore Log panel.
Note: Data of any day restored to the Investigation Center that falls within the merge period is also merged into the Guardium application database and is visible to non-inv
users.
After logging into the Guardium interface as a user with the inv role:
Guardium Administration
Guardium® administrators perform various administration and maintenance tasks.
Certificates
Check certificates periodically to avoid loss of function. Use CLI commands to obtain and install new certificates.
Unit Utilization Level
Use unit utilization reports to identify under- and over-utilized systems in your Guardium environment.
Customer Uploads
Database Activity Monitor Content Subscription (previously known as Database Protection Subscription Service) supports the maintenance of predefined
assessment tests, SQL based tests, CVEs, APARs, and groups such as database versions and patches.
Services Status panel
The Services Status panel is a centralized place to check status of services such as CAS or alerter, and if necessary, investigate each service further. Open the
Services Status panel by clicking Setup > Tools & Views > Services Status. Each time the Services Status panel is opened, the status of each service is refreshed.
Archive, Purge and Restore
Archive and purge operations should be run on a scheduled basis. Use Data Archive and Results Archive to store captured data and information for auditing. Amazon S3
Archive and Backup in Guardium is also covered at the end of this topic.
Guardium catalog
When you archive data from your Guardium system, the Guardium catalog tracks where every archive file is sent, so that it can be retrieved and restored.
How to manage backup and archiving
Establish data retention practices; control activity volume; manage scheduling of data archive and purge, and monthly backups.
Exporting Results (CSV, CEF, PDF)
CSV, CEF, and PDF files can be created by workflow processes. This function exports all such files that are on the Guardium system.
Export/Import Definitions
If you have multiple systems with identical or similar requirements, and are not using Central Management, you can define the components that you need on one
system and export those definitions to other systems, provided those systems are on the same software release level.
Distributed Interface
Use this configuration screen to define the Distributed Interface and upload the Protocol Buffer (.proto) file to the DIST_INT database.
Manage Custom Classes
Upload and maintain custom classes used in alerts or evaluations. Manage custom classes by clicking Setup > Custom Classes.
SSH Public Keys
Use this information to create, modify or remove an SSH Public Key.
How to install an appliance certificate to avoid a browser SSL certificate challenge
Use IBM Security Guardium CLI commands to create a certificate signing request (CSR), and to install server, certificate authority (CA), or trusted path certificates
on your Guardium system.
Self Monitoring
The Guardium solution monitors itself to minimize disruptions and correct problems automatically whenever possible.
Groups
Using groups makes it easy to create and manage classifier, policy, and query definitions, as well as roll out updates to your S-TAPs and GIM clients. Rather than
repeatedly defining a group of data objects for an access policy, put the objects into a group to manage them easily.
Security Roles
Security roles are used to grant access to data (groups, queries, reports, etc.) and to grant access to applications (Group Builder, Report Builder, Policy Builder, CAS,
Security Assessments, etc).
Notifications
Use the Alerter and Alert Builder to create notifications. When email or other notifications are required for alerting actions, follow this procedure for each type of
notification to be defined.
How to create a real-time alert
Send a real-time alert to the database administrator whenever there are more than three failed logins for the same user within five-minutes.
Custom Alerting Class Administration
Use a custom alert class to send alerts to a custom recipient. Upload the custom class, then use the Alert Builder to designate the custom class as an alert
notification receiver.
Predefined Alerts
Table describing the predefined alerts found in the Alert Builder.
Scheduling
The general purpose scheduler is used to schedule many different types of tasks (archiving, aggregation, workflow automation, etc.).
Aliases
Create synonyms for a data value or object to be used in reports or queries.
Dates and Timestamps
Use a calendar tool to select an exact date, and a relative date picker to select a date that is relative to the current time.
Time Periods
Use the Time Period Builder to create time periods that can be used for policy rules and query conditions.
Time Periods
Policy rules and query conditions can test for events that occur (or not) during user-defined time periods.
Guardium Administration
Guardium® administrators perform various administration and maintenance tasks.
Any user assigned the admin role is referred to as a Guardium administrator. This is distinct from the admin user account.
If automatic account lockout is enabled (a feature that locks a user account after a specified number of login failures), the admin user account may become locked after a
number of failed login attempts. If that happens, use the unlock admin CLI command to unlock it.
Note: The access manager (accessmgr) can unlock accounts from the User Browser. Open the User Browser by clicking Access > Access Management > User Browser.
The next time the admin user logs in, access manager functionality will be available to them. This is possible for the admin user only (and not for other users having the
admin role).
Note:
The same user may have both of these roles through a legacy situation or as a result of an upgrade. However, current releases do not allow the two roles to be assigned to
the same user.
In the past, when a unit was upgraded, the accessmgr role was assigned to the admin user, and the accessmgr user was disabled.
In this situation, to configure the accessmgr and admin, log in as admin and enable the accessmgr user, then log in as accessmgr (the default initial password
is guardium), and remove the accessmgr role from the admin user.
Certificates
Check certificates periodically to avoid loss of function. Use CLI commands to obtain and install new certificates.
Certification Expiration
Expired certificates will result in a loss of function. Run the show certificate warn_expire command periodically to check for expired certificates. The command displays
certificates that will expire within six months and certificates that have already expired. The user interface will also inform you of certificates that will expire. To see a
summary of all certificates, run the command show certificate summary.
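The six-month warning window used by show certificate warn_expire can be sketched as a simple date comparison. This is an illustration of the documented behavior, not Guardium code; the 183-day cutoff and the function name are assumptions:

```python
from datetime import date, timedelta

WARN_WINDOW = timedelta(days=183)  # roughly six months, per the warn_expire report

def certificate_status(expiry: date, today: date) -> str:
    """Classify a certificate the way the expiration warning does (sketch)."""
    if expiry < today:
        return "expired"
    if expiry - today <= WARN_WINDOW:
        return "expiring soon"
    return "ok"

print(certificate_status(date(2015, 12, 1), date(2016, 1, 1)))  # expired
print(certificate_status(date(2016, 4, 1), date(2016, 1, 1)))   # expiring soon
print(certificate_status(date(2017, 6, 1), date(2016, 1, 1)))   # ok
```

Running such a check on the same schedule as the CLI command keeps both the expired and the soon-to-expire cases visible.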
New Certificates
To obtain a new certificate, generate a certificate signing request (CSR) and contact a third-party certificate authority (CA) such as VeriSign or Entrust. Guardium does not
provide CA services and will not ship systems with certificates other than the ones that are installed by default. The certificate format must be PEM and must include
BEGIN and END delimiters. The certificate can either be pasted from the console or imported through one of the standard import protocols.
You can generate a certificate signing request (CSR) with one of the following commands:
create csr alias - This command creates a certificate request with an alias.
create csr gui - This command creates a certificate request for the GUI (Tomcat).
create csr sniffer - This command creates a certificate request for the sniffer.
Note: Do not perform this action until after the system network configuration parameters have been set.
To install a new certificate through the command line interface, use one of the following commands:
store certificate gim - This command stores GIM certificates in the keystore.
store certificate gui - This command stores tomcat certificates in the keystore.
store certificate keystore - This command asks for a one-word alias to uniquely identify the certificate and store it in the keystore.
store certificate mysql - This command stores mysql client and server certificates.
store certificate stap - This command stores S-TAP certificates.
store certificate sniffer - This command stores sniffer certificates.
To install a new certificate key through the command line interface, use one of the following commands:
store cert_key mysql - This command stores the certificate key of a mysql client and server.
store cert_key sniffer - This command stores the sniffer certificate key.
Changes in Commands
Some certificate commands have been changed.
New Commands
The following commands are available for use.
Deprecated Commands
The following commands have been deprecated.
csr
store certificate console
store system key
show system key
store system certificate
show system certificate
Open the unit utilization reports by clicking Manage > Reports > Unit Utilization, and then selecting one of the reports.
Utilization Parameters
Most parameters are averaged for a specific unit over a specific time range. The number of restarts is a count of the sniffer restarts during a specific time range based on
the different PIDs.
Number of restarts
Sniffer memory
Percent MySQL memory
Free buffer space
Analyzer queue
Logger queue
Restriction: There is a limit of 500 SQLs in the logger queue. If more than 500 SQLs try to fill this queue at the same time, any additional SQLs beyond the queue
limit will log RA=-1.
MySQL disk usage
System CPU load
System var disk usage
Number of requests
Number of full SQL
Number of exceptions
Number of policy violations
Quick search disk usage
Quick search number of documents
Flat log requests
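The "Number of restarts" parameter above is derived from the distinct sniffer PIDs observed during the time range: each PID change implies a restart. A minimal sketch of that counting logic (the sampling format is hypothetical):

```python
def count_restarts(pid_samples):
    """Count sniffer restarts as the number of PID changes across samples."""
    restarts = 0
    for prev, cur in zip(pid_samples, pid_samples[1:]):
        if cur != prev:  # a new PID means the sniffer process was restarted
            restarts += 1
    return restarts

# PIDs sampled over the time range: two changes -> two restarts.
print(count_restarts([4122, 4122, 5310, 5310, 6007]))  # 2
```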
Thresholds
For each parameter there are two thresholds defined that separate three utilization levels: Low, Medium, and High.
Utilization levels:
There is also an overall utilization level for each unit. For each period of time, this level is the highest level for all levels during that period.
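The level calculation described above can be sketched as follows: each parameter's value is compared against its two thresholds, and the unit's overall level for the period is the highest parameter level. The parameter values and thresholds below are hypothetical, chosen only to illustrate the mapping:

```python
LOW, MEDIUM, HIGH = 1, 2, 3                                 # values stored in reports
LEVEL_NAMES = {LOW: "Low", MEDIUM: "Medium", HIGH: "High"}  # the alias mapping

def level(value, t1, t2):
    """Map a parameter value to a utilization level using its two thresholds."""
    if value <= t1:
        return LOW
    if value <= t2:
        return MEDIUM
    return HIGH

# Hypothetical (value, threshold1, threshold2) per parameter for one period.
params = {
    "System CPU load": (0.85, 0.50, 0.80),
    "Analyzer queue":  (120,  200,  500),
    "Logger queue":    (450,  200,  500),
}
levels = {name: level(v, t1, t2) for name, (v, t1, t2) in params.items()}
overall = max(levels.values())  # overall unit level = highest parameter level
print(LEVEL_NAMES[overall])     # High
```

This also shows why aliases matter in reports: without them, only the raw values 1, 2, 3 would be displayed.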
Reporting
View the available unit utilization reports by clicking Manage > Reports > Unit Utilization.
The Unit Utilization Levels tracking option allows you to create custom queries and reports.
Using aliases is recommended when using unit utilization data in custom and predefined reports. Otherwise, utilization levels will display as numbers: 1, 2, 3, instead of
Low, Medium, High.
Note: Each parameter has a value and a level which is calculated based on the value and the thresholds.
CLI commands:
listUtilizationThresholds
updateUtilizationThresholds
reset_unit_utilization
For a standalone system, viewing unit utilization information only requires scheduling the processing of unit utilization data. There is no need to schedule data upload for a
central manager buffer usage monitor when working with a standalone system.
Procedure
Results
After completing these steps, navigate to Manage > Reports > Unit Utilization to view unit utilization reports. In a centrally managed environment, data will be available for
the central manager and its managed units. For a standalone system, data will only be available for that individual system. If you did not use the Run Once Now option
when defining the schedules, you must wait until those processes run before the unit utilization reports will update with the latest data.
Parent topic: Unit Utilization Level
Customer Uploads
Database Activity Monitor Content Subscription (previously known as Database Protection Subscription Service) supports the maintenance of predefined assessment
tests, SQL based tests, CVEs, APARs, and groups such as database versions and patches.
Uploads are used to keep information current and within industry best practices to protect against newly discovered vulnerabilities. Distribution of updates is done on a
quarterly basis.
Use Customer Uploads to upload the following: DPS update files, Oracle JDBC drivers, MS SQL Server JDBC drivers, and DB2 for z/OS license jars.
Note: If a custom group exists with the same name as a predefined Guardium® group, the upload process will add Guardium in front of the name for the predefined
group.
1. Open Customer Uploads by clicking Harden > Vulnerability Assessment > Customer Uploads.
2. For DPS Upload, click Browse to locate and select the file to be uploaded.
Note: Reference the Import DPS pane to see what files have been uploaded.
3. For Upload DB2 z/OS License jar, click Browse to locate and select the file.
4. Use Upload Oracle JDBC driver or Upload MS SQL Server JDBC driver to upload open source drivers. After uploading, you will see the databases added to the
Datasource finder. Upload one driver at a time.
Note: There are two instances where open source drivers are recommended over Oracle Data Direct drivers or MS SQL Data Direct drivers.
a. To support Windows Authentication for MS SQL Server. In all other uses, the Data Direct driver pre-loaded in the Guardium appliance is sufficient.
b. When using the Value Change Tracking application for Oracle version 10 or higher, the open source driver is recommended in order to support using streams
instead of triggers.
Use keywords to search and download open source JDBC drivers (for example: open source JDBC driver for MS SQL).
5. Use the Central Manager to distribute the .jar file to managed units. After the file is successfully uploaded, the GUI needs to be restarted on the Central Manager
and the managed units.
Note:
If you will be exporting and importing definitions from one unit to another, be aware that subscribed groups are not exported. When exporting definitions that reference
subscribed groups, you must ensure that all referenced subscribed groups are installed on the importing unit (or central manager in a federated environment).
When uploading DB2® z/OS® license jar files, the license will take effect after restart of the GUI.
Note: If the DPS stops for any reason (for example, a server restart or a GUI restart), it is recommended to wait 30 minutes before starting the DPS upload process again.
Enable ASO on the Oracle server using latest Oracle DataDirect driver
Refer to the following when enabling ASO on the Oracle server using the latest Oracle DataDirect driver.
SQLNET.CRYPTO_CHECKSUM_SERVER = required
SQLNET.ENCRYPTION_SERVER = required
#SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256)
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1)
The Oracle JDBC driver will work and does not require specifying a connection property.
If you continue to use Oracle DataDirect driver, then you need to specify a connection property to the datasource.
Use the following when defining the Oracle DataDirect driver connection property:
DataIntegrityLevel=required;EncryptionLevel=required;DataIntegrityTypes=(MD5,SHA1)
Note: The current Oracle DataDirect driver does not support SHA-256. So SHA-1 has to be used. That is why sqlnet.ora reference
(#SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA256)) had to be commented out. However, if a Guardium customer must connect using SHA-256, they
should use the Oracle JDBC driver instead.
https://www.progress.com/documentation/datadirect-connectors
Download the Oracle database JDBC User's Guide PDF for a list of command references.
Use a TAB Delimited file (.TXT) when creating and saving a Datasource Upload file from the Customer Upload functionality
If you choose to use a comma delimited file structure (.CSV), it will not behave as intended if any column value contains a comma.
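The problem can be seen with a one-line round trip: a value that contains a comma splits a comma-delimited row into the wrong number of columns, while a tab-delimited row survives intact. A minimal illustration with made-up datasource values:

```python
row = ["orcl_prod", "Oracle (SID)", "Primary, east datacenter"]

# Comma-delimited: the comma inside the description yields four
# fields instead of three, so later columns are misaligned.
csv_line = ",".join(row)
print(len(csv_line.split(",")))   # 4

# Tab-delimited: every value survives the round trip.
tab_line = "\t".join(row)
print(len(tab_line.split("\t")))  # 3
```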
Create Datasource for CSV uploaded via the Upload CSV menu
Follow these steps to create a tab-delimited .TXT formatted file containing datasource information. This tab-delimited .TXT file can then be used with the
Customer Upload function in the Guardium application for many datasource types.
The function to import datasources was not always compatible with every Guardium software release. This procedure enables the uploading of any datasource.
The following is a list of Header Columns that should be added to an Excel spreadsheet when creating the .TXT tab delimited datasource upload file:
Table 1. create_datasource
Parameter Description
application Required. Identifies the application for which the datasource is being defined. It must be one of the following:
ChangeAuditSystem
Access_policy
MonitorValues
DatabaseAnalyzer
AuditDatabase
CustomDomain
Classifier
AuditTask
SecurityAssessment
Replay
Stap_Verification
compatibilityMode Compatibility Mode: Choices are Default or MSSQL 2000. The processor is told what compatibility mode to use when monitoring
a table.
conProperty Optional. Use only if additional connection properties must be included on the JDBC URL to establish a JDBC connection with this
datasource. The required format is property=value, where each property and value pair is separated from the next by a comma.
For a Sybase database with a default character set of Roman8, enter the following property: charSet=utf8
customURL Optional. Connection string to the datasource; otherwise connection is made using host, port, instance, properties, etc. of the
previously entered fields. As an example this is useful for creating Oracle Internet Directory (OID) connections.
dbInstanceAccount Optional. Database Account Login Name (software owner) that will be used by CAS
dbInstanceDirectory Optional. Directory where database software was installed that will be used by CAS
dbName Optional. For a DB2 or Oracle datasource, enter the schema name. For others, enter the database name.
name Required. Provides a unique name for the datasource on the system.
owner Required. Identifies the Guardium user account that owns the datasource.
password Optional. Password for owner. If used, user must also be used.
serviceName Required for Oracle, Informix®, DB2, and IBM® ISeries. For a DB2 datasource enter the database name, for others enter the
service name.
severity Optional. Severity Classification (or impact level) for the datasource.
shared Optional (boolean). Set to true to share with other applications. To share the datasource with other users, you will have to assign
roles from the GUI.
type Required. Identifies the datasource type; it must be one of the following:
DB2
DB2 for i
Informix
MS SQL Server
MySQL
NA
Netezza
Oracle (DataDirect)
Oracle (SID)
PostgreSQL
Sybase
Sybase IQ
Teradata
TEXT
TEXT:FTP
TEXT:HTTP
TEXT:HTTPS
TEXT:SAMBA
user Optional. User for the datasource. If used, password must also be used.
region Required for cloud database service protection. The AWS region.
objectLimit Required for cloud database service protection. The maximum number of objects found in the classification process that are
added automatically to the list of audited objects. See Cloud database service protection
primaryCollector Relevant for cloud database service protection. The collector that extracts the audit data from the cloud database.
Notes:
1. Each of the column names must be included in the Excel spreadsheet SAVED as a TAB delimited (.TXT) file.
2. The Created Datasource name (what is shown when looking for the datasource) is made up of both the name column and the type column.
Steps to create and upload a tab-delimited .TXT file and add datasource data
1. Create the Excel spreadsheet file and save it as a tab-delimited .TXT file with the headers listed above and the datasource data to support the datasource import capability.
2. Create and save your .txt file to your PC or UNIX/Linux device for uploading into the Guardium application.
3. Login as admin and open Customer Uploads by clicking Harden > Configuration Change Control (CAS Application) > Customer Uploads
4. From Upload CSV to Create/Update Datasources, click Browse and select the .txt file containing the tab delimited datasource information.
5. Click Upload.
1. New: On a per-file upload, if you save the file with newly added datasource member(s), these members will have the status NEW.
2. Update: Uploading the SAME datasource after making changes to it gives it an Update status.
Say that you set up a policy that sends a real-time alert whenever there are more than three failed log-ins in 5 minutes. To protect against this possible intrusion, you must
make sure that the policy was installed, and that the alerter is on.
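The rule behind this policy (more than three failed logins for the same user within five minutes) amounts to a sliding-window count. The sketch below illustrates the logic only; it is not how Guardium's policy engine is implemented:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 5 * 60  # five-minute window
THRESHOLD = 3            # alert when failures exceed three

class FailedLoginMonitor:
    """Track failed-login timestamps per user and flag bursts."""
    def __init__(self):
        self.failures = defaultdict(deque)

    def record_failure(self, user, ts):
        """Record a failed login at time ts (seconds); return True if the alert fires."""
        q = self.failures[user]
        q.append(ts)
        # Drop failures that have aged out of the five-minute window.
        while q and ts - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > THRESHOLD

monitor = FailedLoginMonitor()
alerts = [monitor.record_failure("scott", t) for t in (0, 60, 120, 180)]
print(alerts)  # [False, False, False, True]
```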
Use the Services Status panel to verify that both of these services are configured properly.
Clicking any service takes you to its configuration page, where you can, as relevant, turn the service off or on, restart a service, configure a service, etc.
If for some reason the policy didn't install correctly, click Setup > Tools & Views > Policy Installation to go to Policy Installer, view the currently installed policies, and
make the necessary changes.
Service is running/scheduled:
Service is paused:
Service is off:
Data Archive and Results Archive can be found by clicking Manage > Data Management.
Data Archive backs up the data that has been captured by the Guardium system for a time period. When configuring Data Archive, a purge operation can also be
configured. Typically, data is archived at the end of each day to ensure that in the event of a catastrophe, only one day of data is lost. The purging of data
depends on the application and is highly variable, depending on business and auditing requirements. In most cases, data can be kept on the Guardium systems for
more than six months.
Results Archive backs up audit tasks results (reports, assessment tests, entity audit trail, privacy sets, and classification processes) as well as the view and sign-off
trails and the accommodated comments from workflow processes. Results sets are purged from the system according to the workflow process definition.
In an aggregation environment, data can be archived from the collector, from the aggregator, or from both locations. Most commonly, the data is archived only once, and
the location from where it is archived varies depending on your requirements.
Scheduled export operations send data from Guardium® collector units to a Guardium aggregation server. On its own schedule, the aggregation server executes an
import operation to complete the aggregation process. On either or both units, archive and purge operations are scheduled to back up and purge data regularly (both to
free up space and to speed up access operations on the internal database).
Archive files can be sent using SCP or FTP protocol, or to an EMC Centera or TSM storage system (if configured). You can define a single archiving configuration for each
Guardium system.
Guardium’s archive function creates signed, encrypted files that cannot be tampered with. DO NOT change the names of the generated archive files. The archive and
restore operations depend on the file names that are created during the archiving process.
Archive and export activities use the system shared secret to create encrypted data files. Before information encrypted on one system can be restored on another, the
restoring system must have the shared secret that was used on the archiving system when the file was created.
Whenever archiving data, be sure to verify that the operation completes successfully. To do this, open the Aggregation/Archive Log by clicking Manage > Reports > Data
Management > Aggregation/Archive Log. There should be multiple activities listed for each archive operation, and the status of each activity should be Succeeded.
Perform System Backup tasks by clicking Manage > Data Management > System Backup. You can also perform backup tasks from the CLI. See File handling CLI
commands for further information.
Default Purging
The default value for purge is 60 days
The default purge activity is scheduled every day at 5:00 AM.
For a new install, a default purge schedule is installed that is based on the default value and activity.
When a unit type is changed to a managed unit or back to a standalone unit, the default purge schedule is applied.
The purge schedule will not be affected during an upgrade.
When purging a large number of records (10 million or more), a large batch size setting (500,000 to 1 million) is most effective. Using a smaller batch
size or NULL causes the purge to take hours longer. Smaller purges finish quickly, so a large batch size setting is only relevant for large purges.
Note: Setting batch size is not available in the UI. Use the GuardAPI command grdapi set_purge_batch_size batchSize to set batch size.
IMPORTANT: The Purge configuration is used by both Data Archive and Data Export. Changes that are made here apply to any executions of Data Export and vice
versa. In the event that purging is activated and both Data Export and Data Archive are run on the same day, the first operation that runs will likely purge any old
data before the second operation's execution.
For this reason, any time that Data Export and Data Archive are both configured, the purge age must be greater than both the age at which to export and the age at
which to archive.
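This constraint can be sketched as a small validation, assuming all ages are expressed in days before the current day (day zero); the function name is illustrative, not part of the product.

```python
def validate_purge_age(purge_age, archive_age=None, export_age=None):
    """Return True only if the purge age is strictly greater than the
    archive and export ages (for whichever of those are configured)."""
    for other in (archive_age, export_age):
        if other is not None and purge_age <= other:
            return False
    return True

# Purge at 60 days with archive and export at 1 day: valid.
assert validate_purge_age(60, archive_age=1, export_age=1)
# A purge age equal to the archive age would purge data before it is archived.
assert not validate_purge_age(30, archive_age=30)
```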
9. If purging data, use the Purge data older than field to specify a starting day for the purge operation as a number of days, weeks, or months before the current day,
which is day zero. All data from the specified day and all older days are purged, except as noted. Any value that is specified for the starting purge date must be
greater than the value specified for the Archive data older than value. In addition, if data exporting is active, the starting purge date that is specified here must be
greater than the Export data older than value. See the IMPORTANT note.
Note:
There is no warning when you purge data that has not been archived or exported by a previous operation.
The purge operation does not purge restored data whose age is within the do not purge restored data timeframe that is specified on a restore operation.
10. Use the Scheduling section to define a schedule for running this operation on a regular basis.
11. Click Save to save the configuration changes. The system attempts to verify the configuration by sending a test data file to that location.
If the operation fails, an error message is displayed and the configuration will not be saved.
If the operation succeeds, the configuration is saved.
12. Click Run Once Now to run the operation once.
1. For Host, enter the IP address or host name of the host to receive the archived data.
2. For Directory, identify the directory in which the data is to be stored. How you specify this depends on whether the file transfer method used is FTP or SCP.
For FTP: Specify the directory relative to the FTP account home directory.
For SCP: Specify the directory as an absolute path.
3. For Port, enter the port used to send files over SCP or FTP. The default port for SSH/SCP/SFTP is 22. The default port for FTP is 21.
Note: A zero (0) for the port indicates that the default port is being used and does not need to be changed.
4. For Username and Password, enter the credentials for the user logging on to the SCP or FTP server. This user must have write/execute permissions for the directory
that is specified in Directory.
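The difference between the two protocols' Directory semantics (step 2 above) can be sketched as a small check. This is a hypothetical helper for illustration, not product behavior.

```python
def destination_path(method, directory):
    """Sketch of how the Directory field is interpreted: an absolute
    path for SCP, and a path relative to the account home for FTP."""
    method = method.upper()
    if method == "SCP" and not directory.startswith("/"):
        raise ValueError("SCP destinations must be absolute paths")
    if method == "FTP" and directory.startswith("/"):
        raise ValueError("FTP destinations are relative to the FTP home directory")
    return directory

# An absolute path for SCP and a home-relative path for FTP both pass:
assert destination_path("SCP", "/backups/guardium") == "/backups/guardium"
assert destination_path("FTP", "guardium/archives") == "guardium/archives"
```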
1. Establish an account with an EMC Centera on the network (IP addresses and a ClipID are needed).
2. Configure the data and/or configuration files from a Guardium system.
3. Define and export a library.
4. Confirm that your files are stored on the EMC Centera storage system.
CLI action
From the CLI, run these commands:
1. For Retention, enter the number of days to retain the data. The maximum is 24855 (68 years). If you want to save it for longer, you can restore the data later and
save it again.
2. For Centera Pool Address, enter the Centera Pool Connection String; for example: 10.2.3.4,10.6.7.8?/var/centera/us1_profile1_rwe.pea
Note: This IP address and the .PEA file come from EMC Centera. The question mark is required when configuring the path. The .../var/centera/... path name is
important, as the backup might fail if the path name is not followed. The .PEA file provides permissions, username, and password authentication per Centera backup
request.
3. Click Upload PEA File to upload a Centera PEA file to be used for the connection string. The Centera Pool Address is still needed.
Note: If the message Cannot open the pool at this address appears, check the length of the Guardium system host name. A timeout issue has been
reported with Centera when using host names that are fewer than four characters in length.
4. Click Save to save the configuration. The system attempts to verify the Centera address by opening a pool using the connection string specified. If the operation
fails, you will be informed and the configuration will not be saved.
5. Click Run Once Now to perform the backup using the uploaded .PEA file.
Confirm that your files have been copied to the EMC Centera. The name of the files and a ClipID are required for this task.
1. For Password, enter the TSM password that this Guardium system uses to request TSM services, and re-enter it in the Re-enter Password box.
2. Optionally, enter a Server name matching a servername entry in your dsm.sys file.
3. Optionally, enter an As Host name.
4. Click Save to save the configuration. When you click the Save button, the system attempts to verify the TSM destination by sending a test file to the server using the
dsmc archive command. If the operation fails, you will be informed and the configuration will not be saved.
5. Return to the archiving or backup procedure to complete the configuration.
Restore Data
If this system is not the system that generated the archive to be restored, you must create a location entry in the catalog via Catalog Archive, then click Add (reference:
Guardium catalog) or GuardAPI (reference: CLI and API > GuardAPI Reference > GuardAPI Catalog Entry Functions). When the Data Restore is started this information is
used to transfer the file to the system before processing the data.
Before restoring from TSM, a dsm.sys configuration file must be uploaded to the Guardium system, via the CLI. Use the import tsm config CLI command.
Before restoring from EMC Centera, a pea file must be uploaded to the Guardium system, via the Data Archive panel.
Before restoring or importing a file that was encrypted by a different Guardium system, make sure that the system shared secret used by the Guardium system that
encrypted the file is available on this system (otherwise, it will not be able to decrypt the file). See About the System Shared Secret in System Configuration.
Before restoring on a Guardium collector, run the CLI command stop inspection-core to stop the inspection-core process.
Note: The data cannot be captured during the restore process.
To restore data:
1. Open Data Restore by clicking Manage > Data Management > Data Restore.
2. Enter a date in From to specify the earliest date for which you want data.
3. Enter a date in To to specify the latest date for which you want data.
4. For Host Name, optionally enter the name of the Guardium system from which the archive originated.
5. Click Search.
6. In the Search Results panel, check the Select check box for each archive you want to restore.
7. In the Don't purge restored data for at least field, enter the number of days that you want to retain the restored data on the system.
8. Click Restore.
9. Click Done when you are finished.
Amazon S3 (Amazon Simple Storage Service) provides a simple web service interface that can be used to store and retrieve any amount of data, at any time, from
anywhere on the web. It gives any developer access to the same highly scalable, reliable, secure, inexpensive infrastructure that Amazon uses to run its own web sites.
Prerequisites
1. An Amazon account.
2. Amazon S3 credentials, which are required in order to access Amazon S3. These credentials are:
Access Key ID - identifies the user as the party responsible for service requests. It must be included in each request. It is not confidential and does not
need to be encrypted (20-character alphanumeric sequence).
Secret Access Key - associated with the Access Key ID and used to calculate a digital signature that is included in each request. The Secret Access Key is a
secret that only the user and AWS should have (40-character sequence). This key is just a long string of characters (not a file) that is used to calculate the
digital signature that must be included in the request.
Data Archive backs up the data that has been captured by the system, for a given time period.
Results Archive backs up audit tasks results (reports, assessment tests, entity audit trail, privacy sets, and classification processes) as well as the view and sign-off
trails and the accommodated comments from work flow processes.
When Guardium data is archived, there is a separate file for each day of data.
<time>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc
Guardium's archive function creates signed, encrypted files that cannot be tampered with. The names of the generated archive files should not be changed. The archive
operation depends on the file names that are created during the archiving process.
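The daily archive file name template above can be parsed back into its fields, which is useful when auditing an archive destination. This is a sketch only; the numeric time and date-stamp formats are assumptions based on the template, not a documented specification.

```python
import re

# Assumed shape of <time>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc
PATTERN = re.compile(
    r"^(?P<time>\d+)-(?P<host>.+)-w(?P<run_date>\d+)-d(?P<data_date>\d+)"
    r"\.dbdump\.enc$"
)

def parse_archive_name(name):
    """Return the fields of an archive file name, or None if it does not match."""
    m = PATTERN.match(name)
    return m.groupdict() if m else None

# Hypothetical example file name:
info = parse_archive_name("734901-g1.example.com-w20230115-d20230114.dbdump.enc")
```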
System backups are used to backup and store all the necessary data and configuration values to restore a server in case of hardware corruption.
All configuration information and data is written to a single encrypted file and sent to the specified destination, using the transfer method that is configured for backups on
this system.
<data_date>-<time>-<hostname.domain>-SQLGUARD_CONFIG-9.0.tgz
<data_date>-<time>-<hostname.domain>-SQLGUARD_DATA-9.0.tgz
Use the Aggregation/Archive Log report in Guardium to verify that the operation completes successfully. Open the Aggregation/Archive Log by clicking Manage > Reports >
Data Management > Aggregation/Archive Log. There should be multiple activities that are listed for each Archive operation, and the status of each activity should be
Succeeded.
Regardless of the destination for the archived data, the Guardium catalog tracks where every archive file is sent, so that it can be retrieved and restored on the system
with minimal effort, at any point in the future.
A separate catalog is maintained on each system, and a new record is added to the catalog whenever the system archives data or results.
Catalog entries can be transferred between appliances by one of the following methods:
Aggregation - Catalog tables are aggregated, which means that the aggregator will have the merged catalog of all of its collectors
Export/Import Catalog - These functions can be used to transfer catalog entries between collectors, or to backup a catalog for later restoration, etc.
Data Restore - Each data restore operation contains the data of the archived day, including the catalog of that day. So, when restoring data, the catalog is also being
updated.
When catalog entries are imported from another system, those entries will point to files that have been encrypted by that system. Before restoring or importing any such
file, the system shared secret of the system that encrypted the file must be available on the importing system.
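As a toy illustration of the catalog idea described above (not the product's schema), each archive operation records where the file went so that a later restore can locate it. All names and values here are hypothetical.

```python
# Minimal in-memory model of an archive catalog.
catalog = []

def record_archive(data_date, file_name, destination):
    """Add a catalog record whenever data for a day is archived."""
    catalog.append({"data_date": data_date,
                    "file": file_name,
                    "destination": destination})

def locate(data_date):
    """Find the catalog entries for a given day of data."""
    return [e for e in catalog if e["data_date"] == data_date]

record_archive("2023-01-14",
               "734901-g1.example.com-w20230115-d20230114.dbdump.enc",
               "scp://backup01/archive/guardium")
```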
Amazon S3 archive and backup option is not enabled by default in the Guardium GUI. To enable Amazon S3 via Guardium CLI, run the following CLI commands:
Amazon S3 requires that the clock time of the Guardium system be correct (within 15 minutes); otherwise, Amazon returns an error. If there is too large a difference
between the request time and the current time, the request is not accepted.
If the Guardium system time is not correct, set the correct time using the following CLI commands:
User Interface
Use the System Backup to configure the backup. Open the System Backup by clicking Manage > Data Management > System Backup.
Access Key ID
1. Log on to the AWS Management Console using your email address and password.
http://aws.amazon.com/console/
1. Click S3.
As the CLI user, check whether the database is full with this CLI command:
If this comes back with 10% or less, the database is 90% full or more.
To check whether the /var partition (file system) is 90% full or more, run a must_gather command from the CLI:
You can then use fileserver to check the df -k output within the system_output.txt file, found at
must_gather/system_logs/system_output.txt
Later Guardium versions have a safety catch that stops the main processes from collecting any more data when the database or file system reaches a
certain level.
The default is to stop the processes when the database and/or the file system reaches 90% full, as in this example from the v10.1 documentation. You can check the
current value of the safety catch via the CLI:
Note: If auto_stop_services_when_full is switched off, the appliance may go on to fill the system to 100%, preventing you from accessing the system at all.
You should never need or want to set auto_stop_services_when_full to OFF except temporarily, in the specific circumstance described in the answer below,
and you should switch it back to ON once you have resolved the space problem.
Note: You must stop inspection-core before switching the auto stop off; this avoids the system filling any further.
So in this case the system automatically stops inspection-core and other processes when the file system or database is 90% full. This includes the GUI, so
you will not be able to connect to the GUI at that point.
If you attempt to restart stopped services with the command restart stopped_services, the system (and GUI) is likely to stop again after 5 minutes for the same
reason.
Note: This command should only be used once you are sure that space has been recovered.
Before the database or the file system fills to the auto stop level, you should receive warnings in the system log (messages file).
Alerts can be configured to email you about space problems before the auto stop is triggered; see Guardium Full Database Alert.
You can run a must_gather command and look inside the compressed file that is created to check the latest messages file within it.
Purging Data from the internal database when the GUI is down
If the auto stop has been triggered, it stops services such as the GUI, which prevents you from making an emergency purge of data via the Run Once Now purge
option.
Make sure that inspection-core is switched off on collectors to stop more data flooding into the appliance:
stop inspection-core
Check that NO database commands are running except the show processlist (if needed, let any running commands finish before the next step).
You should be able to simply restart gui to gain access to the GUI and perform the purge, as per What can I do if I see my Guardium Appliance getting full?
If there is a problem where the GUI keeps going down every 5 minutes, you can consider switching auto_stop_services_when_full off TEMPORARILY to allow
you to restart gui and purge some data. Just restarting the GUI on its own might stay running for only 5 minutes; the main nanny process might stop the services again
before enough data is purged or before you have had time to set the purge going.
Now you can go to the GUI, open Data Archive under Data Management, and set a purge running to clear some data.
Keep checking whether the database is full; the Aggregation/Archive Log shows when the purge process is finished.
Once it is finished and you have space on the system, set the auto stop back on and then restart the stopped services:
store auto_stop_services_when_full on
restart stopped_services
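The recovery sequence in this section can be summarized as the following Guardium CLI commands (this is a CLI session outline, not a general-purpose shell script; switch the safety catch off only temporarily, as described above):

```shell
# Guardium CLI commands, in order:
stop inspection-core                      # stop more data flooding into the appliance
store auto_stop_services_when_full off    # TEMPORARY: only if the GUI keeps stopping
restart gui                               # regain GUI access and run the purge
# ... purge data via Data Archive (Run Once Now) in the GUI ...
store auto_stop_services_when_full on     # re-enable the safety catch
restart stopped_services                  # restart the services that were stopped
```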
If the system has filled up it usually means that too much activity is being recorded.
Guardium catalog
When you archive data from your Guardium system, the Guardium catalog tracks where every archive file is sent, so that it can be retrieved and restored.
Aggregation: catalog tables are aggregated, which means that the aggregator has the merged catalog of all of its collectors.
Export/Import Catalog: these functions can be used to transfer catalog entries between collectors, or to back up a catalog for later restoration.
Data Restore: each data restore operation contains the data of the archived day, including the catalog of that day. When you restore data, the catalog is also
updated.
You can archive a catalog, export a catalog to external storage, or import a catalog that has been stored.
When catalog entries are imported from another system, those entries point to files that have been encrypted by that system. Before you restore or import any such file,
the system shared secret of the system that encrypted the file must be available on the importing system. You can use the aggregator backup keys file and aggregator
restore keys file CLI commands to copy the shared secrets from one Guardium system to another.
Archiving a catalog
Procedure
For FTP: specify the directory relative to the FTP account home directory
For TSM: Specify the directory as an absolute path of the original location.
Exporting a catalog
Procedure
Importing a catalog
Procedure
Value-added: Best Practices. Protect your data from loss. Make your data readily accessible for auditing purposes.
Use the System Backup function to define a backup operation that can be run on demand or on a scheduled basis.
System backups are used to back up and store all the necessary data and configuration values to restore a server in case of hardware corruption.
There are two archive operations available. Go to Manage > Data Management to select the Data Archive or Results Archive functions:
Data Archive backs up the data that has been captured by the Guardium system, for a given time period. When configuring Data Archive, a purge operation can also
be configured. Typically, data is archived at the end of the day on which it is captured, which ensures that in the event of a catastrophe, only the data of that day is
lost. The purging of data depends on the application and is highly variable, depending on business and auditing requirements. In most cases data can be kept on the
machines for more than six months.
Results Archive backs up audit tasks results (reports, assessment tests, entity audit trail, privacy sets, and classification processes) as well as the view and signoff
trails and the accommodated comments from workflow processes. Results sets are purged from the system according to the workflow process definition.
In an aggregation environment, data can be archived from the collector, from the aggregator, or from both locations. Most commonly, the data is archived only once, and
the location from where it is archived varies depending on the customer's requirements.
Whenever archiving data, be sure to verify that the operation completes successfully. To do this, log in as admin user, and open the Aggregation/Archive Log by clicking
Manage > Reports > Data Management > Aggregation/Archive Log. There should be multiple activities listed for each Archive operation, and the status of each activity
should be Succeeded.
Data backup
There are three types of recommended data backups:
1. Full/system backups:
a. Weekly or daily full backups of the Central Manager unit (assuming a standalone Central Manager).
2. Daily archives (think of these archives as incremental backups) for aggregators and collectors. The archive files from the aggregators are much larger than those
from the collectors. For example, if an aggregator has ten collectors sending data to it, the starting point for the size of its archive file is the combined size of all ten
collector archive files. In practice it is larger still, because the aggregator archive files contain extra data that is not sent
by the collectors every day.
Data retention
The data backup and archive files serve two purposes: disaster recovery, and historical investigation or auditing.
The following suggestions can be modified based on your corporate data retention policy. For example, some organizations are mandated to keep all backups for 18
months.
Keep a rolling two weeks' worth of daily archives from the managed collectors.
Note: If you have stand-alone collectors, the daily archives should be kept according to your data-retention policy.
All daily archives from the aggregators for the period required by your auditing or corporate data-retention policies.
Storage capacity
The following are only estimates/ranges of backup and archive file sizes for auxiliary storage capacity planning purposes.
The actual sizes vary depending on (1) the volume and granularity of the database activity that is logged on the Guardium collectors, and (2) the retention period of the
backup files.
Daily Archives
Collector: approximately 40 MB (privileged user monitoring) to 1 GB (comprehensive monitoring with full details logged on all traffic).
Aggregator: a rough multiple of the number of collectors; for example, the number of collectors multiplied by 40 MB.
Monthly System Backups – assuming a 50% full database on a Dell R610 or IBM xSeries 3550 M4 (600 GB Disks)
Note: The backup gets roughly a 1:8 compression for the backup file.
Collector: 7 – 10 GB
Aggregator: 16 – 20 GB
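For rough capacity planning, the estimates above can be combined into a simple calculation. The inputs below (40 MB per collector per day, an aggregator daily archive roughly equal to the combined collector archives, and a 14-day rolling retention) are illustrative assumptions, not recommendations.

```python
def archive_storage_gb(collectors, daily_archive_mb_per_collector,
                       retention_days):
    """Estimate storage for daily archives from the collectors plus an
    aggregator archive assumed roughly equal to their combined size."""
    collector_total = collectors * daily_archive_mb_per_collector * retention_days
    aggregator_total = collectors * daily_archive_mb_per_collector * retention_days
    return (collector_total + aggregator_total) / 1024.0

# 10 collectors at ~40 MB/day, keeping a rolling two weeks of dailies:
estimate = archive_storage_gb(10, 40, 14)
```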
Results Archives
This control is primarily achieved in the policy rules, and via the inspection engine configuration.
Identify all trusted applications and batch programs (these programs generally generate the bulk of the database activity) and if possible, ignore/skip their activity
by using the Ignore STAP Session or Skip Logging actions.
If possible, use the Selective Audit policy (with the Ignore S-TAP session rules) to minimize network traffic.
If no extrusion rules are used, for example, result sets are not examined, consider using the Ignore Responses per Session action to eliminate result sets being sent
to the Guardium system.
Establish a process to periodically review and update policy rules, including groups, to accommodate new databases and applications.
Establish a process to periodically monitor SQL Errors and provide to the DBA and Application development teams for remediation.
Scheduling
The following tables provide a summary of the key schedules to be configured on your Guardium systems. Following the tables is a brief explanation of each process.
Use the Aggregation/Archive log to record the time and status of these processes to assist with adjusting your scheduling times.
The following table lists a schedule of tasks for a Guardium system that is deployed as a collector.
Data Archive and Purge: daily at 01:30 AM, with purge set to 15 days.
CSV/CEF export to the SCP/FTP server: daily at 05:00 AM, if configured in the audit jobs, and after the audit jobs complete.
The following table lists a schedule of tasks for a Guardium system that is deployed as an aggregator.
Data Archive and Purge: daily at 4:00 AM, with purge set to 30 days.
CSV/CEF export to the SCP/FTP server: daily at 05:15 AM, if configured in the audit jobs, and after the audit jobs complete.
The daily Data Archive should be set to Archive data older than 1 day and Ignore data older than 2 days. The first run archives all data in the database, and subsequent
runs archive only yesterday's data.
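The archive window implied by these settings can be sketched as follows. The day arithmetic is an interpretation of the "older than 1 day / ignore older than 2 days" settings, under which each daily run covers exactly yesterday's data.

```python
from datetime import date, timedelta

def archive_window(today, older_than_days=1, ignore_older_than_days=2):
    """Return the (oldest, newest) data dates covered by a daily archive
    run, where "today" is day zero."""
    newest = today - timedelta(days=older_than_days)
    oldest = today - timedelta(days=ignore_older_than_days - 1)
    return oldest, newest

# With the defaults, a run on 2023-01-15 covers only 2023-01-14:
start, end = archive_window(date(2023, 1, 15))
```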
The amount of data kept online is constrained by the size of the database on each Guardium system, so the Purge process helps to manage how much data is kept online,
and it works with the Daily Archive. Guardium recommends keeping the minimum amount of data necessary to avoid filling up the database and help with database
performance.
Guardium recommends 15 days for the collector and 30 days for the aggregator. The actual length, however, depends on how much data is recorded (for
example, the number of S-TAPs, policy rules, and collectors).
The previous day’s logged activities are exported daily (a push process) from the collectors to their assigned aggregators for aggregated-reporting. This activity is the
counterpart to the Data Import on the aggregator.
Note: For convenience, purge can be configured on either the Archive or Export setup screens.
The Data Import process is scheduled only on an aggregator. It imports and processes the previous day’s data exported from the collectors.
Monthly Backups
As noted previously, the system backups are full backups and used for disaster recovery. Here is an example of the monthly schedule for the first Sunday of each month
starting at 6:00 AM.
CEF/CSV files that are created by workflow processes can also be written to syslog. When that happens, those files are not available to be exported by the means
described here. Those files should be accessed from syslog by other means.
1. Open the Results Export (files) by clicking Manage > Data Management > Results Export (Files).
2. Choose an option from the Protocols radio buttons: SCP, FTP, Amazon S3, or Softlayer.
3. For Host, enter the IP address or DNS host name of the host to receive the files.
4. For Directory, identify the directory in which the data is to be stored. How you specify this directory depends on the protocol you selected.
For FTP: Specify the directory relative to the FTP account home directory.
For SCP: Specify the directory as an absolute path.
5. Change the Port that is used to send files over SCP or FTP. The default port for SSH/SCP/SFTP is 22. The default port for FTP is 21.
6. For Username and Password, enter the credentials for the user logging in to the host machine. This user must have write/execute permissions for the directory that
is specified in the Directory field.
7. Use the Scheduling section to define a schedule for running this operation on a regular basis.
8. Click Save to save the configuration. The system attempts to verify the configuration by sending a test data file to that location. If the operation fails, it displays an
error message.
9. Click Run Once Now to run the operation once.
10. To verify that files have been exported, check the Aggregation/Archive Log. There should be a Send activity for each CSV or CEF file exported.
To define a default separator, open the Global Profile by clicking Setup > Tools and Views > Global Profile.
To enter a label to be included in all file names, go to Tools > Audit Process Builder.
Note:
The Syslog maximum message size is 4000. CSV results are truncated if they exceed this limit.
Set the encoding to UTF-8 no matter what application is used to read .CSV files. Excel defaults to a different character set and can corrupt the .CSV files. Also, when using
Excel, import the .CSV file and select UTF-8 encoding instead of just opening the file and letting Excel launch based on file association.
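A minimal sketch of the UTF-8 handling recommended above: write and read the CSV file with an explicit encoding rather than relying on an application's default character set. The column names are hypothetical.

```python
import csv

rows = [["db_user", "verb"], ["scott", "SELECT"]]

# Always pass encoding="utf-8" explicitly when writing...
with open("export.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)

# ...and when reading, so no default charset is silently applied.
with open("export.csv", newline="", encoding="utf-8") as f:
    read_back = list(csv.reader(f))
```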
You can export one type of definition (reports, for example) at a time. Each element that is exported can cause other referenced definitions to be exported as well. For
example, a report is always based on a query, and it can also reference other items, such as IP address groups or time periods. All referenced definitions (except for
security roles) are exported along with the report definition. However, only one copy of a definition is exported if that definition is referenced in multiple exported items. An
export of policies or queries exports only the groups that are referenced by the exported policies or queries. Previously an export of policies or queries would export all
groups.
Export/Import Definitions
Export and Import Definitions are used to save and then restore functional data from a given Guardium system. For example, this function enables you to create a
report on one Guardium system and then import that same report onto another server with the same Guardium installed version.
Note: This function is not the same as a full backup of the server. Backups should still be defined and run on a scheduled or manual basis.
Export Definitions - Are used to save and share defined functional values such as Reports/Queries, CAS data, Classifier Data, and so on. The export types are saved
onto your PC as a .sql file type.
Import Definitions - This function is used to import the exported definitions onto servers that use the SAME Guardium Software version. For example, if you export
definitions from a Guardium V10 system, then you can import those definitions only onto another V10 system.
Note:
When you export graphical reports, the presentation parameter settings (colors, fonts, titles, and so on) are not exported. When imported, these reports use the
default presentation parameter settings for the importing system.
Subscribed groups are not exported. When you export definitions that reference subscribed groups, the user must ensure that all referenced subscribed groups are
installed on the importing appliance (or Central Manager in a federated environment).
The logs of Export/Import Definitions have the same retention period as the monitored database activity logs.
Comments are not included in export.
When audit process definitions of scheduled runs (including schedule time) are exported to another system, the ACTIVE check box in Audit Process Builder is
cleared, leaving the imported process INACTIVE.
Schedule start time of an audit process that is defined on one appliance and exported to another (unrelated) appliance: if the original schedule start time is
defined, it is retained. If the original schedule start time is empty, the imported schedule start time is set to the time of import.
When you export a datasource with an open source driver, the open source driver is not included in the export. You must first upload the open source driver
to the new system before importing the datasource definition that was created with it; otherwise, the DataDirect driver is substituted for the open source
driver when it is imported.
Large complex imports can take a very long time and can exceed the length of the user's session. If this happens and the session times out, the import continues to
run in the background until it completes.
When you export the definition of classifier policies, any custom evaluation classes that are associated with the policies are not exported with the definition. For the
imported policies to work, custom evaluation classes must be uploaded separately.
Exporting and importing definitions between systems with different languages is not supported. For example, a file exported from a Guardium® system set to
Simplified Chinese cannot be imported into a Guardium system set to English.
XACML (eXtensible Access Control Markup Language) is a declarative access control policy language that is implemented in XML, together with a processing model that
describes how to interpret the policies.
The export/import to standard XACML is used as a bidirectional interface to transfer policy rules between Optim Designer and Guardium.
Optim Designer can convert data values for various purposes and through various means. In the core Optim runtime (z/OS and Distributed) this is achieved through the
invocation of data privacy functions that are declared within column maps. In Optim Privacy this is specified, by the user, as the application of a data privacy policy on an
attribute, referenced by an entity within a data access plan.
Customers who own both products, Optim Privacy and Guardium, can export policies and privacy information to XACML from one product and import them into
the other.
Note: XACML imports from previous versions of Guardium are not supported.
To export Guardium policies to XACML, follow these steps:
To Import an XACML file from another Guardium system or Optim Privacy, open the Definitions Import by clicking Manage > Data Management > Import.
Importing Groups
When you import a group that already exists, members may be added, but no members will be deleted.
Importing Aliases
When you import aliases, new aliases may be added, but no aliases will be deleted.
In addition, imported user definitions are disabled. This means that imported users can receive email notifications that are sent from the importing system, but they are
not able to log in to that system until the administrator enables the account.
If a user definition exists on the importing system, it may not be for the same person that is defined on the exporting system. For example, assume that on the exporting
system the user jdoe with the email address john_doe@aaa.com is a recipient of output from an exported alert. Assume also that on the importing system, the jdoe user
already exists for a person with the email address jane_doe@zzz.com. The exported user definition is not imported, and when the imported alert is triggered, email is sent
to the jane_doe@zzz.com address. In either case, when security roles or user definitions are not imported, check the definitions on both systems to see if there are
differences. If so, make the appropriate adjustments to those definitions.
The following definition types can be exported:
Auto-discovery Process
CAS Hosts
Classifier Policy
Custom Domain
Custom Table
Datasource
Event Type
Group - A check box in the Definitions export screen, Exclude group members, is visible only for data sets that have groups somewhere in the export hierarchy
(for example, an export of an alert also includes the query of the alert, and the query might include groups in its conditions). If the export of a datasource does
not include groups, the check box is not visible. When the check box is selected, the export file includes groups (if groups are linked to the exported definition),
but the members of the groups are not exported. The check box is cleared by default, its state is not persistent, and it applies only to the current export.
Named Template
Privacy Set
Query
Replay
Report - A check box in the Definitions export screen, Exclude group members, behaves as described in the Group line item.
Role
Security Assessment
User
Users Hierarchy
Import Definitions
1. Open the Definitions Import pane by clicking Manage > Data Management > Import.
2. Click Browse to locate and select the file.
3. Click Upload. You are notified when the operation completes and the definitions contained in the file are displayed. Repeat to upload additional files.
4. Use the Fully synchronize group members check box to control how group members are added, whether they are imported directly or through other data sets such
as queries or policies. If the check box is cleared, new members that are in the import are added, but members that are not in the import are not removed. If the
check box is selected, group members that are not in the import are removed. Use the Set as default button next to the check box to save the setting.
5. Click Import this set of Definitions to import a set of definitions, or click Remove this set of Definitions without Importing to remove the uploaded file without
importing the definitions.
6. You will be prompted to confirm either action.
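The member-merge behavior that the Fully synchronize group members check box controls (step 4) can be sketched as simple set operations. This is an illustrative model of the documented behavior, not the product's actual implementation:

```python
def merge_group_members(existing, imported, fully_synchronize):
    """Sketch of the 'Fully synchronize group members' behavior.

    Cleared: members in the import are added, existing members are kept.
    Selected: members not in the import are removed, so the result is
    exactly the imported membership.
    """
    if fully_synchronize:
        return set(imported)
    return set(existing) | set(imported)
```

For example, with existing members {A, B} and an import containing {B, C}, the result is {A, B, C} when the check box is cleared and {B, C} when it is selected.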
Note: An import operation does not overwrite an existing definition. If you attempt to import a definition with the same name as an existing definition, you are
notified that the item was not replaced. If you want to overwrite an existing definition with an imported one, you must delete the existing definition before
performing the import operation.
Distributed Interface
Use this configuration screen to define the Distributed Interface and upload the Protocol Buffer (.proto) file to the DIST_INT database.
From this database, Query Domain metadata is built automatically. After the metadata is built, the user can go to the Custom Domain Builder to modify or clone the data
and build custom reports. The distributed interface data uses protocol buffers. Protocol buffers are a flexible, efficient, and automated mechanism for serializing
structured data.
For Universal Feed type 3, upload the protocol definition file for configuration of DIST_INT database by clicking Manage > Data Management > Distributed Interface.
Note: Click Maintenance to manage the table engine type and table index. The table engine types for universal feed tables (InnoDB and MyISAM) appear for all
universal feed tables because the data stored in the Guardium internal database is MySQL-based. See External Data Correlation for further information on InnoDB and
MyISAM maintenance.
enum Data_type {
  DOUBLE = 1;
  LONG = 2;
  INT = 3;
  FLOAT = 4;
  DATE = 5;
  BOOLEAN = 6; // convention is to store it as 0 and 1 in the double_value
  STRING = 7;  // stored in string_value
}
optional Data_type dataType = 5;
optional string unit = 6; // unit for the value
}
message AssetRelationEvent {
optional AssetRelationID unique_key__ = 1;
required string relationshipType = 2;
repeated RelationshipProperty property = 3;
optional bool deleted = 4;
}
message RelationshipProperty {
optional RelationPropertyID unique_key__ = 1;
optional string value = 2;
}
message RuleEvent {
optional string ruleName = 1;
optional bool enabled = 2;
}
// --- Metadata --- All unique identifiers must be defined here
message Identifier {
optional InfoPropertyID infoPropertyId = 1;
optional MetricPropertyID metricPropertyId = 2;
optional AssetID assetId = 3;
optional AssetRelationID assetRelationId = 4;
optional RelationPropertyID relationshipPropertyId = 5;
}
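The storage convention noted in the Data_type comments above (BOOLEAN kept as 0 and 1 in double_value, STRING kept in string_value) can be sketched as follows. This is an illustrative model only: the field names double_value and string_value come from the comments in the listing, and the handling of the numeric and DATE types is an assumption:

```python
def store_value(data_type, value):
    """Illustrative sketch of the value-storage convention from the
    Data_type comments; not actual Guardium or protobuf code."""
    # BOOLEAN: convention is to store it as 0 and 1 in the double_value
    if data_type == 'BOOLEAN':
        return {'double_value': 1.0 if value else 0.0}
    # STRING: stored in string_value
    if data_type == 'STRING':
        return {'string_value': str(value)}
    # DOUBLE, LONG, INT, FLOAT, DATE: assumed to be stored numerically
    return {'double_value': float(value)}
```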
1. Click Manage > Activity Monitoring > SSH Public Key Management, and do one of the following:
To create a key, click New.
To generate a key, click Generate.
To modify a key, select it from the list and click Modify.
To remove a key, select it from the list and click Remove.
2. Fill in the appropriate information on the SSH Public Key Edit panel and click Apply to save.
There is a problem with this website's security certificate. The security certificate presented by this website was issued for a
different website's address. Security certificate problems may indicate an attempt to fool you or intercept any data you send to
the server.
See Certificate CLI Commands for more information on all the certificate commands.
Note: One prerequisite is that you must provide a public certificate from the CA you will be using to sign your certificates (Verisign, Thawte, GeoTrust, GoDaddy, Comodo,
within-your-company, and so on).
Note: Guardium does not provide CA services and will not ship systems with different certificates than the one installed by default. A customer that wants their own
certificate will need to contact a third-party CA.
Note: If the certificate is not self-signed, you MUST also obtain the public certificate for each signer in the chain, up to the root (that is, the certificate that is self-signed).
You can use the command openssl x509 -in t.pem -text -noout to show the contents of an X.509 certificate.
Procedure
1. Have available the public certificate from the CA (Certificate Authority) you will be using to sign your certificates (from Verisign, Thawte, GeoTrust, GoDaddy,
Comodo, in-house, and so on).
2. Log into the CLI on the individual Guardium system you wish to have a signed certificate on.
Before executing the command, obtain the appropriate certificate (in PEM format, not binary format) from your CA, and copy the certificate, including the Begin and
End lines, to your clipboard.
3. Enter the command, store certificate keystore. The following prompt will be displayed:
Please paste your CA certificate, in PEM format. Include the BEGIN and END lines, and then press CTRL-D.
Paste the PEM-format certificate to the command line, then press CTRL-D. You will be informed of the success or failure of the store operation.
Now the CA you will sign with is set as trusted on the Guardium system.
4. Next, from the CLI command prompt, type: create csr gui.
Fill in the requested information. If the CN (common name) of the certificate is not set to the hostname.domain of the box, certificate errors from the browser will
result.
There are no parameters, but you will be prompted to supply the organizational unit (OU), country code (C), and so forth. Be sure to enter this information correctly.
The last prompt is as follows:
DSA, or the Digital Signature Algorithm, is a federal information processing standard (FIPS) for digital signatures. RSA is a public-key cryptosystem that involves key
generation, encryption, and decryption. The default encryption algorithm is RSA.
This is the generated CSR: Certificate Request: Data: Version: 0 (0x0) Subject: C=US, ST=MA, L=Littleton, O=XYZCorp,
OU=Accounting, CN=g2.xyz.com -----BEGIN NEW CERTIFICATE REQUEST-----
MIICWjCCAhcCAQAwVDELMAkGA1UEBhMCVVMxEDAOBgNVBAgTB1dhbHRoYW0xETAPBgNVBAoTCEd1
YXJkaXVtMRUwEwYDVQQLEwxndWFyZGl1bS5jb20xCTAHBgNVBAMTADCCAbgwggEsBgcqhkjOOAQB
MIIBHwKBgQD9f1OBHXUSKVLfSpwu7OTn9hG3UjzvRADDHj+AtlEmaUVdQCJR+1k9jVj6v8X1ujD2
y5tVbNeBO4AdNG/yZmC3a5lQpaSfn+gEexAiwk+7qdf+t8Yb+DtX58aophUPBPuD9tPFHsMCNVQT
WhaRMvZ1864rYdcq7/IiAxmd0UgBxwIVAJdgUI8VIwvMspK5gqLrhAvwWBz1AoGBAPfhoIXWmz3e
y7yrXDa4V7l5lK+7+jrqgvlXTAs9B4JnUVlXjrrUWU/mcQcQgYC0SRZxI+hMKBYTt88JMozIpuE8
FnqLVHyNKOCjrh4rs6Z1kW6jfwv6ITVi8ftiegEkO8yk8b6oUZCJqIPf4VrlnwaSi2ZegHtVJWQB
TDv+z0kqA4GFAAKBgQCONsEB4g4/limbHkuZ5YnLn9CGM3a2evEnqjXZts4itxeTYwPQvdkjdSmQ
kaQlBxmNUsZOJZrq5nC5Cg3X9spa+BzFr+PgR/5zka17nHcxKXCjVjLk451L67KllXv61TUfv/bU
PKmiaGKDttsP2ktG4dBFXQdICJEGo0aNFCYn6qAAMAsGByqGSM44BAMFAAMwADAtAhUAhHTY5z9X NiBAuyAC9PS4GzleYakCFF2kcfxfjX1BFy5I228XWMAU0N95
-----END NEW CERTIFICATE REQUEST-----
Note: For Common Name, use the hostname in FQDN format (fully qualified domain name). If you normally connect to the GUI using the short hostname (for
example, system1) instead of the FQDN (system1.us.ibm.com), you will get an "Address Mismatch" certificate error; you will either have to change the CN to system1 or
connect with https://system1.us.ibm.com:8443/sqlguard to make use of the certificate.
Note: Country Code must be 2 letters.
Note: Keysize can be 1024 or 2048.
5. Copy and paste the generated hash from ---Begin CSR---- to ---End CSR--- into a text document. Now send this off to your CA for them to return the signed key.
Before continuing, check the Subject line to verify that you have entered your company information correctly. From this point forward, use whatever procedure you
would normally use to obtain a server certificate from your CA.
Note: When submitting the request to your CA, make sure you request the certificate in PKCS#7 PEM format.
6. The CA signs the CSR and sends you back your signed key.
7. Now, go back to the CLI prompt on the Guardium system and have the signed key from the CA handy. Type the following: store certificate gui.
Enter the command exactly as shown. You will receive the following information and prompt:
Include the BEGIN and END lines, and then press CTRL-D.
Paste the PEM-format certificate to the command line, then press CTRL-D. You will be informed of the success or failure of the store operation.
8. For the final step, restart the UI using the command restart gui.
You have now successfully installed one certificate for one Guardium unit. Repeat the steps for every Guardium system on-site.
Self Monitoring
The Guardium solution monitors itself to minimize disruptions and correct problems automatically whenever possible.
Guardium uses a three-pronged approach to ensuring that it is available, functioning properly, has not been tampered with, and alerts users of problems:
Reports - Whether textual or graphical, reports are at the core of the Guardium® solution. By using Guardium’s Query Builder and Report Builder, a user can
effectively report on any of the self-monitoring data collected through associated domains and entities. Many of the predefined reports can be enhanced through
more detailed effort to provide higher levels of granularity. A specific query builder has been created (VA Test Tracking) to report on tests that are available for
security assessments.
Alerts - In addition to building reports, a user can define an alert against those reports through defined thresholds that indicate an exception or policy rule violation.
These alerts can either be real-time or determined through historical analysis. The alerts can then trigger notification to users through SMTP, SNMP, syslog, or a
custom Java™ class.
Self-Monitoring Utility - Guardium has implemented an internal self-monitoring daemon (always running) service utility on collectors and aggregators that wakes up
every 5 minutes and performs a system scan, checking components for optimal configuration and operational effectiveness and making repairs when necessary. For
example, if the utility finds the web server down, it first validates a complete shutdown of the service, restarts the service, and then alerts an administrative user.
Components Monitored
Disk space(%full) Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain and Sniffer Buffer
Usage entity to create alerts
CPU Load Reports > Guardium Operational Reports > Buffer Usage Monitor
Uptime and Reboots Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain and Sniffer Buffer
Usage entity to create alerts
Memory Usage
CPU Usage
Memory Usage
Alert: You can use the Queries and Correlation Alerts, utilizing the Guardium Login domain and Guardium
Users Login entity to create alerts
Lost requests Manage > Reports > Activity Monitoring > Dropped Requests
Alert: You can use the Queries and Correlation Alerts, utilizing the Exceptions domain and Exceptions
entity to create alerts
Change in data patterns Reports > Real-time Operational Reports > Values Changed. Alert: See Viewing an Audit Process Definition
for alert: Data Source Changes - alert on any data source changes
Packet rates Reports > Guardium Operational Reports > Buffer Usage Monitor
Request rates Alert: You can use the Queries and Correlation Alerts, utilizing the Sniffer Buffer domain and Sniffer Buffer
Usage entity to create alerts
Ignored data
Scheduled Jobs Exceptions Reports > Guardium Operational Reports > Scheduled Job Exceptions, or See Predefined admin Reports:
Alert: You can use the Queries and Correlation Alerts, utilizing the Exceptions domain and Exception Type
entity to create alerts.
Audit processes status Reports > Guardium Operational Reports > Number of Active Audit Processes, or See Predefined admin
Reports.
Alert: You can use the Queries and Correlation Alerts, utilizing the Audit Process domain and Audit Process
entity to create alerts
Inspection Engine Changes Reports > Activity Monitoring > S-TAP Configuration Change History
Alert: See Viewing an Audit Process Definition for alert: Inspection Engines and S-TAP - alert on any
activity related to inspection engine and S-TAP configuration
Guardium Users Activity - Login/logout Reports > Guardium Operational Reports > Logins to Guardium, or See Predefined admin Reports
Alert: You can use the Queries and Correlation Alerts, utilizing the Guardium Login domain and SQL Guard
Login entity to create alerts
Failed Logins Reports > Guardium Operational Reports > Logins to Guardium, or See Predefined admin Reports
Alert: See Viewing an Audit Process Definition for alert: Failed Logins To Guardium - alert if there are more than
5 failed logins in the last 11 minutes, or select Tools > Report Building > drop-down Report Title:
Guardium Logins. See Reports for additional information
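The "more than 5 failed logins in the last 11 minutes" rule above is a sliding-window count. A minimal sketch, assuming timestamps in minutes and a window that looks back 11 minutes from each event (the product's exact windowing semantics are not documented here):

```python
from collections import deque

def failed_login_alerts(events, threshold=5, window_minutes=11):
    """Return the event times at which an alert would fire.

    events: ascending failed-login timestamps, in minutes.
    An alert fires when more than `threshold` failures fall
    within the trailing `window_minutes` window.
    """
    window = deque()
    alerts = []
    for t in events:
        window.append(t)
        # drop failures that fell out of the trailing window
        while window and window[0] <= t - window_minutes:
            window.popleft()
        if len(window) > threshold:
            alerts.append(t)
    return alerts
```

Six failures within 11 minutes trigger an alert at the sixth failure; the same six failures spread over a longer interval do not.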
User Activity Audit Trail Reports > Guardium Operational Reports > User Activity Audit Trail, or See Predefined admin Reports
Alert: You can use the Queries and Correlation Alerts, utilizing the Guardium Activity domain and SQL
Guard User Activity Audit entity to create alerts
Note: User activity includes those instances where a user changes to the root shell, providing a log of
their root activity.
Creation/Deletion of Users/Roles Reports > Guardium Operational Reports > User Activity Audit Trail, or See Predefined admin Reports
Alert: See Viewing an Audit Process Definition for alert: Guardium - Add/Remove Users - alert on any
Addition or Removal of Guardium User
Permissions monitoring Reports > Guardium Operational Reports > Guardium Users, Guardium Roles, or Guardium Applications
Alert: You can use the Queries and Correlation Alerts, utilizing the Application domain and Application
Data entity to create alerts
S-TAP® Info (Central Manager) Report: See S-TAP Reports. On a Central Manager, an additional report, S-TAP Info, is available. This report
monitors S-TAPs of the entire environment. Upload this data using the Custom Table Builder. This report is
the result of uploading data using remote sources on a Central Manager and using that data to see a
consolidated view of S-TAPs.
S-TAP Info is a predefined custom domain that contains the S-TAP Info entity; like the entitlement domain, it is
not modifiable.
The nanny watches key components and critical resources within the Guardium system—guaranteeing their availability and reliability. These resources and components
include:
Web service monitoring - service port (default 8443) not responding or tomcat service is not up
syslog message
mail admin
will issue restarts of the web service
Inspection Engine activity - snif overloaded, not responding, or failure
syslog message
mail admin
mail guardium support (optional)
will try and fix by restarting the snif under certain conditions
will try and respawn snif if process dies
Diskspace utilization - alerts when > 75% on the critical partitions
syslog message
alert admin
will perform preventive action by cleaning temporary files when over 95%
Failed login (ssh) to the appliance - checks for ssh daemon's messages and alerts on failed ssh login attempts
mail admin (it is already in syslog)
Monitor internal database (TURBINE) - verify service is up, status, and capacity utilization monitoring
syslog message
mail admin
restart service
File System utilization - every five minutes, Nanny.pl checks file system at /var, warning alert when > 75% in the /var directory, critical alert and services stopped
when >90% in /var directory
syslog message
alert admin
Admin clean-up required, using CLI commands: show filesystem usage, clear filesystem dir, and restart stopped_services
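The /var thresholds described above amount to a simple classification that runs on each five-minute check. A hedged sketch of that logic (the return values and check order are illustrative, not the actual Nanny.pl code):

```python
def check_var_usage(percent_used):
    """Classify /var file system usage per the documented thresholds:
    warning alert when > 75%, critical alert and services stopped
    when > 90%."""
    if percent_used > 90:
        return 'critical: alert admin, stop services'
    if percent_used > 75:
        return 'warning: alert admin'
    return 'ok'
```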
Alert users to issues that may affect system performance, such as: CPU utilization, database disk space, inactive STAPs, and no traffic situations.
The Sniffer Buffer Usage domain is the basis for most of the following alerts.
Create a Query using the Sniffer Buffer Usage domain with the columns and Fields as shown – there are no conditions.
The alert will then be set up to fire only if the utilization is exceeded 360 times in a 24-hour period, that is, 25% of the day.
Note: The Sniffer buffer usage domain is populated once a minute, so there are 1440 entries in a 24-hour period.
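The arithmetic behind the 360-occurrence threshold can be checked directly, since the Sniffer Buffer Usage domain is populated once a minute:

```python
SAMPLES_PER_DAY = 24 * 60   # one entry per minute -> 1440 entries per day
THRESHOLD_COUNT = 360       # fire only after this many over-threshold samples

# 360 of 1440 one-minute samples is 25% of the day
fraction_of_day = THRESHOLD_COUNT / SAMPLES_PER_DAY
```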
To define the alert, click Protect > Database Intrusion Detection > Alert Builder.
1. Create a new query with Sniffer Buffer Usage as the main entity.
2. Set up a new alert in the Alert Builder. Open the Alert Builder by clicking Protect > Database Intrusion Detection > Alert Builder.
Repeat the previous steps to create an alert for monitoring disk space on the collectors.
1. Create a query.
For S-TAPs configured with a primary and secondary collector, if the S-TAP cannot communicate with the primary (for example, due to network issues), it will fail over to
the secondary. If the former primary collector cannot ping the S-TAP, it then generates an inactive S-TAP alert.
No Traffic Alerts
This is a built-in alert and needs to be activated and scheduled.
This alert checks for traffic from an active inspection engine from which the collector previously received traffic, AND for traffic that is processed by the policy. If these
conditions are not both satisfied within 48 hours, an alert is generated.
The following two reports should be scheduled, from the Central Manager, to run weekly on each collector.
Using the Sniffer Buffer Usage domain, create a report with the following fields:
This report displays the key parameters for ALL STAPs and inspection engines for a given collector. The report cannot be modified but can be run on each collector, or from
the Central Manager pointing to each collector in turn, or scheduled via the Audit process on each collector.
When querying, a value of -1 (minus one) indicates a NULL in the database. The table at the end of this section lists the available SNMP OIDs.
SNMP Examples
From a Unix session, you can display SQL Guard SNMP information using the snmpget or snmpwalk commands. (Use snmpget -h or snmpwalk -h to display command
syntax.) Various UI-based software packages are available for displaying SNMP information. Those alternatives are not described here.
For example:
HOST-RESOURCES-MIB::hrStorageSize.101
UCD-SNMP-MIB::ssCpuRawNice.0 = Counter32: 11
Note: Adding the RawUser, RawSystem, and RawNice numbers provides a good approximation of total CPU usage.
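The note above suggests approximating total CPU usage from the raw counters. Because the ssCpuRaw* values are monotonically increasing counters, a usable busy percentage comes from the delta between two samples. A sketch under that assumption (the dictionary keys are illustrative, and ssCpuRawIdle.0 is assumed to be available for the denominator):

```python
def cpu_busy_percent(prev, curr):
    """Approximate CPU busy % from two samples of the UCD-SNMP raw
    counters (ssCpuRawUser.0 + ssCpuRawSystem.0 + ssCpuRawNice.0),
    using ssCpuRawIdle.0 to complete the denominator."""
    busy = sum(curr[k] - prev[k] for k in ('user', 'system', 'nice'))
    idle = curr['idle'] - prev['idle']
    total = busy + idle
    return 100.0 * busy / total if total else 0.0
```

For example, two samples where user/system/nice grew by 300/100/1 ticks while idle grew by 599 ticks give a busy percentage of 40.1%.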
OID                           Name                                   Description
.1.3.6.1.4.1.2021.9.1.7.1     UCD-SNMP-MIB::dskAvail.1               Disk space available in / directory
.1.3.6.1.4.1.2021.9.1.7.2     UCD-SNMP-MIB::dskAvail.2               Disk space available in /var directory
.1.3.6.1.4.1.2021.9.1.8.1     UCD-SNMP-MIB::dskUsed.1                Disk space used in / directory
.1.3.6.1.4.1.2021.9.1.8.2     UCD-SNMP-MIB::dskUsed.2                Disk space used in /var directory
.1.3.6.1.2.1.25.2.3.1.5.1     HOST-RESOURCES-MIB::hrStorageSize.1    Total memory available
.1.3.6.1.2.1.25.2.3.1.6.1     HOST-RESOURCES-MIB::hrStorageUsed.1    Memory in use
.1.3.6.1.4.1.2021.8.1.101.1   UCD-SNMP-MIB::extOutput.1              Open monitored session count
.1.3.6.1.4.1.2021.8.1.101.2   UCD-SNMP-MIB::extOutput.2              Requests logged by the current sniffer process (set to zero for each restart)
.1.3.6.1.4.1.2021.8.1.101.3   UCD-SNMP-MIB::extOutput.3              Last session timestamp
.1.3.6.1.4.1.2021.8.1.101.4   UCD-SNMP-MIB::extOutput.4              Last construct timestamp
.1.3.6.1.4.1.2021.8.1.101.5   UCD-SNMP-MIB::extOutput.5              Memory used by the sniffer process
.1.3.6.1.4.1.2021.8.1.101.7   UCD-SNMP-MIB::extOutput.7              Packets in on ETH1 / out on ETH2; usually only one number (inbound) when a SPAN port or TAP is used
.1.3.6.1.4.1.2021.8.1.101.8   UCD-SNMP-MIB::extOutput.8              Packets in on ETH3 / out on ETH4; usually only one number (inbound) when a SPAN port or TAP is used
.1.3.6.1.4.1.2021.8.1.101.9   UCD-SNMP-MIB::extOutput.9              Packets in on ETH5 / out on ETH6; usually only one number (inbound) when a SPAN port or TAP is used
Other MIBs accessible in the machine are: SNMPv2-MIB, IF-MIB, RFC1213-MIB, and HOST-RESOURCES-MIB.
Open the Running Query Monitor by clicking Manage > Activity Monitoring > Running Query Monitor.
Set the query timeout for all reports and monitors that are running in a portlet. Other query processes, such as policy simulations, audit processes, and internal
processes are not affected by this timeout value. The default is 180 seconds (3 minutes).
Kill any currently running user query. Some queries that are listed in this panel (audit processes, for example) can exceed the query timeout specified. That is
expected, because the Report/Monitor query timeout applies only to reports and monitors running in a portlet.
We do not recommend setting the Query Timeout higher than the default setting (180 seconds) for an extended time. If you set this limit higher, it increases the chances of
overloading the system with ad-hoc reporting activity.
Groups
Using groups makes it easy to create and manage classifier, policy, and query definitions, as well as roll out updates to your S-TAPs and GIM clients. Rather than
repeatedly defining a set of data objects for an access policy, put the objects into a group to easily manage them.
Groups Overview
Group together similar data objects and use them in creating query, policy, and classification definitions. Use one of the many predefined groups, or create your own
group using the Group Builder.
Using the group builder
The group builder provides at-a-glance information about group membership and use and several convenient methods for populating groups.
Using the group builder (legacy)
Using groups in queries and policies
Short overview of conditional operators for queries and where to use groups in policies.
Example: Using groups to create rules and policies
Use groups to quickly specify rule conditions in a policy.
Predefined Groups
This section details the predefined groups in Guardium®.
Groups Overview
Group together similar data objects and use them in creating query, policy, and classification definitions. Use one of the many predefined groups, or create your own group
using the Group Builder.
There are many places where groups are practical to use. By grouping together similar data objects, you can use the whole set of objects in policies, classifications,
queries, and reports, rather than having to select multiple data objects individually.
If you need to make changes to a query or policy, rather than applying those changes to each individual object, you can apply those changes to the group.
S-TAPs and GIM also use groups to make it easier to roll out updates across managed servers.
Group Builder
The Group Builder allows you to create a new group or modify an existing group from the user interface.
The Group Filter screen allows you to easily sort through groups based on application type, group type, description or category.
Types of groups
The field Group Type refers to the type of data that will be grouped together. For example, Server IP expects data arranged as an IP address and Users expects to see
names of users on the application.
Tuple groups
A tuple group allows multiple attributes to be combined to form a single composite group member. An ordered set of three values is called a 3-tuple; an n-tuple
has an n-set of value attributes. This simplifies the specification of conditions for reporting and policy rules.
Tuple groups - Object/Command, Object/Field, Client IP/DB User, Server IP/DB User
3-tuple groups - Client IP/Source Program/DB User, DB User/Object/Privilege
5-tuple group - Client IP/Source Program/DB User/Server IP/Service Instance
7-tuple group - Client IP/Src App/DB User/Server IP/Svc. Name/OS User/DB Name
Tuple supports the use of one slash and a wildcard character (%). It does not support the use of a double slash (//).
Note: Tuple query - If you use a LIKE GROUP condition and the data contains a backslash (\), the result may not be correct. Use IN GROUP instead if the data
contains a backslash.
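The tuple rules above (attributes joined by a single slash, % as a wildcard, double slash unsupported) can be modeled with a small matcher. The member format and attribute values below are illustrative examples, not values taken from the product:

```python
import re

def tuple_member_matches(member, values):
    """Match a tuple group member such as '192.168.1.%/JOE'
    (Client IP/DB User) against a list of attribute values.
    '%' is a wildcard; an empty part means '//' appeared in the
    member, which is not supported, so the match fails."""
    parts = member.split('/')
    if len(parts) != len(values) or '' in parts:
        return False
    for part, value in zip(parts, values):
        pattern = '^' + '.*'.join(re.escape(p) for p in part.split('%')) + '$'
        if not re.match(pattern, value):
            return False
    return True
```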
Predefined groups
There are a number of predefined groups that are included with Guardium. Use the Group Filter and Group Type menu to browse the list of groups and find the one that
best suits your needs.
Group types DB User/DB Password are by default only available to admin users. Modify the group roles if you want to change this default setting.
For example, two predefined groups, Create Commands and DDL Commands, both have a member named CREATE TABLE. If you are querying for either of these groups, all
of the CREATE TABLE members from the reporting period will be counted in that group.
In some cases you may want to define a set of groups so that each member belongs to only one group. For example, suppose that for reporting purposes you need to
group database users into one of two groups: employees or consultants. You would define each of those groups with the same sub-group type (Employee-Status, for
Wildcards in members
Group members can include wildcard (%) characters for use when the group is used in a query condition or policy rule.

Member     Matches                  Does not match
aaa%       aaazzz                   aaz
%bbb       bbb, zzbbb               bb, bbbzzz
%ccc%      ccc, ccczz, zzzccczzz    cc, zzzcczzz
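The matching behavior shown in the table can be sketched by translating % into a regular-expression wildcard. This is an illustrative model of LIKE-style matching, not Guardium code:

```python
import re

def member_matches(member, value):
    """Return True if `value` matches a group member that may contain
    % wildcards, LIKE-style: % matches any run of characters,
    everything else is literal."""
    pattern = '^' + '.*'.join(re.escape(part) for part in member.split('%')) + '$'
    return re.match(pattern, value) is not None
```

For example, member_matches('%ccc%', 'zzzccczzz') is True, while member_matches('%ccc%', 'zzzcczzz') is False.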
Use the group builder to create and populate groups from a variety of sources, including CSV files, external datasources, and existing groups. In addition, the builder
provides at-a-glance information about group membership and where groups are used in security policies, classifier policies, queries, and reports.
Tip:
Guardium V10.1.4 introduces the new group builder interface described in this information. The new group builder is accessible at Setup > Tools and Views > Group
Builder.
The original group builder is accessible at Setup > Tools and Views > Group Builder (Legacy) and described at Using the group builder (legacy).
Creating a group
Procedure
1. Open the group builder by navigating to Setup > Tools and Views > Group Builder.
Editing a group
Procedure
1. Open the group builder by navigating to Setup > Tools and Views > Group Builder.
2. Select a group from the Group Builder table and click the icon.
3. Use the Edit group dialog to modify group settings. To add members to the group or modify group membership, use the Members tab. For information about
populating groups, see Populating groups.
4. Click Save to finish editing the group.
The Members and Populated by columns of the Group Builder table summarize how many members are in a group and how the group is populated. The following
procedure describes how to retrieve detailed information about group membership and the methods used for populating the group.
Procedure
1. Open the group builder by navigating to Setup > Tools and Views > Group Builder.
2. Open the Edit group dialog by selecting a group from the Group Builder table and clicking the icon.
3. View group membership on the Edit group dialog by clicking the Members tab.
The Used in classifier, Used in policy, and Used in query columns of the Group Builder table provide an overview of where groups are used in Guardium. The following
procedure describes how to retrieve detailed information about the policies, queries, and reports where a group is used.
Procedure
1. Open the group builder by navigating to Setup > Tools and Views > Group Builder.
2. Open the details panel by selecting a group from the Group Builder table and clicking Actions > View details.
Attention: The View details action is only enabled when the selected group is being used, for example by policies or queries.
3. Use the Policies and Queries tabs on the details panel to view where the selected group is used in security policies, classifier policies, queries, and reports.
Populating groups
The group builder supports several methods of adding members to groups.
Procedure
1. Click the icon to create a new group or select a group from the Group Builder table and click the icon to edit an existing group.
2. Select the Members tab of the Create new group or Edit group dialog.
3. Populate the group using one of the following methods:
Procedure
2. Use the Datasource menu to import data from a datasource. Click the icon to define a new datasource or the icon to edit an existing datasource.
3. Use the Table name and Column name fields to identify the location of data to import from your datasource.
4. Click OK to continue.
Results
Completing the Import from external datasource dialog automatically creates or updates the following Guardium artifacts:
Custom table
Custom datasource
Custom domain
Custom query
Group
These artifacts are available through standard Guardium tools using naming conventions described in the following table, where [table name] and [column name] are taken
from the Table name and Column name fields of the Import from external datasource dialog.
Custom table: Custom Table Builder > Edit Data; naming convention [table name]_[column name]_[datasource ID]; example USERS_ADMIN_12345
Custom datasource: Custom Table Builder > Upload Data; naming convention [datasource name]_[datasource type] (Custom Domain); example user_repository (Custom Domain)
Group: Group Builder > Populate from Query; example PCI Admin Users
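The custom table naming convention can be sketched as a simple string template. The helper and its inputs below are hypothetical examples, not a Guardium API:

```python
def custom_table_name(table_name: str, column_name: str, datasource_id: int) -> str:
    """Apply the [table name]_[column name]_[datasource ID] convention
    described above (illustrative only)."""
    return f"{table_name}_{column_name}_{datasource_id}"

# Reproduces the USERS_ADMIN_12345 example from the table above.
print(custom_table_name("USERS", "ADMIN", 12345))
```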
Procedure
1. Open the Group Builder by clicking Setup > Group Builder (Legacy).
2. Click Next to bypass the filter and create a new group.
3. In the Create New Group panel, select an option from the Application Type menu to determine which application you will use the group with.
4. Enter a unique Group Description for the new group - do not include apostrophe characters in this field.
5. Select a Group Type Description to choose which type of data you are grouping.
6. Enter a Category, an optional label that you can use to filter and group items in policy violations and reports.
7. Enter a Classification, another optional label that you can use to filter and group items for policy violations and reporting.
8. Select Hierarchical to create a group of groups, where the admin user has access and then passes it along to users in groups in the hierarchy.
9. Click Add to add the group.
Modifying a group
Make modifications to your group, such as adding a member or changing the category of the group. Exercise caution when modifying or deleting a group, because your
changes could affect other users or policies.
Procedure
1. Open the Group Builder (Legacy) by clicking Setup > Group Builder (Legacy).
2. Use the Group Filter to find the group you want to modify, or leave the filter empty and click Next to look at the complete list of groups.
3. When modifying a group, a best practice is to clone the group, save it as a new group, and then modify the clone to prevent undesired effects on the rest of your
Guardium system.
Procedure
Select a group from the Group Members list, enter the new category name into the Category field and click Modify Category to save changes.
Procedure
If you have a new member you want to add to a group, enter the member's name into the Create & add a new Member named field and click Add.
Note: When adding to a group of objects, a valid member name may be an object_name, a schema.object_name, a wildcard pattern such as %object_name, or a
combination of these forms.
The new member is now added to the Group Members list.
Procedure
1. Select the group member to be re-named from the Group Members list. This will also display the current group member name in Rename Selected Member to.
2. Change the name of the group member in the Rename selected Member to field and click Update.
Populating groups
After creating a group or finding the one you want to work with, populate the group with members. Use the Group Builder (Legacy) to add members to a group manually,
or populate it through one of several automated import methods.
The Guardium admin user account will not be changed in any way.
You have the option to clear existing members from a group before importing.
Existing user passwords will not be changed.
By default, new users are disabled when added, assigned the user role, and have blank passwords.
Note:
If you are scheduling an import, take into account any other imports scheduled for the same time, because overlapping schedules can affect the behavior of existing scheduled imports.
Procedure
Configure your LDAP server with your Guardium system. Open the Group Builder by clicking Setup > Group Builder (Legacy), and fill out the required information.
a. For LDAP Host Name, enter the IP address or host name for the LDAP server to be accessed.
b. For Port, enter the port number for connecting to the LDAP server.
c. Select the LDAP server type from the Server Type menu.
d. Check the Use SSL Connection check box if Guardium is to connect to your LDAP server using an SSL (secure socket layer) connection.
e. For Base DN, specify the node in the tree at which to begin the search. For example, a company tree might begin like this: DC=encore,DC=corp,DC=root
f. For Attribute to Import, enter the attribute that will be used to import users (for example: cn). Each attribute has a name and belongs to an objectClass.
g. Check the Clear existing group members before importing check box if you want to delete all existing group members before importing.
h. For Log In As and Password, enter the user account information that will connect to the Guardium server.
i. For Search Filter Scope, select One-Level to apply the search to the base level only, or select Sub-Tree to apply the search to levels beneath the base level.
j. For Limit, enter the maximum number of items to be returned. We recommend that you use this field to test new queries or modifications to existing queries, so that
you do not inadvertently load an excessive number of members.
k. Optional: For Search Filter, define a base DN, scope, and search filter. Typically, imports will be based on membership in an LDAP group, so you would use the
memberOf keyword. For example: memberOf=CN=syyTestGroup,DC=encore,DC=corp,DC=root
l. Click Apply to save the configuration settings.
The Status indicator in the Configuration - General section will change to LDAP import currently set up for this group as follows and the Modify Schedule and Run
Once Now buttons will be enabled. You can now import from your LDAP server.
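The memberOf search filter shown in step k follows standard LDAP filter syntax (RFC 4515). A helper that assembles such a filter might look like the following; the function name and its inputs are illustrative, not part of Guardium:

```python
def member_of_filter(group_cn: str, group_base_dn: str) -> str:
    """Build an LDAP search filter that selects entries belonging to
    the named group, using the memberOf attribute."""
    return f"(memberOf=CN={group_cn},{group_base_dn})"

# Reproduces the example filter from step k.
print(member_of_filter("syyTestGroup", "DC=encore,DC=corp,DC=root"))
```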
What to do next
Run or schedule an import.
Schedule an LDAP import by clicking Modify Schedule, filling out the schedule information, then clicking Save.
Note:
When you import on demand, you have the opportunity to accept or reject each entry returned from the LDAP server.
When you schedule an LDAP import, all of the LDAP entries that satisfy your search criteria will be imported.
Verify that members have been added to a group by selecting the group in the Group Builder, then clicking Modify, and looking at the group's membership.
For larger groups, it may be easier to verify members by using the Guardium Group Details report (Reports > Guardium Group Details).
Procedure
1. Open the Group Builder by clicking Setup > Group Builder (Legacy). Use the filter to find the group you want to populate, or click Next and find the group from the
list of all groups.
2. With a group selected, click the Populate From Query button to open the Populate Group From Query Set Up panel.
3. From the Query menu, select the query to be run.
a. Depending on the type of group being populated, different fields will appear. For most group types, the Fetch Member From Column menu will appear.
b. For paired attribute groups (Object/Command, Object/Field, or Client IP/DB User), two menus will appear: Choose Column for Attribute 1 and Choose
Column for Attribute 2.
c. Select the column (or columns) to be used to populate the group, and any additional parameters for the query. The run-time parameters for the query will
then be added to the pane.
4. Select the Clear existing group members before importing box to delete existing group content before importing new members.
5. Optional: Select a remote source (only available from a Central Manager).
6. Click Save to save the definition.
7. Click Run Once Now to run the query immediately, or click Modify Schedule to set a schedule for the query in the future.
By analyzing stored procedure source code. To use this option, Guardium® must access the database on which the stored procedures have been defined, and the
stored procedures must not be stored in encrypted format.
By analyzing stored procedures in database traffic that has been monitored and logged by Guardium. To use this option, the Guardium appliance must be inspecting
the appropriate database streams, and logging the information (as opposed to using ignore session or skip logging actions), and the analysis task must run while the
data is still on the unit (as opposed to, for example, after an archive/purge operation).
There are two groups involved when populating a group from stored procedures:
Note: Wildcards are not supported in the group members field for stored procedures.
Procedure
1. Open the Group Builder by clicking Setup > Group Builder (Legacy).
You must know where the stored procedures of interest are defined.
The sources must not be stored in encrypted format.
You must have access to the stored procedure sources on those databases.
Procedure
1. Open the Group Builder by clicking Setup > Group Builder (Legacy). Use the filter to find the group you want to populate, or click Next and find the group from the
list of all groups.
Note: This option can only be used with commands or objects group types.
2. With the group selected, click Auto Generated Calling Proc, and select the Using DB Sources option. This opens the Analyze Stored Procedures panel.
3. Click Add Datasource and select a datasource from the Datasource Finder. The selected datasource will appear in the Datasources pane.
4. Optional: Fill in the Query parameters. Some fields only apply to certain databases.
For Sybase, MS SQL Server, and Informix, enter a database name to restrict the operation to that database. If it is blank, all stored procedures in the
master database will be analyzed.
For MySQL, Oracle or DB2 only, enter a schema name to restrict the operation to databases owned by that schema. For MySQL only, the Schema Owner is in
the form user_name@host, where host can be a specific IP or it can be a % to specify all hosts. To get all hosts, enter the schema name followed by %.
For MySQL, Oracle or DB2 only, enter a stored procedure name in Object Name. Wildcard characters may be used. For example, if only interested in the
procedures beginning with the letters ABC, enter ABC% in the Object Name box.
5. In the Source Detail Configuration section, do one of the following:
Add members to an existing group by checking the Append check box, and then selecting a group from the Existing Group Name menu.
Add members to a new group by entering the new group name in New Group Name.
Note: Do not include apostrophe characters in a group name.
6. Select Flatten Namespace to create member names using wildcard characters, so that the group can be used for LIKE GROUP comparisons. For example, if sp_1 is
discovered, the member %sp_1% will be added to the group, and in a LIKE GROUP comparison, the values sp_101, sp_102, sss_sp_103, and so on would all match.
7. Click Analyze Database to begin populating the group. The operation may take an extended amount of time to complete.
An example of a Qualified Objects group member is 192.168.1.0+guardium+oracle+admin+financial object.
Procedure
1. Open the Group Builder by clicking Setup > Group Builder (Legacy). Use the filter to find the group you want to populate, or click Next and find the group from the
list of all groups.
2. With the objects or qualified objects group selected, click Auto Generated Calling Proc, and select the Using Database Dependencies option. This opens the Analyze
Stored Procedures panel.
3. Click Add Datasource and select a datasource from the Datasource Finder. The selected datasource will appear in the Datasources pane.
4. Optional: Fill in the Query parameters.
5. In the Source Detail Configuration section, do one of the following:
Add members to an existing group by checking the Append box, and then selecting a group from the Existing Group Name menu.
Add members to a new group by entering the new group name in New Group Name.
Note: Do not include apostrophe characters in a group name, and make sure that the new group is fully qualified (includes five value attributes: server IP,
instance, DB name, owner and object).
6. Select Flatten namespace to create member names using wildcard characters, so that the group can be used for LIKE GROUP comparisons. For example, if sp_1 is
discovered, the member %sp_1% will be added to the group, and in a LIKE GROUP comparison, the values sp_101, sp_102, sss_sp_103, and so on would all match.
7. In the Include Types section, select database dependencies: Functions, Java classes, Packages, Procedures, Synonyms, Tables, Triggers and/or Views.
8. Click Analyze Database to populate the group. You will be informed of the results.
Procedure
1. Open the Group Builder by clicking Setup > Group Builder (Legacy). Use the filter to find the group you want to populate, or click Next and find the group from the
list of all groups.
Note: The Reverse Dependencies option is available only for Oracle.
2. With the group selected, click Auto Generated Calling Proc, and select the Using Reverse Dependencies option. This opens the Analyze Stored Procedures panel.
3. Click Add Datasource and select a datasource from the Datasource Finder. The selected datasource will appear in the Datasources pane.
4. Optional: Fill in the Query parameters.
5. In the Source Detail Configuration section, do one of the following:
To add members to an existing group, select Append, and then select the group from the Existing Group Name list.
To add members to a new group, enter the new group name in New Group Name.
Note: Do not include apostrophe characters in a group name.
6. Select Flatten namespace to create member names using wildcard characters, so that the group can be used for LIKE GROUP comparisons. For example, if sp_1 is
discovered, the member %sp_1% will be added to the group, and in a LIKE GROUP comparison, the values sp_101, sp_102, sss_sp_103, and so on would all match.
7. In the Include Types section, select database dependencies: Functions, Java classes, Packages, Procedures, Synonyms, Tables, Triggers and/or Views.
8. Click Analyze Database to populate the group. You will be informed of the results.
Procedure
1. Open the Group Builder by clicking Setup > Group Builder (Legacy). Use the filter to find the group you want to populate, or click Next and find the group from the
list of all groups.
2. With the starting group selected, click Auto Generated Calling Proc, and select the Using Observed Procedures option. This opens the Analyze Observed Stored
Procedures panel.
3. To edit an existing configuration, select it from the Source Details menu. To create a new configuration, leave the selection on New.
4. In the Access Information section, select all of the database servers to be analyzed. You can choose any combination of the check-boxes.
5. In the Source Detail Configuration section, do one of the following:
Add members to an existing group by checking the Append box, and then selecting a group from the Existing Group Name menu.
Add members to a new group by entering the new group name in New Group Name.
Note: Do not include apostrophe characters in a group name.
6. Select Flatten namespace to create member names using wildcard characters, so that the group can be used for LIKE GROUP comparisons. For example, if sp_1 is
discovered, the member %sp_1% will be added to the group, and in a LIKE GROUP comparison, the values sp_101, sp_102, sss_sp_103, and so on would all match.
7. Click Save to save the configuration.
8. Set a schedule for the group by doing one of the following:
To run the query immediately and get results now, click Run Once Now.
To define a schedule for the operation, click Modify Schedule.
Procedure
1. Open the Group Builder by clicking Setup > Group Builder (Legacy). Use the filter to find the group you want to populate, or click Next and find the group from the
list of all groups.
2. With the starting group selected, click Auto Generated Calling Proc, and select the Generate selected object option. This opens the Analyze Observed Stored
Procedures panel.
3. To edit an existing configuration, select it from the Source Details menu. To create a new configuration, click New.
4. In the Access Information section, select all of the database servers to be analyzed. You can choose any combination of the check-boxes.
5. In the Source Detail Configuration section, enter a name, and choose an option from the Verb menu.
6. Do one of the following:
Add members to an existing group by checking the Append box, and then selecting a group from the Existing Group Name menu.
Add members to a new group by entering the new group name in New Group Name.
Note: Do not include apostrophe characters in a group name.
7. Select Flatten namespace to create member names using wildcard characters, so that the group can be used for LIKE GROUP comparisons. For example, if sp_1 is
discovered, the member %sp_1% will be added to the group, and in a LIKE GROUP comparison, the values sp_101, sp_102, sss_sp_103, and so on would all match.
8. Click Save to save the configuration.
9. Set a schedule for the group by doing one of the following:
To run the query immediately and get results now, click Run Once Now.
To define a schedule for the operation, click Modify Schedule.
Queries
Queries use conditional operators with groups. Here are examples of each conditional operator:
IN GROUP - If the value matches any member of the selected group, the condition is true. IN ALIASES GROUP works on a group of the same type as IN GROUP, but
assumes that the members of that group are aliases. Note that the IN GROUP and IN ALIASES GROUP operators expect the group to contain actual values or aliases,
respectively. The Query Builder will look for records with database values matching the alias values in the group.
NOT IN GROUP - If the value does not match any member of the selected group, the condition is true. NOT IN ALIASES GROUP works on a group of the same type as
NOT IN GROUP, but assumes that the members of that group are aliases.
IN DYNAMIC GROUP - If the value matches any member of a group that will be named as a run-time parameter, the condition is true. IN DYNAMIC ALIASES GROUP
works on a group of the same type as IN DYNAMIC GROUP, but assumes that the members of that group are aliases.
NOT IN DYNAMIC GROUP - If the value does not match any member of a group that will be named as a run-time parameter, the condition is true. NOT IN DYNAMIC
ALIASES GROUP works on a group of the same type as NOT IN DYNAMIC GROUP, but assumes that the members of that group are aliases.
Note: A group may contain either aliases or actual values, according to the operator used; IN GROUP and IN ALIASES GROUP cannot be used at the same time.
LIKE GROUP - If the value is like any member of the selected group, the condition is true. This condition enables wildcard (%) characters in the group member
names.
Note: A like member value uses one or more wildcard (%) characters, and matches all or part of the value. For a like comparison, alphabetic characters are not case
sensitive. For example, %tea% would match tea, TeA, tEam, or steam.
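These operator semantics can be sketched in a few lines. This is an illustrative model, not Guardium's implementation, and the group contents are hypothetical:

```python
import re

def like_group(value: str, group: set) -> bool:
    """LIKE GROUP: true if the value is like any member, where % stands
    for any run of characters and alphabetic case is ignored."""
    for member in group:
        pattern = ".*".join(re.escape(part) for part in member.split("%"))
        if re.fullmatch(pattern, value, re.IGNORECASE):
            return True
    return False

def in_group(value: str, group: set) -> bool:
    """IN GROUP: true when the value matches a member exactly."""
    return value in group

def not_in_group(value: str, group: set) -> bool:
    """NOT IN GROUP: true when the value matches no member."""
    return value not in group

# The %tea% example above: matches tea, TeA, tEam, and steam.
assert all(like_group(v, {"%tea%"}) for v in ("tea", "TeA", "tEam", "steam"))
```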
Anywhere there is a Group drop-down menu on the rule definition pane you can select a group.
Further, if you want to create or modify a group on the fly, click the Groups icon to open a Group Definition window and make your desired changes.
For example: if you want to capture activity occurring on your production servers, rather than typing in full IP addresses each time, you could create a group Production
Servers and use that.
2. Create a new policy by clicking the icon to open the Policy Definition window.
3. Define the policy definition, then click Apply to save the policy.
4. Click Edit Rules to open the Policy Rules window and begin adding rules to the policy.
5. Click Add Rules > Add Access Rule to add a new rule to the policy.
6. Begin by providing a Description for the rule. Optionally provide Category and Classification labels.
7. Specify where to look for data. From the Server IP row, select the (Public) PCI Authorized Server IPs group. The rule will apply to all activity from all PCI servers.
Note: You can view the members of any group or modify any group by going to the Group Builder.
8. Specify unauthorized users. From the DB User row, mark the Not check box and select the (Public) Authorized Users group. The rule will apply to all users who are
not in the (Public) Authorized Users group.
9. Specify sensitive objects. From the Object row, select the (Public) PCI Cardholder Sensitive Objects group. The rule will now apply to all unauthorized users on PCI
servers looking to access PCI sensitive objects.
10. Add an action to the rule by clicking Add Action and selecting Action > LOG FULL DETAILS from the menu. Click Apply to save the rule. This action logs details of the
access, including an exact timestamp of the access.
11. Add another action to the rule by clicking Add Action and selecting Action > ALERT ONCE PER SESSION from the menu. Specify an alert destination, then click
Apply to save the rule. This action sends or logs an alert indicating that the rule was triggered.
12. Click Save to save the rule.
13. Install the policy.
a. Find the policy that you created. Click Back twice, or click Policy Builder to get to the Policy Finder and browse the list of policies.
b. With the policy selected, choose Install & Override from the installation action menu.
c. Click OK to confirm the policy installation, and then check Latest Logs and Violations to verify the policy was installed.
The policy is now installed and active. Any person not in the (Public) Authorized Users group attempting to access an object in the (Public) PCI Cardholder
Sensitive Objects groups will have their session logged and will trigger an alert indicating the access.
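The installed rule's trigger condition can be modeled as a simple predicate over the three groups. The group contents below are hypothetical examples, not the real (Public) groups:

```python
# Hypothetical group contents for illustration only.
PCI_SERVER_IPS = {"192.168.1.10", "192.168.1.11"}
AUTHORIZED_USERS = {"appuser", "dba1"}
PCI_SENSITIVE_OBJECTS = {"CARDHOLDER_DATA", "CARD_NUMBERS"}

def rule_fires(server_ip: str, db_user: str, obj: str) -> bool:
    """True when the access rule above would trigger: activity on a PCI
    server, by a user NOT in the authorized group, touching a PCI
    sensitive object."""
    return (server_ip in PCI_SERVER_IPS
            and db_user not in AUTHORIZED_USERS     # the Not check box
            and obj in PCI_SENSITIVE_OBJECTS)

# An unauthorized user reading cardholder data triggers the rule;
# an authorized user, or a non-PCI server, does not.
assert rule_fires("192.168.1.10", "intruder", "CARDHOLDER_DATA")
assert not rule_fires("192.168.1.10", "dba1", "CARDHOLDER_DATA")
```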
Predefined Groups
This section details the predefined groups in Guardium®.
The following table describes the predefined groups that are included with your Guardium system. To view the list of all groups, open the Group Builder by clicking Setup >
Group Builder. Select SQL_APP_NAME from the Applications menu, and click Next. From the next screen, manage members from Selected Groups. The term Group Type
refers to the type of data expected for the label. For example, the group type Server IP expects data arranged as an IP address (192.168.1.0), and the group type Users
expects names of users of the application.
Additional predefined groups are added periodically, and these additional predefined groups may not be described here. Open the Group Builder to see all existing
groups.
Predefined groups of group type DB User/DB Password are available only to users with the admin role. Users can, if preferred, add other roles or even make the groups
available to all roles.
DB2® zOS Groups zOS Audit Dynamic SQL Group Type for DB2 commands
DB2 zOS Groups zOS Audit Query Group Type for DB2 commands
DB2 zOS Groups zOS Audit Updates Group Type for DB2 commands
DB2 zOS Groups zOS Audit Deletes Group Type for DB2 commands
DB2 zOS Groups zOS Audit Inserts Group Type for DB2 commands
DB2 zOS Groups zOS Audit Utilities Group Type for DB2 commands
DB2 zOS Groups zOS Audit Object Maintenance Group Type for DB2 commands
DB2 zOS Groups zOS Audit User Maintenance Group Type for DB2 commands
DB2 zOS Groups zOS Audit User Authorization Changes Group Type for DB2 commands
DB2 zOS Groups zOS Audit DB2 Commands Group Type for DB2 commands
DB2 zOS Groups zOS Audit Plan/ Package Maintenance Group Type for DB2 commands
IMS™ zOS Groups zOS IMS Audit Query Group Type for IMS commands
IMS zOS Groups zOS IMS Audit Updates Group Type for IMS commands
IMS zOS Groups zOS IMS Audit Deletes Group Type for IMS commands
IMS zOS Groups zOS IMS Audit Inserts Group Type for IMS commands
IMS zOS Groups zOS IMS Audit DB Commands Group Type for IMS commands
Security Assessment Builder DB2 Database Version+Patches Used for (specific) database version and patch level tests.
Security Assessment Builder Netezza® Version+Patches Used for (specific) database version and patch level tests.
Security Assessment Builder Postgres Version+Patches Used for (specific) database version and patch level tests.
Security Assessment Builder DB2 Allowed Grants to Public TUPLE, Object/Command Application 8 (Security assessment)
Security Assessment Builder Informix Allowed Grants to Public List of objects/commands for which grants to public are allowed.
Security Assessment Builder MS-SQL Allowed Grants to Public These objects will be skipped on MS-SQL and Sybase tests that check grants to public.
Security Assessment Builder MySQL Allowed Grants to Public
Security Assessment Builder Netezza Allowed Grants to Public
Security Assessment Builder Oracle Allowed Grants to Public
Security Assessment Builder Postgres Allowed Grants to Public
Security Assessment Builder Teradata Allowed Grants to Public
Note: An exceptions group can contain a regular expression or just a member. If it is a regular expression, the group member must start with (R) (case sensitive), and
the records in the detail will be checked against the regular expression after the (R). If the member does not start with (R), the detail record will be considered an
exception only if it is equal to the group member.
Security Assessment Builder MS-SQL Extended Procedures Allowed Group Type is Objects
Public Account Management Commands Commands used to maintain accounts (users, roles, permissions),
examples: REVOKE, GRANT, ALTER/CREATE/DROP USER
Public Account Management Procedures Account Management Objects, stored Procedures used to maintain accounts
(users, roles, permissions)
Public Administration Objects Privileged Objects, objects that only DBA or Sys Accounts should access.
These accounts are locked for "public" by default.
Public Administrative Commands Privileged Commands, commands that should be executed only by
DBAs. Examples: GRANT, BACKUP, DDL commands
Public Administrative Programs Database utilities (clients) that come with the database, usually reside on
the database server, and could be used by the server itself
Public ALTER Commands Examples, alter database, alter procedure, alter profile, alter session, alter
user
Public Application Privileged Commands Public privileged commands that should be revoked from "public", but not
revoked since they are used by the application
Public Application Privileged Procedures Application Privileged Objects, public privileged procedures that should be
revoked from "public" but not revoked since they are used by the application
Public Application Schema Users Application Users, database users used by the application to maintain/use
the application tables
Public Connection Profiling List Group Type is Client IP/Src App/DB User/Server IP/SVC. Name
Public CREATE Commands Examples, create context, create database link, create function, create
statistics, create type, create user
Public Credentials Related Entities Guardium Audit Types, Self-Monitoring, examples, allowed_role,
LDAP_config, Turbine_user_group_role
Public Data Transfer Commands Backup Commands, commands dealing with backup/restore of database
data
Public Data Transfer Procedures Data Transfer Objects, procedures dealing with backup/restore of database
data (mostly on MSS and SYB)
Public DB Predefined Users Either non-admin predefined users or all predefined users, including
administrative ones
Public DDL Commands Data Definition Language, schema-privileged commands, examples, ALTER,
CREATE, DROP
Public DW All Object-Field There are five predefined reports that use monitored data to show object
names. These reports all start with the prefix DW (Data Warehouse). See the
DW All Objects help topic, How to report on dormant tables/columns, for further
information on how to use these predefined reports.
DW Execute Accessed Objects
Public GRANT Commands Examples, grant, grant objectives, grant system privileges
Public Guardium Audit Categories for Detailed Reporting Guardium patches, TURBINE_USER_GROUP_ROLE
Public Java™ Commands Examples, alter java, create java, drop java
Public Masked_SP_Executions_MS_SQL_SERVER For MS SQL Server, a group that includes a collection of stored procedure
(SP) names. If there is an execution of an included procedure, then
everything will be masked, even if in quotes. Predefined as empty.
Public Masked_SP_Executions_Sybase For Sybase, a group that includes a collection of stored procedure (SP)
names. If there is an execution of an included procedure, then everything
will be masked, even if in quotes. Predefined as empty.
Public Peer Association Commands Commands dealing with links/replications of data, examples, links, log
shipping, replications, snapshots
Public Peer Association Procedures Peer Association Objects, procedures dealing with links/replications of data
Public Performance Commands Examples, analyze, create statistics, update all statistics
Public Procedural Commands Examples, begin, call, execute, exit, repeat, set
Public PROCEDURE DDL Examples, alter procedure, create procedure, drop procedure
Public Public selectable object Select-only Objects, tables to which access is granted to public by default
Public REVOKE Commands Examples, revoke object privileges, revoke system privileges
Public System Configuration Commands Database configuration commands (subset of Administrative Commands)
Public System Configuration Procedures System Configuration Objects (subset of Administration Objects)
Public Vulnerable Objects (with wildcards) Database objects with reported vulnerabilities
Security Roles
Security roles are used to grant access to data (groups, queries, reports, etc.) and to grant access to applications (Group Builder, Report Builder, Policy Builder, CAS,
Security Assessments, etc).
By default, when a component is initially defined, only the owner (the person who defined it) and the admin user (who has special privileges) are allowed to access and
modify that component.
You can allow other users to access the components you define by assigning security roles. For example, if you assign a security role named DBA to an audit process, all
users assigned the DBA role will be able to access that audit process.
Note: In order to configure LDAP user import, the accessmgr user must have the privilege to run the Group Builder. In certain situations, when changes are made to role
privileges, accessmgr's privilege to run the Group Builder can be taken away. This results in an inability to save or successfully run LDAP user import. Go to the access
management portal and select Role Permissions. Choose the Group Builder application and make sure that there is a checkmark in the all roles box or a checkmark in the accessmgr box.
To add a role:
1. Login as accessmgr and open the User Role Browser by clicking Access > Access Management > User Role Browser.
2. At the end of the role browser, click Add Role.
3. In the Role Form panel, enter a new Role Name and click Add Role.
To delete a role:
1. Login as accessmgr and open the User Role Browser by clicking Access > Access Management > User Role Browser.
2. Click Delete for any role, and then click Confirm Deletion.
Notifications
Use the Alerter and Alert Builder to create notifications. When email or other notifications are required for alerting actions, follow this procedure for each type of
notification to be defined.
Alerter configuration
1. Before you choose alerting actions, you must configure the email SMTP settings in the Alerter.
2. Open the Alerter by clicking Protect > Database Intrusion Detection > Alerter.
3. Fill out the SMTP and/or SNMP information.
4. After filling out each section, click Test Connection, and verify that the connection is working. You will receive a message stating the connection is unreachable if the
connection is not working.
5. Click Apply to save the configuration.
6. At a minimum, IP Address/Host name, port, and return email address must be specified.
7. Select Mail from the Notification Type menu. If the Severity of the message is HIGH, the Urgent flag is set.
8. Select a user (which can be an individual or group) from the Alert Receiver list. Additional receivers for real-time email notification are Invoker (the user that
initiated the actual SQL command that caused the trigger of the policy) and Owner (the owner/s of the database). The Invoker and Owner are identified by retrieving
user IDs (IP-based) configured by using the Guardium® APIs.
Build an alert
1. After configuring the Alerter, open the Alert Builder by clicking Protect > Database Intrusion Detection > Alert Builder.
2. Fill out the information in the Settings, Alert Definition, Alert Threshold, and Notification sections and click Apply.
3. Choose who will receive the notifications by clicking Add Receiver... and choosing a user.
The overall workflow for creating a real-time policy alert is:
1. Create a policy.
2. Add rules to the policy.
3. Install the policy.
4. Set up a real-time alert when the policy is triggered.
Prerequisites
Configure SMTP in the Alerter. Open the Alerter by clicking Protect > Database Intrusion Detection > Alerter, and then fill out the SMTP information.
Note: Policy violations can also be seen as a report in Incident Management. See Policies for complete information.
Procedure
1. Create a policy.
a. Open the Policy Builder by clicking Setup > Tools and Views > Policy Builder for Data or Applications.
b. Click New, or modify an existing policy by selecting the policy from the Policy Finder and clicking Modify.
c. Fill out the required information and click Apply to save the policy.
2. Add rules to the policy.
a. After saving the policy, click Edit Rules to see the existing policy rules.
b. Click Add Rules... to display the five rule options.
c. Choose Add Exception Rule and fill out the required information.
The Exception Rule Definition screen begins with the following items:
Before you can use a custom class, you must upload it onto the Guardium system. Click Setup > Custom Classes > Alerts > Upload Alerting Class to upload a
custom alerting class. Click Browse to select a file, then Apply to save.
Predefined Alerts
Table describing the predefined alerts found in the Alert Builder.
Guardium comes with a set of predefined alerts that can be found in the Alert Builder. Open the Alert Builder by clicking Protect > Database Intrusion Detection > Alert
Builder. When you open the Alert Builder, you are presented with a list of all existing alerts in the Alert Finder. Select an alert from the finder and click Modify to edit it.
In the Modify Alert screen, modify any part of the alert, such as receivers or threshold.
You cannot modify the default queries that the alerts are based on. If you want to modify a query, click the Edit this Query icon for any query to open the Query Builder.
Once in the builder, clone any query, and then modify the clone to suit your needs.
Active S-TAPs Changed: Checks for changes to Active S-TAP® inspection engines made during the last accumulation interval. The alert triggers if at least one inspection engine has been changed during the period. By default, the alert checks every half hour and checks the last hour.
Aggregation/Archive Errors: Alerts once a day on all aggregation or archive tasks that did not complete successfully.
Connection Profiling Alert: Runs every 60 minutes and sends a notice to the predefined group Connection Profiling List (a list of allowed connections).
CAS Instance Config Changes: Alerts once a day on any CAS instance configuration changes.
CAS Templates Changes: Alerts once a day on any CAS template configuration changes.
Data Source Changes: Alerts once a day on any data source definition changes.
Database disk space: Alerts every 10 minutes if the internal database is more than 80% full. See the Self Monitoring help topic for more information on Disk Space (% full) and the Guardium® Nanny process.
Enterprise No Traffic: Runs only on Central Manager systems. It is based on a query similar to the No Traffic alert's query and retrieves records with a timestamp between X and Y, where X is a query parameter and Y is the query from-date generated by the alert mechanism based on the accumulation interval (the same way the existing No Traffic alert works).
Enterprise S-TAPs changed: Runs only on Central Manager systems.
Failed Logins to Guardium: Alerts every 10 minutes if there have been more than 5 failed login attempts on the Guardium appliance.
Guardium - Add/Remove Users: Alerts once a day if any Guardium users have been added or removed.
Guardium - Credential Activity: Alerts once a day if there have been any Guardium credential changes, including LDAP configuration changes.
Inactive Managed Unit: Runs every 30 minutes and sends a notice once a day to the predefined group called Managed Units Alert.
Inactive S-TAPs Since: Alerts once an hour on all S-TAPs that have not been heard from.
Inspection Engines and S-TAP: Alerts once a day on any activity related to inspection engine and S-TAP configuration.
No Traffic: Indicates whether there is no traffic from specific database servers. The alert fires when no traffic is collected from a server from which the Guardium system was collecting traffic at some point during the last 48 hours; it triggers when there is no traffic within the period defined by the accumulation interval. For example, if the accumulation interval is 60 minutes, the alert sends an email if there was no traffic from a specific database server in the last hour but there was some traffic in the last 48 hours. By default, the alert sends an email only every 24 hours. Parameters such as accumulation interval, notification interval, and run frequency can be customized. Parameters such as Threshold, Per Line, operator, and query should not be changed, as changes to these parameters will cause the alert to stop working properly. Note that the No Traffic query should not be cloned.
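The triggering condition described for the No Traffic alert can be sketched in a few lines (an illustrative model only, not Guardium's implementation; the function name and default intervals are assumptions):

```python
from datetime import datetime, timedelta

def should_alert_no_traffic(last_traffic, now,
                            accumulation_interval=timedelta(hours=1),
                            lookback=timedelta(hours=48)):
    """Alert if a server was silent for the whole accumulation
    interval but did send traffic within the lookback window."""
    if last_traffic is None:
        return False  # never seen traffic from this server
    silent_for_interval = now - last_traffic >= accumulation_interval
    active_recently = now - last_traffic <= lookback
    return silent_for_interval and active_recently

# Example: last traffic 3 hours ago -> silent for the last hour,
# but active within the last 48 hours, so the alert fires.
now = datetime(2016, 1, 1, 12, 0)
print(should_alert_no_traffic(now - timedelta(hours=3), now))   # True
print(should_alert_no_traffic(now - timedelta(hours=72), now))  # False
```

A server silent for more than 48 hours is treated as decommissioned rather than alert-worthy, which is why the second call returns False.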
No Traffic by Server/Protocol: Similar to the regular No Traffic alert, with the following differences: the alert is per service name/net protocol, and it reports per line. There is an additional parameter, Active Traffic Interval, that determines when the last request from each server was received. The alert triggers when there was no traffic during the alert interval from a given server/net protocol combination, but there was traffic since the Active Traffic Interval for that combination. This is unlike the regular No Traffic alert, which triggers if there was no traffic during the alert interval but there was traffic in the previous 48 hours, per server IP.
Policy Changes Alert: Alerts once a day if there have been any security policy changes.
Queries Running Long Time: Notifies if a query takes more than 900 seconds to run.
Scheduled Job Exceptions: Alerts every 10 minutes on any scheduled job exception (including assessment jobs).
Parent topic: Managing your Guardium system
Scheduling
The general purpose scheduler is used to schedule many different types of tasks (archiving, aggregation, workflow automation, etc.).
Note: Be aware of scheduling anomalies that can occur when scheduling tasks during Daylight Saving Time.
If you selected Day/Week from the Schedule by list, mark each day of the week you want the task to run, or click Every day to select all days (or to clear all days if they are already selected).
OR
If you selected Month from the Schedule by list, do one of the following:
Pause a Schedule
Note: Not all types of scheduled tasks provide a pause option.
Remove a Schedule
After a schedule has been defined, a Remove button appears in the Schedule Definition panel.
1. Click Define Schedule or Modify Schedule to open the Schedule Definition panel.
2. Click the Delete button.
Aliases
Create synonyms for a data value or object to be used in reports or queries.
Aliases Overview
An alias is used to display a meaningful or user-friendly name for a data value.
For example, Financial Server might be defined as an alias for IP address 192.168.2.18. Once an alias has been defined, users can display report results, formulate
queries, and enter parameter values using the alias instead of the data value.
Through the IP-to-Hostname Aliasing tool - use this tool to generate aliases for discovered client and server IPs.
Click Protect > Database Intrusion Detection > IP-to-Hostname Aliasing to open the IP-to-Hostname Aliasing tool.
Through the Alias Builder – use this method to define aliases manually.
Open the Alias Builder by clicking Comply > Tools and Views > Alias Builder.
Through a query.
While using the Group Builder, with the Alias Quick Definition.
Note: Alias changes made on the Central Manager or on managed units will not be available on other systems until either the GUI is restarted or further alias changes are made through that system's GUI.
IP-to-Hostname Aliasing
1. Open the IP-to-Hostname Aliasing tool by clicking Protect > Database Intrusion Detection > IP-to-Hostname Aliasing.
2. Check the Generate Hostname Aliases for Client and Server IPs (when available) check box.
3. Check the Update existing Hostname Aliases if rediscovered check box if you want the tool to continually look for and update hostname aliases.
4.
5. Click Apply to save your configuration, then schedule the operation.
Click Run Once Now to start the tool immediately.
Click Define Schedule... to schedule the tool in the future.
Click Pause to pause the generation of client and server IPs aliases.
Alias Builder
Use this method to manually create an alias.
1. Open the Alias Builder by clicking Setup > Tools and Views > Alias Builder.
2. Select the attribute type for which you want to define aliases.
3. Filter your search on that attribute type using the Value and Alias fields and click Search.
4. If any results match your search, they will display in the value and alias table. Click Apply for the search results, or add a new alias by specifying a Value and Alias
name, then clicking Add.
5. Add a comment to an alias by clicking the Item Comments icon . This can be helpful for quickly referencing what an alias refers to in the future.
1. Open the Alias Builder by clicking Setup > Tools and Views > Alias Builder.
2. Select the attribute type for which you want to define aliases from the Alias Finder and click Populate from Query to open the Builder Alias From Query Set Up
panel.
3. Fill out the required information and click Save to save the alias.
Select the query to be run from the Query menu.
Choose a value for both Choose Column for Value Column and Choose Column for Alias Column.
After selecting column values, more fields display that you must fill in (From Date, To Date, Remote Source, and any additional parameters for the selected
query).
Check the Clear existing group members before Importing check box to delete the existing content of the group before populating from query.
Click Save to save.
With the query saved, the Scheduling buttons become active. Click Modify Schedule to run the query in the future, or click Run Once Now to run it
immediately.
1. Open the Group Builder by clicking Setup > Group Builder. Select any group from the list, and click Modify.
2. Click Aliases... to open the Alias Quick Definition window. Type in an alias for any group(s), and save the alias by clicking Apply.
Aliases can also be created, updated, and deleted from the command line with the following GuardAPI commands:
grdapi create_alias
grdapi update_alias
grdapi delete_alias
There are two tools that are used to populate date fields: a calendar tool to select an exact date, and a relative date picker to select a date that is relative to the current
time (now -1 day, for example). In addition, exact or relative dates can be entered manually.
Be aware that when selecting or entering dates, the date on the system on which you are running your browser may not be the same as the date on the Guardium®
appliance to which you are connected.
Timestamps in Queries
Care needs to be taken when including Timestamps in queries.
First, be aware of the distinction between a timestamp (lowercase t) and a Timestamp (uppercase T).
A timestamp (lowercase t) is a data type containing a combined date-and-time value, which when printed displays in the format yyyy-mm-dd hh:mm:ss (e.g., 2005-
07-17 15:40:25). When creating or editing a query, most attributes with a timestamp data type display with a clock icon in the Entity List panel.
A Timestamp (uppercase T) is an attribute defined in many entity types. It usually contains the time that the entity was last updated.
Including a Timestamp attribute value in a query will produce a row for every value of the Timestamp. This may produce an excessive amount of output. To get around this, use the count aggregator when including the Timestamp in a query, and then drill down on a report row to view the individual Timestamp values for the items included in that row.
When displaying a Timestamp value in a query that contains Timestamp attributes in multiple entities, be careful to select the Timestamp attribute from the appropriate
entity type for the report. For example, if the query will display information from both the Client/Server and the Session entities, with the Session selected as the main
entity, you can display a Timestamp attribute from one or both entities. If you include the Client/Server Timestamp, you will see the same value printed for every Session
for a given client-server connection – it will always be the time at which that particular Client/Server was last updated. If you include the Timestamp attribute from the
Session, you will see the time that each Session listed was last updated.
Tip: If your report displays times that are all the same when you expect them to be different, you have probably included a Timestamp attribute from an entity too high in
the entity hierarchy for the level of detail you want on the report.
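As an illustration of why the count aggregator helps (a generic sketch, not the Guardium query engine), grouping the raw rows on their non-Timestamp columns collapses the one-row-per-Timestamp output into one summary row per group:

```python
from collections import Counter

# Each raw row: (db_user, object, timestamp) -- one row per Timestamp value.
rows = [
    ("joe", "ORDERS", "2005-07-17 15:40:25"),
    ("joe", "ORDERS", "2005-07-17 15:41:02"),
    ("joe", "ORDERS", "2005-07-17 15:45:13"),
    ("amy", "STAFF",  "2005-07-17 16:01:00"),
]

# With the count aggregator: one row per (user, object), plus a count.
aggregated = Counter((user, obj) for user, obj, _ts in rows)
for (user, obj), count in sorted(aggregated.items()):
    print(user, obj, count)
# amy STAFF 1
# joe ORDERS 3
```

Drilling down on a summary row then corresponds to listing the individual timestamped rows behind that group.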
1. Click the Calendar button for the field where you want to insert a date. This opens a calendar in a separate window.
Click the arrow buttons to display the previous or next month in the calendar window.
2. Click on any date to select that day. The calendar window will close and the selected date will be inserted into the date field next to the calendar tool that was
clicked.
Note: The default time for a date selected using the calendar is always 00:00:00 (the start of the day). To specify any other time of day, type over this value, entering
the desired time in 24-hour format: hh:mm:ss, where hh is the hour of the day (0-23), and mm and ss are minutes and seconds respectively (both 0-59).
1. Click the Relative Date Picker button next to any field where a relative date is allowed. This opens the Relative Date Picker window.
2. Select Now, Start, or End from the list. Regardless of your choice, the display changes to provide for additional selections.
3. From the middle list, select this, last, or previous, which is relative to the unit (day, week, month, or day of the week selected in the next list) as follows:
This is the current unit
Last is the current unit minus one
Previous is the current unit minus two
4. Select the day, week, month, or a specific day: Monday-Friday.
5. Click the Accept button when you are done. The relative date will be inserted into the field next to the Relative Date Picker button that was clicked.
6.
There are three general formats you can use to enter a relative date:
A time relative to NOW
OR
The Start or End of the current, last, or previous day, week, or month
OR
The Past or Previous day of the week (Sunday, Monday, Tuesday, etc.)
Relative to NOW
1. Click in the field where you want to enter the relative date.
2. Enter the keyword NOW.
3. Enter a negative integer specifying the relative number of hours, days, weeks, or months (no space is allowed between the minus sign and the integer).
4. Enter a keyword for the units used: HOUR, DAY, WEEK, or MONTH. Be aware that the plural (hours, days, etc.) is not allowed. Example: now -14 day
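The arithmetic behind an expression such as now -14 day can be sketched as follows (a minimal illustration, not the appliance's actual parser; treating a month as 30 days is an assumption made here for simplicity):

```python
from datetime import datetime, timedelta
import re

# Units mirror the documented keywords: HOUR, DAY, WEEK, MONTH.
UNITS = {"HOUR": "hours", "DAY": "days", "WEEK": "weeks", "MONTH": "days"}

def resolve_relative(expr, now):
    """Resolve an expression like 'now -14 day' to a datetime."""
    m = re.fullmatch(r"now\s+(-\d+)\s+(hour|day|week|month)",
                     expr.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("expected: now -<n> hour|day|week|month")
    n, unit = int(m.group(1)), m.group(2).upper()
    if unit == "MONTH":
        n *= 30  # approximate a month as 30 days
    return now + timedelta(**{UNITS[unit]: n})

now = datetime(2016, 3, 15, 12, 0, 0)
print(resolve_relative("now -14 day", now))  # 2016-03-01 12:00:00
```

Note that the pattern accepts only singular unit keywords and requires the minus sign to touch the integer, matching the rules in the steps above.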
Time Periods
Use the Time Period Builder to create time periods that can be used for policy rules and query conditions.
When monitoring database activity, use time periods to specify when you want to monitor. Use the Time Period Builder to create new time periods or modify existing ones.
Time Periods
Policy rules and query conditions can test for events that occur (or not) during user-defined time periods.
There is a set of pre-defined time periods (7x24, After Hours Work, Before Hours Work, Evening, Regular Work Day, Saturday, Sunday, and Week End), and users can
define their own.
The following two time periods both begin 09:00 Monday and end 17:00 Friday:
Workweek is defined Contiguous.
Workday is defined Non-Contiguous.
The first time period, Workweek, defines a single 104-hour period beginning at 9 AM on Monday and ending at 5 PM on Friday, whereas the second time period, Workday, defines five separate eight-hour time periods (9 AM – 5 PM) on five consecutive days (Monday – Friday).
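The difference between the two definitions can be illustrated with a small sketch (hypothetical helper functions, not Guardium code): a Tuesday 2 AM event falls inside the contiguous Workweek but outside the non-contiguous Workday windows.

```python
from datetime import datetime

# Weekday numbering: Monday=0 ... Sunday=6 (Python's convention).
MON, FRI = 0, 4

def in_contiguous(dt):
    """Workweek: one span from Monday 09:00 through Friday 17:00."""
    wd, t = dt.weekday(), dt.hour + dt.minute / 60
    if wd < MON or wd > FRI:
        return False
    if wd == MON and t < 9:
        return False
    if wd == FRI and t >= 17:
        return False
    return True

def in_noncontiguous(dt):
    """Workday: five separate 09:00-17:00 windows, Monday-Friday."""
    wd, t = dt.weekday(), dt.hour + dt.minute / 60
    return MON <= wd <= FRI and 9 <= t < 17

tue_2am = datetime(2016, 3, 15, 2, 0)  # a Tuesday
print(in_contiguous(tue_2am), in_noncontiguous(tue_2am))  # True False
```

A policy rule scoped to Workweek would therefore match overnight activity between workdays, while one scoped to Workday would not.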
5. Enter a beginning time in hours (00-24) and minutes (00-59) in the Hour From box.
6. Enter an ending time in hours (00-24) and minutes (00-59) in the Hour To box.
7. Select a beginning day of the week in the Weekday From box.
8. Select an ending day of the week in the Weekday To box.
9. Optionally click the Comments button to add comments (see Commenting).
10. Click the Add button.
Comments
Comments apply to definitions and to workflow process results.
Comments can be added or viewed in several places throughout the UI. You can add a comment to a group or alias for reference purposes, or add a comment to a report to ease auditing requirements. For example, an auditor may want to know why a configuration change was made on a certain date. Use a comment to easily reference the reason why the change was made.
Comments apply to definitions (groups, aliases, reports, policies), and to workflow process results. You can add multiple comments to a component, and you can add
comments to comments, but you cannot modify or delete existing comments.
Report Comments
View a report of all user comments by clicking Comply > Reports > User Comments.
The Local Comments entity is used in a Central Manager environment only. Local comments remain local to the system on which they were defined, and are not
stored on the Central Manager.
The Comments entity contains comments that are stored on the Central Manager.
This how-to topic uses a combination of commands from the CLI and choices from the GUI to help you install the latest Guardium patch. The Guardium system must be
rebooted after installing a patch.
Important: Patches downloaded in ZIP format must be unzipped outside the Guardium system before uploading and installing. Observe the following restrictions for any
patch with database structure changes:
Perform or schedule the patch installation during quiet time on the Guardium system to avoid conflicts with long-running processes such as heavy reports, audit
processes, backups, and imports.
The exact time required for patch installation depends on database utilization, data distribution, and other considerations.
Install patches in a top-down manner, first patching a central manager before patching aggregators and finally collectors.
In the procedure below, you will follow these steps from the Guardium system that is designated and configured as the Central Manager:
1. Backup the system profile, using the CLI command store backup profile.
2. Enter the CLI command store system patch install to install a single patch or multiple patches to the Central Manager from a network location.
3. Click Setup > Tools and Views > Patch Distribution to move patches from the CM to managed units.
Procedure
Backup the system profile
1. Using an SSH client, log into the IBM Security Guardium Central Manager as the CLI user.
2. Enter the following command: store backup profile
3. The following dialog will appear:
4. Use the following CLI command if the patch installation failed, the patch revert failed, and the automatic restore failed or was disabled. The command gets the pre-patch backup file and restores it on the system. If the pre-patch backup file is currently located on the system, enter the file name. Otherwise, the pre-patch backup profile information is used to get the file.
CLI> show backup profile
patch backup flag is 1
patch backup automatic recovery flag is 1
patch backup dest host is
patch backup dest dir is
patch backup dest user is
patch backup dest port is
patch backup dest pass is
CLI> restore pre-patch backup
Note: A compressed patch file may contain multiple patches, but only one patch can be installed at a time. To install more than one patch, list all the patches that need to be installed, separated by commas. Internally, the CLI submits a request for each patch on the list (in the order specified by the user), with the first patch taking the request time provided by the user and each subsequent patch scheduled three minutes after the previous one. In addition, the CLI checks whether the specified patches are already requested and does not allow duplicate requests.
Use the UI to move the patch(es) from Central Manager to managed units
The Patch Distribution button opens a new screen that displays a list of available patches with their dependencies, and allows you to select a patch and install it on all selected units. The list of available patches is constructed by evaluating the currently installed patches on each of the selected units against the dependency list of available patches. Patches that are available but not installable (because a dependent patch is missing) are grayed out in the list and cannot be selected. Only one patch can be selected and installed at a time. Once a patch is selected and the Install button is pushed, a command is sent to all selected units to install that patch; the installation happens in the background.
Results
The patched systems are now ready to be used; however, remember that the Guardium system must be rebooted after installing a patch.
Product integration
You can integrate IBM Guardium with other products.
Configure BIG-IP Application Security Manager (ASM) to communicate with Guardium system
Use the Big-IP ASM (from F5 Networks) together with Guardium’s real-time database activity monitoring to solve the problem of identity propagation between
web application and database application server layers.
Hadoop Integration
This topic introduces fundamental concepts and processes for monitoring Hadoop data with Guardium.
PIM Integration with Guardium DAM
Privileged Information Management (PIM) helps organizations to automate and track the use of shared privileged identities and monitor the usage of these shared
privileged identities.
QRadar and Guardium integration
QRadar and Guardium can work together in a two-way information flow to have the Guardium data protection policies updated automatically and nearly in real-time
in response to security intelligence events from QRadar.
OPTIM to Guardium Interface
An OPTIM to Guardium interface, using Protobuf (Universal Feed Agent), sends Optim activity logs to Guardium.
Combining real-time alerts and correlation analysis with SIEM products
Distribute contextual knowledge of database activity patterns, structures, and protocols directly to the third-party database of the SIEM system.
How to transfer sensitive data to InfoSphere Discovery
Take sensitive data information, identified and classified in IBM Security Guardium and transfer that information to InfoSphere® Discovery.
CEF Mapping
The CEF standard from ArcSight defines a set of required fields, and a set of optional fields.
LEEF Mapping
Log Event Extended Format (LEEF) from QRadar
Configure BIG-IP Application Security Manager (ASM) to communicate with Guardium system
Use the Big-IP ASM (from F5 Networks) together with Guardium’s real-time database activity monitoring to solve the problem of identity propagation between web
application and database application server layers.
This solution uses Google’s protocol buffers (.protobuf) as the wire format between BIG-IP ASM and the Guardium® system.
Information about configuring the integration between Big-IP ASM and Guardium real-time database activity monitoring is provided at the F5 website:
http://www.f5.com/pdf/deployment-guides/ibm-guardium-asm-dg.pdf.
Hadoop Integration
This topic introduces fundamental concepts and processes for monitoring Hadoop data with Guardium.
Capacity planning
The following sizing guidelines assume an average volume of audited traffic. Higher volumes of audited traffic may require additional resources.
It is also possible to size by the Processor Value Unit (PVU) of the nodes, but this may result in over-sizing if auditing low volumes of traffic. The capacity sizing guideline is
4000 PVU per collector.
Integration scenarios
If you are using SSL encryption with Cloudera, see Hadoop integration using Cloudera Navigator.
If you are using SSL encryption with a Hortonworks Hadoop cluster, see Hadoop integration using Hortonworks and Apache Ranger.
Note: Redaction of returned data using Hive is not supported. If you require data redaction with Hive, see Hadoop integration using a standard Guardium S-TAP.
If you do not require SSL encryption for your Hadoop cluster, see Hadoop integration using a standard Guardium S-TAP.
Capturing activity on these two components covers basic auditing requirements because all data except management console traffic goes through HDFS.
Be aware that HDFS activity is not auditor-friendly, as it is somewhat like monitoring file access in a relational database. Consider monitoring activity from other
components used in your environment, such as Hive, Big SQL, or Impala. These components support monitoring that more closely resembles database accesses.
For detailed instructions on using redaction and blocking policies with Hadoop, see the IBM Security Guardium Deployment Guide for Hadoop Systems.
Kerberos
Guardium supports the use of Kerberos secure clusters with some restrictions. In order to decrypt Kerberos user IDs, Guardium requires that keytab files be generated
and placed in a specific location. Detailed instructions are available in the IBM Security Guardium Deployment Guide for Hadoop Systems.
Attention: Kerberos configuration may be required only if you are using HBase or Hive.
Deployment recommendations
To avoid flooding the collector and to make problem diagnosis simpler, consider the following tactics to reduce the amount and types of traffic processed by the Guardium collector:
To limit data that must flow across the network to the appliance, restrict the number of inspection engines you configure.
To limit the amount of data that is logged on the collector, put conditions on the policy.
One strategy might be to configure and test with Hive command line queries before adding additional inspection engines and opening the policy to additional, higher-
volume traffic such as HDFS.
For each new inspection engine that is configured, you must restart S-TAP.
Remember to monitor the Guardium system as more services generate traffic. The Guardium deployment redbook includes details on how to monitor the system and
make sure the traffic is not excessive for the collector.
Limitations
The following restrictions apply when monitoring Hadoop with a standard Guardium S-TAP:
SSL encryption is not supported unless using Hortonworks with Ranger or Cloudera with Cloudera Manager. Ranger and Cloudera Manager integration is covered in
a separate section of this information.
UID chaining is not supported.
Blocking and redaction are only supported for Big SQL, Hive, and Impala.
Configuration audit system and sensitive data discovery are not supported at this time.
Guardium currently does not support administration command auditing, for example starting and stopping services.
Guardium load balancing and failover options are not supported when using Kerberos; however, F5 or other load balancing in which a virtual IP address is used may be an option.
If Kerberos or GPFS is used, you must configure a special communications exit on each Big SQL node. Guardium provides a dynamically loaded shared library that
interacts with Big SQL, and Big SQL will invoke functions within that library at run time when it performs SQL and utility requests.
Restriction: Only monitoring and auditing are supported using the exit methodology with Big SQL: redaction and blocking are advanced features that are only
supported using an S-TAP.
Attention: An S-TAP is recommended for edge nodes, particularly if you are using them as a landing zone for data.
Before configuring inspection engines, work with the Hadoop administrator to gather the following information for each Hadoop node to be monitored:
Determine the inspection engine protocol based on the Hadoop node type and service as indicated in the following table.
Table 1. Inspection engine protocols for Hadoop nodes and services.
Hadoop node: Hive metastore
Hadoop service: Thrift protocol messages, used for getting the Impala and Hive database user from Hue. Note: Requires using a computed attribute.
Inspection engine protocol: HADOOP

Hadoop node: Hue node
Hadoop service: Hue user interface with an Oracle, MySQL, or PGSQL backend
Inspection engine protocol: HUE
Restrictions:
Additional examples and more detailed instructions are available in the IBM Security Guardium Deployment Guide for Hadoop Systems.
Parent topic: Hadoop integration using a standard Guardium S-TAP
For monitoring purposes, it is useful to think in terms of the user, the data object being monitored, and what actions or commands are being executed. In Guardium
terminology, these are the DB User, the object, and the verb or command, respectively. These entities can be used in policy rules to trigger particular actions, such as real-time alerts.
Guardium policy rule actions allow you to filter traffic for performance in addition to logging or alerting on policy violations. For Hadoop traffic, you cannot use session-level filtering actions such as ignore S-TAP session. This is because Hadoop does not do session management in the same way as relational databases, where you log into the database (which establishes a session) and then generate SQL traffic within that session before logging out. With Hadoop, each command is its own session and can spawn many more sessions as work is distributed throughout the cluster.
Guardium cannot usually catch failed logins for command line components, although Guardium can see failed logins from Hue and through IBM BigSQL.
You will get permission exceptions at the file system level, so you can report on those using the exceptions domain.
Begin creating policies from the built-in Hadoop policy to ensure traffic is being captured. It's recommended that you test the default policy in a low-traffic test environment, and you may even add one or more access rules to restrict traffic to a single server type (such as Hive) to reduce the amount of noise you see. Once you are comfortable that traffic is flowing to the collector, you can clone the default policy and create one that aligns with your security and compliance requirements.
For detailed instructions and an example of policies for a production Hadoop environment, see the IBM Security Guardium Deployment Guide for Hadoop Systems.
Guardium includes several built-in reports for Hadoop. To see the list of available reports, navigate to My Dashboards > Create a new dashboard and click Add Report. In
the Add a Report window, type hadoop into the search field to see a list of available Hadoop reports.
Some of the built-in reports provide component-based reporting, which is useful for validating your configuration and verifying that you are successfully catching traffic
from the component. Other reports are more focused on security and compliance, such as Hadoop - Permissions report, Hadoop - Privileged users accessing sensitive
objects, Hadoop - Exception report, and Hadoop - User login.
This section includes lists of objects and commands or verbs used with Hadoop. You can cut and paste the commands into a group in Guardium using the Group Builder
tool. You will also need to create groups of users and objects based on your own environment.
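The cut-and-paste workflow above can also be scripted from the Guardium CLI. As a sketch only: the GuardAPI function names below (create_group and add_member_to_group_by_desc) are common GuardAPI calls, but the exact parameters for your release, and the group name used here, are illustrative assumptions:

```
grdapi create_group desc="Hadoop HDFS Write Commands" type="COMMANDS" appid="Public"
grdapi add_member_to_group_by_desc desc="Hadoop HDFS Write Commands" member="create"
grdapi add_member_to_group_by_desc desc="Hadoop HDFS Write Commands" member="delete"
grdapi add_member_to_group_by_desc desc="Hadoop HDFS Write Commands" member="mkdirs"
```

Scripting the group load is useful when the same command groups must be replicated across several collectors.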
Hadoop objects
HDFS files/directories
Prior to MapReduce 2, the MapReduce job name was not logged as a separate object, but you could obtain it by using the built-in MapReduce report and its
computed attributes to get the job name from the full message.
IBM Big SQL, Impala, Hive, HBase table and view names
HDFS commands
Read commands for HDFS:
getFileInfo
getBlockLocations
getFileLocation
getListing
Write commands for HDFS:
addBlock
complete
create
delete
mkdirs
rename
HBase commands
Read commands for HBase:
list
scan
Write commands for HBase:
createTable
disableTable
deleteTable
multi
drop
Guardium supports auditing for Cloudera Hadoop using a standard S-TAP. For more information, see Hadoop integration using a standard Guardium S-TAP.
Guardium also provides the capability to subscribe to audit events when Cloudera Navigator is configured with Kafka as an alternative logging destination. Audited activity
is sent to a Kafka cluster, where the Guardium S-TAP consumes the events and sends them to the Guardium collector appliance for parsing and logging. Once the data is in
Guardium, it is highly protected and all normal Guardium functions can be used, such as real-time alerting, integration with SIEM, reporting and workflow, and
analytics.
Unlike integration using a standard Guardium S-TAP, Cloudera Navigator integration supports SSL encryption for clients that access Hadoop data. When using
Cloudera Navigator integration, data is decrypted before the Guardium appliance receives it.
Restriction: Guardium-based blocking is not supported for any Hadoop components when using Cloudera Navigator integration.
Prerequisites
Guardium integration with Cloudera Navigator requires the following minimum software release levels:
Configuration is flexible: you can install the S-TAP on a node in the Hadoop cluster or on a separate server outside of the Hadoop cluster, as long as that server
has network connectivity to the Kafka cluster and the Guardium appliance. You can specify only one S-TAP per Kafka cluster, but that S-TAP can send traffic to multiple
Guardium systems using standard high availability or load balancing techniques.
In this configuration, Cloudera Navigator produces the log events for each Hadoop component, and the S-TAP consumes those events. Using the Guardium user interface,
you will be specifying the message topic identifier that Cloudera Navigator uses so that the Guardium S-TAP knows which events it is supposed to pick up.
Recommendation: Use a secure Kafka cluster to ensure that your audit events are secure.
Integrating with Cloudera Navigator requires gathering some information from the administrators responsible for Cloudera and Kafka as well as from the data security
team responsible for Guardium. Gather the following information before you begin:
For more information, see the Cloudera documentation and IBM Security Guardium Activity Monitoring for Cloudera Hadoop Using Navigator Integration.
The Kafka cluster you use for producing Cloudera audit events must not be configured to require SSL client authentication. For more information, see IBM Security
Guardium Activity Monitoring for Cloudera Hadoop Using Navigator Integration.
Use any available method to install the S-TAP on the designated server inside or outside of the Hadoop cluster. In Guardium, navigate to Manage > System View >
S-TAP Status Monitor to verify connectivity between the S-TAP and the Guardium system.
For a reference of Hadoop related S-TAP configuration parameters, see S-TAP configuration parameters for Hadoop.
The Navigator administrator or full administrator must do this task from Cloudera Manager. For more information, see IBM Security Guardium Activity Monitoring for
Cloudera Hadoop Using Navigator Integration.
After configuring the solution, return to Manage > System View > S-TAP Status Monitor and verify that the S-TAP status is still green. Inspection engine verification
is not supported for Hadoop sources and will always indicate an Unverified status.
For monitoring and auditing, there is virtually no difference in policy rules when using the Cloudera Navigator integration than when using the normal S-TAP
monitoring for Hadoop. To begin, install a Guardium policy or use the default policy, run HDFS or Hive commands on the Cloudera cluster, and verify that you can
see the traffic in a Guardium report. For more information, see IBM Security Guardium Activity Monitoring for Cloudera Hadoop Using Navigator Integration.
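When verifying, it can also help to confirm that Navigator audit events are actually reaching the Kafka topic, independently of Guardium. A sketch using the standard Kafka console consumer shipped with Kafka (the bootstrap host name is illustrative; NavigatorAuditEvents is the default Navigator topic name):

```
kafka-console-consumer --bootstrap-server kafka1.example.com:9092 \
  --topic NavigatorAuditEvents --from-beginning --max-messages 5
```

If no events appear while activity is running on the cluster, the problem lies on the Navigator/Kafka side rather than with the S-TAP configuration.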
Procedure
1. Navigate to Setup > Tools and Views > Hadoop Monitoring and click the plus icon in the Add cluster information tile.
2. Use the S-TAP host name menu to select an S-TAP that is connected to the Guardium system.
3. Provide a Topic name for the Kafka cluster.
Unless this was changed in the Kafka cluster configuration settings, use NavigatorAuditEvents (default value).
4. Use the Bootstrap servers section to specify one or more Kafka nodes to take the initial connection from the Guardium S-TAP.
Any nodes that are leaders of a partition for the topic will handle consumer requests. For the initial connections, it is best to specify more than one server to provide
failover in case one of the bootstrap servers is down.
5. If your Kafka cluster is configured with TLS, check the Enable TLS check box.
Restriction: Guardium does not support Kafka clusters configured to require SSL client authentication.
6. If the Kafka cluster requires Kerberos authentication, check the Use Kerberos check box.
a. Use the Principal field to provide the Kerberos principal name for the S-TAP.
b. In the Path to keytab file field, provide the full path to the Kerberos keytab file on the S-TAP server.
For example, /etc/krb.keytab. Make sure the keytab is owned by the S-TAP user and group and is only readable by the user.
7. Click Save.
The resulting tile shows that you have configured Hadoop monitoring, and the S-TAP status should be green.
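The keytab ownership and permission requirement from step 6 can be checked with standard Linux commands. A minimal, safe-to-run sketch; a temporary file stands in for the real keytab path, and on the S-TAP server you would also chown the file to the S-TAP service account:

```shell
# Restrict a keytab so only the owning user can read it.
# A temp file stands in for /etc/krb.keytab so this is safe to run anywhere.
keytab=$(mktemp)
chmod 400 "$keytab"        # owner read-only
stat -c '%a' "$keytab"     # prints 400
```

On the real system, follow this with a chown to the S-TAP user and group before restarting the S-TAP.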
The audit data is written to both HDFS and Solr (recommended). Guardium can integrate with Ranger in two ways:
Unlike Hadoop integrations that rely on a standard Guardium S-TAP for monitoring and blocking, integration with Ranger supports SSL encryption between clients and
Hadoop data. With Ranger integration, the data is decrypted before it is sent to the Guardium system for auditing. In addition to SSL support, Ranger integration using
dynamic policies enables blocking support for more components than is supported using standard S-TAP.
Although you can use both inspection engines and Ranger integration in the same cluster, it is unlikely that you would use both approaches simultaneously. See Hadoop
Integration for more information about selecting an integration path.
Prerequisites
Integration with Ranger requires the following:
To turn on the Ranger integration, you must configure the S-TAP by specifying log4j_reader_enabled=1.
The configuration is flexible in that you can install S-TAPs on more nodes. You can configure Ranger to send all component traffic to one S-TAP, or you could specify,
for example, that all HBase traffic goes to one S-TAP and Hive and HDFS traffic goes to another.
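As a minimal sketch, the switch named above is added to the S-TAP's guard_tap.ini. Only the log4j_reader_enabled parameter comes from this document; the section header and its placement may vary by S-TAP version:

```
[TAP]
log4j_reader_enabled=1
```

Remember that this file must be edited directly; there is no UI or GIM support for this setting.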
Blocking is implemented by extending Ranger access control policies to honor blocking policy rules that are specified on the Guardium appliance. The actual
implementation of blocking is performed as an access denial from Ranger. For more information about how blocking fits into the architecture and data flow and guidance
for implementing blocking, see IBM Security Monitoring and Blocking for Hortonworks Hadoop Using Apache Ranger Integration.
Some customers prefer to have one S-TAP for each component. At a minimum, we recommend one S-TAP for HBase and one S-TAP for everything else.
Tip: An S-TAP is not required to sit on the same node as any particular component. It's possible--and even advisable if supporting Hadoop HA--to establish a dedicated
Linux box for an S-TAP.
When configuring the number of connections for an S-TAP, use the following rule of thumb:
Attention:
For blocking, verify access to all HBase region servers, since you will need to copy the Guardium plugin JAR file to each of these region servers.
For configuring high availability failover scenarios, record the failover node IP addresses or host names.
Install the S-TAP and set it up on a system that is not part of the Hadoop cluster
This provides a simple configuration where, when the components fail over, the new node automatically uses the S-TAP as a remote logger. No changes are needed
to any configurations or S-TAPs.
Use localhost for HDFS and Hive S-TAP and a separate system for HBase
Install an S-TAP for HDFS and Hive using localhost in the S-TAP host field, then use a separate system such as an edge node for HBase. This provides an
alternative to installing S-TAPs on all nodes and region servers and is the recommended approach.
Install the S-TAP on the nodes in the cluster
In this model, you install an S-TAP on the primary and standby node for each component.
Ambari
A user ID and password with privileges to update and save the log4j configuration, such as a Service Administrator account. For simplicity, this is referred to as
the admin account and password.
Port and IP address or hostname.
Cluster name.
Ranger
The following information is only needed if configuring blocking. For more information about configuring blocking, see IBM Security Monitoring and Blocking for
Hortonworks Hadoop Using Apache Ranger Integration.
A Service Administrator account that can update and save the log4j configuration.
Port and IP address or hostname.
For monitoring, open port 5555 between the node(s) that S-TAP is on and the Ranger server.
For blocking, open port 5556 to allow communication between S-TAP and all nodes in the cluster that have the Guardium plugin.
Ensure that the following Hadoop auditing configuration settings are present for each component:
HDFS
xasecure.audit.destination.log4j=true
xasecure.audit.destination.log4j.logger=xaaudit
Hive
xasecure.audit.destination.log4j=true
xasecure.audit.destination.log4j.logger=xaaudit
Configuring Ranger using the Python scripts is recommended over configuring Ranger from the GUI.
What to do next
Once you have completed these setup steps, install Guardium and Ranger policies. For monitoring and auditing, there is virtually no difference in policy rules when using
Ranger than when using standard S-TAP monitoring for Hadoop. For more information, see IBM Security Monitoring and Blocking for Hortonworks Hadoop Using Apache
Ranger Integration.
Parent topic: Hadoop integration using Hortonworks and Apache Ranger
Procedure
1. Navigate to Setup > Tools and Views > Hadoop Monitoring.
2. Click the plus icon in the Add cluster information section to begin defining a new configuration.
3. Use the Name field to provide a name for the configuration.
4. Select Hortonworks from the Hadoop distribution menu.
5. In the Host name/IP field, provide the host name or IP address of the Ambari server.
6. In the Port number field, provide the Ambari server port number. If you leave this field blank, the configuration will use the default port of 8080.
7. In the Cluster name field, provide the Hadoop cluster name.
8. In the User name field, provide an Ambari administrator user name.
9. In the Password field, provide a password for the Ambari administrator account.
10. Click the Test Connection button to verify the configuration.
11. Click Save to save the configuration.
Results
The new configuration will be available from the Hadoop Monitoring page.
Parent topic: Configure the solution for monitoring
Next topic: Install and configure S-TAPs
Procedure
1. Install S-TAPs and enable them for the Ranger integration. You may need more than one S-TAP to handle the traffic; for example, configure one S-TAP on the name
node for HDFS, Hive, and Kafka traffic and one S-TAP on the HBase master node for all HBase traffic.
2. Configure guard_tap.ini for auditing.
a. Open guard_tap.ini in a text editor. You must edit the file directly, as there is no UI or GIM support for these settings.
b. Add the parameters listed below. Update the values to reflect your environment.
Procedure
1. Navigate to Setup > Tools and Views > Hadoop Monitoring.
Also have the Hadoop administrator verify the following settings in custom ranger-<service>-audit:
xasecure.audit.destination.log4j=true
xasecure.audit.destination.log4j.logger=xaaudit
Results
From the Hadoop Monitoring page, verify that the enabled services are marked with a green check mark icon.
Parent topic: Configure the solution for monitoring
Previous topic: Install and configure S-TAPs
The idea is to integrate PIM activity data with Guardium DAM data, in order to provide visibility into the actual user (person) who logged in to the database.
Provide visibility in the Guardium appliances to PIM data such as Lease history (who used the shared accounts), credentials and databases managed by PIM.
Provide DAM information correlated with PIM information, for example, Guardium can show today's Database user along with actual requests issued by a specific
user. This integration will allow use of both the Database user and the actual PIM user that leased the shared ID.
Installation
Guardium patch (v10.1p103) can be used to install PIM integration functionality. PIM integration can be used on standalone Guardium systems as well as in
federated environments.
Select a datasource and then select from the Guardium UI: Reports > Report Configuration Tool > Custom Table Builder.
Locate and select three PIM predefined tables and, for each one of them, schedule Automatic Data Upload.
If using a Guardium Central Manager, select from the Guardium UI: Manage > Central Manager > PIM Data Distribution. Do this to schedule data distribution
from the Central Manager to all managed units.
2. Once data is brought to the managed units, use the CLI command store pim_correlation_mode to enable correlation of PIM data with Guardium session
data.
CLI command
store pim_correlation_mode
Show command
show pim_correlation_mode
3. To run correlation, select from the Guardium GUI: Comply > Custom Reporting > PIM data correlation.
IBM QRadar is a security intelligence tool that provides threat protection by monitoring security information and events, using customizable rules to detect anomalies, as
well as providing tools for incident forensics and vulnerability management.
The QRadar and Guardium solution leverages the QRTrigger framework for triggering actions in response to QRadar security events. Based on configuration settings,
QRadar events will cause new members to be added to Guardium groups based on information carried in the event itself. Furthermore, the Guardium policy associated
with the group is automatically reinstalled so that membership change takes effect immediately.
Note that the QRadar and Guardium solution can be used to update a single Guardium collector, or a group of them being controlled by a Guardium Central Manager (CM).
Failed logins
Unauthorized access
Now QRadar and Guardium can work together in a two-way information flow.
Increase audit levels for access by a privileged shared user ID that was on-boarded in a Privileged Identity Management (PIM) system
Note that Guardium version 10.1 and later has three predefined groups designed to support this integration:
QRadarBlockingConnection
QRadarAlertingConnection
QRadarLogConnection
There is a predefined Guardium policy called "QRadarPolicy" with three rules: A blocking rule, an alerting rule, and a logging rule. Each rule is tied to its respective group
from the list above.
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/W746177d414b9_4c5f_9095_5b8657ff8e9d/page/QRGuardium
Setting up Guardium
In order for the QRadar and Guardium solution to be able to authenticate to the Guardium REST API, a client ID must be registered in Guardium and the associated client
secret retrieved.
Registering a client ID is done using the grdapi command line utility of Guardium. This operation is performed only once. The result of the client ID registration is a JSON
entry containing details for the new client, including the client secret.
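As an illustration of this one-time registration from the Guardium CLI (the function name register_oauth_client and its parameters are stated here as an assumption based on common GuardAPI usage; verify against the REST API documentation for your release):

```
grdapi register_oauth_client client_id=qradar_client grant_types="password"
```

The JSON returned by the call includes the generated client secret, which the QRadar side of the integration then uses to authenticate to the Guardium REST API.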
Troubleshooting logs
guardiumEvents_audit.log This is an audit log of all changes made to Guardium based on QRadar events. Each line is a JSON object that includes identifiers,
a timestamp, and details of the event handled.
QRListener.log Log output from the Listener process that receives forwarded event data from QRadar.
HANDLER_<event name>.log Log output from the dedicated handler AL for a specific Event.
RESPONSE_<event name>.log Log output from a custom response AL if this AL implements logging based on its AssemblyLine name. For example, this can be
done by setting the Log Appender File Path parameter to be computed using this JavaScript:
return "logs/" + task.getShortName() + ".log";
The objective of this interface is to use Guardium auditing capabilities for OPTIM activities. The auditing capabilities include: reporting tools (user-defined queries and
reports); audit processes (workflow automation that enables assigning a task to a role/user/group, user-defined status-flow processes, escalation, export, and so on); and
threshold alerts.
The Optim-audit activity information includes the access details, session number, activity type (verb), table (object), details (fields), execution time (response time) and
number of errors (records affected).
Enabling OPTIM auditing requires enabling it in OPTIM; the steps required in Guardium are: (1) link the user to the Optim Audit Role; (2) add the predefined reports to the
appropriate pane; (3) enable the sniffer; and (4) set the policy action to Log Data With Values.
This interface includes an optim-audit role, a default layout (psml file) for the optim-audit role, and seven predefined reports.
Note: When creating the optim-audit role and user, only one tab, OPTIM Audit, is displayed. Similar to roles with custom layouts that customers can generate, this role
layout is meant to be used alone (the optim-audit user has no interest in the other user role tabs). Because the user role is required, layout merging is turned
off when the user has the optim-audit role, so that they get only the items of OPTIM interest. Other roles that work in this same way are "review-only" and "inv".
Note: After creating and saving the optim-audit role, click the Generate Layout selection within the User Browser menu and click Reset to get the layout associated with
the role. Do this again if changing roles within the User Browser.
This Guardium SIEM (Security Incident Event Manager) integration can be done in one of the following ways:
Syslog forwarding (the most common method for alerts and events)
Using the CLI command, store remotelog, to specify the Syslog forwarding to facility/priority, and host (destination).
Using Guardium templates for ArcSight, Envision, and QRadar
SCP/FTP (CSV or CEF Files sent to an external repository and the SIEM system must upload and parse from this external repository.)
Guardium distributes its contextual knowledge of database activity patterns, structures, and protocols directly to the third-party database of the SIEM system (Guardium
has credentials to the SIEM system). It can also write directly to the SIEM database in the SIEM schema. Contact Guardium support, as Guardium's entities must be
mapped to the third-party schema.
Note: The SIEM system must enable remote logging as well to know to listen for the correct facility/priority which is defined within syslog.
By combining Guardium's real-time security alerts and correlation analysis with SIEM and log management products, companies can enhance their ability to:
Security Information and Event Management (SIEM) solutions, also referred to as Security Event Management (SEM) solutions, are offered by companies such as QRadar,
ArcSight, CA, Cisco MARS, LogLogic, RSA enVision and SenSage. SIEM products are complementary to Guardium's database activity monitoring solution. They can also use
Guardium's filtering and preprocessing of database events to provide 100% visibility and database analytics for SOX, PCI-DSS, and data privacy.
SIEM technology provides real-time analysis of security alerts that are generated by network hardware and applications. It helps companies to respond to network attacks
faster and to organize the massive amounts of log data that is generated daily. SIEM solutions are log-based correlation engines.
SIEM solutions are primarily focused on detection and security, but not on auditing. They assemble data from other logs and analyze it at a high level. They correlate much
more data such as IP addresses and routers but have little database visibility. They do not have forensics-quality, digitally signed, audit monitoring capabilities so they can
be used for immediate information, but not historical proof.
Security information and event management (SIEM) users are faced with the challenge of importing raw logs that are generated by internal DBMS utilities. The
performance of DBMS logging utilities, the unfiltered information that they produce, and the lack of necessary granular information create challenges.
Through the Guardium user interface, Guardium can be configured easily to integrate with various SIEM tools.
Note: With SIEM integration, the reports and policies do not change on the Guardium system. Users can continue with their existing policies and reports, trigger alerts, and
send reports to the SIEM system.
For SIEM-Guardium Integration, there are predefined templates for QRadar, Envision, and ArcSight so you do not need to define them. You can select the appropriate
message template within the rule action.
You can change the default message template, specify the parameters for syslog forwarding, and create the CSV or CEF file to export.
Note: CEF is only used for ArcSight. The other SIEM products have a different format and do not use CEF.
In order for the SIEM product to recognize the information that is being sent, the message template must be changed through the Global Profile. This formatting
agreement between the SIEM solution and Guardium allows SIEM products to parse incoming messages and update its own database with the new event/data.
1. To open the Global Profile, click Setup > Tools and Views > Global Profile.
2. Click Edit next to the named template.
The Guardium appliance can be configured to send Syslog messages to remote systems. Specific types of Syslog messages can be sent to specific hosts. The Syslog
message type is determined from the facility-priority of the message.
Reports containing information that can be used by other applications, or reports that contain large amounts of data, can be exported to a CSV file format. Report, Entity
Audit Trail, and Privacy Set task output can be exported to CSV (comma-separated value) files. Additionally, CSV file output can be written to Syslog. If the remote Syslog
capability is used, the output CSV file is forwarded to the remote Syslog locations.
Each record in the CSV or CEF files represents a row on the report. Contact Guardium Support for a tool that permits the reformatting of CSV files before export.
Remote Syslog forwarding is configured with the store remotelog CLI command. Examples of facility are: all, auth, authpriv, cron, daemon, ftp, kern, local0, local1,
local2, local3, local4, local5, local6, local7, lpr, mail, mark, news, security, Syslog, user, uucp. Examples of priority are: alert, all, crit, debug, emerg, err, info,
notice, warning.
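For example, to forward daemon-facility informational messages to a SIEM host (the host name is illustrative, and the exact argument syntax should be verified against the CLI reference for your release):

```
store remotelog add daemon.info siem.example.com
show remotelog
```

The show command confirms which facility-priority combinations are currently routed to which destinations.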
To send Syslog messages and export reports to CSV files, complete the following steps.
Note: Do not zip the file within the audit process definition so that the SIEM vendor can parse it correctly.
1. To open the Audit Process Finder, click Comply > Tools and Views > Audit Process Builder.
2. Click the Icon to add a process or select an existing process from the drop-down list.
3. Click New Audit Task under Audit Tasks.
4. Enter a description and select Report.
5. Select a report from the drop-down list and enter the CSV/CEF File Label.
6. Select Export CSV file and Write to Syslog. Choose a named template from the drop-down list.
7. Under Task Parameters, choose the Enter Period From >= and Enter Period To <= by using the calendar icon.
8. Click Apply.
CSV/CEF files can also be exported on a schedule to the SIEM host. Modify or add an audit task.
1. Click Comply > Tools and Views > Audit Process Builder to open the Audit Process Finder and modify or add an audit task.
2. Choose Export CSV file or Export CEF file.
Note: ACCESS reports can be saved and forwarded in CEF or LEEF format but other reports, such as Guardium Logins, Aggregation Activity Log, and CAS events
cannot be mapped to CEF or LEEF.
3. Uncheck Write to Syslog; otherwise, Syslog messages will be generated instead of a file.
4. Open the CSV/CEF Export menu by clicking Manage > Data Management > Results Export (Files).
5. Select either the SCP or FTP Protocol. Then enter the Host, Directory, Username, Port, and SCP/FTP password.
6. In the Scheduling section, define the Start Time, Restart frequency, Repeat frequency, Schedule by Day/Week or Month, Schedule Start Time. Check the box to
automatically run dependent jobs.
7. Click Save to commit the changes or Reset to clear the fields.
To have a policy alert that is routed to Syslog, exception rules, access rules, and extrusion rules must be modified to trigger notifications to be sent to Syslog. This action
can be accomplished by going to the Policy Builder. Policy rules can be sent as email or sent to Syslog and forwarded.
1. To open the Policy Builder, click Setup > Tools and Views > Policy Builder.
2. Select the policy and click Edit Rule.
3. Click Add Rule... > Add Exception Rule.
4. Enter the Description, Category, Classification, and select a Severity level from the drop-down list.
For every policy rule violation logged during the reporting period, the Policy Violations report provides the Timestamp from the Policy Rule Violation entity, Access Rule
Description, Client IP, Server IP, DB User Name, Full SQL String from the Policy Rule Violation entity, Severity Description, and a count of violations for that row. With this
report, users can group violations and create incidents, set the severity of each violation, and assign incidents to users.
Both IBM Guardium and InfoSphere Discovery have the capability to identify and classify sensitive data, such as Social Security Numbers or credit card numbers.
A customer of the IBM Guardium product can use a bidirectional interface to transfer identified sensitive data information from one product to another.
Note: In IBM Guardium, the Classification process is an ongoing process that runs periodically. In InfoSphere Discovery, Classification is part of the Discovery process that
usually runs once.
Note: The data will be transferred via CSV files.
Export from Guardium - Run the predefined report (Export Sensitive Data to Discovery) and export as CSV file.
Import to Guardium - Load to a custom table against CSV datasource; define default report against this datasource.
1. Export from Guardium - Export Classification Data from IBM Guardium to InfoSphere Discovery
2. As an admin user in the Guardium® application, go to Tools > Report Building > Classifier Results Tracking > Select a Report > Export Sensitive Data to Discovery.
Note: Add this report to the UI pane (it is not there by default).
9. In the Upload Data screen, click Add Datasource, click New, and define the CSV file imported from Discovery as a new datasource (Database Type = Text).
The report result has the classification data imported from InfoSphere Discovery. Double click to invoke APIs assigned to this report. The data imported from Discovery
can be used for the following:
Type: DB2®
Host: 9.148.99.99
Port: 50001
dbName (schema name for DB2 or Oracle, database name for others): cis_schema
Datasource URL:
TableName: MK_SCHED
ColumnName: ID_PIN
ClassificationName: SSN
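Assuming a simple one-record-per-row layout in the field order listed above (this ordering is purely an illustration, not a documented file format), a row of the import CSV might look like:

```
DB2,9.148.99.99,50001,cis_schema,,MK_SCHED,ID_PIN,SSN
```

The empty field corresponds to the blank Datasource URL value.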
CEF Mapping
The CEF standard from ArcSight defines a set of required fields, and a set of optional fields.
The latter are called extensions in the CEF standard. Data is mapped to these fields from Guardium® configuration information and reports. Note that not all Guardium
fields map to a CEF field, so there may not be a one-to-one relationship between the rows of a printed report and the CEF file produced for that report. Also note that this
facility is intended to map data from data access domains (Data Access, Exceptions, and Policy Violations, for example), and not from Guardium self-monitoring domains
(Aggregation/Archive, Audit Process, Guardium Logins, etc. ).
Note: Analyzed Client IP has a mapping for the CEF source. If the query used for the CEF does not contain the Client IP but contains the Analyzed Client IP, the Analyzed
Client IP is used for the source. If both are included in the query, then Client IP takes precedence.
Version: 0 (zero); currently the only version for the CEF format.
Signature ID: ReportID.
Severity: Numeric severity code in the range 0-10, with 10 being the most important event. If not reset in the report, 0 (zero, which
translates to Info for Guardium).
The CEF extension fields are optional, and will be present only when the mapping applies. For example, if the report does not contain an access rule description, the act
field (the first extension field) will not be present. For more detailed information about the Guardium entities and attributes, see the appropriate entity reference topic.
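Putting the header fields together, a CEF record produced for a report row has roughly this shape. All values here are illustrative, and the extension keys after the final pipe appear only when the corresponding mapping applies:

```
CEF:0|IBM|Guardium|10.1|20034|Policy Violations|5|act=Alert src=10.0.0.7 dst=10.0.0.21 duser=APPUSER
```

The fields before the final pipe are the required header: version, vendor, product, product version, signature ID (the Guardium ReportID), event name, and severity.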
LEEF Mapping
Log Event Extended Format (LEEF) from QRadar
The LEEF format consists of an optional syslog header, a LEEF header, and a collection of attributes describing the event:
Syslog_Header(optional) LEEF_Header|Event_Attributes
The LEEF header is pipe ('|') separated and the attributes are tab separated.
Example
LEEF Version: Integer identifying the version of LEEF used for the log message
Vendor: String identifying the vendor of the device or application sending the event log
Product: String identifying the product sending the event log. Note: The combination of vendor and product must be unique.
Version: String identifying the version of the device or application sending the event log
Attributes 1..N: A set of key-value pair attributes for the event, separated by the tab character. Order is not enforced. A predefined set of keys is defined and should be used when possible. The LEEF format is extensible and allows additional key-value pairs to be added to the event log.
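As a minimal sketch of assembling a LEEF record as described above, the header fields are joined with pipes and the event attributes with tabs. The header values and attribute keys used here are hypothetical examples, not actual Guardium output.

```python
def build_leef(vendor, product, product_version, event_id, attrs, leef_version="1.0"):
    """Header fields are pipe ('|') separated; the event attributes are
    tab-separated key=value pairs (order is not enforced)."""
    header = f"LEEF:{leef_version}|{vendor}|{product}|{product_version}|{event_id}"
    body = "\t".join(f"{k}={v}" for k, v in attrs.items())
    return f"{header}|{body}"

# Hypothetical event using two of the predefined attribute keys.
record = build_leef("IBM", "Guardium", "10.1", "SQL_ERROR",
                    {"srcPort": 50001, "dstPort": 1521})
```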
Character Encoding
UTF8
Predefined Attributes
Table 2. Predefined Attributes
devTimeFormat (string): Defined by the Java SimpleDateFormat. This is only required if using a customized date format. See the Date Formats section for further details.
srcPort (integer): Source port. The valid port numbers are between 0 and 65535.
dstPort (integer): Destination port. The valid port numbers are between 0 and 65535.
srcPreNat (IPv4 or IPv6 address): Source address for the message before Network Address Translation (NAT) occurred.
dstPreNat (IPv4 or IPv6 address): Destination address for the message before Network Address Translation (NAT) occurred.
srcPostNat (IPv4 or IPv6 address): Source address for the message after Network Address Translation (NAT) occurred.
dstPostNat (IPv4 or IPv6 address): Destination address for the message after Network Address Translation (NAT) occurred.
srcPreNATPort (integer): Source port. The valid port numbers are between 0 and 65535.
dstPreNATPort (integer): Destination port. The valid port numbers are between 0 and 65535.
srcPostNATPort (integer): Source port. The valid port numbers are between 0 and 65535.
dstPostNATPort (integer): Destination port. The valid port numbers are between 0 and 65535.
identHostName (string, max length 255): Host name associated with the event. Typically, this parameter is only associated with identity events.
identNetBios (string, max length 255): NetBIOS name associated with the event. Typically, this parameter is only associated with identity events.
identGrpName (string, max length 255): Group name associated with the event. Typically, this parameter is only associated with identity events.
Custom Attributes
In some cases custom attributes may be required to identify more information about the event being generated. In these cases vendors may define their own custom
attributes and include them in the event log. Custom attribute fields should be used only when there is no acceptable mapping in to a predefined field.
Custom attributes may be used for viewing in the QRadar Event Viewer by creating custom properties.
Custom attributes may be used by the QRadar reporting engine by creating custom properties.
Note: Add databaseName=%%DBname to the LEEF template in order to capture the MS-SQL database name. Update the existing LEEF template or make a new template
by cloning.
Date Formats
You can use any of these predefined formats:
If these formats are not suitable, you can define a custom date format in the dTime field by specifying the date format using the dTimeFormat key.
For further information on specifying a date format, visit the SimpleDateFormat page at: http://java.sun.com/javase/6/docs/api/java/text/SimpleDateFormat.html
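For illustration, a Java pattern such as yyyy-MM-dd HH:mm:ss (an assumed example, not one of the predefined formats) corresponds to the following strftime directives in Python; the mapping between SimpleDateFormat letters and strftime codes shown here is only an illustrative equivalence.

```python
from datetime import datetime

# Java SimpleDateFormat "yyyy-MM-dd HH:mm:ss" rendered with the
# equivalent Python strftime directives (illustrative mapping only).
event_time = datetime(2016, 1, 2, 3, 4, 5)
dev_time = event_time.strftime("%Y-%m-%d %H:%M:%S")
# The dTimeFormat key would then carry the Java pattern, and the dTime
# field would carry the rendered value.
```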
Troubleshooting problems
To isolate and resolve problems with your IBM products, you can use the troubleshooting and support information. This information contains instructions for using the
problem-determination resources that are provided with your IBM products, including IBM Guardium.
The first step in the troubleshooting process is to describe the problem completely. Problem descriptions help you and the IBM technical-support representative know
where to start to find the cause of the problem. This step includes asking yourself basic questions:
The answers to these questions typically lead to a good description of the problem, which can then lead you to a problem resolution.
The following questions help you to focus on where the problem occurs to isolate the problem layer:
Is the problem specific to one platform or operating system, or is it common across multiple platforms or operating systems?
Is the current environment and configuration supported?
Do all users have the problem?
(For multi-site installations.) Do all sites have the problem?
If one layer reports the problem, the problem does not necessarily originate in that layer. Part of identifying where a problem originates is understanding the environment
in which it exists. Take some time to completely describe the problem environment, including the operating system and version, all corresponding software and versions,
and hardware information. Confirm that you are running within an environment that is a supported configuration; many problems can be traced back to incompatible levels
of software that are not intended to run together or have not been fully tested together.
Responding to these types of questions can give you a frame of reference in which to investigate the problem.
Does the problem always occur when the same task is being performed?
Does a certain sequence of events need to happen for the problem to occur?
Do any other applications fail at the same time?
Answering these types of questions can help you explain the environment in which the problem occurs and correlate any dependencies. Remember that just because
multiple problems might have occurred around the same time, the problems are not necessarily related.
However, problems that you can reproduce can have a disadvantage. If the problem is of significant business impact, you do not want it to reoccur. If possible, recreate the
problem in a test or development environment, which typically offers you more flexibility and control during your investigation.
Procedure
To find and install fixes:
1. Obtain the tools that are required to get the fix. If the product update installer is not already installed, download it from Fix Central. The Fix Central site provides download, installation, and configuration instructions for the update installer.
2. Select Guardium as the product, and select one or more check boxes that are relevant to the problem that you want to resolve.
3. Identify and select the fix that is required.
4. Download the fix.
a. Open the download document and follow the link in the Download Package section.
b. When downloading the file, ensure that the name of the maintenance file is not changed. A change might be intentional, or it might be an inadvertent change that is caused by certain web browsers or download utilities.
5. Apply the fix.
a. Follow the instructions in the Installation Instructions section of the download document.
b. For more information, see the Installing fixes with the Update Installer topic in the product documentation.
6. Optional: Subscribe to receive weekly email notifications about fixes and other IBM Support updates.
Procedure
To contact IBM Support about a problem:
1. Define the problem, gather background information, and determine the severity of the problem. For more information, see the Getting IBM support topic in the
Software Support Handbook.
2. Gather diagnostic information.
3. Submit the problem to IBM Support in one of the following ways:
Online through the IBM Support Portal: You can open, update, and view all of your service requests from the Service Request portlet on the Service Request
page.
By phone: For the phone number to call in your region, see the Directory of worldwide contacts web page.
Results
If the problem that you submit is for a software defect or for missing or inaccurate documentation, IBM Support creates an Authorized Program Analysis Report (APAR).
The APAR describes the problem in detail. Whenever possible, IBM Support provides a workaround that you can implement until the APAR is resolved and a fix is
delivered. IBM publishes resolved APARs on the IBM Support website daily, so that other users who experience the same problem can benefit from the same resolution.
Parent topic: Techniques for troubleshooting problems
Related information:
How to upload data to a support ticket (PMR) (video)
Guardium troubleshooting and support (video)
Use the support must_gather commands, which can be run through the CLI to generate specific information about the state of any Guardium system. This information can also be collected through the Guardium GUI.
This information can be uploaded from the Guardium system and sent to IBM Support whenever a Problem Management Report (PMR) is logged.
The must_gather commands can be run at any time through the CLI. Complete the following steps.
1. Open a PuTTY session (or similar SSH client) to the appropriate collector, aggregator, or Central Manager.
2. Log in as user cli.
3. Depending on the type of issue, paste the relevant must_gather commands into the CLI prompt. More than one must_gather command might be needed to
diagnose the problem. The commands are listed and described in the following list.
support must_gather agg_issues (aggregation process)
support must_gather alert_issues (alerts)
support must_gather app_issues (application)
support must_gather audit_issues (audit process)
support must_gather backup_issues (backup process)
support must_gather cm_issues (Central Manager)
support must_gather datamining_issues (data mining)
support must_gather miss_dbuser_prog_issues (system database user)
support must_gather en (entitlement optimization)
support must_gather network_issues (network architecture)
support must_gather ocr_issues
support must_gather patch_install_issues (patch installation and upgrades)
support must_gather purge_issues (purge process)
support must_gather scheduler_issues (scheduler function)
support must_gather sniffer_issues (sniffer function)
support must_gather system_db_info (Guardium system database or operating space performance)
The output is written to the must_gather directory with a file name such as the following example:
must_gather/system_logs/.tgz
By using fileserver <ip address>, you can upload the .tgz files and send them to IBM Support. Send the file through email, or upload it to ECUREP by using the standard data upload. Specify the PMR number and the file to upload.
Explanation of guard_diag:
General Overview:
A diagnostics script (guard_diag) runs out of /usr/local/guardium/guard_stap/guard_diag when S-TAP logging is set to level 7 from the GUI. It is also possible to transfer this script to a machine that is running S-TAP.
The script prompts for the location if it cannot automatically determine where S-TAP is installed. The run time is about 1.5 minutes, and if no output directory is specified, the script places the generated .tar file in /tmp. When the script is run by enabling logging from the GUI, the .tar file is placed in /var/tmp. The file name is derived from the machine name and the time/date of the run; it always starts with diag.ustap.
The script collects the following information:
uname -a
List of kernel modules installed
Output for one cycle
Uptime
Processor number and type
Dump of the most recent syslog
netstat output
IPC list
S-TAP version
Contents of guard_tap.ini
ls -l on the K-TAP device nodes
30s trace of S-TAP
K-TAP statistics
List of all the files in the installation directory
K-TAP khash
Verbose debug log for K-TAP (2) and S-TAP (4)
Known Issues:
tusc is not installed on all HP-UX operating systems, so tracing the S-TAP PID does not always work.
gzip is not always installed on the system. The fallback is to compress (final extension .tar.Z); failing that, the uncompressed .tar file is placed in the output directory.
topas output on AIX is best viewed in a terminal because it contains control codes that make it mostly unintelligible when it is opened in an editor.
The non-root S-TAP has a number of issues concerning the diagnostics script:
In Linux, /var/log/messages is readable only by root.
Some Solaris operating systems might not be configured correctly, causing netstat to print an error.
The path for the non-root user is rather basic, and as a result, some commands might not run at all. Notably, this happens on HP-UX with gzip.
Platforms Supported:
Linux
HP-UX
AIX
Solaris
stap.txt
tasks.txt
system.txt
evtlog.txt or evtlog2008.txt
reg.txt
Notes:
Content of %system%\guard_tap.ini.
The Guardium S-TAP installation log
All running tasks
List of all installed kernel drivers
OS information that is collected from the system information utility
ipconfig /all
netstat -nao
Ping and trace results from the database server to the Guardium system
CPU usage for guardium_stapr
Overall system CPU usage
Guardium_stapr process handle count and memory usage
Event log messages that are generated by S-TAP
System event log messages
The following registry entries:
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall
HKLM\SYSTEM\CurrentControlSet\Services
HKLM\SYSTEM\CurrentControlSet\Control\GroupOrderList
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer
ID=0
function parameters :
commandsList - String -required - Constant values list
description - String
email - String
maxLogLength - Integer - Constant values list
pmrNumber - String
runDuration - Integer - Constant values list
startRun - Date
To get a Constant values list for a parameter, call the function with --get_param_values=<param-name>
The --commandsList parameter requires a string. The --description parameter is also a required string. The --runDuration parameter indicates how long the must_gather runs. Type an email address in the --email parameter to send the must_gather report. The --maxLogLength parameter is a required integer that sets the maximum length of the log report. The --pmrNumber parameter is the Problem Management Report number that IBM Support uses to track and resolve customer reports. The --startRun parameter is a required date, such as now. You can get a list of values for each parameter by calling the function grdapi must_gather --get_param_values=<param-name>.
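Putting the parameters together, an invocation can be assembled as in this sketch. The parameter values here are hypothetical, and the --param=value syntax simply follows the form shown above; check the constant values lists on your system before running anything.

```python
def build_grdapi_call(function, **params):
    """Render a grdapi invocation string from keyword parameters,
    using the --param=value form."""
    args = " ".join(f"--{k}={v}" for k, v in params.items())
    return f"grdapi {function} {args}"

# Hypothetical must_gather call for a sniffer problem.
cmd = build_grdapi_call("must_gather",
                        commandsList="sniffer_issues",
                        description="sniffer_restarts",
                        maxLogLength=5000,
                        runDuration=10,
                        startRun="now")
```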
Procedure
All of these data exchange methods are explained on the IBM Support website.
Ensure that your IBM technical-support representative provided you with the preferred server to use for downloading the files and the exact directory and file names to
access.
Procedure
1. Use FTP to connect to the site that your IBM technical-support representative provided and log in as anonymous. Use your email address as the password.
2. Change to the appropriate directory:
a. Change to the /fromibm directory.
cd fromibm
cd nameofdirectory
4. Use the get command to download the file that your IBM technical-support representative specified.
get filename.extension
quit
RSS feed 1
RSS feed 2
RSS feed 3
For general information about RSS, including steps for getting started and a list of RSS-enabled IBM web pages, visit the IBM Software Support RSS feeds site.
My Notifications
With My Notifications, you can subscribe to Support updates for any IBM product. (My Notifications replaces My Support, which is a similar tool that you might have
used in the past.) With My Notifications, you can specify that you want to receive daily or weekly email announcements. You can specify what type of information
you want to receive (such as publications, hints and tips, product flashes (also known as alerts), downloads, and drivers). My Notifications enables you to customize
and categorize the products about which you want to be informed and the delivery methods that best suit your needs.
Procedure
To subscribe to Support updates:
Results
Until you modify your RSS feeds and My Notifications preferences, you receive notifications of updates that you have requested. You can modify your preferences when
needed (for example, if you stop using one product and begin using another product).
Parent topic: Techniques for troubleshooting problems
Related Information
IBM Software Support RSS feeds
Subscribe to My Notifications support content updates
My Notifications for IBM technical support
My Notifications for IBM technical support overview
User Interface
Policies
Reports
Assess and Harden
Configuring your Guardium system
Access Management
Aggregation
Central Management
S-TAPs and other agents
GIM
File activity troubleshooting
Installing Your Guardium System
Symptoms
When you add an inspection engine, the new settings remain for a few minutes and then disappear.
Causes
There is an error in one or more parameter values with either the new inspection engine or a different inspection engine in the S-TAP configuration file guard_tap.ini.
Environment
The Guardium collector user interface is affected.
Symptoms
When you refresh the IBM Security Guardium GUI from the system main page, you receive the following error:
Causes
The cause is a feature in Guardium designed to prevent Cross-Site Request Forgery (CSRF). CSRF protection is enabled by default.
Environment
All Guardium configurations (collector, aggregator, central manager) are affected.
Note: If you turn off CSRF protection, the security level of the Guardium system is reduced.
The following command enables protection against Cross-Site Request Forgery. It is enabled by default: store gui csrf_status on
You can check the status by running this CLI command: show gui csrf_status
java.lang.IllegalStateException
If you receive a java.lang.IllegalStateException error, clean up the Java servlets.
Symptoms
You receive the following error message.
Causes
The error is raised when a method is invoked and the Java VM is in a state that is inconsistent with the method. There might also be corrupted Java servlets that are
caused by deadlocks.
Environment
The Guardium system is affected.
To clean up the Java servlets, run the command support clean servlets.
If the problem is not resolved, collect the following Tomcat logs and contact IBM Security Guardium Technical Support.
tomcat_log/localhost.<date_stamp>.log
tomcat_log/catalina.<date_stamp>.log
Symptoms
You might see a blank screen or other errors. The problem appears to happen with certain browsers on specific systems but not with others.
Causes
The cause might be restricted to a localized browser, or there might be a Java virtual machine issue.
Environment
The collector, aggregator, and central manager are affected.
Policies
Query does not appear in the correlation alert definition
If the query does not appear in the correlation alert definition, check the count field and sort by time stamp.
Rule does not trigger
If a rule with a value in the policy command field does not trigger as expected, reconfigure the rule.
Redact function causes overly masked result
If the redact function causes an overly masked result, use the regular expression [\x0c]{1}[0-9]{8}([0-9]{4}).
SSH sessions and automated CRON jobs that log in to your Oracle database are shown as failed logins
If SSH sessions and automated CRON jobs that log in to your Oracle database are shown as failed logins, amend the policy.
The Guardium internal database is filling up
If the Guardium internal database is filling up, you can purge the data manually or as part of the regular purge strategy.
Symptoms
You created an access query for creating a correlation alert. However, in the correlation alert definition, this query does not appear in the drop-down list.
Causes
The correlation alert search in the report is based on the time stamp.
Symptoms
Rules with a value in the policy Command field do not trigger as expected.
Causes
The cause is a misconfiguration in the command field. The Guardium parser does not consider the command modifiers to be a part of a command.
Environment
Guardium Collectors. The command field in the policy rule is also affected when it is used with wildcard (%).
GRANT
GRANT%
GRANT% TO PUBLIC
%GRANT% ADMIN OPTION%
ADMIN OPTION and TO PUBLIC do not match and cannot trigger a rule because the Guardium parser does not recognize them as a part of a command. Generally, the
parser does not consider command modifiers to be part of a command. Instead, create a report to inspect the traffic that the policy monitors and include the SQL Verb
field from the Command entity in that report. Anything that is listed in the SQL Verb field is recognized by the parser and can be used in the Command field of a policy rule.
Several commands can be added to a group and the group can be used in the rule instead of a single command. In this case, each group member must match an entry in
SQL Verb. Guardium includes several such command groups that you can use or clone.
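The behavior can be pictured with a toy sketch: a parser that records only the leading verb of a statement can never match a pattern that includes modifiers such as TO PUBLIC. This is an illustration of the principle, not Guardium's actual parser; fnmatch's '*' plays the role of the '%' wildcard here.

```python
import fnmatch

def sql_verb(statement):
    """Toy extraction of the leading SQL verb; modifiers are dropped."""
    return statement.strip().split()[0].upper()

verb = sql_verb("GRANT ALL ON payroll TO PUBLIC WITH ADMIN OPTION")

# The pattern is tested against the verb alone, so a pattern that
# includes modifiers can never match.
matches_verb_only = fnmatch.fnmatch(verb, "GRANT*")
matches_with_modifier = fnmatch.fnmatch(verb, "GRANT* TO PUBLIC")
```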
Parent topic: Policies
Parent topic: Reports
Symptoms
The redact function causes an overly masked result or an ORA-03106 error in Oracle traffic.
Causes
The redact function in the Guardium policy rule is doing a pattern match with the result set. It has a feature to replace the matched string with the user specified character.
Environment
Guardium collectors are affected.
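As a sketch of how a tighter pattern constrains the match, the expression suggested in the summary above, [\x0c]{1}[0-9]{8}([0-9]{4}), anchors on a form-feed byte and twelve digits, and captures only the final four. The sample data below is invented for illustration.

```python
import re

# Matches a form feed (\x0c) followed by 12 digits; only the last
# 4 digits are captured, so only they would be redacted.
pattern = re.compile(r"[\x0c]{1}[0-9]{8}([0-9]{4})")

m = pattern.search("\x0c123456789012")
last_four = m.group(1)

# Without the leading form feed, the pattern does not match at all,
# so unrelated digit runs are left unmasked.
no_match = pattern.search("123456789012")
```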
SSH sessions and automated CRON jobs that log in to your Oracle database are shown as
failed logins
If SSH sessions and automated CRON jobs that log in to your Oracle database are shown as failed logins, amend the policy.
Symptoms
SSH sessions and automated CRON jobs that log in to your Oracle database through SQLPLUS and RMAN with /as sysdba show as failed logins.
This error triggers the failed login alert. For example, if the database user WRONGLOGIN is a member of the DBA group and logs in with sqlplus WRONGLOGIN as sysdba, the database authentication of WRONGLOGIN fails. This failure triggers the ORA-01017 error alert, which is reflected in the Guardium log. However, users with sysdba privileges can connect to the database without database authentication, so the session is allowed to continue. Both events are captured and recorded.
Environment
Guardium collectors are affected.
This rule skips the failed login alerts that are caused by the ORA-01017 error; the events are still logged. To filter the failed login alerts out of the reports, add these conditions to the end of the conditions list:
AND
(
client IP<>server IP OR
src prg <> SQLPLUS OR
db user NOT IN group of trusted OR
os user NOT IN group of oracle DBAs OR
net protocol <>BEQUEATH (if this is local BEQUEATH, not TCP )
)
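Expressed as code, the appended conditions keep a failed-login row only when at least one of them holds, so a purely local sysdba login is filtered out. The field and group names in this sketch are illustrative stand-ins for the report attributes.

```python
def keep_failed_login(row, trusted_db_users, oracle_dbas):
    """Row passes the filter when any condition holds; a local sysdba
    login via SQLPLUS over BEQUEATH fails every condition and is dropped."""
    return (row["client_ip"] != row["server_ip"]
            or row["src_prg"] != "SQLPLUS"
            or row["db_user"] not in trusted_db_users
            or row["os_user"] not in oracle_dbas
            or row["net_protocol"] != "BEQUEATH")

# Hypothetical local sysdba session that should be filtered out.
local_sysdba = {"client_ip": "10.0.0.5", "server_ip": "10.0.0.5",
                "src_prg": "SQLPLUS", "db_user": "WRONGLOGIN",
                "os_user": "oracle", "net_protocol": "BEQUEATH"}
```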
Symptoms
The Guardium internal database is filling up and most of the data is in the GDM_POLICY_VIOLATIONS_LOG table.
Causes
A change to the policy can cause a policy violation rule to be triggered frequently. You might find that most of the data is stored in the GDM_POLICY_VIOLATIONS_LOG
table.
Environment
The Guardium collector is affected.
The excess data in the GDM_POLICY_VIOLATIONS_LOG table is purged as part of the regular purge strategy. However, if you would like to manually clean data from the GDM_POLICY_VIOLATIONS_LOG table, you can use the command support clean DAM_data policy_violations <start_date> <end_date>.
Reports
Cannot modify the receiver table for an Audit Process after it has been executed at least once
If you cannot modify the receiver table for an audit process, clone the audit process and replace the original.
Cannot see multi-byte characters
If you export a Guardium report to PDF and the characters are not correct, switch the PDF font configuration.
File system is almost full
If the Guardium file system is almost full, change the log rotation strategy.
Guardium audit reports viewed in Microsoft Excel have rows with unexpected characters
If you view an Audit report in .csv and see rows with unexpected characters, use another .csv viewer or view it as a .pdf file.
Reports show IP address as 0.0.0.0
Cannot modify the receiver table for an Audit Process after it has been executed at least once
If you cannot modify the receiver table for an audit process, clone the audit process and replace the original.
Symptoms
After an audit process runs at least once, you can neither remove nor add a receiver. You also cannot modify the following properties for a receiver.
Action Req.
Cont.
Appv. if Empty
Causes
After an Audit Process runs at least once, the receiver table is locked and you cannot modify most of the properties.
Environment
All Guardium configurations (collector, aggregator, central manager) are affected.
Symptoms
You can view reports in the GUI. However, when you export the report to PDF, the characters are incorrect or missing. The characters appear as question marks or other symbols in the PDF report.
Causes
The default font in Guardium PDF exports does not show multi-byte characters correctly. For example, Greek, Cyrillic, and Chinese characters do not display correctly.
Environment
The collector, aggregator, and central manager are affected.
3. Select 2 Multi-language.
Causes
Alerts and reports are sent to the syslog and can fill up the file system.
Environment
The collector or aggregator might be affected.
Guardium audit reports viewed in Microsoft Excel have rows with unexpected characters
If you view an Audit report in .csv and see rows with unexpected characters, use another .csv viewer or view it as a .pdf file.
Symptoms
When you view an Audit report (in .csv format) in Microsoft Excel, you notice that certain rows are filled with unexpected characters. The characters might look similar to
what you find in the full SQL column. The problem is not seen in .pdf reports or in GUI reports.
Causes
Microsoft Excel limits a cell to 32,767 characters. If your captured SQL is longer than this limit, it spills over onto the next row.
Environment
The Collector, Aggregator, and Central Manager are affected.
Causes
While Guardium is decrypting the traffic, the IP address is initially recorded as 0.0.0.0 because the sniffer does not know what the actual IP address is. After the
decryption is completed, a separate thread repopulates the session tables with the correct IP address.
Environment
Any database that encrypts the database traffic is affected.
Symptoms
When you run a report in Guardium, you receive the following error message: Request was interrupted or quota exceeded.
Causes
Environment
The collector and aggregator are affected.
Divide the report into pieces with a shorter reporting interval. This action is the most recommended method. If a report exceeds 4 GB, it causes a MySQL table data pointer size exhaustion.
Increase the query timeout value. Click Manage > Activity Monitoring > Running Query Monitor to open the Running Query Monitor. Type a number of seconds in the Report/Monitor Query Timeout box, and click Update.
Uninstall and reinstall the browser.
Run the report in the background. Reports that run in the background are not subject to the query timeout.
Run the report as an audit process.
Symptoms
You receive the same message in the Scheduled Jobs Exceptions report at regular short intervals, typically every 5 minutes. This interval is the same as the polling interval
that anomaly detection runs on.
An example of the Scheduled Jobs Exceptions report might look like the following.
Causes
One of the active alerts is causing the error.
Environment
Guardium collectors and the Aggregator are affected.
3. Check to see whether the errors stop with that alert deactivated.
If you find the alert that is causing the problem and need assistance to understand or stop the error, contact IBM Guardium Technical Support and provide the following
items:
2. Output of the following CLI commands. If requested, specify the length of one polling interval.
Symptoms
You receive the following message: Merge required, delay executing Process. You might receive several of these messages over a short period.
Causes
The audit process requires the merge process to finish before it can run.
Environment
The aggregator is affected.
The database user is not shown correctly in Guardium reports when you monitor Teradata
If Guardium reports do not show the database user correctly when you monitor Teradata, configure the Teradata Database.
Symptoms
When you view records from the monitored Teradata Database in Guardium reports, the database user name field does not show up as expected. The user name is
truncated or missing.
Causes
The Teradata Database is not enabled to return the full user name.
Environment
Any Guardium collector that captures data from the Teradata database is affected.
Note: This setup returns the user name in unencrypted form. If encryption is enabled, the system returns an error message.
Parent topic: Reports
Symptoms
You see results in your reports that you do not expect or that you believe should be filtered out by the policy. Conversely, you do not capture statements that you expect to
capture.
Causes
The SQL usually has several objects and commands that are embedded in the statement. The policy or report definition is not configured to deal with objects or
commands at different depths.
Environment
Guardium collectors are affected.
Note: Tuple supports the use of one slash and a wildcard character (%). It does not support the use of a double slash.
Parent topic: Reports
Symptoms
Guardium CAS works with older Java versions but not with Java 1.7.
Causes
msvcr100.dll is missing from <GUARDIUM STAP directory>\cas\bin\
Environment
Guardium CAS on Windows is affected.
1. Find the path where Java 1.7 is installed on your system such as C:\Program Files (x86)\Java\jre7\bin
2. Find the location of the library jvm.dll within the Java path found in the previous step.
3. Edit the cas.cfg file in the <CAS directory>\conf directory. For example, C:\Program Files (x86)\GUARDIUM_STAP\cas\conf\cas.cfg is a typical file path.
4. Find the line corresponding to the JVM, such as ;JVM=c:\program files\java\jre1_2_3\bin\client\jvm.dll.
5. Remove the semicolon from the beginning of the line. Then, set the JVM to the path of the library jvm.dll found in step 2. For example, JVM=C:\Program Files (x86)\Java\jre7\bin\server\jvm.dll.
6. Copy msvcr100.dll from the bin folder in your Java 7 installation directory to your <CAS directory>\bin folder. For example, copy C:\Program Files
(x86)\Java\jre7\bin\msvcr100.dll to C:\Program Files (x86)\Guardium\GUARDIUM_STAP\cas\bin\msvcr100.dll.
7. Restart the change audit system.
Note: This is only needed for Java version 1.7. For older versions of Java, this step is not needed.
Parent topic: Assess and Harden
Symptoms
Some members of a test exception group appear in the details field when you run a vulnerability assessment. The group contains members with a backslash character and
a REGEX tag such as (R)US\John Doe.
Causes
Special characters can trigger errors when Guardium parses the exception group.
Environment
Guardium collectors are affected.
US\John Doe
(R)US\\John Doe
The REGEX tag (R) is used to trigger a regular expression search of the details field to remove any string that matches the regular expression. A backslash or any other
character that has a meaning in a regular expression needs a backslash escape sequence to avoid parsing errors. If you do not use the (R) tag, the group member must
exactly match the entire line in the details field for Guardium to make a match. To pass the vulnerability test, the details field of the test must be empty.
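The escaping requirement can be illustrated outside Guardium with any regular-expression engine. A minimal sketch using grep extended regular expressions (the group member value is the example from above):

```shell
# The details field contains a literal backslash.
details='US\John Doe'

# In a regular expression, a literal backslash must itself be escaped
# as \\ ; an unescaped backslash is parsed as the start of an escape
# sequence and can cause a parsing error or a failed match.
if echo "$details" | grep -Eq 'US\\John Doe'; then
  echo "match"
fi
```

The same principle applies to any other regex metacharacter in a group member that carries the (R) tag.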
Symptoms
After you upgrade S-TAP using the Guardium Installation Manager (GIM), you cannot configure the database path parameters in the Inspection Engine in Guardium even
though the installation results for the module show as successful.
Causes
K-TAP is not properly upgraded if the new S-TAP is installed as a fresh module. Because the old K-TAP module is not removed, there is a protocol mismatch between the
old K-TAP module and the new S-TAP.
Environment
S-TAP installed on UNIX or Linux systems such as AIX, HP-UX, Linux, and Solaris is affected.
The modules log file lists the old K-TAP. For example: ktap_24276 338760 0
Symptoms
Guardium fails to recognize the network device VMXNET x during the installation on VMware. You receive the error eth0: unknown interface: No such device
when you install Guardium on VMware as a guest. The error message appears after you restart the system.
Causes
The VMXNET x virtual network adapter requires a specific driver that is contained only in VMware Tools; no operating system includes the driver. Guardium runs on Linux,
and the installer does not have a driver for VMXNET x.
Environment
The Guardium system is affected.
1. Create a virtual machine on VMware by using a default network adapter such as E1000 or Flexible.
2. Install Guardium on the virtual machine.
3. Install the current GPU cumulative patch for Guardium.
4. After the installation, log on to the CLI console and run the command setup vmware_tools install to install VMware tools.
5. Shut down the Guardium system from the CLI console with the command stop system.
6. Edit the virtual machine settings with a VMware client tool such as VMware Infrastructure Client. Select the current network adapter and remove it.
7. Add the network adapter called VMXNET.
8. Restart the Guardium system.
Symptoms
After a hardware repair such as replacing the system board on the Guardium appliance, the network connectivity is lost. The following error message occurs for each
network interface when the appliance is rebooted.
Causes
After you replace the system board, the MAC address will change. This change causes a disparity between the actual MAC address and what is stored in the interface
configuration files.
Environment
Any Guardium appliance (collector, aggregator, or central manager) on which the system board has been replaced and all Guardium versions are impacted.
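The stored MAC address can be reconciled by updating the HWADDR line in the interface configuration file. The following is a sketch only, assuming Red Hat-style ifcfg files; file paths are examples, and direct edits of this kind on a Guardium appliance normally require assistance from Guardium Support:

```shell
# Sketch: rewrite the HWADDR line of a Red Hat-style interface
# configuration file so it matches the MAC address of the new board.
# Illustrative only; on a Guardium appliance this level of access
# normally requires Guardium Support.
update_hwaddr() {
  cfg=$1       # e.g. /etc/sysconfig/network-scripts/ifcfg-eth0 (example path)
  new_mac=$2   # e.g. "$(cat /sys/class/net/eth0/address)"
  sed -i "s/^HWADDR=.*/HWADDR=${new_mac}/" "$cfg"
}
```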
If the problem is still not resolved, contact Guardium Support for manual intervention.
Symptoms
Causes
The MAC address assigned to the virtual machine by the virtual environment does not match the MAC address in Guardium.
Environment
The collector, aggregator, and central manager are affected.
SSLv3 is enabled
If you receive a warning that SSLv3 is enabled, disable SSLv3 to prevent the POODLE exploit.
Symptoms
You receive the following warning: SSLv3 is enabled.
Causes
SSLv3 contains a protocol vulnerability known as Padding Oracle On Downgraded Legacy Encryption (POODLE). If SSLv3 is enabled on your system, this vulnerability
allows attackers to force an SSL/TLS fallback to SSLv3, break the encryption, and intercept network traffic in plaintext. The vulnerability is detailed in the National
Vulnerability Database as CVE-2014-3566.
Guardium recommends disabling SSLv3 on all systems to prevent the POODLE exploit, and SSLv3 is disabled by default on new Guardium systems. However, older
systems and some upgrade scenarios may leave SSLv3 enabled.
This topic describes how to check the status of SSLv3 and disable it if necessary.
Attention: Disabling SSLv3 can disrupt connectivity between a Guardium v10 Central Manager and some managed units running Guardium v9 before GPU 500. If you have
a mixed environment with managed units running Guardium v9 before GPU 500, either upgrade the managed units to GPU 500 or apply patch 9501 before disabling
SSLv3.
1. Check the SSLv3 status by using the CLI command show sslv3.
If the output indicates SSL setting is disabled, SSLv3 is disabled. No additional steps are required.
If the output indicates SSL setting is enabled, SSLv3 is enabled. Continue with this procedure to disable SSLv3.
2. Disable SSLv3 by using the CLI command store sslv3 off.
3. Verify that SSLv3 is now disabled: show sslv3. The output should now indicate SSL setting is disabled.
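Putting the procedure together, a CLI session might look like the following. The commands and the status strings are those described above; the exact output layout may differ:

```
show sslv3
SSL setting is enabled
store sslv3 off
show sslv3
SSL setting is disabled
```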
Access Management
Symptoms
You are unable to log in to Guardium with any user except admin or accessmgr. You see an invalid user name or password error despite using the correct user and
password as defined by accessmgr. You receive the following error message: Invalid user name and/or password. Please reenter your credentials.
Causes
The authentication setting is not configured as local.
Environment
The collector, aggregator, and central manager are affected.
Symptoms
You lost the Guardium accessmgr password and cannot log in to the GUI. The account is also locked after successive failed attempts.
Causes
Guardium prohibits multiple failed login attempts.
Environment
The collector, aggregator, and central manager are affected.
You can use <N> or random, where <N> is a number in the range 10000000 - 99999999. Specifying random automatically generates a number in that range. Open a PMR
with IBM Guardium support and send the following output.
Aggregation
Cannot convert Guardium collector to aggregator
If you cannot convert a Guardium collector to a Central Manager aggregator, reinstall Guardium and select aggregator during installation.
Data Export configuration change from a Guardium managed system's GUI fails with error
If a Data Export configuration change fails, make sure that the shared secret key is the same on the collector and aggregator.
Difference between audit process results and report
If there is a difference between your audit process results and the report, check that all appliances are set to the same timezone.
HY000 errors after restoring the configuration in an aggregator
If you receive HY000 errors after you restore the configuration in an aggregator, run a dummy import.
Symptoms
You try to convert a Guardium collector to an aggregator with the command store unit type manager aggregator.
However, the following command shows that the unit type is still listed as manager.
Causes
A collector cannot be converted to an aggregator with a CLI command.
Environment
Guardium collectors are affected.
Data Export configuration change from a Guardium managed system's GUI fails with error
If a Data Export configuration change fails, make sure that the shared secret key is the same on the collector and aggregator.
Symptoms
You attempt to save new settings for the data export and get the error when you click Apply to save the configuration:
Causes
Guardium attempts to log in with scp to the target host with the user and password that are specified in the Data Export configuration. Then, Guardium attempts to copy a
test file to the target directory. The shared secret on this system does not match the shared secret on the aggregator that you are trying to set this system to export to.
Environment
The Guardium configurations: collector and aggregator are affected.
1. If you know the shared secret on the aggregator, set the shared secret on the collector to the same value. You can use one of these methods:
From CLI: use command store system shared secret to set the Shared secret key
From GUI, set the shared secret key under Setup > System > System Configuration.
2. Back up the current shared secret on the aggregator and restore it to the collector.
On the aggregator, run the CLI command.
For the file transfer operation, specify a user, host, and full path name for the backup keys file. The user that you specify must have the authority to write to
the specified directory.
On the collector, run this command to restore the shared secret key:
Symptoms
You set a report to run on the aggregator as part of an audit process with time parameters, for example, Start of Last Day and End of Last Day. When you look at the results
of that report, the first time stamps are always at a set time after 00.00, for example, 02.00. Additionally, the last time stamps are always at a set time before 23.59, for
example, 21.59. However, when you run the report interactively, the time stamps are shown as expected.
Causes
The collector and aggregator time zones might not be set the same.
Environment
The aggregator is affected.
Verify that the time is correct on the appliance with the following commands.
The datetime can also be synchronized by using an NTP server with the following commands.
Symptoms
When you restore the configuration of an aggregator or the Central Manager, you receive one or both of these messages.
ERROR 1031 (HY000) at line 1: Table storage engine for 'GUARD_USER_ACTIVITY_AUDIT' doesn't have this option
ERROR 1031 (HY000) at line 1: Table storage engine for 'AGGREGATOR_ACTIVITY_LOG' doesn't have this option
Causes
This error condition can occur if there is a temporary mismatch in the internal databases.
Environment
The collector and aggregator are affected.
Central Management
A user is disabled in a Guardium managed unit, but shows as enabled on Central Manager
If a user is disabled in a Guardium managed unit but shows as enabled on Central Manager, run the Portal User Sync.
Central Manager does not recognize the new version of upgraded units
If the Central Manager does not recognize the new version of upgraded units, select the upgraded units and refresh the page.
Scheduled tasks do not fire at the scheduled time
If scheduled tasks do not fire at the scheduled time, schedule the import time to run after the portal user sync.
Torque exception in Central Management view of GUI
If there is a torque exception in Central Management, delete the custom group and create a new group.
A user is disabled in a Guardium managed unit, but shows as enabled on Central Manager
Symptoms
A user is disabled in the managed unit. The user's account is then re-enabled in the Central Manager and shows as enabled there, but the user still shows as disabled in the
managed unit.
Causes
The user's account in the Central Manager is not synchronized with the managed unit.
Environment
A combination of the Central Manager, collector, or aggregator might be affected.
If the user's account between the managed unit and the Central Manager is still not synchronized, contact the IBM Guardium Technical Support for assistance.
Central Manager does not recognize the new version of upgraded units
If the Central Manager does not recognize the new version of upgraded units, select the upgraded units and refresh the page.
Symptoms
The Central Manager might not immediately recognize the new version of an upgraded aggregator or collector it manages. Pushing a patch from the Central Manager,
which requires the new version, can result in an error that shows the unit is still at the previous version.
The managed unit's old version still displays in the Central Management view of the GUI. The unit's ping time continues to update in that view, which implies good
communication between the Central Manager and managed units.
Causes
The GUI needs to be refreshed to pull the new version information.
Environment
The Guardium Central Manager is affected.
Symptoms
Import fails and you receive the following message in agg_progress.log.
Causes
There is a conflict with the Central Manager portal user sync.
Environment
The aggregator is affected.
Symptoms
Selecting a certain custom group in the Central Management view of the Guardium GUI displays an error instead of the managed units in the group.
After the exception appears, it shows for any group or view under the Central Management tab, even for groups that were previously working, until you log out of the GUI
and log back in.
Causes
This torque exception might occur if one of the managed units in the group was unregistered from the managed unit instead of the Central Manager.
Environment
Guardium Central Manager is affected.
AIX 6.1 fails when you install or upgrade IBM Security Guardium S-TAP
If the operating system fails when you install or upgrade Guardium S-TAP on AIX 6.1, apply the Fix Packs AIX 6.1.
Symptoms
The operating system fails when you install or upgrade Guardium S-TAP on AIX 6.1. The AIX crash memory dump shows the following stack trace.
Symptom Information:
Crash Location: [0000000000473260] execvex_common+1880
Component: COMP Exception Type: 131
Stack Trace:
Causes
This crash is a known issue in AIX version 6.1 due to a system crash in the execvex_common code path.
Environment
Any S-TAP to be installed in AIX 6.1 Operating System is affected.
Error opening shared memory area when you configure Guardium COMM_EXIT_LIST for DB2
If you receive an error message when you configure Guardium COMM_EXIT_LIST, authorize the DB2 instance owner with the guardctl command.
Symptoms
After you configure DB2 COMM_EXIT_LIST to use Guardium libguard and restart the DB2 server, you get the following error in the DB2 diag log.
Causes
The following message indicates that the Guardium library was unable to create the shared memory device that it requires.
The DB2 instance owner must be added as an authorized user using the guardctl command.
Environment
Guardium collectors that use DB2 Exit (Version 10) Integration with S-TAP are affected.
If the Guardium Installation Manager (GIM) is not installed, authorize the DB2 instance owner with the following command.
If the Guardium Installation Manager (GIM) is installed, authorize the DB2 instance owner with the following command.
For example, if the DB2 instance owner is db2001 and GIM is installed in /usr/local/guardium, the command is
/usr/local/gim/modules/ATAP/current/files/bin/guardctl authorize-user db2001.
Symptoms
Guardium S-TAP does not collect shared memory traffic from Informix.
Causes
Environment
Any S-TAP collection from any Informix system can be affected.
ls -lrt /INFORMIXTMP/.inf.*
Informix: /INFORMIXTMP/.inf.sqlexec applies to all Informix platforms except Linux. For Informix on Linux, an example is /home/informix11/bin/oninit.
For Linux servers using A-TAP, A-TAP must be configured to collect any shared memory traffic. Set the value to the same value as the --db-info parameter in the A-TAP
configuration before you activate A-TAP.
Symptoms
You observe a high CPU or I/O usage by the Guardium S-TAP process.
Causes
The following items are common causes.
1. An error in the configuration of one of the inspection engines. If there are errors in an inspection engine, the S-TAP process restarts frequently or tries to reconnect
to the inspection engine repeatedly.
2. The K-TAP portion of the S-TAP is sending connection information along with a confirmation request to the S-TAP, which causes delays.
3. ORACLE RAC is used, but the unix_domain_socket_marker parameter is not set in the S-TAP configuration file to avoid monitoring potentially large amounts of
Oracle RAC traffic.
4. The User ID Chain (UID chain) feature is enabled, for example, parameter hunter_trace=1 in the S-TAP configuration file. Hunter trace is used for UID chain and can
be quite CPU intensive for S-TAP.
5. The firewall is enabled (firewall_installed=1). This firewall forces S-TAP to request verdicts for each new session that is observed which can hurt S-TAP
performance.
Environment
S-TAP installed in AIX
1. Review the configuration for all of the inspection engines and make sure that there are no errors in any of the parameters. For example, make sure the database
installation directory, executable, ports, and any other parameters applicable to your inspection engine are correctly set with no misspellings or wrong values.
2. Set S-TAP configuration parameter ktap_fast_tcp_verdict to 1 (ktap_fast_tcp_verdict = 1 in the guard_tap.ini configuration file) and restart the S-TAP. Here are the
possible settings.
ktap_fast_tcp_verdict=0: K-TAP confirms that the session is the database connection that the inspection engine configured by checking ports and IPs.
ktap_fast_tcp_verdict=1: K-TAP does not send the confirmation request to S-TAP when the session's ports are in the configured range.
3. Disable the UID Chain feature if not needed by setting hunter_trace=0 and restarting the S-TAP.
4. Set firewall_installed=0 if SGATE is not needed and restart the S-TAP.
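Applied together, steps 2 through 4 correspond to the following settings in the guard_tap.ini file. Set only the ones relevant to your environment, and restart the S-TAP after editing:

```
[TAP]
ktap_fast_tcp_verdict=1
hunter_trace=0
firewall_installed=0
```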
Symptoms
You encounter issues in Guardium relating to missing information from the login packet such as database user name, source program, or database name.
Causes
Login packets might miss information when the session is too short.
Environment
Refer to the Technotes in the Related URL section for details on collecting each of these traces.
Symptoms
A message similar to the following is reported one or more times in Guardium system log (messages) or Alerts:
Nanny process error condition. The nanny process killed the sniffer. VmData was number and was over the limit.
Causes
The sniffer memory usage reached over 90% of the available memory and the nanny process has restarted it, which is expected behavior of the product.
Environment
Guardium collector
If the message is observed on very few occasions, it is most likely a momentary spike in traffic. To resolve the message, identify the reason for the spike and avoid the
trigger. For example, review which processes were running at that time and identify the ones generating the most traffic. If this message always coincides with a
particular process or processes running, reduce the concurrent traffic at that time. For example, you can move the heaviest processes to run at a different time, or ignore
some of this traffic through a policy.
Causes
The sniffer starts with six threads by default. When the number of threads exceeds the limitation, the sniffer cannot connect to the UNIX S-TAP because of undefined
behavior.
Environment
UNIX S-TAP is affected.
Symptoms
Causes
The S-TAP is unable to allocate enough memory to match the buffer file.
Symptoms
The S-TAP process does not automatically start on Linux even though the /etc/inittab file shows a correct U-TAP entry.
Causes
Various Linux distributions such as RedHat 6 deprecated the use of the traditional init daemon that uses the /etc/inittab file. They replaced it with an init process called
upstart. Upstart uses the /etc/event.d and /etc/init directories for the automated start, stop, and respawn of processes such as U-TAP.
The S-TAP installer now checks for the existence of the /etc/event.d directory. If it exists, then entries in /etc/init are created for use by upstart. If it does not exist, then
entries in /etc/inittab are created for use by the traditional init daemon.
If /etc/event.d is missing for any reason on a system with upstart, the inittab file is populated instead. The S-TAP process does not start or respawn when needed.
Environment
S-TAPs running on Linux are affected.
If the /etc/event.d/ directory does not exist, complete the following steps to resolve the situation.
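The installer's directory check described above can be sketched as a small shell function. This is a simplified illustration of the decision logic, not the installer's actual code:

```shell
# Simplified illustration of the installer's decision: if the upstart
# directory exists, service entries are created in /etc/init; otherwise
# the traditional /etc/inittab is populated.
init_style() {
  eventd_dir=$1   # normally /etc/event.d
  if [ -d "$eventd_dir" ]; then
    echo "upstart"   # installer creates entries in /etc/init
  else
    echo "inittab"   # installer populates /etc/inittab
  fi
}
```

For example, `init_style /etc/event.d` reports which mechanism the installer would use on the current system.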
Symptoms
Supported: - Solaris X86 - Linux x86/64 - Linux x86/32 - Linux S390X - Linux IA64
Not Supported: - Solaris SPARC - AIX PowerPC - HPUX RISC - HPUX IA64 - Linux PowerPC
Causes
FIPS 140-2 is a U.S. government security standard for cryptographic modules. If you see this message, it indicates that the S-TAP configuration does not meet government
requirements.
Note: This message does not indicate that there is an error with the S-TAP.
Environment
Guardium S-TAP is affected.
You can change the configuration by using one of the following methods.
You can also edit the guard_tap.ini file on the DB server directly and restart the S-TAP.
The K-TAP kernel module is still present after the uninstallation of S-TAP
If the K-TAP kernel module is still present after the uninstallation of S-TAP, manually remove it.
Symptoms
The K-TAP kernel module is still present after the uninstallation of S-TAP on a Solaris server.
Causes
The server did not restart properly to remove the K-TAP kernel module on Solaris servers.
Environment
The Solaris server after the uninstallation of S-TAP is affected.
Symptoms
UNIX S-TAP reads only the first 16 port_range definitions in the inspection engine settings.
Causes
By design, K-TAP can read only 16 port_range definitions.
Environment
UNIX S-TAP that uses K-TAP and defines more than 16 inspection engines is affected.
The following example defines listening ports 50000 - 50020 as target ports to be monitored.
[DB_0]
port_range_end=50020
port_range_start=50000
Otherwise, use PCAP for TCP connections by setting ktap_local_tcp=1 and devices=<device_name>.
[TAP]
ktap_local_tcp=1
devices=<Network Device Name>
Symptoms
The S-TAP on a Windows server does not start. The Windows event log shows errors from Guardium S-TAP with event ID 1000.
Causes
S-TAP cannot connect to the Windows system because the wrong SOFTWARE_TAP_IP is specified in the guard_tap.ini file.
Environment
Any Guardium S-TAP for Windows is affected.
Symptoms
z/OS S-TAP fails to show active on the Guardium system after you start it for the first time. The policy is correctly configured with a DB2 or IMS Collection Profile and
installed. The z/OS S-TAP is properly configured to use port 16022. All messages on the mainframe indicate connectivity.
Causes
If the collector has not been actively used as a collector since being built and configured, the sniffer appears to time out port 16022.
Environment
z/OS is affected.
GIM
Error installing the Guardium Installation Manager (GIM)
If GIM does not install properly, create the directory manually.
Guardium Installation Manager (GIM) service does not start in Windows
If the Guardium Installation Manager (GIM) service does not start in Windows, reinstall GIM in a folder that is reserved for 32-bit applications.
Symptoms
When you attempt to install the Guardium Installation Manager (GIM) on RHEL6, you see the following error message.
Causes
Environment
The Guardium Installation Manager (GIM) is affected.
Symptoms
After you successfully installed the Guardium Installation Manager (GIM) on Windows, you notice that the service is not running.
Causes
GIM is a 32-bit application. If you are using 64-bit Windows, GIM might be installed in Program Files instead of Program Files (x86).
Environment
GIM is affected.
File activity
File activity is not logged in investigation dashboard or reports
File activity from removable disk is not logged in investigation dashboard
File activity appears in reports but not the investigation dashboard
Some files missing from classification results
Partial file discovery (entitlement) results in reports and investigation dashboard
Reports and investigation dashboard are not showing complete discovery (entitlement) results.
File classification results are missing from reports and investigation dashboard
FAM bundle fails to install
After installing the GIM client, the FAM bundle installation fails.
To send crawled data to quick search:
grdapi enable_fam_crawler activity_schedule_interval=2 activity_schedule_units=MINUTE entitlement_schedule_interval=10 entitlement_schedule_units=MINUTE
To enable quick search (with the option to also include violations):
grdapi enable_quick_search includeViolations=true schedule_interval=2 schedule_units=MINUTE
Causes
The following file types are not supported for classification: DAT, JPG, JPEG, GIF, TIF, TIFF, BMP, WAV, MOV, MP3, MP4, AVI, MPG, WMA, WMV, P7S, XFDL, XFD, FRM,
JAR
Symptoms
The discovery (entitlement) results that appear in reports and the investigation dashboard are incomplete. Results for some files do not appear.
FAM_SCAN_EXCLUDE_FILES
FAM_SCAN_EXCLUDE_DIRECTORIES
FAM_SCAN_EXCLUDE_EXTENSIONS
FAM_SCAN_MAX_DEPTH
File classification results are missing from reports and investigation dashboard
Symptoms
File classification results are missing from reports and investigation dashboard.
Causes
Classification is an additional process that goes beyond metadata discovery.
Symptoms
When attempting to install the FAM bundle, the system responds with a message similar to:
-1,GIM - Failure point : dependancy_violation (Dependancy violation (FAM) : Missing mandatory dependency - STAP at GIM.pm line
3176, <MYFILE> line 20.
Causes
The S-TAP bundle must be installed before installing the FAM bundle.
Symptoms
You receive an error similar to the following when you run the S-TAP installer to install Guardium S-TAP on UNIX or Linux.
./guard-stap-v81_r26808_1-aix-6.1-aix-powerpc.sh
Verifying archive integrity...Error in checksums: 2082112805 is
different from 3728267449
Causes
The installer file is corrupted. The file was corrupted either when it was transferred to the database server or when the product was downloaded.
Environment
S-TAP on UNIX or Linux is affected.
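Before rerunning the installer, you can confirm whether the transferred file still matches the original. A sketch using md5sum; the expected value must come from a trusted source, such as the checksum computed on the machine where the file was downloaded:

```shell
# Compare the checksum of the transferred installer against the value
# computed at the download source; a mismatch means the file was
# corrupted in transit.
verify_transfer() {
  file=$1
  expected=$2
  actual=$(md5sum "$file" | awk '{print $1}')
  [ "$actual" = "$expected" ]
}
```

If the checksums differ, transfer the file again in binary mode or download the product again.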
Symptoms
The S-TAP installation fails with the following error message.
A directory called 'guardium' containing Guardium software needs to be created under a path provided.
Enter the path prefix [/usr/local]? /opt/guardium
Directory /opt/guardium/guardium/guard_stap does not exist, would you like to create it [Y/n]? Y
Run STAP as root, or as user 'guardium' [R/u]? R
Please be patient... This might take more than a minute.
Causes
The path to /usr/bin/cp is different from what the installer expects.
Environment
The UNIX/Linux database server is affected.
If which cp returns a value other than /usr/bin/cp, run the command export PATH=/usr/sbin:/usr/bin:$PATH.
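The check and the fix from above can be combined into one guarded snippet, run in the shell from which you launch the installer:

```shell
# Prepend the standard system directories only when cp does not already
# resolve to the path the installer expects.
if [ "$(command -v cp)" != "/usr/bin/cp" ]; then
  export PATH=/usr/sbin:/usr/bin:$PATH
fi
```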
Symptoms
When you install a new patch, it does not complete. The status column in the CLI command show system patch installed shows one of the following messages.
Causes
Tomcat, the inspection core, or another process on the machine interfered with the patch installation.
Environment
The Collector, Aggregator, and Central Manager are affected.
1. Delete the patch that is stuck by using the command delete scheduled-patch.
2. Restart the system by using the command restart system.
3. After the system restarts, stop the GUI and inspection core by using the commands stop gui and stop inspection-core.
4. Reinstall the patch and restart the GUI and inspection core by using the commands restart gui and start inspection-core.
Causes
There are many possible reasons why the K-TAP device creation can fail. The following are the most common causes.
You did not use the module files, including the K-TAP module for the Linux kernel.
You did not specify the Flex Loading option to load the K-TAP module from the module files.
A previous K-TAP module from an old installation is still running or installed.
Environment
All Linux and UNIX operating systems in which the IBM Guardium S-TAP product can be installed are affected.
2. Check whether the K-TAP device is now created with the command ls /dev/*ktap*. If it was created, the issue is resolved. If not, continue to the next step.
3. Stop the S-TAP process guard_stap if it is running. You can check whether it is running with command ps -ef | grep guard_stap.
4. Verify that the S-TAP process is not running with the command ps -ef | grep guard_stap.
5. Uninstall the S-TAP.
6. Confirm that the S-TAP directory is gone.
7. Check whether a K-TAP module is still running from an old installation. Use the appropriate command for your operating system.
KTAP_ALLOW_COMBOS=Y
KTAP_LIVE_UPDATE=Y
KTAP_ENABLED=Y
Symptoms
When you install the Guardium appliance in VMWare, you receive the following error:
Error Partitioning
Could not allocate requested partitions:
Partitioning failed: Could not allocate partitions as primary partitions.
Not enough space left to create partition for /boot.
Causes
When you install the Guardium system with VMware, if you select Typical, VMware uses configuration parameters that are predefined for the OS type in VMware. These
configuration parameters might not be suitable for this installation.
Environment
All Guardium configurations (collector, aggregator, central manager) are affected.
Symptoms
Patch installation in Guardium fails with the error patch.reg: No such file or directory.
Causes
The following cases can cause the patch installation to fail.
The patch was not downloaded in binary mode, which corrupted the file.
The compressed file itself was uploaded to the Guardium system.
The patch was received from Guardium support and has the PMR number prefixed to the file name.
The patch was uploaded to the Guardium system from a Windows FTP server.
Environment
If the compressed file itself was uploaded to the Guardium system, extract the compressed file and upload only the patch.
If there is a PMR number prefixed to the file name, remove the number and then upload the patch to the Guardium system.
If the patch is uploaded from a Windows FTP server, specify the exact file name with the correct case.
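Removing a PMR-number prefix can be done with a simple rename. The prefix pattern below (digits and dots followed by a hyphen) and the file name are assumptions for illustration; adjust them to your actual file:

```shell
# strip_pmr_prefix: drop a leading PMR-number prefix (digits and dots
# followed by a hyphen) from a patch file name before uploading it.
# The prefix format is an assumption for illustration.
strip_pmr_prefix() {
  f=$1
  mv "$f" "$(printf '%s' "$f" | sed 's/^[0-9.]*-//')"
}
```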
For data activity monitoring, the S-TAP monitors activity between the client and the database and forwards that information to the Guardium collector. The database traffic
is logged into the collector based on criteria specified in the security policy. It is also possible to reduce the amount of traffic that is originally sent to the collector by
ignoring trusted connections or ignoring traffic from specific IPs.
For file activity monitoring, unlike data activity, the policy rules are pushed down to the file server and thus only data that is specified in the security policy is forwarded to
the collector.
For example, you may want to track or perform one or more of the following:
This table covers the most common platforms, database types, and protocols supported by Guardium's monitoring mechanisms. The table presents general guidelines.
There may be other supported combinations that are not presented here. Some of the setups presented here may depend on specific
configurations. Contact Technical Support to verify the best setup for your specific needs. Empty cells indicate that the combination is not supported.
OS      | Database      | Network traffic               | Local traffic                 | Encrypted traffic         | Protocol      | Kerberos  | Blocking                    | Redaction
Windows | MS SQL Server | Supported                     | Supported                     | Supported for TCP and NMP | TCP, NMP      | Supported | Supported                   | Supported
Windows | DB2           | Supported, also with DB2 Exit | Supported, also with DB2 Exit | DB2 Exit                  | TCP, SHM      |           | Supported (except DB2 Exit) | Supported (except DB2 Exit)
Windows | Oracle        | Supported                     | Supported                     | Supported (ASO, SSL)      | TCP, NMP, BEQ |           | Supported                   | Supported
Use your firewall management utility to check, and open as relevant, the ports listed below.
Depending on your license key, you can use the same S-TAP agent for both file and database activity monitoring. There are no specific S-TAP parameters for FAM.
The Base Filtering Engine (BFE) service must be running for the S-TAP installation. If the service exists but is not running, Guardium attempts to start it.
S-TAPs require .NET Framework 4.5 or higher. If a .NET 4.5 or higher environment does not exist, S-TAP installs .NET 4.5.2.
When installing the Windows S-TAP in a non-ASCII environment (for example, Japanese), either use the server with that language pack or set the system locale to that
location (Japan).
Auto-discovery supports these database types: MS SQL Server, DB2, Oracle, Informix, MongoDB, CouchDB. To create inspection engines on other discovered databases,
see the Discovered Instances report.
During an upgrade, auto-discovery discovers additional database instances but does not create inspection engines for the new instances. Auto-discovery adjusts any
preexisting inspection engines. This means that if you have added an inspection engine for a database that does not exist, or specified a port that does not work, the auto-discovery process adjusts it.
If you do not want the S-TAP installation to perform automatic discovery of databases during installation or upgrade, you can prevent it during the S-TAP installation
process by following the procedure described for each Windows S-TAP installer.
Review the Windows S-TAP installation requirements at Windows: Prerequisites: installing S-TAP.
Verify that your database server and operating system are supported.
Verify that the intended S-TAP installation directory is empty or does not exist.
The GIM client is installed on the database server where you will install an S-TAP.
The GIM client on the database server is communicating with the Guardium system.
Obtain the S-TAP module from either Fix Central, or your Guardium representative.
The parameter WINSTAP_INSTALL_DIR cannot be modified after the installation. All other parameters can be modified after installation.
You can enter any parameter on the Setup by Client page, in the Choose parameters ribbon, by using the WINSTAP_CMD_LINE command: use the syntax parameter=value for [TAP] parameters, or -param value for CLI parameters (Windows: S-TAP command line installation parameters). The parameters are added or updated in guard_tap.ini.
CAUTION:
There is no validation of input to this field.
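For example, to pass a CLI parameter and a [TAP]-section parameter together, a WINSTAP_CMD_LINE value might look like the following sketch. The address and the QRW_INSTALLED parameter are reused from examples elsewhere in this documentation; substitute your own values.

```
-taphost 10.0.146.160 QRW_INSTALLED=0
```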
Procedure
1. Upload the Windows S-TAP module for installation.
a. On the Guardium system, navigate to Manage > Module Installation > Upload Modules.
b. Click Choose File and select the S-TAP module you want to install.
c. Click Upload to upload the module to the Guardium system. After uploading, the module will be listed in the Import Uploaded Modules table.
d. In the Import Uploaded Modules table, click the check box next to the S-TAP module you want to install. The module will be imported and made available for
installation. After the module is imported, the Upload Modules page will be reset and the Import Uploaded Modules table will be empty.
2. Follow the GIM instructions in Set up by Client and refer to Windows: S-TAP GIM installation parameters.
While the default parameters are acceptable for most installations, you are required to provide a WINSTAP_INSTALL_DIR value. The default value is
C:/Program Files/IBM/Windows S-TAP. This is the only required parameter.
If WINSTAP_TAP_IP (equivalent to the -taphost command line parameter) is not specified, the GIM_CLIENT_IP value is used.
If WINSTAP_SQLGUARD_IP (equivalent to the -appliance command line parameter) is not specified, the GIM_URL value is used.
Optionally enable enterprise load balancing. See the parameter description in Windows: S-TAP GIM installation parameters.
To enable auto-discovery of database instances, set WINSTAP_NOAUTODISCOVERY to 0.
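As a sketch, a minimal set of GIM parameter values for a default installation, combining the defaults and substitutions described in this step, might look like this (the IP addresses are illustrative placeholders; if WINSTAP_TAP_IP and WINSTAP_SQLGUARD_IP are omitted, the GIM_CLIENT_IP and GIM_URL values are used instead):

```
WINSTAP_INSTALL_DIR=C:/Program Files/IBM/Windows S-TAP
WINSTAP_SQLGUARD_IP=10.0.148.160
WINSTAP_TAP_IP=10.0.146.160
WINSTAP_NOAUTODISCOVERY=0
```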
What to do next
Verify that the S-TAP is communicating with the Guardium system by navigating to Manage > Activity Monitoring > S-TAP Control and reviewing the S-TAP's status and configuration.
Review the Windows S-TAP installation requirements at Windows: Prerequisites: installing S-TAP.
Verify that your database server and operating system are supported.
Verify that the intended S-TAP installation directory is empty or does not exist.
The GIM client is installed on the database server where you will install an S-TAP.
The GIM client on the database server is communicating with the Guardium system.
Obtain the S-TAP module from either Fix Central, or your Guardium representative.
The parameter WINSTAP_INSTALL_DIR cannot be modified after the installation. All other parameters can be modified after installation.
You can enter any parameter on the Setup by Client page, in the Choose parameters ribbon, by using the WINSTAP_CMD_LINE command: use the syntax parameter=value for [TAP] parameters, or -param value for CLI parameters (Windows: S-TAP command line installation parameters). The parameters are added or updated in guard_tap.ini.
CAUTION:
There is no validation of input to this field.
Procedure
1. Upload the Windows S-TAP module for installation.
a. On the Guardium system, navigate to Manage > Module Installation > Upload Modules.
b. Click Choose File and select the S-TAP module you want to install.
c. Click Upload to upload the module to the Guardium system. After uploading, the module will be listed in the Import Uploaded Modules table.
d. In the Import Uploaded Modules table, click the check box next to the S-TAP module you want to install. The module will be imported and made available for
installation. After the module is imported, the Upload Modules page will be reset and the Import Uploaded Modules table will be empty.
2. Select client systems where you want to install an S-TAP.
a. Navigate to Manage > Module Installation > Setup by Client.
b. On the Client Search Criteria screen, specify search criteria for the clients where you want to install the S-TAP, then click Search to continue. Search for
clients using any combination of the following search criteria:
Select a client group.
Search by client hostname, IP address, or operating system.
Leave all search criteria fields empty to return a list of all available clients.
c. On the Clients screen, click the check box next to the clients where you want to install the S-TAP, then click Next to continue.
3. Select and configure the S-TAP module before installing to client systems.
a. From the Modules table on the Common Modules screen, select the S-TAP module for installation, then click Next to continue.
Use the Display Latest Versions and Display Bundles Only check boxes to filter the list of available modules.
Use the Module Status table to review information about the selected module on the target clients.
b. From the Client Module Parameters screen, specify installation parameters for the S-TAP.
To apply the same parameters to multiple clients, specify installation parameters in the Common Module Parameters fields, click the check box next
to clients listed in the Client Module Parameters tables, and then click Apply to Selected.
To apply unique parameters to individual clients, specify installation parameters directly in the Client Module Parameters table.
Attention:
While the default parameters are acceptable for most installations, you are required to provide a WINSTAP_INSTALL_DIR value. The default value is
C:/Program Files/IBM/Windows S-TAP.
If WINSTAP_TAP_IP (equivalent to the -taphost command line parameter) is not specified, the GIM_CLIENT_IP value is used.
If WINSTAP_SQLGUARD_IP (equivalent to the -appliance command line parameter) is not specified, the GIM_URL value is used.
c. Once you have specified installation parameters for the S-TAP, apply those parameters to the selected clients by clicking Apply to Client.
4. Install the S-TAP to the selected clients.
a. From the Client Module Parameters screen, click Install/Update.
b. On the Schedule Date dialog, provide a date or time to begin the installation, then click Apply. To begin the installation immediately, use a value of now in the
Schedule Date field.
What to do next
Verify that the S-TAP is communicating with the Guardium system by navigating to Manage > Activity Monitoring > S-TAP Control and reviewing the S-TAP's status and configuration.
All parameters are listed in Windows: Editing the S-TAP configuration parameters.
CAUTION:
Do not modify advanced parameters unless you are an expert user or you have consulted with IBM Technical Support.
Table 1. Parameters applicable to all .NET installers
GIM parameter Description
WINSTAP_INSTALL_DIR This is the install directory. Default install path is C:/Program Files/IBM/Windows S-TAP
WINSTAP_SQLGUARD_IP The SQLGUARD IP. You can set up multiple appliances by specifying this parameter multiple times, each with a unique
value.
Table 3. S-TAP Parameters with Applicable Value ON. These parameters are on by default with their value set to ON. Unless described
otherwise, setting these parameters to any value other than ON turns the parameter off.
GIM parameter Description
NAMED_PIPE_DRIVER_INSTALLED — NAMED_PIPE_DRIVER_INSTALLED=1. Specifies the named pipe used by MS SQL Server for local access. If a named pipe is used, but nothing is specified in this parameter, S-TAP attempts to retrieve the named pipe name from the registry.
KRB_MSSQL_DRIVER_INSTALLED — Deprecated from v10.1.4. It appears in the guard_tap.ini file but does not affect the configuration. This parameter is used to decrypt MSSQL SSL and Kerberos encrypted traffic. Set to 1 or 2 to collect MSSQL encrypted traffic and Kerberos tickets. If set to 1, when S-TAP starts, it pre-collects usernames correlated with SIDs, collecting them for the number of seconds defined in krb_mssql_driver_user_collect_time. When set to 2, the pre-collection is not done and the usernames are correlated at run time.
Table 4. Enterprise Load Balancing parameters
GIM parameter Description
This option specifies the IP address of the central manager or managed unit this S-TAP should use for load balancing.
S-TAP parameters cannot be changed via the interactive installer during upgrade. Use the Guardium UI after the upgrade to
change S-TAP parameters.
If configuring the enterprise load balancer to run on a managed unit, the S-TAP must be at V10.1 or higher.
WINSTAP_INITIAL_BALANCER_TAP_GROUP — Optional. The application group name that this S-TAP belongs to for enterprise load balancing. Attention: Group names with spaces or special characters are not supported.
WINSTAP_INITIAL_BALANCER_MU_GROUP — Optional. The MU group name the app-group will be associated with. Requires a defined LB-APP-GROUP. An MU group must already exist on the Central Manager before it can be used during installation of S-TAP. Attention: Group names with spaces or special characters are not supported.
WINSTAP_LOAD_BALANCER_NUM_MUS — The number of managed units the enterprise load balancer allocates for this S-TAP.
Parent topic: Windows: Installing an S-TAP agent
Review the Windows S-TAP installation requirements at Windows: Prerequisites: installing S-TAP.
Verify that your database server and operating system are supported.
Identify the IP address of the database server or domain controller where you will install the S-TAP, including any virtual IP addresses.
Identify the IP address of the Guardium system that will control the S-TAP.
Verify that the intended S-TAP installation directory is empty or does not exist.
Obtain the S-TAP module from either Fix Central, or your Guardium representative.
Note: Windows S-TAP parameters cannot be changed via the interactive installer during an upgrade. Use the GUI after the upgrade to change Windows S-TAP parameters.
Procedure
1. Log on to the database server using a system administrator account.
2. Copy the S-TAP module to your database server and start the Guardium Windows S-TAP Install Wizard.
Attention: When installing an S-TAP on Windows 2012 or later, you must use administrative privileges. To do this, right-click the installer and choose Run as
Administrator.
3. Read the license agreement on the Guardium License screen. To continue installation, select I accept the terms of the license agreement and click Next.
4. Provide the requested content on the Customer Information screen, then click Next to continue. The default values are appropriate for most installations.
5. Select one of the following installation types, and then click Next to continue:
Typical: a typical installation is appropriate for most users.
Compact: a compact installation assumes that additional features such as Enterprise Load Balancing are not required.
Custom: a custom installation allows you to modify additional S-TAP installation options, such as the software choices, the installation directory, and the user account that runs the Windows S-TAP process.
6. Optionally, enable Enterprise Load Balancing by selecting the Enable Load Balancing checkbox on the Load Balancing Options screen. Click Next to continue.
a. If you enable Enterprise Load Balancing, provide the load balancer IP address in the Load Balancer Host Address field.
b. Click the Advanced Options button to specify any additional Enterprise Load Balancing options. For more information, see Enterprise Load Balancing.
7. Verify the Software Tap Host Address and provide Appliance Address(es) on the Network Addresses screen, then click Next to continue.
The Software Tap Host Address specifies the address of the local machine where the S-TAP is being installed.
The Appliance Address(es) specify the Guardium system addresses that will control the S-TAP. Provide multiple addresses (typically not more than three) on
separate lines to establish failover systems for the S-TAP or when configuring S-TAP load balancing with the participate_in_load_balancing parameter.
Attention: If you do not want the S-TAP service to be enabled after installation, deselect the Start S-Tap Service checkbox. Deselecting the Start S-Tap Service
checkbox also disables the automatic discovery of databases and creation of inspection engines.
The Install Wizard Completed screen appears following a successful installation.
8. Click Finish to close the installer.
What to do next
Verify that the S-TAP is communicating with the Guardium system by navigating to Manage > Activity Monitoring > S-TAP Control and reviewing the S-TAP's status and configuration.
Review the Windows S-TAP installation requirements at Windows: Prerequisites: installing S-TAP.
Verify that your database server and operating system are supported.
Identify the IP address of the database server or domain controller where you will install the S-TAP, including any virtual IP addresses.
Identify the IP address of the Guardium system that will control the S-TAP.
Verify that the intended S-TAP installation directory is empty or does not exist.
Obtain the S-TAP module from either Fix Central, or your Guardium representative.
Procedure
1. Log on to the database server using a system administrator account.
2. Copy the installer to your database server and, using the Windows Command Prompt, navigate to the Windows S-TAP installer directory. For example:
cd c:\Windows-STAP-V10.5.0.89
What to do next
Verify that the S-TAP is communicating with the Guardium system by navigating to Manage > Activity Monitoring > S-TAP Control and reviewing the S-TAP's status and configuration.
In a CLI installation, you install an S-TAP by running the setup.exe executable with the appropriate parameters, as shown in the example below.
Do not use "=" signs to assign values to the parameters. The only time "=" is used is when you want to add a parameter to the [TAP] section of the guard_tap.ini file directly, as typed on the command line.
If you want to add additional parameters that are not specified here but are required in the guard_tap.ini file, you can append them to the [TAP] section by specifying the parameter and value with an = sign, for example:
setup.exe -UNATTENDED -INSTALLPATH "C:/Program Files/IBM/Windows S-TAP" -APPLIANCE 10.0.148.160 -TAPHOST 10.0.146.160 QRW_INSTALLED=0 QRW_DEFAULT_STATE=0
INSTALLPATH (WINSTAP_INSTALL_DIR) — The install directory. The default install path is C:/Program Files/IBM/Windows S-TAP.
NOAUTODISCOVERY — Prevents auto-discovery from running upon install. A value is not required.
APPLIANCE — The SQLGUARD IP. You can set up multiple appliances by specifying this parameter multiple times, each with a unique value.
Table 3. S-TAP Parameters with Applicable Value ON. These parameters are on by default with their value set to ON. Unless described
otherwise, setting these parameters to any value other than ON turns the parameter off.
Command line parameter Description
NMP — Specifies the named pipe used by MS SQL Server for local access. If a named pipe is used, but nothing is specified in this parameter, S-TAP attempts to retrieve the named pipe name from the registry.
MSPLUGIN — Deprecated from v10.1.4. It appears in the guard_tap.ini file but does not affect the configuration. This parameter is used to decrypt MSSQL SSL and Kerberos encrypted traffic. Set to 1 or 2 to collect MSSQL encrypted traffic and Kerberos tickets. If set to 1, when S-TAP starts, it pre-collects usernames correlated with SIDs, collecting them for the number of seconds defined in krb_mssql_driver_user_collect_time. When set to 2, the pre-collection is not done and the usernames are correlated at run time.
Table 4. Enterprise Load Balancing parameters
S-TAP parameters cannot be changed via the interactive installer during upgrade. Use the Guardium UI after the upgrade to change S-TAP parameters.
If configuring the enterprise load balancer to run on a managed unit, the S-TAP must be at V10.1 or higher.
LB-APP-GROUP (WINSTAP_INITIAL_BALANCER_TAP_GROUP) — Optional. The application group name that this S-TAP belongs to for enterprise load balancing. Attention: Group names with spaces or special characters are not supported.
LB-MU-GROUP (WINSTAP_INITIAL_BALANCER_MU_GROUP) — Optional. The MU group name the app-group will be associated with. Requires a defined LB-APP-GROUP. An MU group must already exist on the Central Manager before it can be used during installation of S-TAP. Attention: Group names with spaces or special characters are not supported.
LB-NUM-MUS (WINSTAP_LOAD_BALANCER_NUM_MUS) — The number of managed units the enterprise load balancer allocates for this S-TAP.
Parent topic: Windows: Installing an S-TAP agent
Procedure
1. Install S-TAP on all nodes. If GIM is used, install the GIM client on all nodes first, then install S-TAP on all nodes.
2. Configure the S-TAP parameter STAP_TAP_IP with the public IP configured for the node. (This can be configured through the GIM UI.)
The parameter STAP_ALTERNATE_IPS is not required.
If the Oracle database is encrypted (ASO/SSL), make sure that ORA_DRIVER_INSTALLED=1.
If the Oracle inspection engine is auto-discovered, it should already contain all required parameters including INSTANCE_NAME.
If a prior version of the Windows S-TAP has been installed, an upgrade can be performed from the command line using the setup program.
Procedure
This procedure will remove the installed S-TAP while making sure the configuration file is saved for future use.
Procedure
This procedure will remove the installed S-TAP while making sure the configuration file is saved for future use.
Procedure
Windows: When to restart or reboot the database after S-TAP installation or upgrade
This topic details when to restart or reboot the database server or database instance after S-TAP installation. Restart and reboot requirements are the same for GIM and non-GIM implementations.
Windows S-TAP installation and upgrade does not require a reboot of the database server unless stated otherwise in the release notes or as an exception in this document. If you are not certain about the reboot requirements for the particular version you are using, check with your Technical Support representative.
Reboot database servers only when you need to upgrade the driver.
Some configuration changes require that the S-TAP agent be restarted manually, as indicated in the parameter descriptions.
Sometimes a user cannot make a decision during S-TAP installation, or makes a wrong decision that goes undetected until after the installation process is complete. For instance, a user might forget to type in the IP address, or might use the wrong IP address, when defining a SQL Guard IP. These types of mistakes can be remedied by modifying the S-TAP configuration.
Parameters in the GUI may be safely changed. Parameters that are not in the GUI rarely need changing and should normally be left unmodified; they are for use by
Guardium Technical Support or advanced users.
If you have installed your S-TAP by using the Guardium Installation Manager (GIM), you can update some parameters through the GIM GUI or API.
Procedure
1. Click Manage > Activity Monitoring > S-TAP Control to open S-TAP Control.
2. Perform operations on all S-TAPs in the page.
Refresh: refresh display of S-TAPs.
Add All to Schedule: add all displayed S-TAPs to the S-TAP verification schedule.
Remove All from Schedule: remove all displayed S-TAPs from the S-TAP verification schedule.
Comments: add comments. See Comments
3. Identify the S-TAP to be configured by its IP address or the symbolic host name of the database server on which it is installed. View and perform operations on
individual S-TAPs.
Option Description
Delete: Deleting S-TAPs is useful to clean up your display when you know that an S-TAP has become inactive, or when the Guardium unit is no longer listed as a host in the S-TAP's configuration file. In either of these cases, the S-TAP displays indefinitely with an offline status if you do not delete it. You cannot remove an active S-TAP from the list. Clicking delete does not stop an S-TAP from sending information, nor does it remove the Guardium host from the list of hosts stored in the S-TAP's configuration file.
Refresh: Click Refresh to fetch a copy of the latest S-TAP configuration from the agent. (There is no auto-refresh of the S-TAP
display.)
Opens the S-TAP Commands popup, where you can run various commands on the S-TAP host.
Restart: Restarts the S-TAP. Not usually needed; when it is, it is often easier to kill the process directly on the database server.
Send Command: S-TAP logging
Reinitialize buffer: reset the K-TAP statistics along with deleting the S-TAP buffer
Run Diagnostics: Run the S-TAP diagnostics script (and upload the results to the Guardium system)
Record Replay Log: Records all data to a file on DB server (RECORD) and sends data to collector (REPLAY)
Revoke Ignore: All sessions ignored by a revokable ignore policy will be un-ignored and start capturing the traffic
again for those sessions
Run Database Instance Discovery: Runs the discovery process, once immediately. (If enabled to run automatically,
it runs, by default, every 24 hours.)
Edit S-TAP configuration: Opens the S-TAP configuration window. Parameters that do not appear in the GUI are advanced parameters. Do not modify them unless you are an advanced user or have been instructed to modify them by Guardium Technical Support.
See GUI parameters:
Windows: General parameters
Windows: Configuration Auditing System (CAS) parameters
Windows: Guardium Hosts (SQLGuard) parameters
Windows: Inspection engine parameters
Show S-TAP Event Log: Click to open the S-TAP event log, where you can see events such as connect, disconnect, GIM server configuration, and
so on. This log is very useful for troubleshooting.
Add to Schedule checkbox: Adds the individual S-TAP to the scheduled verification.
Revoke All Ignored Sessions checkbox: A database could be running many sessions, some of which are currently ignored. Clear this option to stop ignoring traffic from that server.
The Guardium Discovery Agent is a software agent automatically installed with the S-TAP package on a database server. The instance discovery agent reports database
instances, listener, and port information to the Guardium system. Discovery does not find and report on every detail of the DB instances on the server.
Newly discovered database instances can be seen in the Discovered Instances report. From this report, datasources and inspection engines can quickly be added to
Guardium using the Actions menu.
If databases on the database server are not operational (started) or are added later, the Discovery Agent can still discover these instances: run the Run Discovery Agent command from the S-TAP Control window (Manage > Activity Monitoring > S-TAP Control, then select Run Database Instance Discovery).
S-TAP discovery can be run manually, but this is not recommended; the main reason to run it manually is for debugging purposes. If a new request comes in from the user interface while a scheduled discovery is running, the new request is ignored.
Note: To avoid cases where S-TAP discovery cannot open the Informix database, it is recommended to start Informix databases by using the full path to the executable.
The S-TAP discovery application parameters should be left at their default values, except by advanced users. The discovery parameters are described in Linux and UNIX systems: Discovery parameters.
software_tap_host: the IP address or hostname of the database server on which the S-TAP is installed.
sqlguard_ip: S-TAP discovery results are sent to this IP. (This is the Guardium system with primary=1 in the SQLGuard parameters.)
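In guard_tap.ini terms, the two values above correspond to entries like the following sketch. The section layout, key names, and host values shown are illustrative assumptions; consult your actual guard_tap.ini for the exact structure.

```
[TAP]
tap_ip=10.0.146.160

[SQLGUARD_0]
sqlguard_ip=10.0.148.160
primary=1
```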
Procedure
1. Navigate to Manage > Activity Monitoring > S-TAP Control.
2. In the row of the S-TAP, open the S-TAP Configuration window.
3. Scroll to the bottom of the inspection engines, and click the icon next to Add Inspection Engine.
4. Select the protocol and enter the port range. The window refreshes with the relevant parameters, some with their default values.
5. Configure all required parameters, and click Add. If you are missing parameters, the system informs you what is missing.
Verification checks sniffer operation and communication between the Guardium system and the inspection engines. You can enable verification for all S-TAP clients on
your system, or individual S-TAP clients, or individual inspection engines.
DB2
DB2 Exit (DB2 version 10)
FTP
Kerberos
MySQL
Oracle
PostgreSQL
Sybase
Windows File Share
exclude IE
MSSQL
named pipes
Standard verification
Checks the sniffer operation and the communication between the S-TAP and the inspection engine. It submits an invalid login request and verifies that the appropriate error message is returned.
Advanced verification
Use advanced verification to avoid failed login requests and to manage individual IEs. To avoid failed login requests, you must identify or create a datasource definition associated with the target database. The datasource definition includes credentials, which the verification process uses to log in to the database. It then submits a request to retrieve data from a nonexistent table in order to generate an error message.
For both types of verification requests, the results are displayed in a new dialog that provides information about the tests that were performed and recommended actions
for tests that failed.
Before connecting to the database, the verification process checks whether the sniffer process is running on the Guardium system. The sniffer is responsible for
communicating with each S-TAP and processing the data that is received. If the sniffer is not running, responses from the S-TAP are not recognized.
The verification process attempts to log in to your database through the S-TAP client with an erroneous user ID and password, to verify that this attempt is recognized and communicated to the Guardium system.
Next the verification process checks whether it can connect to the selected inspection engine on the database server. It expects to receive a response that indicates a
failed login. If a different response is received, you might have to investigate further.
View the verification results in the S-TAP Verification page (Manage > Reports > Activity Monitoring > S-TAP Verification). Failed checks are shown first, with
recommendations for next steps. Checks that succeeded are shown in a collapsed section at the end of the list. In some situations, it might be useful to review the
successful checks in order to choose among possible next steps.
Procedure
1. Access Manage > Activity Monitoring > S-TAP Control.
2. Use these options:
Add All to Schedule: add all inspection engines for all displayed S-TAPs to verification.
Remove All from Schedule: remove all inspection engines for all displayed S-TAPs from verification.
Add to Schedule: add all inspection engines of the selected S-TAP client to the schedule.
If an S-TAP does not have the option All Can Control enabled, you can only change its status if your Guardium system is the primary system for this S-TAP.
3. Click Refresh.
4. To verify now, go to Manage > Activity Monitoring > S-TAP Verification Scheduler and click Run Once Now.
Procedure
1. Access Manage > System View > S-TAP Status Monitor.
2. Click anywhere in the row of the S-TAP.
The window refreshes with the individual inspection engines of this host.
3. To verify now, select one or more inspection engines and click Verify.
4. Configure advanced verification.
a. Click one inspection engine, and click Advanced Verify.
b. Optionally, under Datasource, select Show only matching S-TAP host or select a name from the Name drop-down list to search for a specific inspection
engine.
c. Click Close.
5. To add inspection engines to, or remove them from, verification:
a. Select one or more inspection engines.
b. Click Add to Schedule or Remove from Schedule
Once a schedule is defined, you can click the Pause button to temporarily stop the verification process while keeping it active. Use the Run Once Now button to run the
verification once in real-time.
Procedure
1. Click Manage > Activity Monitoring > S-TAP Verification Scheduler to open the S-TAP Verification Scheduler.
2. In the S-TAP Verification Scheduler portion of the page, click Modify Schedule.
3. In the Schedule Definition dialog, use the drop-down lists and check boxes to schedule when verification runs. This schedule is applied to all S-TAPs that are
scheduled for verification.
4. Click Save to save your changes.
Each load balancing model is described here, along with its specific parameter requirements.
Note: This topic describes S-TAP load balancing, not Enterprise Load Balancing.
Failover
S-TAP sends traffic to one collector (the primary) and fails over to a secondary as needed. The S-TAP agents are configured with a primary and at least one secondary collector IP. If the S-TAP agent cannot send traffic to the primary collector, it automatically fails over to the secondary. It continues to send data to the secondary host until the secondary host becomes unavailable, the primary host becomes available again, or the S-TAP is restarted (at which point it attempts to connect to its primary host first). If the secondary host becomes unavailable, the S-TAP fails over to another secondary if one is defined. When the primary becomes available again, the S-TAP fails back from the secondary Guardium host to the primary Guardium host.
It is recommended to set up a primary and up to two secondary collectors. You can either define one collector as a standby failover collector only, or define several failover collectors. When using one standby failover, one standby collector is usually sufficient for 4-5 collectors. When using several failover collectors, each one should run at a maximum of 50% capacity, so that there are always resources for additional load. Choose the setup that works best with your architecture, database, and data center layout.
The S-TAP restarts each time configuration changes are applied from the active host.
In the S-TAP Control window, Details section: set Load Balancing to 0; In the Guardium Hosts section: add at least one secondary Guardium Host.
Additional failover configuration should be left at the default values, except by advanced users.
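As a sketch of what the resulting failover configuration might look like in guard_tap.ini, assuming one primary and two secondary collectors (the section names, addresses, and primary numbering are illustrative; the equivalent GUI setting is the Guardium Hosts list with Load Balancing set to 0):

```
[SQLGUARD_0]
sqlguard_ip=10.0.148.160
primary=1

[SQLGUARD_1]
sqlguard_ip=10.0.148.161
primary=2

[SQLGUARD_2]
sqlguard_ip=10.0.148.162
primary=3
```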
Before designating a Guardium system as a secondary host for an S-TAP, verify these items.
The Guardium system must have connectivity to the database server where S-TAP is installed. When multiple Guardium systems are used, they are often attached to disjoint branches of the network.
The Guardium system must not have a security policy that will ignore session data from the database server where S-TAP is installed. In many cases, a Guardium®
security policy is built to focus on a narrow subset of the observable database traffic, ignoring all other sessions. Either make sure that the secondary host will not
ignore session data from S-TAP or modify the security policy on the Guardium system as necessary.
Load balancing
This configuration balances traffic from one database onto multiple collectors. This option is useful when you must monitor all traffic (comprehensive monitoring) of an active database. (Note that for outlier detection, the collectors must be under the same aggregator and central manager so that the aggregator can process all related data.) When the generated traffic is large and you need to house the data online on a collector for an extended period, this method might be your best choice because it performs session-based load balancing across multiple collectors. An S-TAP can be configured in this manner with up to 10 collectors.
In the S-TAP Control window, Details section: set Load Balancing to 1 for load balancing.
Grid
With Grid, the S-TAP communicates with the collector through a load balancer, such as F5 or Cisco. The S-TAP agent is configured to send traffic to the load balancer. The load balancer forwards the S-TAP traffic to one of the collectors in the pool. You can also configure failover between load balancers for continuous monitoring in case a load balancer fails.
S-TAPs in the F5 environment upload their log files and the results of running diagnostics (all files from the ..\Logs folder except memory dumps) to the active collector and to the central manager (if one exists), to the location /var/IBM/Guardium/log/stap_diagnostic/
In the S-TAP Control window, Details section: set Load Balancing to 3 for the grid model.
In addition, set:
Redundancy
In redundancy, the S-TAP communicates its entire payload to multiple collectors. The S-TAP is configured with more than one collector (often only two) and communicates identical content to each. This option provides full redundancy of the same logged data across multiple collectors. It can also be used for logging data and alerting on activity at different levels of granularity.
In the S-TAP Control window, Details section: set Load Balancing to 2 for redundancy.
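As a sketch, a redundancy configuration in guard_tap.ini might look like this (hypothetical host names; verify section placement against your S-TAP version):

```ini
[TAP]
; 2 = redundancy: send the full payload to every listed collector
PARTICIPATE_IN_LOAD_BALANCING=2

[SQLGuard]
SQLGUARD_IP=collector1.example.com

[SQLGuard_1]
SQLGUARD_IP=collector2.example.com
```

Both collectors then receive identical logged data, at the cost of doubled network and storage load.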
S-TAPs can be configured to connect only to a specific group of machines that authenticate with a given certificate or set of certificates. These certificates can either be generated locally on the Guardium system and sent to the Certificate Authority (CA) for signing, or created at the CA and installed whole on the Guardium system.
Procedure
1. Log into your Guardium system with CLI.
2. Enter: cli> create csr sniffer
3. Enter the requested data.
4. Copy from the -----BEGIN CERTIFICATE REQUEST----- to the -----END CERTIFICATE REQUEST----- into a file and send this to your CA for signing.
The CA signs the certificate and sends you back two files: the signed public certificate and the public certificate of your CA.
Have these files handy to either import (via scp, ftp, and so on) to the Guardium system or to copy-paste into the CLI interface on the Guardium system. When you store the certificate, the system asks you to confirm that you want to store it, and stores it when you confirm.
Procedure
1. Log in to the Guardium system via CLI.
2. Store the certificate by entering: cli> store certificate keystore [import | console]. With import, you take the saved file; with console, you copy and paste the contents of the file into your console interface. It asks for the password that the file was saved with. Either you provided this password to the CA for creation of the certificate or, more likely, the CA provided you with a password when they sent your files.
You need the CN of the cert installed on the Guardium system and the public-key for the CA that signed the certificate on the Guardium system. You also might want a
Certificate Revocation list signed by the same CA that signed the Guardium system cert, but it's not necessary.
Procedure
1. Copy the CA public certificate (and the CRL, if wanted) that the CA sent you to a directory on the S-TAP host. Take note of this directory.
2. Set guardium_ca_path=[path-to-CA.pem]
3. Set sqlguard_cert_cn=[the full CN or partial CN (using * as a wildcard) of the Guardium system]
4. If you want to use a certificate revocation list at this time, set guardium_crl_path=[path-to-crl.crl] It should look like:
guardium_ca_path=/var/tmp/pki/Victoria_QA_CA.pem
sqlguard_cert_cn=sample1_qa.victoria
guardium_crl_path=/var/tmp/pki/Victoria_QA_CA.crl
5. Set tls=1.
6. Restart the S-TAP. You are now connected using OpenSSL.
Limitations:
Procedure
1. For each instance, create a new folder within the DB2 SQLLIB folder: $DB2PATH\security\plugin\commexit\instance_name. For example: C:\Program Files\IBM\SQLLIB\security\plugin\commexit\DB2_01
2. Copy the corresponding DLLs from the S-TAP installation directory into the created directories:
For 32-bit DB2:
db2fexitx86.dll
db2exitx86.dll
For 64-bit DB2:
db2exitx64.dll
db2fexitx64.dll
3. Stop the DB2 instance(s), and issue the following command:
For 32-bit: UPDATE DBM CFG USING COMM_EXIT_LIST db2fexitx86
For 64-bit: UPDATE DBM CFG USING COMM_EXIT_LIST db2fexitx64
4. Start the DB2 instances.
5. Add an inspection engine for DB2 Exit with protocol DB2 Exit. Navigate to Manage > Activity Monitoring > S-TAP Control. See the parameter descriptions in Windows: Inspection engine parameters. You can also modify the guard_tap.ini, but it's much easier to use the GUI, since it fills in some of the information automatically and does some validation. If you modify the guard_tap.ini directly, add a section like:
[DB_DB2_EXIT1]
DB_TYPE=DB2_EXIT
INSTANCE_NAME=Service_name
The service name is not the instance name. You can determine the service name by using the db2tap utility in the S-TAP installation folder, or from the control panel. Set the instance name to the portion of the service name that follows the second dash ( - ) delimiter. For example, if the service name in the control panel is DB2 - DB2COPY1 - DB2-01-0, set INSTANCE_NAME to DB2-01-0.
6. To stop using the feature and stop DB2, issue the following command and then restart the DB2: db2 UPDATE DBM CFG USING COMM_EXIT_LIST NULL
Note: Parameters in the GUI can be safely changed. Parameters that are not in the GUI are advanced and rarely need changing. They should normally be left unmodified; they are for use by Guardium support or advanced users.
CAUTION:
Do not modify advanced parameters unless you are an expert user or you have consulted with IBM Technical Support.
You can modify some parameters in the GUI. See Windows: Configure S-TAP from the GUI.
You can set any [TAP] parameter in the Setup by Client page, in the Choose parameters ribbon, by using the command WINSTAP_CMD_LINE with the syntax parameter=value; the parameter is then added to or updated in the guard_tap.ini.
CAUTION:
There is no validation of the input when using the command WINSTAP_CMD_LINE. Use this command carefully. Do not modify advanced parameters unless you are an
expert user or you have consulted with IBM Technical Support.
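For illustration only, a WINSTAP_CMD_LINE entry in the Choose parameters ribbon might look like the following. TAP_MIN_HEARTBEAT_INTERVAL is an existing [TAP] parameter; the value 60 is purely an example, and since no validation is performed, double-check any value you enter:

```ini
; Value entered for WINSTAP_CMD_LINE in the Setup by Client page
WINSTAP_CMD_LINE=TAP_MIN_HEARTBEAT_INTERVAL=60
```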
If it is necessary to modify the configuration file directly on the database server, follow the procedure described in this section.
The S-TAP must be restarted after you modify the guard_tap.ini. If you use GIM, it restarts the S-TAP automatically.
CAUTION:
Parameters must be added to their relevant section: [Version], [TAP], [SQLGuard], [DB_<name>].
Primary (PRIMARY): Indicates the primary Guardium system for this S-TAP. In guard_tap.ini: 0=secondary, 1=primary. In the GUI, a checkmark indicates the primary host.
TAP_GUARD_TCP_PORT (default 9500): Read only. Port used by the S-TAP to connect to the Guardium system.
Guardium Host (GIM: WINSTAP_SQLGUARD_IP; ini: SQLGUARD_IP; default NULL): IP address or hostname of the Guardium system that acts as the host for the S-TAP. You can define multiple hosts by adding [SQLGuard_1], [SQLGuard_2], and so on.
Parent topic: Windows: Editing the S-TAP configuration parameters
These parameters are stored in the [VERSION] section of the S-TAP properties file.
Table 1. S-TAP configuration parameters in the [VERSION] section
Version (TAP_VERSION): Read only. The version of S-TAP installed on the server.
S-TAP Host (TAP_IP): Read only. Used by the file system monitoring service instead of the SOFTWARE_TAP_HOST parameter. Both parameters should have the same value.
All can control (GIM: WSTAP_ALL_CAN_CONTROL; ini: ALL_CAN_CONTROL; default 0): 0=S-TAP can be controlled only from the primary Guardium system. 1=S-TAP can be controlled from any Guardium system.
Load balancing (GIM: WINSTAP_PARTICIPATE_IN_LOAD_BALANCING; ini: PARTICIPATE_IN_LOAD_BALANCING; default 0): Controls S-TAP load balancing (not enterprise load balancing) to Guardium systems:
0: No load balancing.
1: Load balancing. Traffic is balanced between the primary and secondary servers, defined in the SQLGuard section.
2: Redundancy. Fully mirrored S-TAP sends all traffic to all primary and secondary servers, defined in the SQLGuard section.
3: Hardware load balancing. Guardium uses a load balancer such as F5 or Cisco. S-TAP sends the traffic to the load balancer, which forwards it to one of the collectors in the pool.
Use the PRIMARY parameter in the SQLGuard section to specify primary, secondary, and so on. If this parameter is set to 0, and you have more than one Guardium system monitoring traffic, then the non-primary Guardium systems are available for failover.
TLS Use (USE_TLS; default 0): 1=use TLS to encrypt traffic between the agent and the Guardium system. 0=do not encrypt; warning: the traffic between the agent and the Guardium system is then in clear text. Guardium recommends encrypting network traffic between the S-TAP and the collector whenever possible; disable this only in cases where performance is a higher priority than security.
TLS Failover (FAILOVER_TLS; default 1): 1=if a TLS connection is not possible for any reason, fail over to a non-secure connection. 0=use only secure connections.
ALTERNATE_IPS: Comma-separated list of alternate or virtual IP addresses used to connect to this database server. This is used only when your server has multiple network cards with multiple IPs, or virtual IPs. S-TAP only monitors traffic when the destination IP matches either the S-TAP Host IP defined for this S-TAP or one of the alternate IPs listed here, so it's recommended that you list all virtual IPs here.
DB2_TAP_INSTALLED (default 0): Set to 1 for sniffing DB2 shared memory traffic. Starts the DB2 TAP Service when set to 1.
DB2_EXIT_DRIVER_INSTALLED: DB2 integration with S-TAP; set to 1 to enable DB2 Exit library integration. 1) It lets S-TAP capture all DB2 traffic directly from the DB2 engine; note that this applies only to specific DB2 releases, 10.1 and onwards. 2) When using this method, Firewall and Scrub/Redact functionality are not supported, and stored procedures are not captured. 3) It picks up all DB2 traffic, regardless of encryption or network protocol. 4) This solution simplifies the S-TAP configuration for customers that deploy this version of DB2, and gives them native DB2 support.
DB2_SHMEM_DRIVER_LEVEL: Deprecated.
DC_COLLECT_FREQ (default 24): Specifies the frequency of collection in hours. Minimum is 1, maximum is 24. GuardiumDC is a service that collects updates of user accounts (SIDs and usernames) from the primary domain controller and then signals the changes to Guardium_S-TAP to update the S-TAP internal SID/username map. If S-TAP cannot find a resolved SID in the map, it tries to get it from the primary domain controller, in which case S-TAP logs a message into the debug log (level 7): The account name *** has been retrieved for SID ***.
DOMAIN_CONTROLLER: The name of the specific controller from which the SID/usernames map should be read.
HIGH_RESOLUTION_TIMER (default 0): 0: send time stamps in milliseconds. 1: send time stamps in microseconds, but use the milliseconds system timer (to reduce the system performance hit, milliseconds are multiplied by 1000). 2: send time stamps in microseconds, using the high-resolution Windows timer (most accurate). For cases 1 and 2, the S-TAP indicates to the Guardium system that microseconds are sent by setting the reserved byte in PacketData to 1.
BUFFER_FILE_SIZE (default 50): Advanced. The initial size of the buffer. The range is 5 to 1000, in MB.
BUFFER_FILE_NAME: The full path of the memory-mapped file if BUFFER_MMAP_FILE=1. Default is the WSTAP working folder/StapBuffer/STAP_buffer.dtx.
SOFTWARE_TAP_HOST: The database server host on which S-TAP is installed. It can be an IP address or a name recognized by the DNS server. There is no default. An invalidly configured SOFTWARE_TAP_HOST is automatically replaced with a valid local IP.
TCP_ALIVE_MESSAGE (default 1): This parameter is deprecated since Guardium v10.x. Guardium collectors no longer send UDP alive messages.
DISABLE_SHARED_MEMORY_IF_TURNED_ON (default 0)
registration attempts with a Guardium system if a previous attempt was not successful
S-TAP checks for new logs available from Program Files\IBM\Windows S-TAP\Logs for uploading onto the collector
RECV_LEVEL (default 0): Advanced.
Messages: remote (REMOTE_MESSAGES; default 1): 1=Send messages to the active Guardium system. 0=Do not send messages.
SNIFFED_UDP_PORTS (default 88): Deprecated.
SYNCH_FLAG (default 1): Read only. Deprecated in v10.0. Indicates whether parameters are synchronized with the UI.
TAP_DBSERVER_NAMES
TAP_MIN_HEARTBEAT_INTERVAL (default 30): Maximum time the S-TAP attempts to write to the primary Guardium system buffer before attempting to write to the secondary Guardium buffer. Default is 30 sec, meaning that by default it tries to write at least 5*60/30 times before failover (together with TAP_MIN_TIME_BEFOREFAILOVER).
TAP_MIN_TIME_BEFOREFAILOVER (default 5): The time interval, in minutes, after which the S-TAP switches to a secondary Guardium system if it cannot connect to its primary Guardium system, or if it can connect to its primary Guardium system but cannot write to its buffer.
TCP_BUFFER_SIZE (default 60000): Advanced. Minimum number of bytes to collect before sending a message to the Guardium system.
SQLGUARD_CERT_CN (default NULL): The common name to expect from the Sqlguard certificate.
GUARDIUM_CRL_PATH (default NULL): The path to the Certificate Revocation List file or directory.
TAP_FAILOVER_SESSION_QUIESCE (default 240): The number of seconds after failover, after which unused sessions in the failover list from the previous active servers can be removed from the current active server.
TAP_FAILOVER_SESSION_SIZE (default 8192): Size, in MB, of the failover session list. 0=no failover sessions are saved.
DB_IGNORE_RESPONSE: Ignore responses at the inspection level. Use this function to ignore all database responses at the S-TAP level, without sending anything to the Guardium system. In environments where you are only interested in client transactions, this function saves bandwidth and processing time for the S-TAP and the Guardium system. Use this function for an easier configuration for ignoring unwanted responses from the database, without loading the network. Database types can be listed comma-separated, or ALL can be specified to ignore responses from all types of databases, for example, DB_IGNORE_RESPONSE=ALL or DB_IGNORE_RESPONSE=MSSQL,DB2. Supported DB types: ALL, MSSQL_NP, MSSQL, MYSQL, TRD, PGRS, MSSYB, ORACLE, DB2, DB2_EXIT, INFORMIX, KERBEROS, FTP, CIFS.
DB_IGNORE_RESPONSE_FILTER (default 0.0.0.0/0.0.0.0): Comma-separated list of IP/MASKs to be response-ignored. Any DB responses of the type specified by DB_IGNORE_RESPONSE to the specified IP/MASKs are ignored.
UPLOAD_FEATURE (default 1): Controls uploading of all log files from Program Files\IBM\Windows S-TAP\Logs onto the collector.
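As a sketch of the network-related [TAP] settings described above (ALTERNATE_IPS and the response-ignore parameters), with hypothetical addresses:

```ini
[TAP]
; Monitor traffic addressed to either virtual IP as well as the host IP
ALTERNATE_IPS=10.0.0.11,10.0.0.12
; Drop MSSQL and DB2 responses at the S-TAP, before they reach the collector...
DB_IGNORE_RESPONSE=MSSQL,DB2
; ...but only responses going to clients in this subnet
DB_IGNORE_RESPONSE_FILTER=192.168.10.0/255.255.255.0
```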
Parent topic: Windows: Editing the S-TAP configuration parameters
These parameters are stored in the individual [DB_<name>] inspection engine section of the S-TAP properties file, with the name of a data repository. There can be
multiple sections in a properties file, each describing one inspection engine used by this S-TAP.
GUI label (guard_tap.ini name; default value): Description
Instance Name (INSTANCE_NAME): The name of the database instance on this server. Required when MS SQL Server is using encryption; when MS SQL Server is using Kerberos authentication; for DB2 Exit traffic collection; and for DB2 SHM traffic. (Default is MSSQLSERVER.)
Port range (PORT_RANGE_START): Starting port range specific to the database instance. Together with TAP_DB_PORT_MAX, defines the range of ports monitored for this database instance. There is usually only a single port in the range. For a Kerberos inspection engine, set the start and end values to 88-88. If a range is used, do not include extra ports in the range, as this could result in excessive resource consumption while the S-TAP attempts to analyze unwanted traffic.
Named Pipe (NAMED_PIPE; default sql\query,sqllocal,\MSSQLSERVER): Specifies the named pipe used by MS SQL Server for local access. If a named pipe is used, but nothing is specified in this parameter, S-TAP attempts to retrieve the named pipe name from the registry.
Client Ip/Mask (NETWORKS): Identifies the clients to be monitored, using a list of addresses in IP address/mask format: n.n.n.n/m.m.m.m. If an improper IP address/mask is entered, the S-TAP does not start. Valid values:
null=select all clients
127.0.0.1/255.255.255.255=local traffic only
Client Ip/Mask (networks) and Exclude Client Ip/Mask (exclude networks) cannot be specified simultaneously. If the IP address is the same as the IP address for the database server, and a mask of 255.255.255.255 is used, only local traffic is monitored. An address/mask value of 1.1.1.1/0.0.0.0 monitors all clients.
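For illustration, a Client Ip/Mask entry restricting monitoring to a single subnet might look like the following; the inspection engine section name and subnet are hypothetical:

```ini
[DB_MSSQL1]
; Monitor only clients in 10.20.0.0/16; traffic from all other clients is ignored
NETWORKS=10.20.0.0/255.255.0.0
```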
Exclude Client Ip/Mask (EXCLUDE_NETWORKS): A list of client IP addresses and corresponding masks that are excluded from monitoring. This option allows you to configure the S-TAP to monitor all clients, except for a certain client or subnet (or a collection of these). Client Ip/Mask (networks) and Exclude Client Ip/Mask (exclude networks) cannot be specified simultaneously.
Process Name (TAP_DB_PROCESS_NAMES): Database service executables that are to be monitored. For example, a DB2 IE would be TAP_DB_PROCESS_NAMES=DB2SYSCS.EXE. For Oracle or MS SQL Server, used only when named pipes are in use. For Oracle, the list has two entries: oracle.exe,tnslsnr.exe. For MS SQL Server, the list is just one entry: sqlservr.exe.
Identifier (TAP_IDENTIFIER; default NULL): Optional. Used to distinguish inspection engines from one another. If you do not provide a value for this field, Guardium auto-populates the field with a unique name using the database type and GUI display sequence number.
DB2 Shared Mem. Adjust. (DB2_FIX_PACK_ADJUSTMENT; default 80): Required when DB2 is selected as the database type and shared memory connections are monitored. The offset to the server's portion of the shared memory area, that is, to the beginning of the DB2 shared memory packet. It depends on the DB2 version: 32 in pre-8.2.1, and 80 in 8.2.1 and higher.
DB2_LOG_SIZE: Advanced. The maximum file size, in MB, that the functional DLL can keep buffered before it starts throwing away log entries.
DB2 Sh. Mem. Client Pos. (DB2_CLIENT_OFFSET; default 61440): The offset to the client's portion of the shared memory area. Required when DB2 is selected as the database type and shared memory connections are monitored. The client offset is calculated by taking the DB2 database configuration value of ASLHEAPSZ and multiplying it by 4096. To get the value of ASLHEAPSZ, execute the DB2 command db2 get dbm cfg and look for ASLHEAPSZ. This value is typically 15, which yields the 61440 default. If it's not 15, multiply the value by 4096 to get the appropriate client offset.
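The ASLHEAPSZ-to-offset arithmetic above can be sketched in a few lines (Python here, purely illustrative):

```python
def db2_client_offset(aslheapsz: int) -> int:
    """Compute DB2_CLIENT_OFFSET from the DB2 ASLHEAPSZ setting.

    Per the rule above: offset = ASLHEAPSZ * 4096 (4 KB pages).
    """
    return aslheapsz * 4096

# The typical ASLHEAPSZ of 15 yields the documented default of 61440.
print(db2_client_offset(15))  # -> 61440
```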
DB2 Shared Mem. Size (DB2_SHMEM_SIZE; default 131072): DB2 shared memory segment size. Required when DB2 is selected as the database type and shared memory connections are monitored.
Parent topic: Windows: Editing the S-TAP configuration parameters
These parameters are stored in the [TAP] section of the S-TAP properties file.
CAUTION:
These are advanced parameters and are usually modified by IBM Technical Support only.
GIM name / guard_tap.ini name (default value): Description
WSTAP_FIREWALL_TIMEOUT / FIREWALL_TIMEOUT (default 10): Time, in seconds, to wait for a verdict from the Guardium system before the firewall times out. See the FIREWALL_FAIL_CLOSE value to know whether the connection is then blocked or allowed. The value can be any integer.
WSTAP_FAIL_CLOSE / FIREWALL_FAIL_CLOSE (default 0): If the verdict does not come back from the Guardium system and FIREWALL_TIMEOUT expires: if FIREWALL_FAIL_CLOSE=0 the connection goes through; if FIREWALL_FAIL_CLOSE=1 the connection is blocked.
WSTAP_DEFAULT_STATE / FIREWALL_DEFAULT_STATE (default 0): 0: An event triggers traffic in a session to be watched and checked for firewall policy violations. 1: All traffic is watched by default for firewall policy violations.
WSTAP_FORCE_WATCH / FIREWALL_FORCE_WATCH (default NULL): When the firewall feature is enabled and FIREWALL_DEFAULT_STATE is 0, the session is watched automatically when its client IP matches one of this list of IP/MASK values. The list is comma-separated, for example, 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2
WSTAP_FORCE_UNWATCH / FIREWALL_FORCE_UNWATCH (default NULL): When the firewall feature is enabled and FIREWALL_DEFAULT_STATE is 1, the session is unwatched automatically when its client IP matches one of this list of IP/MASK values. The list is comma-separated, for example, 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2
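Putting these parameters together, a fail-closed firewall configuration could be sketched as follows (the exempted subnet is hypothetical):

```ini
[TAP]
; Watch all sessions by default and block when no verdict arrives in time
FIREWALL_DEFAULT_STATE=1
FIREWALL_TIMEOUT=10
FIREWALL_FAIL_CLOSE=1
; Exempt a trusted application subnet from watching
FIREWALL_FORCE_UNWATCH=10.30.0.0/255.255.0.0
```

Fail-closed (FIREWALL_FAIL_CLOSE=1) favors security over availability: any verdict timeout blocks the connection.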
Parent topic: Windows: Editing the S-TAP configuration parameters
These parameters are stored in the [TAP] section of the S-TAP properties file.
CAUTION:
These are advanced parameters and are usually modified by IBM Technical Support only.
GIM name / guard_tap.ini name (default value): Description
WINSTAP_QRW_INSTALLED / QUERY_REWRITE_INSTALLED (default 0): Enable or disable the Dynamic Data Masking for Databases feature. When set to 0, all other parameters in this group are ignored. 0=No, 1=Yes.
WINSTAP_QRW_DEFAULT_STATE / QUERY_REWRITE_DEFAULT_STATE (default 0): Sets the query rewrite activation trigger. Must be 0 if FIREWALL_DEFAULT_STATE=1. 0=QRW activated per session when triggered by a rule in the installed policy. 1=QRW activated for every session regardless of the installed policy.
WINSTAP_QRW_FORCE_WATCH / QUERY_REWRITE_FORCE_WATCH (default NULL): Comma-separated list of client IP/MASKs (for example, 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2) to watch automatically. Valid when QUERY_REWRITE_DEFAULT_STATE is 0. Cannot be configured to the same range as FIREWALL_FORCE_WATCH.
WINSTAP_QRW_FORCE_UNWATCH / QUERY_REWRITE_FORCE_UNWATCH (default NULL): Comma-separated list of client IP/MASKs (for example, 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2) to exclude from watching. Valid when FIREWALL_DEFAULT_STATE is 1. Cannot be configured to the same range as FIREWALL_FORCE_UNWATCH.
WINSTAP_QUERY_REWRITE_FAIL_CLOSE / QUERY_REWRITE_FAIL_CLOSE (default 8): If the verdict does not come back from the Guardium system and QUERY_REWRITE_TIMEOUT expires: if QUERY_REWRITE_FAIL_CLOSE=0 the query rewrite operation proceeds; if QUERY_REWRITE_FAIL_CLOSE=1 the connection is terminated.
WINSTAP_QUERY_REWRITE_TIMEOUT / QUERY_REWRITE_TIMEOUT (default 10): Time, in seconds, to wait for a query rewrite verdict from the Guardium system; when it expires, QUERY_REWRITE_FAIL_CLOSE determines whether the operation proceeds or the connection is terminated.
Parent topic: Windows: Editing the S-TAP configuration parameters
These parameters are stored in the [TAP] section of the S-TAP properties file.
CAUTION:
These are advanced parameters and are usually modified by IBM Technical Support only.
GIM name / guard_tap.ini name (default value): Description
WINSTAP_DISCOVERY_INTERVAL / DISCOVERY_INTERVAL (default 24): The time interval, in hours, at which auto-discovery runs. Set to 0 to disable.
Parent topic: Windows: Editing the S-TAP configuration parameters
CAUTION:
These are advanced parameters and are usually modified by IBM Technical Support only.
These parameters are stored in the [DEBUG_OPTIONS] section of the S-TAP properties file:
guard_tap.ini Default value Description
DEBUG_MAX_FILE_SIZE (default 200)
DEBUGLEVEL (default 0): Level of debug messages to store. Leave at 0 unless directed by IBM Technical Support.
0: Only critical error information. From v10.1.4: two "startup" debug logs are saved in bin\..\Logs. Filename syntax: startup_hostname_timestamp.new and startup_hostname_timestamp.old. Files from bin\..\Logs are uploaded automatically if UPLOAD_FEATURE is on.
1: All previous messages plus repeatable critical error information. From v10.1.4: two "normal" debug logs are saved in bin\StapBuffer. Filename syntax: stap_hostname_timestamp.new and stap_hostname_timestamp.old. Files from bin\StapBuffer are not uploaded.
2: Not used.
3: All messages from level 1, plus brief information about packets sent to a Guardium system.
4: All messages from level 3, plus the local sniffing log.
5: All messages from level 4, plus the network sniffing log.
6: All messages from level 5, plus the heartbeat receiving log.
7: All messages from level 6, plus miscellaneous debugging information.
DUMP_FILE_MODE (default 0): Enables capture of dump files if S-TAP crashes. When the parameter is not zero, a new dump file is opened every time the S-TAP starts; it is empty if there is no crash.
DEBUG_FILE_MODE (default <install folder>/StapBuffer/stap.txt): Deprecated in v10.1.4. Location of the S-TAP debug file. Default until 10.1.4 is <install folder>/StapBuffer/stap.txt.
v10.1.4 and higher: If DEBUGLEVEL > 0, then the log from the previous S-TAP session (if it exists) is saved as %STAP_DIR%\Bin\StapBuffer\stap_%HOSTNAME%%YY-MM-DD%%HHMMDD%.old and the new log is created as %STAP_DIR%\Bin\StapBuffer\stap_%HOSTNAME%%YY-MM-DD%%HHMMDD%.new. In addition, start-up logs containing just messages related to S-TAP start-up are always generated in %STAP_DIR%\Logs: startup_%HOSTNAME%%YY-MM-DD%%HHMMDD%.old and startup_%HOSTNAME%%YY-MM-DD%%HHMMDD%.new.
KERNEL_DEBUG_LEVEL (default 0)
WER_DUMP (default 1)
WER_DUMP_FOLDER (default None): If the parameter is not set, the following value is used: if the S-TAP installation folder is rooted anywhere other than C:\Program Files (x86)\..., the WER dump folder is set to the full path ending in ...\Windows S-TAP\Bin\..\Logs; if the S-TAP installation folder contains the text "(x86)", the dump folder is set to C:\Guardium\Dumps, and that path is created by the S-TAP process.
For example, if Windows S-TAP is installed to C:\PROGRAM FILES\IBM\WINDOWS S-TAP and uses the default values for WER_DUMP_FOLDER and WER_DUMP_COUNT, Windows S-TAP uses the following registry settings, and a Windows S-TAP crash dump is generated via the Windows Error Reporting (WER) facility when it crashes:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\guardium_stapr.exe
CAS_SERVER_PORT (default 16017): The port for communication with the CAS agent. 16017 for unencrypted; 16019 for encrypted.
Parent topic: Windows: Editing the S-TAP configuration parameters
CAUTION:
These are advanced parameters and are usually modified by IBM Technical Support only.
guard_tap.ini name (default value): Description
WFP_DRIVER_INSTALLED (default 1): The WFP driver is used instead of LHMON. This option is supported on Windows 2008 SP2 or newer, because Windows supports the WFP API from that version. This parameter is ignored when TCP_DRIVER_INSTALLED=1.
KRB_MSSQL_DRIVER_INSTALLED 2 Deprecated from v10.1.4. It appears in the guard_tap.ini file but it does not affect the
configuration.
This parameter is used to decrypt MSSQL SSL and Kerberos encrypted traffic. Set to 1 or 2 to
collect MSSQL encrypted traffic and Kerberos tickets. If set to 1, when STAP starts, it pre-collects
usernames correlated with SIDs, collecting them for the number of seconds defined in
krb_mssql_driver_user_collect_time. When set to 2, the pre-collection isn’t done and the
usernames are correlated at run time.
In V10.1, this parameter is used to enable/disable Correlation. If it is set to non-zero value, use
Correlation. If zero, don't use Correlation. The default is non zero value.
KRB_MSSQL_DRIVER_LEVEL 0 This parameter is deprecated from v10.1.4. Controls thread priorities of different sniffers.
KRB_MSSQL_DRIVER_NONBLOCKING 0 This parameter is deprecated from v10.1.4. It appears in the guard_tap.ini file but it does not
affect the configuration.1=get domain user names from the domain controller in a separate thread.
In this case the first packet with the new user does not resolve the user SID into domain user
name.
KRB_MSSQL_DRIVER_USER_COLLECT_TIME 30 This parameter is deprecated from v10.1.4. Use the Correlation driver introduced in 10.1. Time limit for collecting SIDs at S-TAP startup.
CORRELATION_TIMEOUT 5 The number of seconds the WFP and NMP sniffers wait for correlation to occur before giving up
and resuming the flow of traffic to the appliance. The default is 5 seconds.
KRB_MSSQL_DRIVER_ONDEMAND 0 Deprecated in v9.0 GPU patch 50. Set to 1 if you want to save time by resolving user SIDs into
domain user names only for Kerberos tickets from new users for the running STAP instance.
Parent topic: Windows: Editing the S-TAP configuration parameters
Procedure
1. Click Manage > Module Installation > Set up by Client to open the Client Search Criteria.
2. Click Search to perform a filtered search.
3. Select the Clients that will be the target for the action (starting S-TAP)
4. Click Next to open the Common Modules panel.
5. Select the Module for WINSTAP.
6. Click Next to open the Module Parameters panel.
7. Select the clients that will be the target for the action (starting S-TAP®).
8. Change the WINSTAP_ENABLED parameter to 1 (one).
9. Click Apply to Clients to apply to the targeted clients.
10. Click Install/Update to schedule the update to the targeted clients. This update can be scheduled for NOW or some time in the future. When the schedule is run for
this update the S-TAP service on the targeted clients starts at the specified time.
Procedure
1. Click Manage > Module Installation > Set up by Client to open the Client Search Criteria.
2. Enter Client Search Criteria if you want to perform a filtered search of registered clients.
3. Click Search to perform filtered search and display the Clients panel.
4. Select the clients that will be the target for the action (stopping S-TAP).
5. Click Next to open the Common Modules panel.
6. Select the Module for WINSTAP.
7. Click Next to open the Module Parameters panel.
8. Select the client that will be the target for the action (stopping S-TAP).
9. Change the WINSTAP_ENABLED parameter to 0.
10. Click Apply to Clients to apply to the targeted clients
11. Click Install/Update to schedule the update to the targeted clients. This update can be scheduled for NOW or some time in the future. When the schedule is run for
this update the S-TAP service on the targeted clients is stopped at the specified time.
Procedure
1. Log on to the database server system using a system administrator account.
2. From the Services control panel, start the IBM Security Guardium S-TAP.
3. Log in to the Guardium system to which this S-TAP reports. Verify that the Status light in the S-TAP control panel is green.
Procedure
1. Log on to the database server system using a system administrator account.
2. From the Services control panel, stop the IBM Security Guardium S-TAP.
3. Log in to the UI of the Guardium system to which this S-TAP was reporting, verify that the Status light in the S-TAP control panel is now red.
You can create alerts that are based on exceptions that are created by S-TAPs, but other domains that are used by S-TAP reports are system-private and cannot be accessed by users.
System View
S-TAP Status Monitor in the System Monitor window: For each S-TAP reporting to this Guardium system, this report identifies the S-TAP Host, S-TAP Version, DB Server
Type, Status (active or inactive), Last Response Received (date and time), Instance Name, Primary Host Name, and true/false indicators for: MS SQL Server Shared
Memory, DB2® Shared Memory, Win TCP, Local TCP monitoring, Named Pipes Usage, Encryption, Firewall, DB install Dir, DB port Min and DB Port Max.
Note: The DB2 shared memory driver has been superseded by the DB2 Tap feature.
S-TAP Status Monitor: For each S-TAP reporting to this Guardium system, this report identifies the S-TAP Host, DB Server Type, S-TAP Version, Status (active or inactive),
Inspection Engine status, Last Response Received (date and time), Primary Host Name, and true/false indicators for: Firewall and Encrypted. Click the S-TAP Status and
the Inspection Engine status to see the Verification status on all Inspection Engines.
S-TAP Events: For each S-TAP reporting to this Guardium system, this report identifies the S-TAP Host, Timestamp, Event type (Success, Error Type, and so on), and Tap
Message.
If no messages display in the S-TAP Events panel, the production of event messages may have been disabled in the configuration file for that S-TAP®. If this is the case,
you may be able to locate S-TAP event messages on the host system in the Event Log.
Tap Monitor
Primary Guardium® Host Change Log: Log of primary host changes for S-TAPs. The primary host is the Guardium system to which the S-TAP sends data. Each line of the
report lists the S-TAP Host, Guardium Host Name, Period Start, and Period End.
S-TAP Status: Displays status information about each inspection engine that is defined on each S-TAP Host. This report does not have From and To date parameters, since
it is reporting current status. Each row of the report lists the S-TAP Host, DB Server Type, Status, Last Response, Primary Host Name, Yes/No indicators for the following
attributes: Shared Memory Driver Installed, DB2 Shared Memory Driver Installed, Named Pipes Driver Installed, and App Server Installed. In addition, it lists the Hunter
DBS.
Inactive S-TAPs Since: Lists all inactive S-TAPs that are defined on the system. It has a single runtime parameter: QUERY_FROM_DATE, which is set to now -1 hour by
default. Use this parameter to control how you want to define inactive. This report contains the same columns of data as the S-TAP Status report, with the addition of a
count for each row of the report.
To access this report, use the GUI. You can create alerts based on its results.
The time interval is in hours (for example, 5 means every 5 hours). Use a minus sign (-) for a time interval of less than 1 hour.
Fields in Table
TIMESTAMP
SOFTWARE_TAP_HOST
TOTAL_BYTES_SO_FAR
TOTAL_BYTES_DROPPED_SO_FAR
TOTAL_BYTES_IGNORED
TOTAL_BUFFER_INIT
IOCTL_REQUESTS
TOTAL_RESPONSE_BYTES_IGNORED
System CPU%
System Idle%
STAP CPU%
Buffer recycled
Note: The GAM service should be off by default as it requires configuration specific to the environment in which it is installed. Improper configuration can cause very
serious operational issues. This is a tool to aid in troubleshooting and otherwise is not required.
Monitoring covers:
CPU usage
Memory
Handles
Number of threads
Alive - responsiveness (supported agents only, currently S-TAP is the only supported agent) (See Responsiveness)
Guardium Agent Monitor is installed when S-TAP is installed but is not enabled by default. When S-TAP is uninstalled, GAM is uninstalled.
Note: Just like S-TAP, GAM requires administrative privileges. When installing, run with "Run as Administrator" as an administrative user.
The default install location for GAM is the parent folder of S-TAP (C:\Program Files\IBM\Guardium Agent Monitor\).
After enabling GAM, make sure the process is running on the database server (resmon.exe).
GAM Configuration
The Guardium Agent Monitor runs with its configuration file, resmon.ini, as its argument, and is controlled through that file. See the sample resmon.ini.
Note that the default values for all of the parameters are listed at the bottom of the sample ini.
Global Configuration
CPU_LOAD_LIMIT: Percentage CPU threshold at which either action is taken, or UPDATE_INTERVAL starts counting occurrences of reaching threshold
CPU_INTERVALS_ALLOWED: Number of intervals the CPU can be above the threshold before triggering an action (used in conjunction with UPDATE_INTERVAL to
set a time limit)
UPDATE_INTERVAL: 0 = action is taken when CPU reaches its load limit. 1 = action is taken when CPU has reached its load limit the number of times specified by
CPU_INTERVALS_ALLOWED
CPUAVE: Defines the type of CPU average. 1 = usage averaged across all CPU cores (system average), 0 = percentage of the core used by the process.
For these metrics (CPU, memory, and so on), there are two thresholds: limit and peak limit. An action is triggered when the limit threshold is passed for more
intervals than allowed, or when the peak limit threshold is passed once.
[METRIC]_LIMIT: Lower level threshold. An action is triggered if this limit is exceeded for more intervals than [METRIC]_INTERVALS_ALLOWED
[METRIC]_INTERVALS_ALLOWED: Number of intervals allowed for the lower limit threshold before an action is triggered (used with UPDATE_INTERVAL for time
limit)
[METRIC]_PEAK_LIMIT: Upper level threshold. An action is triggered if this threshold is exceeded once
Note: [METRIC]_INTERVALS_ALLOWED is used in conjunction with UPDATE_INTERVAL to set a time limit for the threshold. (for example, UPDATE_INTERVAL=1,
CPU_INTERVALS_ALLOWED=10, CPU_LOAD_LIMIT=10 means an action is triggered if the CPU load is over 10% for over 10 seconds).
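As an illustration of the note above, the interval counting can be modeled as a small function. This is a toy sketch, not IBM's code: it assumes one CPU sample per tick and that the intervals above the limit must be consecutive.

```python
# Toy model of the GAM CPU threshold logic described above (not IBM's code).
# Assumes one CPU sample per UPDATE_INTERVAL tick and consecutive counting.

def should_trigger(samples, cpu_load_limit, cpu_intervals_allowed, update_interval):
    """Return True if an action would be triggered for this series of samples."""
    if update_interval == 0:
        # Immediate mode: action fires as soon as any sample exceeds the limit.
        return any(s > cpu_load_limit for s in samples)
    # update_interval == 1: count consecutive intervals above the limit.
    over = 0
    for s in samples:
        over = over + 1 if s > cpu_load_limit else 0
        if over > cpu_intervals_allowed:
            return True
    return False

# With UPDATE_INTERVAL=1, CPU_INTERVALS_ALLOWED=10, CPU_LOAD_LIMIT=10,
# eleven consecutive samples above 10% trigger an action, ten do not:
print(should_trigger([12] * 11, 10, 10, 1))   # True
print(should_trigger([12] * 10, 10, 10, 1))   # False
print(should_trigger([50], 10, 10, 0))        # True (immediate mode)
```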
Responsiveness
NAMEDPIPE_INTERVAL: The interval, in seconds, at which the S-TAP agent is pinged to verify responsiveness. Set to "0" to disable
Action Configuration
The actions that can be triggered are described under Core Dump Configuration and Diagnostic Configuration. The second and third actions are initiated only if they
are triggered within the ACTION_RESET_INTERVAL of the previous action. If the ACTION_RESET_INTERVAL elapses with no new triggers, the next trigger starts a
new cycle with the FIRST_ACTION.
FIRST_ACTION: 0 = no action. 1 = stop then restart the service. 2 = stop the service.
SECOND_ACTION: The action initiated the second time there is a trigger during the ACTION_RESET_INTERVAL. 0 = no action. 1 = stop then restart the service. 2 =
stop the service.
THIRD_ACTION: The action initiated the third time there is a trigger during the ACTION_RESET_INTERVAL. 0 = no action. 1 = stop then restart the service. 2 = stop
the service.
Core Dump Configuration
ACTION: 1 = take a core dump whenever an action is triggered; 0 = no core dump is taken.
MAX_NUM_DUMP: The maximum number of core dumps to be stored in the dump directory (keeping the latest).
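The escalation cycle described above can be sketched as a small state machine. This is an interpretation of the text, not IBM's implementation; ACTION_RESET_INTERVAL is assumed to be in seconds.

```python
# Illustrative sketch of the FIRST/SECOND/THIRD_ACTION escalation cycle
# (an interpretation of the documented behavior, not IBM's code).

ACTIONS = {0: "no action", 1: "restart service", 2: "stop service"}

class EscalationCycle:
    def __init__(self, first, second, third, reset_interval):
        self.sequence = [first, second, third]
        self.reset_interval = reset_interval  # assumed seconds
        self.step = 0
        self.last_trigger = None

    def trigger(self, now):
        # If the reset interval elapsed since the previous trigger,
        # the cycle starts over with FIRST_ACTION.
        if self.last_trigger is not None and now - self.last_trigger > self.reset_interval:
            self.step = 0
        action = self.sequence[min(self.step, 2)]
        self.step += 1
        self.last_trigger = now
        return ACTIONS[action]

cycle = EscalationCycle(first=1, second=1, third=2, reset_interval=600)
print(cycle.trigger(now=0))     # restart service (FIRST_ACTION)
print(cycle.trigger(now=100))   # restart service (SECOND_ACTION)
print(cycle.trigger(now=200))   # stop service (THIRD_ACTION)
print(cycle.trigger(now=2000))  # restart service (cycle reset)
```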
Diagnostic Configuration
DIAGACTION: 1 = run the diagnostic script whenever an action is triggered; 0 = no diagnostic script is run.
DIAGNAME: Name of the diagnostic file to be run (must be in the same folder as the service executable)
Example of resmon.ini
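The sample resmon.ini itself is not reproduced in this extract. The following sketch is illustrative only: it uses the parameter names documented above, with MEMORY_* standing in for the generic [METRIC]_* pattern; all values are examples, not the shipped defaults.

```ini
; Illustrative resmon.ini sketch -- values are examples, not shipped defaults.

; Global configuration
CPU_LOAD_LIMIT=50
CPU_INTERVALS_ALLOWED=10
UPDATE_INTERVAL=1
CPUAVE=1
MEMORY_LIMIT=500
MEMORY_INTERVALS_ALLOWED=10
MEMORY_PEAK_LIMIT=1000

; Responsiveness
NAMEDPIPE_INTERVAL=60

; Action configuration
FIRST_ACTION=1
SECOND_ACTION=1
THIRD_ACTION=2
ACTION_RESET_INTERVAL=600

; Core dump configuration
ACTION=1
MAX_NUM_DUMP=3

; Diagnostic configuration
DIAGACTION=0
DIAGNAME=diag.bat
```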
From the GUI, the S-TAP® version number is displayed in Manage > System View > S-TAP Status Monitor.
Alternatively, you can display the S-TAP version number from the command line of the database server.
Run debug from the command line to quickly identify configuration issues
Turn on debug from the GIM GUI or the command line. See debug levels in Windows: Debug parameters.
Verify the connection between the database server and the Guardium system
Verify that you can ping the Guardium system at sqlguard_ip from the database server.
If the ping is successful, verify that you can telnet to the following ports on the Guardium system: 16016/16018
If there is a firewall between the database server and the Guardium system
Verify that the following ports are open for traffic between these two systems: TCP Port 16016 or TLS Port 16018 for encrypted connections.
Note: Use the following command to check the port availability: nmap -p port guardium_hostname_or_ip
Verify that the sqlguard_ip parameter is set to the correct guardium_hostname_or_ip for the Guardium system that you are connecting to.
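The reachability checks above can also be scripted. This is an illustrative sketch, not a Guardium tool; the host name is a placeholder.

```python
# A minimal TCP port-reachability check, similar in spirit to the
# ping/telnet/nmap tests above. Host name is a placeholder.
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the clear (16016) and TLS (16018) ports on the Guardium system.
for port in (16016, 16018):
    state = "open" if port_open("guardium.example.com", port) else "closed/filtered"
    print(f"{port}/tcp {state}")
```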
1. Click Manage > Activity Monitoring > S-TAP Control to open S-TAP Control.
2. Locate the S-TAP Host for the IP address that corresponds to your database server.
3. Expand the Guardium Hosts subsection, and verify that the active Guardium Host is correctly configured.
4. If necessary, click Modify to update the Guardium Hosts.
1. Click Manage > Activity Monitoring > S-TAP Certification to open S-TAP Certification.
2. Look at the S-TAP Approval Needed check box. If this box is checked, new S-TAPs can connect to this Guardium system only after they have been added to
the list of approved S-TAPs.
3. If S-TAP Approval is turned on, select Daily Monitor > Approved Tap Clients to view a list of approved S-TAPs. If the S-TAP that you are investigating is not on
this list, return to the S-TAP Certification pane, enter the IP address of the S-TAP in the Client Host field, and click Add.
The verification process attempts to log in to the S-TAP client on your database server with an erroneous user ID and password, to verify that this attempt is recognized and
communicated to the Guardium system. Your S-TAP could be configured in a way that prevents the inspection engine message from reaching the Guardium system
from which the request was made.
Load balancing: if the S-TAP is configured to return responses to more than one Guardium system, the error message could be sent to a different Guardium
system.
Failover: If secondary Guardium systems are configured for the S-TAP, the error message could be sent to a secondary Guardium system if the primary
Guardium system is too busy.
Db_ignore_response: if the S-TAP is configured to ignore all responses from the database, it does not send error messages to the Guardium system.
Client IP/mask: if any mask is defined that is not 0.0.0.0, it could prevent the error message from being sent.
Exclude IP/mask: if any mask is defined that is not 0.0.0.0, it could prevent the error message from being sent.
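The last two items can be modeled as a simple masked-address comparison: any mask other than 0.0.0.0 restricts which client addresses match, which can keep the verification error message from being sent. This is an interpretation for illustration, not Guardium code.

```python
# Illustrative model of how a client IP/mask pair selects traffic
# (an interpretation of the behavior described above, not Guardium code).
import ipaddress

def client_matches(client_ip, rule_ip, rule_mask):
    """A mask of 0.0.0.0 matches every client; otherwise compare masked addresses."""
    if rule_mask == "0.0.0.0":
        return True
    mask = int(ipaddress.IPv4Address(rule_mask))
    return int(ipaddress.IPv4Address(client_ip)) & mask == \
           int(ipaddress.IPv4Address(rule_ip)) & mask

print(client_matches("10.0.5.7", "10.0.5.0", "255.255.255.0"))    # True
print(client_matches("192.168.1.9", "10.0.5.0", "255.255.255.0")) # False
print(client_matches("192.168.1.9", "10.0.5.0", "0.0.0.0"))       # True
```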
Related topics:
For data activity monitoring, the S-TAP monitors activity between the client and the database and forwards that information to the Guardium collector. The database traffic
is logged into the collector based on criteria specified in the security policy. It is also possible to reduce the amount of traffic that is originally sent to the collector by
ignoring trusted connections or ignoring traffic from specific IPs.
For file activity monitoring, unlike data activity, the policy rules are pushed down to the file server and thus only data that is specified in the security policy is forwarded to
the collector.
S-TAP upgrades its kernel components at boot time, adjusting to kernel upgrades in Linux environments.
For example, you may want to track or perform one or more of the following:
This table covers the most common platforms, database types, and protocols supported by Guardium's monitoring mechanisms, and presents general guidelines.
Other supported combinations may exist that are not presented here, and some of the setups presented here may depend on specific configurations. Contact
Customer Support to verify the best setup for your specific needs. Empty cells indicate that the combination is not supported.
The exit libraries are preferred over all other monitoring mechanisms. If you cannot use an exit library, K-TAP is the next choice, then A-TAP, and finally PCAP.
Each row lists, in order, the mechanisms for: Network traffic; Local traffic; Encrypted traffic; Shared Memory; Kerberos; Blocking; Redaction; UID Chain. "None" marks a combination that is not supported.
AIX, Oracle: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (ASO, SSL); Shared Memory: none; Kerberos: K-TAP; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP.
AIX, Sybase ASE: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (SSL); Shared Memory: none; Kerberos: K-TAP; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP, A-TAP (A-TAP only when configured for real IPs).
AIX, Sybase IQ: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (decrypts login packets only, no TLS support); Shared Memory: A-TAP (Sybase 16.1 does not support DB username); Kerberos: none; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP.
AIX, DB2: Network traffic: DB2 Exit, K-TAP; Local traffic: DB2 Exit, K-TAP; Encrypted traffic: DB2 Exit; Shared Memory: DB2 Exit, K-TAP; Kerberos: K-TAP; Blocking: DB2 Exit, K-TAP; Redaction: K-TAP; UID Chain: DB2 Exit, K-TAP.
AIX, Informix: Network traffic: Informix Exit, K-TAP; Local traffic: Informix Exit, K-TAP; Encrypted traffic: Informix Exit; Shared Memory: Informix Exit, K-TAP; Kerberos: none; Blocking: Informix Exit, K-TAP; Redaction: Informix Exit, K-TAP; UID Chain: Informix Exit, K-TAP.
HP-UX, Oracle: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (ASO, SSL); Shared Memory: none; Kerberos: K-TAP; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP.
HP-UX, Sybase ASE: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (Sybase 15 only); Shared Memory: none; Kerberos: none; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP, A-TAP (A-TAP only when configured for real IPs).
HP-UX, DB2: Network traffic: DB2 Exit, K-TAP; Local traffic: DB2 Exit, K-TAP; Encrypted traffic: DB2 Exit; Shared Memory: DB2 Exit, K-TAP; Kerberos: K-TAP; Blocking: DB2 Exit, K-TAP; Redaction: K-TAP; UID Chain: DB2 Exit, K-TAP.
HP-UX, Informix: Network traffic: Informix Exit, K-TAP; Local traffic: Informix Exit, K-TAP; Encrypted traffic: Informix Exit; Shared Memory: Informix Exit, K-TAP; Kerberos: none; Blocking: Informix Exit, K-TAP; Redaction: Informix Exit, K-TAP; UID Chain: Informix Exit, K-TAP.
Linux, DB2: Network traffic: DB2 Exit, K-TAP; Local traffic: none; Encrypted traffic: DB2 Exit; Shared Memory: DB2 Exit, A-TAP (A-TAP with Linux 2.6.36 and higher only); Kerberos: K-TAP; Blocking: DB2 Exit, K-TAP; Redaction: K-TAP; UID Chain: DB2 Exit, K-TAP.
Linux, Informix: Network traffic: Informix Exit, K-TAP; Local traffic: Informix Exit, K-TAP; Encrypted traffic: Informix Exit; Shared Memory: Informix Exit, A-TAP (A-TAP with Linux 2.6.36 and higher only); Kerberos: none; Blocking: Informix Exit, K-TAP, A-TAP; Redaction: Informix Exit, K-TAP; UID Chain: Informix Exit, K-TAP.
Linux, Oracle: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (ASO, SSL; A-TAP with Linux 2.6.36 and higher only); Shared Memory: none; Kerberos: K-TAP; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP.
Linux, Postgres: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (A-TAP with Linux 2.6.36 and higher only); Shared Memory: none; Kerberos: none; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP, A-TAP (A-TAP only when configured for real IPs).
Linux, Sybase IQ: Network traffic: K-TAP; Local traffic: none; Encrypted traffic: A-TAP (x86_64 only); Shared Memory: A-TAP (Sybase 16.1 does not support DB username); Kerberos: none; Blocking: K-TAP, A-TAP (A-TAP with Linux 2.6.36 and higher only); Redaction: K-TAP; UID Chain: K-TAP, A-TAP (A-TAP only when configured for real IPs).
Linux, Sybase ASE: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (A-TAP with Linux 2.6.36 and higher only); Shared Memory: none; Kerberos: none; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP, A-TAP (A-TAP only when configured for real IPs).
Linux, MongoDB: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (A-TAP with Linux 2.6.36 and higher only); Shared Memory: none; Kerberos: none; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP, A-TAP (A-TAP only when configured for real IPs).
Linux, Teradata: Network traffic: Teradata Exit, K-TAP; Local traffic: none; Encrypted traffic: Teradata Exit, A-TAP (A-TAP with Linux 2.6.36 and higher only); Shared Memory: none; Kerberos: none; Blocking: Teradata Exit, K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP, A-TAP (A-TAP only when configured for real IPs).
Solaris, Oracle: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (ASO, SSL); Shared Memory: none; Kerberos: K-TAP; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP.
Solaris, Sybase ASE: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (Sparc only); Shared Memory: none; Kerberos: K-TAP; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP, A-TAP (A-TAP only when configured for real IPs).
Solaris, Postgres: Network traffic: K-TAP; Local traffic: K-TAP; Encrypted traffic: A-TAP (9.3 and higher); Shared Memory: none; Kerberos: none; Blocking: K-TAP, A-TAP; Redaction: K-TAP; UID Chain: K-TAP, A-TAP (A-TAP only when configured for real IPs).
Solaris, DB2: Network traffic: DB2 Exit, K-TAP; Local traffic: DB2 Exit, K-TAP; Encrypted traffic: DB2 Exit; Shared Memory: DB2 Exit, K-TAP; Kerberos: K-TAP; Blocking: DB2 Exit, K-TAP; Redaction: K-TAP; UID Chain: DB2 Exit, K-TAP.
Solaris, Informix: Network traffic: Informix Exit, K-TAP; Local traffic: Informix Exit, K-TAP; Encrypted traffic: Informix Exit; Shared Memory: Informix Exit, K-TAP; Kerberos: none; Blocking: Informix Exit, K-TAP; Redaction: Informix Exit, K-TAP; UID Chain: Informix Exit, K-TAP.
Parent topic: Linux and UNIX systems: S-TAP functionality
Linux and UNIX systems: Linux, Solaris, AIX, and HP-UX S-TAP monitoring mechanisms
The Guardium UNIX S-TAP uses several different monitoring mechanisms to collect database traffic. During configuration, you can choose the method that best meets
your requirements. All mechanisms filter the traffic to reduce network overhead and increase performance.
You choose the mechanism during installation. All mechanisms filter the traffic so that only database-related traffic for specific sets of client and server IP addresses is
collected. The mechanisms are presented here in order of preference: exit libraries, K-TAP, A-TAP, PCAP. See Linux and UNIX systems: S-TAP support matrix and choose
the mechanism that meets your needs.
Exit libraries
The exit libraries are the preferred monitoring mechanism. They give the best performance, and can handle both local and network traffic, whether encrypted or
not. They always capture DB_USER. The only disadvantage is that exit libraries are only available on some databases.
They require configuration on the database, and if you upgrade the S-TAP version, then the exit library also requires an update.
Exit libraries are supported only for DB2, Informix, and Teradata.
K-TAP
K-TAP is a kernel module that is installed into the operating system. It supports all protocols and connection methods (for example, TCP, TLI, SHM, Named Pipes).
When enabled, it observes access to a database server by hooking into the mechanisms that are used to communicate between the database client and server.
Use DB2 and Informix exit libraries with K-TAP to capture shared memory traffic on DB2 and Informix servers. This method is preferable to using A-TAP.
With Linux, the kernel frequently updates, and there are many kernel versions. The K-TAP version depends on the Linux version. See Linux and UNIX systems:
Building a K-TAP.
K-TAP is installed during S-TAP installation. If K-TAP fails to install, PCAP is installed instead. After it is installed, it can be enabled or disabled with a configuration
file setting. If you do not load K-TAP during the S-TAP installation, and decide later that you want to use it, you need to reconfigure and restart the S-TAP.
A-TAP
The A-TAP (application-level tap) sits in the application layer to support monitoring of encrypted database traffic, which cannot be done in the kernel by K-TAP. A-
TAP monitors communication between internal components of the database server. It picks up unencrypted data in the application layer, and sends it to the K-TAP.
K-TAP is a proxy to pass data to S-TAP, which then sends it to the Guardium collector.
With A-TAP, instead of capturing data from the kernel, where the data is still encrypted, Guardium captures data by loading a TAP library before executing the
original database binary. The A-TAP libraries are a no-op (they expose no interface). The libraries tap the database in application mode, after the data is decrypted or before it is
encrypted by the database. No changes are made to how the database normally operates, other than that the encrypted traffic is now captured
by Guardium. This means that you do not need to update scripts and tools to call the Guardium code before executing the Oracle code.
A-TAP is included in every S-TAP but must be configured separately for each database instance to be monitored. See Linux and UNIX systems: A-TAP management.
Restrictions:
A-TAP is not supported in an environment where a 32-bit database is located on a 64-bit server.
Monitoring: When using A-TAP, redaction is not supported. Blocking is supported for Linux kernels at 2.6.36 or later releases.
A-TAP is required when DBMS encryption in motion is used, but there may be other internal database implementation details such as shared memory that require
it.
The Informix and DB2 exits on Linux integrate with Guardium more closely, and thus are the recommended method for shared memory support when applicable.
PCAP
PCAP is a packet-capturing mechanism that listens to network traffic from and to a database server. In a UNIX environment, since the K-TAP captures all network
traffic, PCAP is rarely used. PCAP is used to capture local TCP/IP traffic on the device.
Tip: The PCAP uses the client IP/mask values for all local inspection engines to determine what to monitor and report. A PCAP that is installed with an S-TAP with
multiple inspection engines that have different client IP/mask values, captures traffic from all clients that are defined in all inspection engines. The PCAP might be
processing and sending more information to the Guardium system than you intend.
Guardium recommends encrypting network traffic between the S-TAP and the collector whenever possible; disable it only when performance is a higher priority than
security. There is a small impact on performance when enabling encryption. The default S-TAP configuration is no encryption, to avoid any
performance impact.
Before you determine the best choice for your environment, consider the following factors:
Configuring the S-TAP with TLS requires extra time for encryption that might affect performance on the database server where the S-TAP agent is installed. The
appliance (collector) also requires time to decrypt this traffic.
If applications and database users are communicating with the database in an unencrypted manner, configuring the S-TAP agent to communicate over the network
with encryption may not make your network safer.
In general, it makes sense to encrypt S-TAP traffic if the data that is sent to an appliance on a different network is encrypted, or if the database traffic that is monitored is
network encrypted.
Encryption is enabled during the inspection engine configuration, and can be modified at any time.
A user can change user names several times before connecting to the database; for example, by running ssh informix@barbet, su - db2inst1, su -, su - oracle9, and then
running sqlplus scott/tiger@onora1. With UID Chains, Guardium can trace this process back to the process that called it, and back to the original (offending) user.
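The su chain above can be modeled as a walk up the process tree, collecting the user at every hop. This is a toy reconstruction with a made-up process table, not Guardium's actual data structure or implementation.

```python
# Toy reconstruction of a UID chain: walk each process back to its parent,
# collecting the user at every hop. The process table below is a stand-in
# for what S-TAP reads from the OS.

# pid -> (parent_pid, user, command)
PROCS = {
    501: (1,   "informix", "sshd: informix"),  # ssh informix@barbet
    502: (501, "db2inst1", "su - db2inst1"),
    503: (502, "root",     "su -"),
    504: (503, "oracle9",  "su - oracle9"),
    505: (504, "oracle9",  "sqlplus"),         # sqlplus scott/tiger@onora1
}

def uid_chain(pid):
    """Return the list of users from the current process back to the original login."""
    chain = []
    while pid in PROCS:
        ppid, user, cmd = PROCS[pid]
        chain.append(user)
        pid = ppid
    return chain  # most recent user first, original (offending) user last

print(" <- ".join(uid_chain(505)))
# -> oracle9 <- oracle9 <- root <- db2inst1 <- informix
```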
For Solaris Zones, user IDs may be reported instead of user names.
The SSH client's IP address and port are added to the UID chain.
Postgres on Solaris 11 with zones is not supported, due to zone configuration not allowing access from master to slave zones in some directories.
Solaris Zones and AIX® WPAR: set db2bp_path in the guard_tap.ini file to the full path of the relevant db2bp executable file as seen from the global zone or
WPAR.
No UID Chains for Inter-process Communication (IPC) on Solaris 8/9.
UID chains are not detected for Hadoop databases.
The hunter_trace parameter is required for TCP/IP connections on UNIX S-TAP®. Set hunter_trace = 1 during installation to enable uid_chain for local TCP/IP
connections.
UID chain does not support local TCP on Linux for DB2. In addition, DB2 exit requires a specific version of the database to support UID chains.
When running as a non-root user, UID chain does not work for DB2 Shared Memory (SHM) with S-TAP.
Guardium does not log UID chain for network traffic.
Guardium might not log UID chain for very short sessions since Guardium relies on the process ID of the application to determine the UID chain. If the process that
starts the session exits before STAP can examine it, UID chain does not work.
Restriction: UID chain is not supported in any scenario that requires A-TAP for intercepting the traffic.
UID Chain Records older than 2 hours are purged when the regular inference process runs. Records older than 1 day are purged on a nightly basis.
While S-TAP is normally deployed on a database server, a K-TAP based firewall can be deployed to a proxy server. By utilizing S-GATE, you can monitor traffic that
originates from the proxy server. See Linux and UNIX systems: Application server parameters and S-GATE Actions (Blocking Actions) in the Policies help topic for more
information on setting appserver parameters and using S-GATE within Policies.
This flow describes installing S-TAP on a single database reporting to one collector. See the related topics for additional information on S-TAP in clusters and zones.
Linux: make version 3.81 or later. To view your version of the make utility, run the command: make -v
Oracle ASO on HP-UX 11.11: LD_PRELOAD must be installed. It is installed by patch PHSS_28436 or later.
TLS: For S-TAP® on a server, either /dev/random or /dev/urandom must be present on the server. See the TLS port requirements in Linux
and UNIX systems: Port requirements for S-TAP.
Note: A root user that installs GIM or S-TAP needs permissions to create and delete users and groups.
Table 2. Required directories per platform
Requirement type: File exists. Linux, Solaris, AIX, and HP-UX each require: tar, awk, grep, tr.
Table 1. Linux, Solaris, AIX and HP-UX: S-TAP Disk Space Requirements
Disk Space Description
S-TAP® Program files GIM Install: AIX: 400 MB; HP-UX: 500 MB; Linux: 450 MB; Solaris: 400 MB
non-GIM Install: AIX: 300 MB; HP-UX: 400 MB; Linux: 350 MB; Solaris: 300 MB
Buffer file By default, the S-TAP uses anonymous memory to stage data for transmission to the Guardium system. If you
configure the S-TAP to use a buffer file, the size defaults to 50 MB. The size is controlled by the buffer_file_size
parameter in the guard_tap.ini file.
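For orientation, a guard_tap.ini fragment touching the parameters this chapter mentions might look as follows. Only sqlguard_ip, buffer_file_size, and hunter_trace are named in this chapter; the [TAP] section header, the tap_ip parameter, and all values are illustrative assumptions.

```ini
; Hypothetical guard_tap.ini fragment -- for orientation only.
; The [TAP] section header, tap_ip, and all values are assumptions.
[TAP]
tap_ip=dbserver01.example.com
sqlguard_ip=collector01.example.com
buffer_file_size=50
hunter_trace=1
```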
Parent topic: Linux and UNIX systems: S-TAP installation prerequisites
Use your firewall management utility to check, and open as relevant, the ports listed.
Obtain the IP address of the database server on which you are installing S-TAP. If virtual IPs are used, note those as well (you will need to configure those later,
when completing the configuration).
If installing from the central manager, identify the IP address of the collector that will control this S-TAP, and to which this S-TAP will report.
Verify connectivity between the database server and the collector. On the database, enter nmap -p <port> <ip_address>. For example, to check that port 16018
(the port Guardium® uses for TLS) is reachable at IP address 192.168.3.104, enter the command
nmap -p 16018 192.168.3.104
Typical output looks like:
Starting nmap V. 3.00
Interesting ports on g4.guardium.com (192.168.3.104):
Port      State Service
16018/tcp open  unknown
In v10.1.4 and higher, use the GIM deploy monitoring agents tool to automatically activate GIM clients, install S-TAP, and begin monitoring database traffic. See Quick
start for deploying monitoring agents.
When you install an S-TAP client, the installation program checks whether the guardium group exists. If the group does not exist, the installation program creates it. If you
use certain components or features, such as A-TAP or DB2 Exit, you must add users to this group to ensure proper functioning. These requirements are described in the
The installation process creates log files for the whole STAP package (S-TAP, K-TAP, A-TAP, Tee, P-CAP, Discovery). The log files are good for troubleshooting failed
installations. Locations include /var/tmp, /tmp, and /var/log.
In rare cases, you need to run the S-TAP as the guardium user (and not root). This can cause other issues and should be done only when necessary. Running S-TAP as the
guardium user can cause some databases or protocols to stop working because of permission levels. Verify that the database path or executable file has permissions that
allow the guardium user to read it. Depending on your environment, typical limitations are:
Linux and UNIX systems: Installing the S-TAP client with GIM (v10.1.4)
Use the Guardium Installation Manager to install the S-TAP agent either from a stand-alone Guardium appliance, or from the Central manager to schedule
installation on one or more databases.
Linux and UNIX systems: Installing S-TAP agent with GIM (v10.1-10.1.3)
The Guardium Installation Manager (GIM) is the recommended method for installing S-TAPs on your database servers. GIM enables you to install, upgrade, and
manage agents on individual servers or groups of servers. This includes monitoring processes that were installed under its control, modifying agent parameters,
and performing other management tasks.
Linux and UNIX systems: S-TAP GIM installation parameters
Understand the parameters (each with a short description) that are typically used in your GIM installation.
Linux and UNIX systems: Installing and updating S-TAP using RPM
You can install, uninstall, and update S-TAP on a Linux server using the RPM. The advantage of installing by RPM is that you install and maintain STAP using the
same method that you manage all other software on the database server.
Linux and UNIX systems: Installing the S-TAP client using the shell installer
Use the shell installer, either in interactive mode or non-interactive mode, to install the S-TAP client on Linux, Solaris, HPUX, and AIX database servers.
Linux and UNIX systems: S-TAP install script parameters
Understand the script parameters for installing S-TAPs.
Linux and UNIX systems: Install and uninstall S-TAP with native installers
The native installer provides a wrapper around the shell installer. The only advantage is that it ensures that S-TAP is registered in the operating system asset repository.
This registration is not required by Guardium for the installation of the S-TAP, but it might be a requirement at your company. Use the native installer only when
necessary.
Linux and UNIX systems: When to restart or reboot after S-TAP install or upgrade
This topic details the situations, after S-TAP installation, of when to restart and when to reboot the database server or database instance. Restart/reboot
requirements are the same for GIM and non-GIM implementations.
Linux and UNIX systems: Work with K-TAP
Learn about K-TAP.
Linux and UNIX systems: Installing the S-TAP client with GIM (v10.1.4)
Use the Guardium Installation Manager to install the S-TAP agent either from a stand-alone Guardium appliance, or from the Central manager to schedule installation on
one or more databases.
Procedure
1. Verify that the GIM client is installed on the database server. See Installing the GIM client on a UNIX server.
2. Upload the relevant S-TAP module to the Guardium Installation Manager appliance.
a. Go to Manage > Module Installation > Upload Modules.
b. Click Choose File and select the S-TAP module that you want to install.
c. Click Upload to upload the module to the appliance. The module appears in the Import Uploaded Modules table.
d. In the Import Uploaded Modules table, click the check box next to the S-TAP module you want to install. The module imports and becomes available for
installation. The Upload Modules page resets and the Import Uploaded Modules table is now empty.
3. Follow the GIM instructions in Set up by Client and Linux and UNIX systems: S-TAP GIM installation parameters. These parameters are mandatory:
STAP_TAP_IP: the IP address or FQDN of the database server or node on which the STAP is being installed (equivalent to the -taphost command line
parameter). If not specified, the GIM_CLIENT_IP value is used.
What to do next
Verify S-TAP status:
Monitor installation of the Guardium clients by navigating to Manage > Module Installation > Set up by Client (v10.1.4: Legacy). Click Search, then click the icon next
to the S-TAP.
View the module status in the report at Manage > Reports > Install Management > GIM Clients Status
Verify that the row of the S-TAP has a green status (first column) in Monitor > Maintenance > S-TAP Logs > S-TAP Status
Parent topic: Linux and UNIX systems: Install the S-TAP agent
Related concepts:
Guardium Installation Manager
Linux and UNIX systems: Installing S-TAP agent with GIM (v10.1-10.1.3)
The Guardium Installation Manager (GIM) is the recommended method for installing S-TAPs on your database servers. GIM enables you to install, upgrade, and manage
agents on individual servers or groups of servers. This includes monitoring processes that were installed under its control, modifying agent parameters, and performing
other management tasks.
Procedure
1. Verify that the GIM client is installed on the database server. See Installing the GIM client on a UNIX server.
2. Upload the relevant S-TAP module to the Guardium Installation Manager appliance.
a. On the Guardium system, navigate to Manage > Module Installation > Upload Modules.
b. Click Choose File and select the S-TAP module you want to install.
c. Click Upload to upload the module to the Guardium system. After uploading, the module will be listed in the Import Uploaded Modules table.
d. In the Import Uploaded Modules table, click the check box next to the S-TAP module you want to install. The module will be imported and made available for
installation. After the module is imported, the Upload Modules page will be reset and the Import Uploaded Modules table will be empty.
3. Select client systems where you want to install an S-TAP.
a. Navigate to Manage > Module Installation > Setup by Client.
b. On the Client Search Criteria screen, specify search criteria for the clients where you want to install the S-TAP, then click Search to continue. Search for
clients using any combination of the following search criteria:
Select a client group.
Search by client hostname, IP address, or operating system.
Leave all search criteria fields empty to return a list of all available clients.
c. On the Clients screen, click the check box next to the clients where you want to install the S-TAP, then click Next to continue.
4. Select and configure the S-TAP module before installing to client systems.
a. From the Modules table on the Common Modules screen, select the S-TAP module for installation, then click Next to continue.
Use the Display Latest Versions and Display Bundles Only check boxes to filter the list of available modules.
Use the Module Status table to review information about the selected module on the target clients.
b. From the Client Module Parameters screen, specify installation parameters for the S-TAP. These parameters are mandatory:
STAP_TAP_IP: the IP address or FQDN of the database server or node on which the STAP is being installed (equivalent to the -taphost command line
parameter). If not specified, the GIM_CLIENT_IP value is used.
STAP_SQLGUARD_IP: the IP address or FQDN of the primary collector with which this STAP communicates (equivalent to the -appliance command line
parameter). If not specified, the GIM_URL value is used.
Attention: See the enterprise load balancing parameters in Linux and UNIX systems: S-TAP GIM installation parameters.
To apply the same parameters to multiple clients, specify installation parameters in the Common Module Parameters fields, click the check box next
to clients listed in the Client Module Parameters tables, and then click Apply to Selected.
To apply unique parameters to individual clients, specify installation parameters directly in the Client Module Parameters table.
c. Once you have specified installation parameters for the S-TAP, apply those parameters to the selected clients by clicking Apply to Client.
5. Install the S-TAP to the selected clients.
a. From the Client Module Parameters screen, click Install/Update.
b. On the Schedule Date dialog, provide a date or time to begin the installation, then click Apply. To begin the installation immediately, use a value of now in the
Schedule Date field.
What to do next
Verify S-TAP status:
Monitor installation of the Guardium clients by navigating to Manage > Module Installation > Setup by Client. Click Search, then expand the entry for the S-TAP.
View the module status in the report at Manage > Reports > Install Management > GIM Clients Status
Verify that the row of the S-TAP has a green status (first column) in Monitor > Maintenance > S-TAP Logs > S-TAP Status
Parent topic: Linux and UNIX systems: Install the S-TAP agent
Related concepts:
All parameters are listed in Linux and UNIX systems: Editing the S-TAP configuration parameters.
CAUTION:
Do not modify advanced parameters unless you are an expert user or you have consulted with IBM Technical Support.
Table 1. Other S-TAP Parameters
GIM parameter Description
STAP_TAP_IP The IP address or FQDN of the database server or node on which the STAP is being installed (equivalent to the -taphost command
line parameter). If not specified, the GIM_CLIENT_IP value is used.
STAP_SQLGUARD_IP The IP address or FQDN of the primary collector with which this STAP communicates (equivalent to the -appliance command line
parameter). If not specified, the GIM_URL value is used.
KTAP_ALLOW_MODULE_COMBOS For Linux only. If the bundle does not have an exact kernel match, it installs the best match. If the K-TAP cannot be installed or
does not start, a query is presented to the user whether to continue installation. Default=N
KTAP_LIVE_UPDATE Enables the KTAP update without requiring a server reboot. Default=Y
Table 2. Enterprise Load Balancing parameters
GIM parameter Description
STAP_LOAD_BALANCER_IP Required if you are configuring load balancing. If blank, enterprise load balancing is disabled.
This option specifies the IP address of the central manager or managed unit this S-TAP should use for load balancing.
If configuring the enterprise load balancer to run on a managed unit, the S-TAP must be at V10.1 or higher.
STAP_INITIAL_BALANCER_TAP_GROUP Optional. The application group name that this S-TAP belongs to for enterprise load balancing.
Attention: Group names with spaces or special characters are not supported.
STAP_INITIAL_BALANCER_MU_GROUP Optional. The MU group name the app-group will be associated with. Requires a defined LB-APP-GROUP. An MU group must already exist on the Central Manager before it can be used during installation of S-TAP.
Attention: Group names with spaces or special characters are not supported.
STAP_LOAD_BALANCER_NUM_MUS The number of managed units the enterprise load balancer allocates for this S-TAP.
Parent topic: Linux and UNIX systems: Install the S-TAP agent
Linux and UNIX systems: Installing and updating S-TAP using RPM
You can install, uninstall, and update S-TAP on a Linux server using the RPM. The advantage of installing by RPM is that you install and maintain S-TAP using the same
method that you use to manage all other software on the database server.
There is a single RPM for the 32-bit S-TAPs and two RPMs for the 64-bit S-TAPs so that the 64-bit S-TAP does not have a dependency on 32-bit libraries if 32-bit exit
libraries are not required. The extra RPM looks like guard-stap-32bit-exit-libs-10.1.0.89165-1-rhel-6-linux-x86_64.x86_64.rpm and has a dependency on the main RPM.
By default, the installation process checks the Linux kernel to determine whether a K-TAP module has been created to work with that kernel. If it exists, it installs (sets
ktap_installed = 1). If there is none, K-TAP does not install unless you have enabled Loader Flexibility, which aids in the installation of currently built modules when an
exact match does not exist. When Loader Flexibility is enabled, it attempts to build a K-TAP to match your Linux kernel.
v10.12 and higher: RPM installs S-TAP to /opt/guardium; this location cannot be changed. tap_ip is set automatically to the hostname of the system. sqlguard_ip is set to
127.0.0.1 as a placeholder for proper configuration. Complete the configuration after the installation, as described in this procedure.
v10.12 and higher: You can run the guard-config-update script as root user or a non-root user. Use the help command to see your permitted functions.
Procedure
1. Unzip the S-TAP package and copy the RPM to /tmp of the database server.
2. v10.12 and higher: To enable Loader Flexibility, set the Linux environment variable NI_ALLOW_MODULE_COMBOS="Y"
3. Install the RPM: rpm -i <RPM_NAME>
4. Complete the configuration by running the guard-config-update script. Available options include:
[--set-tap-ip [IP or hostname]] Set tap_ip in S-TAP config file /usr/local/guardium/guard_stap/guard_tap.ini (default:
rh5u9x64t.guard.swg.usma.ibm.com)
[--set-sqlguard-ip [IP or hostname]] Set sqlguard_ip in SQLGuard_0 section in S-TAP config file /usr/local/guardium/guard_stap/guard_tap.ini (default:
127.0.0.1)
[--add-sqlguard [ID] [IP or hostname]] Add SQLGuard_ID section to S-TAP config file /usr/local/guardium/guard_stap/guard_tap.ini
(V10.1.4 and higher)
[--remove-sqlguard [ID]] Remove SQLGuard_ID section from S-TAP config file /usr/local/guardium/guard_stap/guard_tap.ini
(V10.1.4 and higher)
[--modify-sqlguard [ID] [parameter] Set SQLGuard_ID section parameter to value in S-TAP config file /usr/local/guardium/guard_stap/guard_tap.ini.
[value]] Parameters:
(V10.1.4 and higher)
sqlguard_ip
IP address or hostname of SQLGuard unit
sqlguard_port
Port used to connect to SQLGuard unit (default: 16016)
primary
Order of preference (1=primary, 2=secondary, 3=tertiary and so on)
num_main_thread
Number of main connections to use for this SQLGuard, used with participate_in_load_balancing = { 1, 4 } (default:
1)
connection_pool_size
Number of data connections per main connection to SQLGuard unit (default: 0)
[--modify-tap [parameter] [value]] Set TAP section parameter to value in S-TAP config file /usr/local/guardium/guard_stap/guard_tap.ini. Parameters:
(V10.1.4 and higher)
tap_debug_output_level
Set debugging level (must be an integer >= 0, but not 2 or 3)
participate_in_load_balancing
Set participate in load balancing (values: 1, 2, 3, 4). (See Linux and UNIX systems: S-TAP Load Balancing models
and configuration guidelines)
use_tls
Enable TLS [ 0, 1 ]
failover_tls
TLS connections failover to non-TLS [ 0, 1 ]
hunter_trace
Enable UID chain reporting [ 0, 1 ]
buffer_file_size
Buffer file size in MB
alternate_ips
Comma-separated list of alternate IPs/hostnames for STAP
firewall_installed
Enable firewall [ 0, 1 ]
firewall_fail_close
Action to take when there is no verdict (e.g. SQLGuard unreachable or timeout reached) [ 0 : do nothing, 1 : block
connection ]
firewall_default_state
Set default state [ 0 : not watched, 1 : watched ]
firewall_timeout
Set firewall timeout in seconds
firewall_force_watch
Comma-separated list of IP/masks to watch even with firewall_default_state=0
firewall_force_unwatch
Comma-separated list of IP/masks to unwatch even with firewall_default_state=1
[--help-config [option]] Show information about an option in the ini, if available (show all available if none specified)
[--retry-ktap-load] Retry KTAP loading (useful after installing dev packages, updating after KTAP request, or changing flexload; automatically
restarts S-TAP)
[--discover-ies] Run discovery and replace all Inspection Engines with those discovered
[--stop [service]] Stop service (stap, tee, or monitor) temporarily (Solaris services and inittab treat this as permanent disable; the service does not
auto-start on boot until re-enabled)
[--start [service]] Start service (stap, tee, or monitor) if not already running (implies enable)
[--disable [service]] Prevent service (stap, tee, or monitor) from running again
[--enable [service]] Configure service (stap, tee, or monitor) for automatic start
[--status] Show which services are started and if they are configured to start automatically
5. To upgrade, copy the RPM package to /opt/guardium and run the command: rpm -U <RPM_NAME>
6. To uninstall:
a. To get the RPM name, run: rpm -qa | grep guard_stap
b. Run rpm -e <RPM_NAME>
After uninstalling, the directory /opt/guardium still exists, but should contain only /opt/guardium/guard_stap/guard_tap.ini.rpmsave and /opt/guardium/rpm_logs
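The install/configure/upgrade/uninstall cycle in this procedure can be condensed into a dry-run sketch. The commands are echoed rather than executed; the RPM file name and collector address are illustrative placeholders (the /opt/guardium path and the guard-config-update script name come from this topic, everything else is an assumption):

```shell
#!/bin/sh
# Dry-run sketch of the RPM lifecycle described above: commands are echoed,
# not executed. RPM_NAME and COLLECTOR_IP are illustrative placeholders.
RPM_NAME=guard-stap-10.1.0.89165-1-rhel-6-linux-x86_64.x86_64.rpm
COLLECTOR_IP=10.10.9.248   # hypothetical primary collector

# Install the copied RPM from /tmp:
echo "rpm -i /tmp/$RPM_NAME"

# Complete the configuration with guard-config-update
# (the script's location depends on your installation):
echo "guard-config-update --set-sqlguard-ip $COLLECTOR_IP"
echo "guard-config-update --set-tap-ip $(hostname)"

# Upgrade: copy the new RPM to /opt/guardium, then:
echo "rpm -U /opt/guardium/$RPM_NAME"

# Uninstall: query the installed package name, then remove it:
echo "rpm -qa | grep guard_stap"
echo "rpm -e ${RPM_NAME%.rpm}"
```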
What to do next
After installation completes, verify S-TAP status:
Verify that the row of the S-TAP has a green status (first column) in Monitor > Maintenance > S-TAP Logs > S-TAP Status
Parent topic: Linux and UNIX systems: Install the S-TAP agent
Linux and UNIX systems: Installing the S-TAP client using the shell installer
Use the shell installer, either in interactive mode or non-interactive mode, to install the S-TAP client on Linux, Solaris, HPUX, and AIX database servers.
If any stage of the installation fails, undo all of the steps up to that point. Do not leave the S-TAP partially installed.
The S-TAP package name is in the format: guard-stap-guard-10.1.0_r79927_1-rhel-5-linux-x86_64.sh, where the first three numbers are the release number,
followed by the revision number, in this example r79927.
Interactive mode is recommended for individual S-TAPs. The system prompts for the basic configuration and verifies your input immediately, so errors are caught as
you enter them. By default, K-TAP is installed automatically during S-TAP installation. The S-TAP installer checks whether a K-TAP is available for the kernel version. If
the installation process does not find a matching K-TAP, it attempts to build one to match your Linux kernel. If the K-TAP cannot be installed or does not start, you are
asked whether to continue the installation.
Use the non-interactive mode to install on multiple database servers by running a single command with the tapfile parameter, --tapfile <path to ini file>, and a
guard_tap.ini file that specifies the databases and their details. If you are installing on many systems, consider using GIM instead of non-interactive mode.
Procedure
1. Log on to the database server using the root account.
2. Designate an installation directory and verify it has sufficient disk space, approximately 400 MB - 500 MB total.
3. Copy the S-TAP .tgz to the local disk on the database server, typically to /tmp.
4. For a typical installation by non-interactive mode, the minimum parameters are:
Note: The S-TAP installer includes all possible modules specific to the different Linux kernels. In rare cases, the S-TAP package does not have the appropriate K-
TAP module. In this case, copy the K-TAP module to /tmp and install using these commands. The K-TAP module file is copied into the S-TAP install directory during
the install.
./guard-stap-guard-10.0.0_r79927_1-rhel-5-linux-x86_64.sh -- --modules /tmp/modules-guard-10.0.0_r79927_1.tgz
5. For interactive mode, run the installer script. In some cases you need to run the S-TAP as the guardium user; this can cause other issues and should be done only
when absolutely necessary. The only value you must enter is the IP address of the SQL Guard unit; all others can be left at their defaults. The installer prompts as follows.
When the script asks "Would you like to run guard_discovery? [Y/n]", if you choose yes, it runs guard_discovery once with the --update-tap flag to initially
configure inspection engines. In either case, it configures guard_discovery with the --send-to-sqlguard flag to run once every 24 hours.
What to do next
Verify S-TAP status:
Verify that the row of the S-TAP has a green status (first column) in Monitor > Maintenance > S-TAP Logs > S-TAP Status
Parent topic: Linux and UNIX systems: Install the S-TAP agent
--tapfile <file> The install process reads this guard_tap.ini file and uses its parameters for the S-TAP you are installing. For example:
/var/tmp/guard-stap-10.0.0_r103368_v10_5_1-rhel-5-linux-x86_64.sh --ni --dir /usr/local --tapfile /var/tmp/guard_tap.ini.
--ipfile <file> Text file that specifies a list of hostnames, IP addresses, and Guardium system addresses separated by a single space. For
example:
--sqlguardip <sqlguardip> The IP of the Guardium system this S-TAP should communicate with.
--load-balancer-ip <load_balancer_ip> The IP address of the central manager or managed unit this S-TAP uses for enterprise load balancing.
--lb-app-group <app_group> Optional. The application group name that this S-TAP belongs to for enterprise load balancing.
Attention: Group names with spaces or special characters are not supported.
--lb-mu-group <mu_group> Optional. The MU group name the app-group will be associated with. Requires a defined LB-APP-GROUP. This parameter can
only be specified once, during initial installation. An MU group must already exist on the Central Manager before it can be used
during installation of S-TAP
Attention: Group names with spaces or special characters are not supported.
--lb-num-mus <number_of_mus> The number of managed units the enterprise load balancer allocates for this S-TAP.
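The tapfile-driven options above can be sketched as follows. The script writes a deliberately minimal guard_tap.ini (a real tapfile carries many more parameters) and echoes, rather than runs, the non-interactive installer command; the section names come from this chapter, while the IP addresses and installer file name are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: build a minimal guard_tap.ini and echo the non-interactive
# installer invocation. IPs and installer name are placeholders.
TAPFILE="${TMPDIR:-/tmp}/guard_tap.ini"
cat > "$TAPFILE" <<'EOF'
[TAP]
tap_ip=192.168.1.20
[SQLGuard_0]
sqlguard_ip=192.168.3.104
EOF

# Non-interactive install into /usr/local, driven by the tapfile:
echo "/var/tmp/guard-stap-10.0.0_r103368_v10_5_1-rhel-5-linux-x86_64.sh --ni --dir /usr/local --tapfile $TAPFILE"
```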
Parent topic: Linux and UNIX systems: Install the S-TAP agent
Linux and UNIX systems: Install and uninstall S-TAP with native installers
A native installer ensures that S-TAP is registered in the operating system asset repository. This registration is not required by Guardium for the installation of the S-TAP,
but it might be a requirement at your company. There is a separate native installer for each OS type.
Linux and UNIX systems: Installing and uninstalling S-TAP with AIX native installer
Linux and UNIX systems: Installing and uninstalling S-TAP with HP-UX native installer
Linux and UNIX systems: Installing and uninstalling the S-TAP with Solaris native installer
Parent topic: Linux and UNIX systems: Install the S-TAP agent
Linux and UNIX systems: Installing and uninstalling S-TAP with AIX native installer
Before you begin
Verify all Linux and UNIX systems: S-TAP installation prerequisites.
Procedure
1. Obtain the IP address of the database server on which you are installing S-TAP. If virtual IPs are used, note those as well (you will need to configure those later,
when completing the configuration).
2. Identify the IP address of the collector that will control this S-TAP, and to which this S-TAP will report.
3. Verify connectivity between the database server and the collector. On the database, enter nmap -p <port> <ip_address>. For example, to check that port 16018
(the port Guardium® uses for TLS) is reachable at IP address 192.168.3.104, enter the command
nmap -p 16018 192.168.3.104
Typical output looks like:
Starting nmap V. 3.00
Interesting ports on g4.guardium.com (192.168.3.104):
Port       State Service
16018/tcp  open  unknown
4. Locate the appropriate native installer file (.bff file) from the S-TAP Installation DVD, for your version of AIX®.
5. Enter the following command on a clean server (no previous S-TAP installation) to extract the shell installer for AIX, substituting the appropriate file name with the
appropriate .bff file:
6. Continue with the interactive installer step of the installation procedure, running the generated installation script rather than the default installation script for the
operating system version.
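The connectivity check in step 3 can also be done without nmap, by attempting a plain TCP connection. This sketch relies on bash's /dev/tcp redirection (bash-specific, not POSIX sh); the address and port are the same placeholders used in the nmap example:

```shell
#!/bin/bash
# Reachability check without nmap, using bash's /dev/tcp redirection.
check_port() {
    # prints "open" if a TCP connection to $1:$2 succeeds, else "closed"
    if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

# Usage (prints "open" when the collector's TLS port is reachable):
# check_port 192.168.3.104 16018
```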
Parent topic: Linux and UNIX systems: Install and uninstall S-TAP with native installers
Linux and UNIX systems: Installing and uninstalling S-TAP with HP-UX native installer
Before you begin
Verify all Linux and UNIX systems: S-TAP installation prerequisites.
Procedure
1. Obtain the IP address of the database server on which you are installing S-TAP. If virtual IPs are used, note those as well (you will need to configure those later,
when completing the configuration).
2. Identify the IP address of the collector that will control this S-TAP, and to which this S-TAP will report.
3. Verify connectivity between the database server and the collector. On the database, enter nmap -p <port> <ip_address>. For example, to check that port 16018
(the port Guardium® uses for TLS) is reachable at IP address 192.168.3.104, enter the command
nmap -p 16018 192.168.3.104
Typical output looks like:
Starting nmap V. 3.00
Interesting ports on g4.guardium.com (192.168.3.104):
Port       State Service
16018/tcp  open  unknown
4. Locate the appropriate native installer file (.depot.gz file) on the Guardium S-TAP® Installation DVD, for your version of HPUX.
5. Extract the file with
gzip -d <filename>.depot.gz
6. Enter the swinstall command as follows, supplying the selected file name (the appropriate native installer file) and your database server host name. This command
starts an interactive program. Follow the prompts and use the appropriate controls to install the appropriate S-TAP installation program (.sh file), which is located in
/var/spool/sw/var/tmp.
Parent topic: Linux and UNIX systems: Install and uninstall S-TAP with native installers
Procedure
To remove HPUX S-TAP using the native installer, use the following command:
swremove @<hostname>:/var/spool/sw
Linux and UNIX systems: Installing and uninstalling the S-TAP with Solaris native installer
Before you begin
Verify all Linux and UNIX systems: S-TAP installation prerequisites.
Procedure
1. Obtain the IP address of the database server on which you are installing S-TAP. If virtual IPs are used, note those as well (you will need to configure those later,
when completing the configuration).
2. Identify the IP address of the collector that will control this S-TAP, and to which this S-TAP will report.
3. Verify connectivity between the database server and the collector. On the database, enter nmap -p <port> <ip_address>. For example, to check that port 16018
(the port Guardium® uses for TLS) is reachable at IP address 192.168.3.104, enter the command
nmap -p 16018 192.168.3.104
Typical output looks like:
Starting nmap V. 3.00
Interesting ports on g4.guardium.com (192.168.3.104):
Port       State Service
16018/tcp  open  unknown
4. Locate the appropriate native installer file (.pkg file) on the Guardium S-TAP® Installation DVD, for your version of Solaris
5. Enter the pkgadd command to run the installer using the selected file:
pkgadd -d <filename>.pkg
6. Continue with the interactive installer step of the installation procedure, running the extracted shell installer script rather than the default installation script for
the operating system version.
Parent topic: Linux and UNIX systems: Install and uninstall S-TAP with native installers
Procedure
To remove the Solaris S-TAP using the native installer, use the following command:
pkgrm GrdTapIns
Linux and UNIX systems: When to restart or reboot after S-TAP install or upgrade
This topic details when to restart or reboot the database server or database instance after S-TAP installation. Restart and reboot requirements are
the same for GIM and non-GIM implementations.
What must be restarted after installation of UNIX/Linux S-TAP when using EXIT
Teradata: needs database restart
DB2: needs database restart
Informix: No database restart needed. If ifxserver is running, restart it; if ifxserver is not running, there is no need to restart anything.
What must be restarted after installation of UNIX/Linux S-TAP when using A-TAP
The database must be restarted when using A-TAP.
A-TAP should be deactivated and de-instrumented prior to any database software updates.
What must be restarted after installation of UNIX/Linux S-TAP when using K-TAP
OS/Database  Oracle         DB2            Sybase         MS-SQL         Informix
             TCP/IP  SHM    TCP/IP  SHM    TCP/IP  SHM    TCP/IP  SHM    TCP/IP  SHM
Solaris      NR      NR     NR      NR     NR      NR     NR      NR     NR      NR
HP-UX        NR      NR     NR      NR     NR      NR     NR      NR     NR      NR
NR = No restart/reboot required (based on utilizing the live update mechanism and referencing the live update link if you have one)
REQ = Restart required
NA = Not applicable
Reboot guidelines
Parent topic: Linux and UNIX systems: Install the S-TAP agent
K-TAP is a kernel module that is installed into the operating system. It is installed during S-TAP installation. After it is installed, it can be enabled or disabled by using a
configuration file setting. When enabled, it observes access to a database server by hooking the mechanisms used to communicate between the database client and
server. With K-TAP you do not need to change how database clients connect to the server.
At installation time, you will choose whether or not to load the K-TAP kernel module to the server operating system. This is the only way to load that module. If you do not
load K-TAP initially, and decide later that you want to use it, you will need to remove S-TAP®, and then re-install it.
Note: If K-TAP fails to load properly during installation, possibly caused by hardware or software incompatibility, P-CAP is installed as the default collection mechanism.
Note: Intra-session traffic is transferred from the old KTAP to the new KTAP by use of a callback. This means that, for most databases, it can take two SQL requests before
interception resumes with the new KTAP for pre-existing sessions. In the case of Sybase IOCP, this takes three SQL requests due to the nature of the session.
Parent topic: Linux and UNIX systems: Install the S-TAP agent
The K-TAP loader mechanism uses the following sequence for Linux S-TAP installation (with GIM and non-GIM).
Note: The K-TAP loader mechanism automatically proceeds to the next step if the previous step was unsuccessful.
1. KTAP Loader looks for exact kernel module match for the Operating system level and if found, loads it.
2. If KTAP Loader did not find a match, it compiles the K-TAP module locally and loads it. This can happen only if the system has the required packages installed (gcc
and kernel-devel for the booted kernel).
3. If KTAP Loader has not yet been able to load the correct kernel module, and if FlexLoad mechanism is ON, KTAP Loader finds the closest matching kernel
module and loads it.
4. If KTAP cannot load the kernel module, it informs you with a "Failed to load" message. It either installs the S-TAP without the KTAP, or fails the S-TAP
installation. You can then request a matching module from Guardium support. This takes about two weeks to prepare.
When you install an S-TAP on a Linux system, the installation process checks the Linux kernel to determine whether a K-TAP has been created to work with that kernel. If a
kernel is running that hasn’t loaded the KTAP before, it searches for a matching module and loads it. If the installation process does not find a matching K-TAP, it
attempts to build one to match your Linux kernel.
Most of the K-TAP code is independent of the kernel. The installer for version 9.1 provides a new layer of code, which enables the kernel-independent code to interact with
your kernel. This new layer is delivered as proprietary source code. The installer builds the complete K-TAP by compiling this proprietary source code against your Linux
kernel. This produces a K-TAP specific to your Linux distribution.
This process requires that the standard kernel development utilities, provided with Linux distribution, are present on the database server where the K-TAP is to be built.
The development package must be an exact match for the kernel. The gcc compiler is also required.
If you have several systems running the same Linux distribution, you can build a K-TAP on one system and copy it to the others. For example, you might build a K-TAP on a
test system and then copy it to one or more production database servers after testing. If you use the Guardium Installation Manager (GIM) to install the S-TAP, GIM can
automatically copy the bundle containing the new K-TAP to a Guardium system from which you can distribute it to other database servers.
When the installer attempts to build a K-TAP module, you see messages issued by guard-ktap-loader. These messages can include:
It is attempting to build
Linux and UNIX systems: Copying a new K-TAP module to other systems
When you build a new K-TAP module for a Linux database server, you can copy that module to other database servers that run the same Linux distribution.
Procedure
1. Log in to the database server with the tested K-TAP.
2. Change directory to /usr/local/guardium/guard_stap/ktap/current/ and run ./guard_ktap_append_modules to add the locally built modules to modules.tgz.
3. Copy the updated modules.tgz file to the target server.
4. Log in to the target server and change directory to /usr/local/guardium/guard_stap/ktap/current/.
5. Run the K-TAP loader with the retry parameter and the full path to the updated modules.tgz file. For example:
Results
The custom K-TAP module is ready to use on the target system. Repeat this procedure for each matching Linux system to which you want to deploy the K-TAP module.
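The copy steps above can be condensed into a dry-run sketch (commands are echoed, not executed). The target hostname is a hypothetical placeholder; KTAP_DIR is the directory named in the procedure, and the loader-retry invocation from step 5 is left as a comment because its exact syntax depends on your installation:

```shell
#!/bin/sh
# Dry-run sketch of the K-TAP module-copy procedure above.
KTAP_DIR=/usr/local/guardium/guard_stap/ktap/current
TARGET=db-prod-01.example.com   # hypothetical target server

# Fold the locally built modules into modules.tgz on the source server:
echo "cd $KTAP_DIR && ./guard_ktap_append_modules"

# Push the updated archive to the same directory on the target server:
echo "scp $KTAP_DIR/modules.tgz root@$TARGET:$KTAP_DIR/"

# Then, on the target, run the K-TAP loader with the retry parameter and the
# full path to the updated modules.tgz (step 5).
```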
Parent topic: Linux and UNIX systems: Work with K-TAP
Copying a K-TAP module by using GIM
Linux and UNIX systems: K-TAP parameters
Linux and UNIX systems: Enable K-TAP after installation if Tee was installed by default
If, during the installation process, K-TAP fails to load properly, possibly caused by hardware or software incompatibility, Tee is installed as the default collection
mechanism. To switch back to K-TAP, after compatibility issues are resolved, follow these steps.
Procedure
1. Disable the S-TAP®. See Stop UNIX S-TAP for more information.
2. Edit guard_tap.ini and change ktap_installed to 1 and tee_installed to 0
3. Run the guard_ktap_loader install command.
Procedure
1. Install S-TAP on the master zone (global zone) regardless of the zone in which the database runs, since the local zones share information from the master zone.
2. When configuring the Inspection Engine, use the global zone values for the db_install_dir path and tap_db_process_names. (From the global zone, S-TAP monitors
access to databases in all zones.)
3. If you are using PCAP, add the IP addresses of all zones that you want to monitor to the alternate_ips parameter in the guard_tap.ini file on the Solaris database.
4. At the end of the installation:
K-TAP is not loaded on the local zones; it is loaded only on the global zone, but it is visible on the local zones.
S-TAP does not run on the local zones.
In a non-RAC Oracle database, a single instance accesses a single database. The database consists of a collection of data files, control files, and redo logs located on disk.
The instance comprises the collection of Oracle-related memory and operating system processes that run on a computer system.
In an Oracle RAC environment, two or more computers (each with an Oracle RDBMS instance) concurrently access a single database. This allows an application or user to
connect to either computer and have access to a single coordinated set of data.
Procedure
1. Install S-TAP on all nodes. If GIM is used, install the GIM client on all nodes, then install the S-TAP bundle on all nodes.
2. Configure the S-TAP parameters. All of the parameters can be configured through the GIM UI.
STAP_TAP_IP: public IP configured for the node
STAP_ALTERNATE_IPS: comma separated list of VIPs (virtual IPs) configured for the node, and the scan listener
Tip: Use this command to retrieve the value for virtual hostnames to put in alternate_ips: su - grid -c 'cat $ORACLE_HOME/network/admin/*.ora' | grep -i host
For example:
Configure the S-TAP inspection engine parameter unix_domain_socket_marker=<key>, where the <key> value can be found in listener.ora in the IPC protocol
definition
Tip: Command to retrieve the value for unix_domain_socket: su - grid -c 'cat $ORACLE_HOME/network/admin/*.ora' | grep -i KEY
Example: If the following is a description in the listener.ora LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ORCL))))
then unix_domain_socket_marker=ORCL
Example: If there is more than one IPC line in listener.ora, use a common denominator of all the keys.
Guardium uses a string search in the path; "LISTENER" works for all four keys and should be used in this case: unix_domain_socket_marker=LISTENER
Example: If there is no common denominator, create additional inspection engines with unix_domain_socket_marker corresponding to the specific
IPC key(s). For example the guard_tap.ini may look similar to this example in the end:
[DB_0]
...
unix_domain_socket_marker=EXTPROC1522
...
[DB_1]
...
unix_domain_socket_marker=LISTENER
3. If the Oracle database is encrypted (ASO/SSL), activate ATAP on all nodes (active and standby).
a. Stop all Oracle services (including clusterware) and verify that ohasd.bin is down.
i. Run crsctl stop cluster -all
ii. Verify that ohasd.bin is down
b. Authorize user oracle and grid (in case listener belongs to user grid).
c. Configure A-TAP parameters.
d. Activate A-TAP.
e. Start all Oracle services in the cluster.
4. In Oracle RAC environment, verify which user starts the listener. If it is with user grid, authorize the user grid.
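Step 3 above can be sketched as a dry-run sequence on one node (commands are echoed, not executed). The guardctl path matches the one used elsewhere in this chapter; the A-TAP configure and activate commands of steps 3.c and 3.d are site-specific, so they appear only as a comment:

```shell
#!/bin/sh
# Dry-run sketch of A-TAP activation on one RAC node (step 3).
GUARDCTL=/usr/local/guardium/guard_stap/guardctl

# 3.a: stop all Oracle services, including clusterware, then check ohasd.bin
echo "crsctl stop cluster -all"
echo "ps -ef | grep ohasd.bin"

# 3.b: authorize oracle, and grid if the listener belongs to user grid
echo "$GUARDCTL authorize-user oracle"
echo "$GUARDCTL authorize-user grid"

# 3.c / 3.d: configure A-TAP parameters and activate A-TAP (site-specific)

# 3.e: start all Oracle services in the cluster
echo "crsctl start cluster -all"
```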
If all DB2 instances have the same db2_shmem_size, db2_fix_pack_adjustment, and db2_shmem_client_position, the packets from all instances are collected
even if only one instance is configured.
If all DB2 instances have the same db2_shmem_size, but different db2_fix_pack_adjustment or db2_shmem_client_position, then only packets from the first
configured DB2 instance are collected.
Procedure
1. Compute the client I/O area offset (db2_shmem_client_position)
a. Open a new bash shell as the db2 instance user.
b. Run the ps -x command to verify that the db2bp command processor is not currently running for this shell. You should not see a command called db2bp
running. If you do, either kill it or run a new shell.
c. Run the following two commands:
The output contains several columns beyond those shown here, but they do not affect this procedure. Find the line that contains the process ID that
was identified in step 2.b and also has a value of 2 under NATTCH. The DB2 shared-memory segment size is the value in the SEGSZ column. In this
example, it is 131072.
d. Tip: if the list returned in step 2.c is too long, you can filter it by using the process ID. In this case, you would enter ipcs -ma | grep 5309370. The
results do not contain the column headers, but you can look at the previous results to see the column headers and identify the correct line and
column. In this example, it is the last line.
3. Set these parameters in order to capture the DB2 shared memory traffic.
Table 1. DB2 Parameters
Parameter STAP Name ATAP Name
Linux and UNIX systems: Activate A-TAP on all nodes of a DB2 Cluster
A-TAP needs to be activated on all nodes where a DB2 server is shared by nodes on a DB2 cluster.
Procedure
1. Authorize db2 user on node 1. <guardium_base>/xxx/guardctl authorize-user <user-name>
For example:
# /usr/local/guardium/guard_stap/guardctl list-active
db2inst1
3. Restore the original DB2 server on node 1 after activating ATAP on it, so that other nodes can activate ATAP. (All nodes share the executable.) In the db2 adm
directory, copy db2sysc-guard-original over db2sysc (make a copy of each first and set them aside). For example:
# cp db2sysc-guard-original db2sysc
4. Delete db2sysc-guard-original (or it will fail activation on node 2). For example:
# rm -rf db2sysc-guard-original
5. Move cluster resources to node 2. For example:
# pcs resource move resource_id <destination node>
6. Authorize db2 user and activate on node 2 (steps 1 and 2). This will create the libraries on node 2 and replace the db2sysc-guard-original that has been deleted.
The current status should be:
Node01:
# /usr/local/guardium/guard_stap/guardctl list-active
db2inst1
Node02:
# /usr/local/guardium/guard_stap/guardctl list-active
db2inst1
For these database types, when the S-TAP starts it must have access to the database home. If your environment uses a clustering scheme in which multiple nodes share a
single disk that is mounted on the active node, but not on the passive node, the database home is not available on the passive node until failover occurs.
S-TAP can be configured for delayed loading by setting a configuration file property, WAIT_FOR_DB_EXEC. When starting, if S-TAP finds that there is no access to the
database home, it checks the WAIT_FOR_DB_EXEC value, and takes the appropriate action.
WAIT_FOR_DB_EXEC > 0: S-TAP starts regardless of whether it can stat() the process name. It retries the stat() on the process name every 15 minutes.
WAIT_FOR_DB_EXEC <= 0: S-TAP tries to stat() the process name in the inspection engine immediately after it comes up. If it cannot stat() the process name, S-TAP exits.
Before setting this property to a positive value, be sure to set all other necessary configuration properties and test that the S-TAP starts and collects data correctly. This
property can be set only by editing the configuration file, and not from the GUI.
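As a sketch, the setting might appear in guard_tap.ini as follows. The section name and parameter casing are assumptions (the page names the property only as WAIT_FOR_DB_EXEC); edit the file directly, since the GUI does not expose it.

```ini
[TAP]
; >0: start S-TAP even if the database home is not yet mounted;
;     retry stat() on the process name every 15 minutes
; <=0: stat() immediately at startup and exit if it fails
wait_for_db_exec=1
```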
Parent topic: Linux and UNIX systems: Special environments configuration
If you have installed A-TAP, you must deactivate it before attempting any upgrade/install operations; see the description of the A-TAP deactivation command, in Linux and
UNIX systems: Deactivating A-TAP.
If you are removing a previous version of S-TAP that used K-TAP, you will need to reboot the database server. If K-TAP has been installed, you will have a device file
named: /dev/guard_ktap.
Procedure
5. This step applies to AIX® WPARs and Solaris Zones only (skip for all others). If you are uninstalling a previous version of S-TAP that included K-TAP, issue the
following commands from the master node: rm -f /wpars/<server>/dev/ktap* and rm -f /wpars/<server>/dev/guard_ktap*, where /wpars/<server> is the path from
the master node to the WPAR.
Procedure
1. Log on to the database server system using the root account.
2. If the system has A-TAP and the encryption box is not used:
a. Stop the database.
b. Use guardctl to deactivate the A-TAP.
3. If the system has A-TAP and the encryption box is used, stop the DB. (A-TAP is active whenever the DB is running. Once the encryption box has been set to activate
A-TAP automatically, it cannot be disabled by simply clearing the box. The system needs to be rebooted with the feature disabled in order to clear the setting.)
4. Before running a live update, either through GIM or the shell installers, make sure that no process except the S-TAP is using the K-TAP device; failure to do so can
result in unpredictable behavior. The S-TAP must be running and A-TAP must be deactivated. Run fuser /dev/ktap_xxx or lsof | grep ktap_xxx (where xxx is the old
version number) to see whether any process is holding the device open.
5. If uninstalling version 6.0 or later of S-TAP:
a. For Red Hat Enterprise Linux 6: Stop S-TAP using the stop utap command.
b. For Red Hat Enterprise Linux 7: Stop S-TAP using the systemctl stop guard_utap command
c. All others:
i. Remove the utap agent entry from the /etc/inittab file (regardless of whether it has been commented out). In a default installation, this statement
looks like this: utap:<nnnn>:respawn:/usr/local/guardium/guard_stap/guard_stap /usr/local/guardium/guard_stap/guard_tap.ini
ii. Save the /etc/inittab file.
iii. Run the init q command.
d. Run ps -ef | grep stap to verify that S-TAP is no longer running.
6. Copy the S-TAP configuration file to a safe location (a non-Guardium directory).
7. Run the uninstall script. For example, using the default directory: [root@yourserver ~]# /usr/local/guardium/guard_stap/uninstall
Note: Do not run the uninstall program with S-TAP running. Be sure that you have stopped S-TAP.
8. If your previous version of S-TAP included K-TAP, reboot the database server now.
9. HP-UX servers only (skip for all others): If you are uninstalling a previous version of S-TAP that included K-TAP, run the uninstall script again after reboot.
10. AIX WPARs only (skip for all others): If you are uninstalling a previous version of S-TAP that included K-TAP, issue the following commands from the master node
after uninstall: rm -f /wpars/<server>/dev/ktap* and rm -f /wpars/<server>/dev/guard_ktap*, where /wpars/<server> is the path from the master node to the
WPAR.
11. Upgrade the S-TAP, using one of the following methods:
Windows: Installing S-TAP agent with GIM (v10.1-10.1.3), or Linux and UNIX systems: Installing the S-TAP client with GIM (v10.1.4) using the Upgrade
option at the end of the procedure.
If you are upgrading K-TAP, set KTAP_LIVE_UPDATE to yes. Modify other parameters as relevant. Parameters you leave unchanged are carried over in
the upgrade.
Linux and UNIX systems: Installing and updating S-TAP using RPM, using the -u flag and other relevant upgrade parameters listed in the Linux and UNIX
systems: S-TAP install script parameters. To upgrade K-TAP, specify --live_update Y
Linux and UNIX systems: Installing the S-TAP client using the shell installer
12. After a K-TAP live upgrade:
The first SQL for an existing session after updating K-TAP is not captured.
Existing A-TAP sessions on Solaris local zone are not logged.
Some processes may still reference memory in the old K-TAP module. In this scenario, the module refuses to free its resources, to prevent future
instability. When this happens, wait until those resources are no longer being used, then try a manual cleanup by running the guard_ktap_cleanup utility
that is kept in the ktap directory.
On HP-UX 11.11, the old K-TAP module is no longer installed, but it still shows up as registered when you execute kmadmin -s | grep tap. Manually unregister
this module with kmmodreg -U ktap_<version>.
On Solaris and AIX®, the old dev-nodes are not automatically deleted after a reboot and they need to be removed manually.
Exceptions:
If the DB server has a K-TAP version that was not installed through GIM, and that non-GIM K-TAP version differs from the K-TAP version being installed,
the value of KTAP_LIVE_UPDATE is ignored, because an upgrade from a non-GIM version requires a system reboot.
When upgrading from a non-GIM version to the same GIM version, the system does not need to be rebooted.
You can NOT reinstall a previously installed K-TAP version without rebooting the machine.
Error Handling:
In the event of a failure, it is extremely important to check the GIM Events List report, since some failures require system reboot in order to fully recover.
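The pre-uninstall check from step 5.d can be sketched as follows. A canned sample process listing stands in for live `ps -ef` output; the process path is the default install location from this page.

```shell
# Hypothetical process listing: here the S-TAP process is still present.
sample_ps='root  1123  1  0 Jan01 ?  00:00:05 /usr/local/guardium/guard_stap/guard_stap /usr/local/guardium/guard_stap/guard_tap.ini'

# Do not run the uninstall script while guard_stap appears in the listing.
if echo "$sample_ps" | grep -q 'guard_stap'; then
  echo "S-TAP still running: stop it before running the uninstall script"
else
  echo "S-TAP stopped: safe to run uninstall"
fi
```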
Some configuration changes require that the S-TAP agent be restarted manually, as indicated in the parameter descriptions.
Sometimes a user is unable to make a decision during S-TAP installation, or makes a wrong decision that goes undetected until after the installation is complete.
For instance, a user might forget to enter an IP address, or use the wrong IP address, when defining a SQL Guard IP. These types of mistakes can be remedied by
modifying the S-TAP configuration.
Parameters in the GUI may be safely changed. Parameters that are not in the GUI rarely need changing and should normally be left unmodified; they are for use by
Guardium Technical Support or advanced users.
If you have installed your S-TAP by using the Guardium Installation Manager (GIM), you can update some parameters through the GIM GUI or API.
Procedure
1. Click Manage > Activity Monitoring > S-TAP Control to open S-TAP Control.
2. Perform operations on all S-TAPs in the page.
Refresh: refresh display of S-TAPs.
Add All to Schedule: add all displayed S-TAPs to the S-TAP verification schedule.
Remove All from Schedule: remove all displayed S-TAPs from the S-TAP verification schedule.
Comments: add comments. See Comments
3. Identify the S-TAP to be configured by its IP address or the symbolic host name of the database server on which it is installed. View and perform operations on
individual S-TAPs.
Option Description
Delete: Deleting S-TAPs is useful to clean up your display when you know that an S-TAP has become inactive, or when the
Guardium unit is no longer listed as a host in the S-TAP's configuration file. In either of these cases, the S-TAP displays
indefinitely with an offline status if you do not delete it.
You cannot remove an active S-TAP from the list. Clicking delete does not stop an S-TAP from sending information, nor
does it remove the Guardium host from the list of hosts stored in the S-TAP's configuration file.
Refresh: Click Refresh to fetch a copy of the latest S-TAP configuration from the agent. (There is no auto-refresh of the S-TAP
display.)
Send Command: Opens the S-TAP Commands popup, where you can run various commands on the S-TAP host:
Restart: Restarts the S-TAP. This is not usually needed; when it is, it is often easier to simply kill the process from the database server.
S-TAP logging: Increases the debug output from the S-TAP.
Reinitialize buffer: Resets the K-TAP statistics and deletes the S-TAP buffer.
KTAP logging: Similar to S-TAP logging; increases the debug output from K-TAP.
Run Diagnostics: Runs the S-TAP diagnostics script and uploads the results to the Guardium system.
Upload Linux Modules: Linux only. Uploads the local custom-built K-TAP module.
Record Replay Log: Records all data to a file on the DB server (RECORD) and sends data to the collector (REPLAY).
Revoke Ignore: All sessions ignored by a revokable ignore policy are un-ignored, and traffic capture starts again
for those sessions.
Run Database Instance Discovery: Runs the discovery process once, immediately. (If enabled to run automatically,
it runs every 24 hours by default.)
Edit S-TAP configuration: Opens the S-TAP configuration window. Parameters that do not appear in the GUI are advanced parameters. Do not
modify them unless you are an advanced user, or have been instructed to modify them by Guardium Technical Support.
See GUI parameters:
Linux and UNIX systems: General parameters
Linux and UNIX systems: Configuration Auditing System (CAS) parameters
Linux and UNIX systems: Application server parameters
Linux and UNIX systems: Guardium Hosts (SQLGuard) parameters
Linux and UNIX systems: Inspection engine parameters
Show S-TAP Event Log: Click to open the S-TAP event log, where you can see events such as connect, disconnect, GIM server configuration, and
so on. This log is very useful for troubleshooting.
Add to Schedule checkbox Adds the individual S-TAP to the scheduled verification.
Revoke All Ignored Sessions checkbox: A database could be running many sessions, some of which are currently ignored. Clear this option to stop ignoring traffic
from that server.
The Guardium Discovery Agent is a software agent that is automatically installed with the S-TAP package on a database server. The instance discovery agent reports database
instance, listener, and port information to the Guardium system. Discovery does not find and report every detail of the DB instances on the server.
The discovery bundle is not installed in a slave zone or WPAR; the discovery agent running on the global zone collects information from other zones.
Note: On Solaris zones architecture, when DB2® instances are running on slave zones, Discovery does not discover the DB2 shared memory parameters.
Newly discovered database instances can be seen in the Discovered Instances report. From this report, datasources and inspection engines can quickly be added to
Guardium using the Actions menu.
If databases on the database server are not operational (started) or are added later, the Discovery Agent can still discover these instances by running the Run Discovery
Agent command from the S-TAP Control window (Manage > Activity Monitoring > S-TAP Control. Click , and select Run Database Instance Discovery).
S-TAP Discovery can be run manually but this action is not suggested. The main reason to run it manually is for debugging purposes. If a new request comes in from the
user interface while a scheduled discovery is running, the new request is ignored.
You can run Discovery from a local command line on the database server (/usr/local/guardium/guard_stap/guard_discovery), in one of three ways:
with the --update-tap flag: edits the guard_tap.ini to add or update inspection engines
with the --send-to-sqlguard flag (or with no flag, this is the default): sends the found changes to the Guardium system, where they appear in the Discovered
Instances report
with the --print-output flag: prints the found changes to stdout (for debugging)
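The three invocations above can be sketched as a dry-run wrapper that only prints the command line it would run (guard_discovery exists only on a database server with S-TAP installed; the path is the default shell-install location from this page).

```shell
# Print the guard_discovery command line for a given mode, without running it.
discovery_cmd() {
  base=/usr/local/guardium/guard_stap/guard_discovery
  case "$1" in
    update)  echo "$base --update-tap" ;;         # edit guard_tap.ini in place
    print)   echo "$base --print-output" ;;       # debug output to stdout
    *)       echo "$base --send-to-sqlguard" ;;   # default: report to Guardium
  esac
}

discovery_cmd update
discovery_cmd print
discovery_cmd
```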
If the S-TAP is running as "user" (and not guardium), the discovery functionality is limited. The following message displays:
Note: S-TAP Discovery is not supported on AIX 5.3 because static libraries are needed on that platform.
Note: To avoid cases where S-TAP discovery cannot open the Informix database, it is recommended to start Informix databases by using the full path to the
executable.
The S-TAP Discovery application parameters should be left at their default values, except by advanced users. The discovery application parameters are described in
Linux and UNIX systems: Discovery parameters.
Procedure
1. Navigate to Manage > Activity Monitoring > S-TAP Control.
2. In the row of the S-TAP, click . The S-TAP Configuration window opens.
3. Scroll to the bottom of the inspection engines, and click next to Add Inspection Engine....
4. Select the protocol and enter the port range. The window refreshes with the relevant parameters, some with their default values.
5. Configure all required parameters, and click Add. If you are missing parameters, the system informs you what is missing.
Verification checks sniffer operation and communication between the Guardium system and the inspection engines. You can enable verification for all S-TAP clients on
your system, or individual S-TAP clients, or individual inspection engines.
Standard verification
Checks the sniffer operation and the communication between the S-TAP and the inspection engine. It submits an invalid login request and verifies that the
appropriate error message is returned.
Advanced verification
Use advanced verification to avoid failed login requests and to manage individual IEs. To avoid failed login requests, you must identify or create a datasource
definition associated with the target database. The datasource definition includes credentials, which the verification process uses to log in to the database. It then
submits a request to retrieve data from a nonexistent table in order to generate an error message.
For both types of verification requests, the results are displayed in a new dialog that provides information about the tests that were performed and recommended actions
for tests that failed.
The verification process attempts to log in to your database's S-TAP client with an erroneous user ID and password, to verify that this attempt is recognized and
communicated to the Guardium system.
Next the verification process checks whether it can connect to the selected inspection engine on the database server. It expects to receive a response that indicates a
failed login. If a different response is received, you might have to investigate further.
Some error messages from individual databases do not indicate a specific problem. For example, on several supported databases, the error code returned for a wrong port
can also mean that the database itself is not started.
View the verification results in the S-TAP Verification page (Manage > Reports > Activity Monitoring > S-TAP Verification page). Failed checks are shown first, with
recommendations for next steps. Checks that succeeded are shown in a collapsed section at the end of the list. In some situations, it might be useful to review the
successful checks in order to choose among possible next steps.
Procedure
1. Access Manage > Activity Monitoring > S-TAP Control.
2. Use these options:
Add All to Schedule: add all inspection engines for all displayed S-TAPs to verification.
Remove All from Schedule: remove all inspection engines for all displayed S-TAPs from verification.
Add to Schedule: add all inspection engines of the selected S-TAP client to the schedule.
If an S-TAP does not have the option All Can Control enabled, you can only change its status if your Guardium system is the primary system for this S-TAP.
3. Click Refresh.
4. To verify now, go to Manage > Activity Monitoring > S-TAP Verification Scheduler and click Run Once Now.
Procedure
1. Access Manage > System View > S-TAP Status Monitor.
2. Click anywhere in the row of the S-TAP.
The window refreshes with the individual inspection engines of this host.
3. To verify now, select one or more inspection engines and click Verify.
4. Configure advanced verification.
a. Click one inspection engine, and click Advanced Verify.
b. Optionally, under Datasource, select Show only matching S-TAP host or select a name from the Name drop-down list to search for a specific inspection
engine.
c. Click Close.
5. To add inspection engines to, or remove them from, verification:
a. Select one or more inspection engines.
b. Click Add to Schedule or Remove from Schedule.
Once a schedule is defined, you can click the Pause button to temporarily stop the verification process while keeping it active. Use the Run Once Now button to run the
verification once in real-time.
Procedure
Linux and UNIX systems: S-TAP Load Balancing models and configuration guidelines
Understand the S-TAP load balancing models, and choose the one appropriate to your setup
Each load balancing model is described here, along with its specific parameter requirements.
Failover
S-TAP sends traffic to one collector (primary) and fails over to one or more collectors (secondary, tertiary, and so on) as needed. The S-TAP agents are configured with a
primary and at least one secondary collector IP. If the S-TAP agent cannot send traffic to the primary collector for any reason, it automatically fails over to the
secondary. It continues to send data to the secondary host until either the secondary host becomes unavailable, or the primary host becomes available again. In the
first case, it fails over to the tertiary if one is defined. In the second case, S-TAP fails back from the secondary Guardium host to the primary Guardium host. You can
configure as many failover collectors as you want, although there is rarely a reason to define more than three. You can either define one collector as a standby failover
collector only, or define several failover collectors. When using a single standby, one failover collector is usually sufficient for 4-5 primary collectors. When using
several failover collectors, each one should run at a maximum of 50% capacity, so that there are always resources for additional load. Choose the setup that works best
with your architecture, database, and data center layout.
The S-TAP restarts each time configuration changes are applied from the active host.
In the S-TAP Control window, Details section: set Load Balancing to 0; In the Guardium Hosts section: add at least one secondary sqlguard_ip.
Additional failover configuration should be left at the default values, except by advanced users.
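The failover settings above might look like the following guard_tap.ini sketch. The section names and IP addresses are illustrative assumptions; the real file layout can vary by release.

```ini
[TAP]
participate_in_load_balancing=0   ; failover model
[SQLGUARD_0]
sqlguard_ip=10.10.9.240           ; primary collector
[SQLGUARD_1]
sqlguard_ip=10.10.9.241           ; secondary (failover) collector
```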
Before designating a Guardium system as a secondary host for an S-TAP, verify these items.
The Guardium system must be configured to manage S-TAPs. To check this and re-configure if necessary, see Configure Guardium system to Manage Agents.
The Guardium system must have connectivity to the database server where S-TAP is installed. When multiple Guardium systems are used, they are often attached
to disjointed branches of the network.
The Guardium system must not have a security policy that will ignore session data from the database server where S-TAP is installed. In many cases, a Guardium®
security policy is built to focus on a narrow subset of the observable database traffic, ignoring all other sessions. Either make sure that the secondary host will not
ignore session data from S-TAP or modify the security policy on the Guardium system as necessary.
Load balancing
This configuration balances traffic from one database across multiple collectors. This option might be good when you must monitor all traffic (comprehensive monitoring) of
an active database. (Note that for outlier detection, the collectors need to be under the same aggregator and central manager in order for the aggregator to process all
related data.) When the generated traffic is large and you need to keep the data online on a collector for an extended period, this method might be your best choice
because it performs session-based load balancing across multiple collectors. An S-TAP can be configured in this manner with up to 10 collectors.
Grid
With Grid, the S-TAP communicates with the collectors through a load balancer, such as F5 or Cisco. The S-TAP agent is configured to send traffic to the load balancer. The
load balancer forwards the S-TAP traffic to one of the collectors in the pool. You can also configure failover between load balancers for continuous monitoring in case a
load balancer fails.
Redundancy
In redundancy, the S-TAP sends its entire payload to multiple collectors. The S-TAP is configured with more than one collector (often only two) and
communicates identical content to each. This option provides full redundancy of the same logged data across multiple collectors. It can also be used for logging data
and alerting on activity at different levels of granularity.
This mode uses extra threads and K-TAP buffers to increase throughput. Set participate_in_load_balancing to 4. See Linux and UNIX systems: Increasing S-TAP
throughput.
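The single setting named above might appear in guard_tap.ini as follows; the section name is an assumption, and the exact syntax can vary by release.

```ini
[TAP]
; split traffic across the configured collectors, with one S-TAP thread
; and one K-TAP buffer per Guardium host
participate_in_load_balancing=4
```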
Linux and UNIX systems: Set up S-TAP authentication with SSL certificates
Set up authentication between an S-TAP server and Guardium system.
S-TAPs can be configured to connect only to a certain group of machines that authenticate with a given certificate or set of certificates. These certificates can either be
generated locally on the Guardium system and sent to the Certificate Authority (CA) for signing, or can be created at the CA and installed whole on the Guardium
system.
Linux and UNIX systems: Generating certificate signing request (CSR) on Guardium system
Use this procedure to generate a certificate signing request locally on the Guardium system, for sending to the Certificate Authority (CA) for signing.
Linux and UNIX systems: Installing an SSL certificate generated outside of the Guardium system
Use this procedure to install the SSL certificate that was created by the CA.
Linux and UNIX systems: Generating certificate signing request (CSR) on Guardium system
Use this procedure to generate a certificate signing request locally on the Guardium system, for sending to the Certificate Authority (CA) for signing.
Procedure
1. Log into your Guardium system with CLI.
2. Enter: cli> create csr sniffer
3. Enter the requested data.
4. Copy everything from the -----BEGIN CERTIFICATE REQUEST----- to the -----END CERTIFICATE REQUEST----- into a file and send it to your CA for signing.
The CA signs the certificate and sends you back a public key.
When you store the certificate, the system asks you to confirm that you want to store it, and when you confirm, it stores it.
Parent topic: Linux and UNIX systems: Set up S-TAP authentication with SSL certificates
Linux and UNIX systems: Installing an SSL certificate generated outside of the Guardium
system
Use this procedure to install the SSL certificate that was created by the CA.
The CA sends you two files: your signed certificate and the public cert for your CA.
Have these files handy, either to import (via scp, ftp, and so on) to the Guardium system or to copy-paste into the CLI interface on the Guardium system.
Procedure
1. Log in to the Guardium system via CLI.
2. Store the private key by entering: cli> store certificate keystore [import | console]. The import option takes the saved file; the console option lets you copy and
paste the contents of the file into your console interface. It asks for the password that the file was saved with. Either you provided this password to the CA when the
certificate was created or, more likely, the CA provided you with a password when they sent your files.
Parent topic: Linux and UNIX systems: Set up S-TAP authentication with SSL certificates
Linux and UNIX systems: Configuring the S-TAP to use x.509 certificate authentication
About this task
First, take note of what you have assigned as the CA and the CN of the certificate. If you don't remember, use the CLI command show system certificate to display the
values.
You need the CN of the cert installed on the Guardium system and the public key for the CA that signed the certificate on the Guardium system. You can also use a
Certificate Revocation List signed by the same CA that signed the Guardium system cert, but it is not necessary.
Procedure
1. Copy the public key (and the CRL, if wanted) that the CA sent you to a directory on the S-TAP host. Take note of this directory.
2. Set guardium_ca_path=[path-to-CA.pem]
3. Set sqlguard_cert_cn=[the full CN or partial CN (using * as a wildcard) of the Guardium system]
4. If you want to use a certificate revocation list at this time, set guardium_crl_path=[path-to-crl.crl] It should look like:
guardium_ca_path=/var/tmp/pki/Victoria_QA_CA.pem
sqlguard_cert_cn=sample1_qa.victoria
guardium_crl_path=/var/tmp/pki/Victoria_QA_CA.crl
5. Set tls=1.
6. Restart the S-TAP. You are now connected using OpenSSL.
Parent topic: Linux and UNIX systems: Set up S-TAP authentication with SSL certificates
You can configure any S-TAP to create multiple threads to increase data throughput. If the S-TAP configuration file defines more than one Guardium system, a
thread can be created for each Guardium system. In v10.1.4 and higher, S-TAP creates extra threads matching the number of Guardium systems, up to 10 threads. When
the participate_in_load_balancing parameter is set to 4, the K-TAP creates a similar number of buffers matching the number of Guardium systems, up to 5 buffers. The
K-TAP alternates between the buffers, placing entire packets in each buffer. Each S-TAP thread reads from a different K-TAP buffer and sends traffic data to a single
Guardium system.
In this configuration, no single Guardium system receives all the data from the S-TAP. The distribution is similar to that used when participate_in_load_balancing is set to 1.
Attention: Prior to V10 GPU200, when a Guardium system becomes unavailable, no failover is provided. Data that was being sent to a Guardium system is lost until the
system becomes available or the configuration is changed.
Attention: Prior to V10 GPU300, if the S-TAP configuration file defines more than one Guardium system, a thread can be created for each Guardium system. This feature is
activated only when participate_in_load_balancing parameter is set to 4.
Encrypted and unencrypted A-TAP traffic cannot be sent to the same Guardium system. This is similar to the situation when participate_in_load_balancing is set to 1.
Kerberos works in a mutual authentication mode, verifying both the identity of the user that is requesting authentication and the server that is providing the requested
authentication. The Kerberos authentication mechanism issues tickets for accessing network services. These tickets contain encrypted data, including an encrypted
password, that confirms the user's identity to the requested service.
For auditing and alerting, it’s important to know which database user performed an action. When login is done with a Kerberos ticket, determining the database user is
not always straightforward.
Guardium S-TAP only sees network traffic and passes it on to the sniffer on the Guardium appliance. When a Kerberos ticket is used for login, S-TAP passes that Kerberos
ticket along to the sniffer. For some database server types, the sniffer can determine the database user from the Kerberos login traffic and no additional information is
required. For other database server types, the sniffer needs some assistance. That function is performed by the S-TAP Kerberos plugin.
The S-TAP Kerberos plugin is not enabled by default; it requires additional configuration.
If you use Kerberos at all, configure the plugin. There is no performance implication or other downside to configuring the plugin, just in case you need it.
The data flow between the database, the Guardium sniffer and the Guardium audit data is:
1. S-TAP captures the Kerberized database login packet (along with other activity) and sends it to the Guardium appliance.
2. If the sniffer can determine the user name from the Kerberos ticket, it parses it.
3. If the sniffer cannot determine the user name from the Kerberos ticket, it sends the Kerberos ticket, along with a request for the database user, to the S-TAP. S-TAP
checks whether a Kerberos plugin is configured. If there is a Kerberos plugin configured, S-TAP gives the ticket to the plugin and the plugin attempts to determine
the database user.
DB2: No
Oracle: Yes
Cassandra: Yes
HBase: Yes
MongoDB: No
HDFS: No
Big SQL: No
Hive: Yes
Impala: No
Parent topic: Linux and UNIX systems: Kerberos-authenticated database traffic
Procedure
1. For a default shell install: kerberos_plugin_dir=/usr/local/guardium/guard_stap
2. For a default GIM install (exact path varies with the software release in use): kerberos_plugin_dir=/usr/local/IBM/modules/STAP/10.1.3_r101299_1-1495145548
3. Default (plugin is disabled): kerberos_plugin_dir=NULL
# Kerberos values
KRB5RCACHETYPE=none
KRB5_KTNAME=/path/to/kerberos/krb5.keytab
KRB5_CONFIG=/path/to/kerberos/krb5.conf
# Plugin values
KRB5_PLUGIN_CCACHE=/path/to/kerberos/krb5cc_*
KRB5_PLUGIN_GSSAPI_LIBRARY=/path/to/lib/libgssapi_krb5.so
#KRB5_PLUGIN_DEBUG=0
Lines beginning with a #, as well as blank lines, are treated as comments and ignored. Invalid entries cause errors and prevent the Kerberos plugin from running.
When any configuration entry is changed, the S-TAP must be restarted for the updated values to take effect.
KRB5RCACHETYPE
KRB5_PLUGIN_GSSAPI_LIBRARY=/usr/lib64/libgssapi_krb5.so
KRB5_PLUGIN_GSSAPI_LIBRARY=/opt/freeware/lib64/libgssapi_krb5.so
Alternately, if the library is located on the standard library search path for the system, you can specify only the file name, for example:
KRB5_PLUGIN_GSSAPI_LIBRARY=libgssapi_krb5.so
Note: Any libraries that are needed by the GSSAPI library (typically libkrb5.so, libk5crypto.so, libkrb5support.so) must also be on the system.
Important: If the Kerberos libraries are NOT in the standard library paths, you must use the KRB5_PLUGIN_GSSAPI_LIBRARY parameter. Uncomment it and
update its value with the full path of libgssapi_krb5.so.
KRB5_PLUGIN_DEBUG
This parameter is used for debugging the plugin only. For normal operation this line must be commented out, or plugin performance is impacted.
Procedure
1. In the guard_tap.ini file, change the value of the kerberos_plugin_dir parameter to the full path of the Guardium S-TAP installation directory, since that is where the plugin is located.
GIM installation: kerberos_plugin_dir=<guardium_base>/modules/STAP/current
S-TAP shell installation: kerberos_plugin_dir=<guardium_base>/guard_stap
2. Configure these parameters in the guardkerbplugin.conf file, which is also located in the S-TAP installation directory:
KRB5_KTNAME=<full path to kerberos krb5.keytab file>
KRB5_CONFIG=<full path to kerberos krb5.conf file>
Optional parameters as described above. This configuration parameter for ticket cache might be required if the Kerberos plugin does not recognize the user.
This parameter accepts wild cards as there is usually more than one cache file. V10.1.4 and higher: You can specify multiple paths, separated by colons.
KRB5_PLUGIN_CCACHE=<full path to kerberos krb5cc_* files:additional full path to kerberos krb5cc_* files:etc>
Note: In Guardium releases before V10.1.2, the parameters allow_weak_crypto = 1 and clockskew = 600 were required. In most cases, these parameters are
no longer required.
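Putting step 2 together, a complete guardkerbplugin.conf might look like the following sketch. All paths here are illustrative assumptions; substitute the locations from your own Kerberos and S-TAP installation:

```
# Required: keytab and Kerberos configuration (paths are examples)
KRB5_KTNAME=/etc/krb5.keytab
KRB5_CONFIG=/etc/krb5.conf
# Optional: ticket caches; wildcards allowed. V10.1.4+: colon-separated paths.
KRB5_PLUGIN_CCACHE=/tmp/krb5cc_*
# Optional: GSSAPI library, needed only if not on the standard library path
KRB5_PLUGIN_GSSAPI_LIBRARY=/usr/lib64/libgssapi_krb5.so
# Debug line must stay commented out for normal operation
#KRB5_PLUGIN_DEBUG=0
```

Remember that the S-TAP must be restarted after any change to this file.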
Linux and UNIX systems: Finding the Kerberos configuration parameters for Oracle
For Oracle Kerberos, locate the Kerberos keytab and configuration file locations in sqlnet.ora.
Procedure
1. Enter: grep -i KERBEROS $ORACLE_HOME/network/admin/sqlnet.ora
Output is similar to:
SQLNET.AUTHENTICATION_KERBEROS5_SERVICE = oracle
SQLNET.KERBEROS5_CONF = /home/oracle11/krb5/krb5.conf
SQLNET.KERBEROS5_REALMS = /home/oracle11/krb5/krb.realms
SQLNET.AUTHENTICATION_SERVICES= (BEQ,KERBEROS5)
SQLNET.KERBEROS5_CLOCKSKEW = 600
SQLNET.KERBEROS5_KEYTAB = /home/oracle11/krb5/keytab
SQLNET.KERBEROS5_CONF_MIT = TRUE
Linux and UNIX systems: Finding the Kerberos configuration parameters for Sybase
Use the Sybase environment variables to get the Kerberos information.
Procedure
1. Enter: klist -k
Output is similar to:
The A-TAP mechanism monitors communication between internal components of the database server. The data is unencrypted in the application layer, where A-TAP picks
it up and sends it to K-TAP. K-TAP acts as a proxy to pass the data to S-TAP, and from there it is sent to the Guardium collector.
This figure shows where A-TAP fits in with the overall architecture on the database server.
A-TAP is included in every S-TAP but must be specifically configured for each database that requires it.
A-TAP is required when DBMS encryption in motion is used, but there may be other internal database implementation details such as shared memory that require it.
Informix and DB2 on Linux integrate with Guardium more closely through exit libraries, which are the recommended method for shared memory support when applicable.
Restrictions: A-TAP is not supported in an environment where a 32-bit database is located on a 64-bit server.
Monitoring restrictions: A-TAP does not support redaction. Blocking is supported for Linux kernels at 2.6.36 or later releases.
Linux and UNIX systems: Preparing for A-TAP configuration and maintenance
Configuring and maintaining A-TAP requires coordination with both the database and system administrators.
Linux and UNIX systems: A-TAP configuration and activation
Configure and activate each A-TAP.
Linux and UNIX systems: A-TAP activate, deactivate and DB stop, restart guidelines
Understand when to activate and deactivate A-TAP, and stop or restart the DB.
Linux and UNIX systems: guardctl utility commands for A-TAP
The guardctl utility is the A-TAP management tool. Understand these commands before starting to work with A-TAPs.
Linux and UNIX systems: guardctl return codes
The guardctl error codes clarify error conditions that occur, in particular when you call the guardctl script to manage A-TAP instances from another script.
Linux and UNIX systems: Database-specific guardctl parameters
Each database type has specific guardctl requirements.
Linux and UNIX systems: Deactivating A-TAP
You must deactivate A-TAP before upgrading the database OS. You also need to deactivate the A-TAPs before upgrading or uninstalling S-TAP, regardless of whether it
was installed via GIM, RPM, or the shell installer.
Linux and UNIX systems: Configuring and Activating A-TAP in Special Environments
Zones, WPARs, Teradata, and Oracle require additional configuration.
Linux and UNIX systems: Troubleshooting A-TAP configuration issues
This section summarizes common mistakes made during A-TAP configurations, their symptoms, and how to avoid them.
Linux and UNIX systems: Preparing for A-TAP configuration and maintenance
Configuring and maintaining A-TAP requires coordination with both the database and system administrators.
In addition, you must work with the DBA to get the required parameters to input into the utility. Details of the needed parameters are in Linux and UNIX systems:
Database-specific guardctl parameters. For ongoing maintenance, your organization must have documented procedures in place to handle the activation and deactivation
of A-TAP during OS and database upgrades. See Linux and UNIX systems: A-TAP activate, deactivate and DB stop, restart guidelines. For clustered environments, you need
to configure and activate A-TAP on all nodes.
S-TAP is installed.
If the software is installed with GIM, verify that GIM_ROOT_DIR is the absolute path to the modules, for example /usr/local/guardium/modules.
Procedure
1. Verify ktap_installed=1 in the guard_tap.ini file.
2. Log off from all active database sessions and stop the database. It is very important that all processes running under the database admin user are stopped. For
example, on Oracle, verify with ps -ef | grep oracle
3. As root user, authorize the database administrative user to log traffic using the guardctl utility with the authorize-user command as follows:
<guardium_base>/xxx/guardctl authorize-user <user-name>
4. Once S-TAP is installed, add the database OS user to the guardium group (created by the S-TAP install script). Users can be added by the system administrator
using the usermod utility. Some platforms require the user to be completely logged off for this change to take effect. For example, where oracle is the user ID of
the OS user for the Oracle database and db2inst1 is the user ID of the OS user for the DB2 database:
On Solaris, the user has to be completely logged off from the system: no process should be running under this user ID.
To verify this, use the following command (assuming the user is oracle):
ps -efU oracle
If the output is empty, use usermod to add the user to the group. If the user belongs to groups other than dba, they should be listed as well; verify the user's
current groups with the following command:
id -a oracle
Once the user is added to the guardium group, the encrypted traffic should be logged for this user.
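The verification in step 4 can be scripted. The following is a minimal sketch, assuming the group is named guardium; the in_group helper and the sample group list are illustrative, not part of the product:

```shell
#!/bin/sh
# Check whether a space- or comma-separated group list contains a group.
in_group() {
    printf '%s\n' "$1" | tr ',' ' ' | tr -s ' ' '\n' | grep -qx -- "$2"
}

# On a real system the list would come from: id -Gn oracle
# Here we use a sample string so the sketch is self-contained.
oracle_groups="oinstall dba guardium"

if in_group "$oracle_groups" guardium; then
    echo "user is in group guardium; encrypted traffic will be logged"
else
    echo "add the user with usermod (list the existing groups plus guardium)"
fi
```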
5. Store the configuration parameters:
a. See Linux and UNIX systems: Database-specific guardctl parameters to determine the parameters needed for your database type and platform.
b. Store configuration for the database instance using the store-conf command of the guardctl utility as follows. As root user:
<guardium_base>/xxx/guardctl db_instance=<instance> [<name>=<value> ...] store-conf
Note: In Guardium V10.1 and higher, instrumentation is done automatically during activate; there is no explicit instrumentation.
6. Activate A-TAP.
a. As root user: Enter <guardium_base>/xxx/guardctl db_instance=<instance> activate
Note: Optionally, you can activate A-TAP by selecting the Encryption checkbox of the inspection engine configuration in the Guardium GUI, though there is no
advantage to activating it in the GUI. This option is not available on Linux platforms.
b. Confirm that the instances are activated using the list-active command of the guardctl utility: <guardium_base>/xxx/guardctl list-active
Linux and UNIX systems: A-TAP activate, deactivate and DB stop, restart guidelines
Understand when to activate and deactivate A-TAP, and stop or restart the DB.
Scenario: After installation of UNIX A-TAP in an Oracle cluster environment.
Instructions: All database instances, as well as all inter-cluster processes, must be restarted.
guardctl utility
To use the guardctl utility, you must log in as root, since it requires superuser privileges. The guardctl utility is installed under the <guardium_base>/guard_stap
directory, where <guardium_base> is the directory where the Guardium software is installed. In the case of a GIM installation, guardctl is installed under
<guardium_base>/modules/ATAP/current/files/bin.
Syntax
db_instance: ${db_instance}
db_user: ${db_user}
db_base: ${db_base}
db_home: ${db_home}
db_version: ${db_version}
db_type: ${db_type}
is_active: ${is_active} ("yes" or "no")
is_instrumented: ${is_db_instrumented} ("yes" or "no")
msg: some string
rv: ${retval}
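When calling guardctl from another script (v10.1.3 and higher with -v or -qv), the Name/Value output above can be parsed line by line. A minimal sketch; the sample output text here is invented for illustration:

```shell
#!/bin/sh
# Pull one field out of guardctl's "name: value" verbose output.
get_field() {
    printf '%s\n' "$2" | sed -n "s/^$1: //p"
}

# Sample output in the syntax shown above (not captured from a real system).
out='db_instance: orcl
db_user: oracle
is_active: yes
rv: 0'

echo "is_active=$(get_field is_active "$out") rv=$(get_field rv "$out")"
```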
Commands
Command Description
activate Activates A-TAP for the specified database instance using the stored parameters. v10.1.3 and higher: Outputs Name/Value pairs if -v or -qv specified.
From v10.1.3, activating an instance that's already active (whether the DB is running or not) does not generate an error.
deactivate Deactivates A-TAP for the specified, single database instance. v10.1.3 and higher: Outputs Name/Value pairs if -v or -qv specified.
From Guardium V10.1.3, deactivating an instance that's already inactive (whether the DB is running or not) does not generate an error.
deactivate-all Deactivates A-TAP for a specified list of database instances. If no database instances are specified, all active A-TAPs are deactivated.
v10.1.3 and higher: Outputs Name/Value pairs for each instance, if -v or -qv specified. You can optionally specify the db-type to deactivate a group
(for example, all Oracle instances). For an additional name/value pair, specify "overall_rv={0,1}" at the end. Returns success (0) if rv=0 for every instance.
Returns failure (1) if at least one instance reports rv != 0.
deinstrument Removes instrumentation for the specified Oracle DB. Not required from v10.1 and higher; if deinstrumentation is required, it is done
automatically during deactivate. v10.1.3 and higher: Outputs Name/Value pairs if -v or -qv specified.
From v10.1.3, deinstrumenting an instance that is not instrumented does not generate an error, even if the DB is running, regardless of
activation status.
get-statistics Gets A-TAP statistics. The statistics include information about which A-TAPs are active, which are inactive, and which are in an incorrect
in-between state (this shouldn't happen; it usually occurs when someone updates the DB while A-TAP is active).
help Default command, prints the list of supported commands, parameters and their default values.
instrument Explicitly creates relinked instrumented Oracle. If instrumentation is required, it is usually done automatically during activate. Manual
instrumentation is only required for Oracle versions <= 10 on AIX.
Instrumenting an already instrumented instance returns an error. v10.1.3 and higher: Outputs Name/Value pairs if -v or -qv specified.
is-active Returns 1 if there is at least one A-TAP activated instance. Otherwise, returns 0.
is-user-authorized Checks whether the db-user (running A-TAP) is authorized to the guardium group, and can log database traffic to K-TAP/S-TAP.
list-active Lists database instance user names of all active A-TAP database instances. v10.1.3 and higher: Outputs Name/Value pairs if -v or -qv
specified.
list-configured Lists database instances with configured but inactive A-TAPs. v10.1.3 and higher: Outputs Name/Value pairs if -v or -qv specified.
repair Run this command if the DB is (accidentally) upgraded while the A-TAP is active. It renames the -guard-original and -guard-instrumented
files. Returns success on successful repair or if repair is not necessary. Does not touch the current DB executable. V10.1.3 and higher:
Outputs Name/Value pairs if -v or -qv specified. From v10.1.4, it is called automatically on activate and deactivate.
restore-active-ataps Restores the active state of the A-TAPs previously saved via save-active-ataps. If an instance fails to activate (due to DB running or some
other error), then the remaining instances still attempt to activate. This command can be run multiple times without problem, since
activating an already active instance is not an error. Introduced in v10.1.4.
save-active-ataps Saves the configurations for the currently active A-TAPs in a single file so that they can be restored later to an active state. Useful prior to
deactivate-all when preparing to upgrade DBs. Introduced in v10.1.4.
2 is-active called on unrecognized instance: Returned by is-active when the db-instance specified is not known to guardctl and as such cannot be determined to be active or not.
20 attempted to activate instance while database was running, but not yet active: Returned by activate to indicate that the DB instance is running, so activation could not take place.
21 attempted to deactivate instance while database was running, but not yet inactive: Returned by deactivate to indicate that the DB instance is running, so deactivation could not take place.
22 user is not authorized: Returned by instrument and activate to indicate that the db-user specified is not authorized as a member of the 'guardium' group. Run authorize-user to correct.
23 db-home parameter doesn't match db_install_dir parameter in guard_tap.ini: Returned by store-conf and activate to indicate that the current guard_tap.ini doesn't have an IE configured with a db_home that matches the db_install_dir A-TAP parameter. One of those needs to be adjusted to the correct value or S-TAP may not run.
24 attempt to deactivate an instance where the executable is neither an ATAP executor nor the instrumented binary: Returned by deactivate. This instance looks like it should be activated, but the binary isn't what it should be if it is. The DB executable could have been updated while A-TAP was active. Run the repair command to fix the issue and activate again.
25 attempt to activate atap when encryption=1 set in guard_tap.ini: Returned by activate when the encryption parameter is set to 1 in the IE. Do not activate with guardctl; use the encryption parameter in the ini instead.
26 db executable file not found: Returned by activate, deactivate, instrument, deinstrument, store-conf, prepare-libs, and repair. The DB executable is missing (e.g. the oracle binary itself is not in the path specified). Check the path parameters used when configuring the instance.
27 instrumentation required but not done: Returned by activate and store-conf when instrumentation is required, but has not already been done. Oracle instrumentation is now automatically done in most cases, but still needs to be manually specified for AIX and Oracle versions <= 10.
28 is-active reports instance is not active: Returned by is-active. Informational only. The db-instance specified is not active, or if no instances were specified, no instances are active.
29 deactivate-all not complete success: Returned by deactivate-all when at least one active instance could not be deactivated.
43 instrumentation error, cannot save original binary: Returned by instrument when the -guard-original file already exists. Either A-TAP is currently active with instrumentation, or A-TAP is inactive but the instrumentation is still active. Deactivate and deinstrument before a subsequent instrument and activate.
44 attempt to instrument while instance running and not already instrumented: Returned by instrument when the DB instance is currently running. Stop the DB instance before attempting to instrument again.
45 attempt to instrument while A-TAP is active and not already instrumented: Returned by instrument when A-TAP is already active, but instrumentation is not active. This can happen when switching from an Oracle configuration that doesn't require instrumentation to one that does. Deactivate A-TAP before attempting to instrument again.
46 attempt to instrument an already instrumented instance: Returned by instrument when the instance is already instrumented. If instrumentation needs to be redone, deinstrument first.
94 no atap library supporting this db: Returned by instrument, deinstrument, prepare-libs, activate, deactivate, repair, list-active, and list-configured. Usually indicates that an unknown error occurred.
95 system error, cannot find group: Returned by activate. The guardium group doesn't appear to be known to this system.
96 system error, cannot create group: Returned by authorize-user. The guardium group did not exist and an attempt to create the group failed.
98 platform unsupported: Returned by instrument, deinstrument, prepare-libs, activate, deactivate, repair, list-active, list-configured, store-conf. The DB you're trying to use with A-TAP is not supported on this platform (e.g. DB2, Informix, Teradata, or Mongo on anything but Linux).
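A wrapper script that drives guardctl can translate these return codes into log messages. A minimal sketch covering a few of the codes above; the message wording is ours, not guardctl's:

```shell
#!/bin/sh
# Map selected guardctl return codes (see table above) to short messages.
explain_rv() {
    case "$1" in
        0)  echo "success" ;;
        20) echo "cannot activate: DB instance is running" ;;
        21) echo "cannot deactivate: DB instance is running" ;;
        22) echo "db-user not in guardium group; run authorize-user" ;;
        24) echo "binary mismatch; run repair, then activate again" ;;
        29) echo "deactivate-all: at least one instance failed" ;;
        *)  echo "guardctl returned $1; see return code table" ;;
    esac
}

# Typical use after a guardctl call:
#   <guardium_base>/guard_stap/guardctl db_instance=orcl activate
#   explain_rv $?
explain_rv 22
```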
db_user: Oracle user name. Use the database instance user name.
db_type: oracle
db_base: Database instance user home directory. The value for db_base must match the correct path for $ORACLE_BASE or the database instance user home directory. It cannot be ~DB_USER.
db_version: The database version. Run SQL> SELECT * FROM V$VERSION
db_use_instrumented: no/yes. A-TAP activation uses the relinked version of Oracle previously created with the instrument command of guardctl. For S-TAPs at v10.1 and higher, instrumentation is done automatically with the activate command or through the Guardium UI.
db_bits: 32 or 64. DB instance architecture (32 for 32-bit, 64 for 64-bit). Required only if A-TAP is not able to recognize the architecture.
Parent topic: Linux and UNIX systems: Database-specific guardctl parameters
db_user: Sybase user name. Use the database instance user name.
db_instance: Sybase instance name. Sybase Server instance name. This parameter is used to name the A-TAP instance within guardctl.
db_type: sybase
> go
db_home: Points to where the database is installed. Same as db_base. This is the basis for how we look for the DB binary. It can usually use the value of db_base, though it's immediately apparent when activating if it's wrong (guardctl complains about not finding the DB binary).
db_base: Database instance user home directory. This needs to match db_install_dir in some IE in the guard_tap.ini. Do not use the ~DB_USER shortcut; use the full path instead. If you aren't specifying db_home separately, use the value for db_base as the value for db_home.
db_bits: 32 or 64. DB instance architecture (32 for 32-bit, 64 for 64-bit). Required only if A-TAP is not able to recognize the architecture.
db-tcp-min-port: 0 to any integer. Low end of TCP port range to intercept. Specify if you want real IPs reported for encrypted sessions. There are potential performance impacts in this mode, as well as the added complication to the A-TAP setup of specifying the port range. Leave blank to use the non-specific IP mode.
db-tcp-max-port: 0 to any integer. High end of TCP port range to intercept. Specify if you want real IPs reported for encrypted sessions. There are potential performance impacts in this mode, as well as the added complication to the A-TAP setup of specifying the port range. Leave blank to use the non-specific IP mode.
Parent topic: Linux and UNIX systems: Database-specific guardctl parameters
db_type: db2
db_base: Database instance user home directory. The value for db_base must match the correct path for the DB instance user home directory. It cannot be ~DB_USER. Required where db_base is not the same as db_home.
db_bits: 32 or 64. DB instance architecture (32 for 32-bit, 64 for 64-bit). Required only if A-TAP is not able to recognize the architecture.
db2-shmsize: 131072. DB2 shared memory size. Required when the value is different from the default.
db2-c2soffset: 61440. DB2 shared memory client area offset. Required when the value is different from the default.
db2-header-offset: 20. DB2 shared memory header offset. Required when the value is different from the default.
Parent topic: Linux and UNIX systems: Database-specific guardctl parameters
db_type: informix
db_base: Home directory of db_user. The value for db_base must match the correct path for the DB instance user home directory. It cannot be ~DB_USER. Required where db_base is not the same as db_home.
db_type: postgres
db_version: The database version. Run pg_ctl --version
db_base: Home directory of db_user. The value for db_base must match the correct path for the DB instance user home directory. It cannot be ~DB_USER. Required where db_base is not the same as db_home.
db-tcp-min-port: 0 to any integer. Low end of TCP port range to intercept. Required when using real IPs.
db-tcp-max-port: 0 to any integer. High end of TCP port range to intercept. Required when using real IPs.
Parent topic: Linux and UNIX systems: Database-specific guardctl parameters
Procedure
1. Make sure the database is stopped. Log off from all active database sessions.
2. Deactivate A-TAP for the database:
general example
<guardium_base>/xxx/guardctl --db-instance=<instance-name> deactivate
Greenplum example
/opt/guardium/guard_stap/guardctl --db-type=greenplum --db-home=/usr/local/greenplum-db-4.3.4.0 --db-user=gpadmin
--db-instance=greenplum --db-base=/usr/local/greenplum-db-4.3.4.0 deactivate
Linux and UNIX systems: Configuring and Activating A-TAP in Special Environments
Zones, WPARs, Teradata, and Oracle require additional configuration.
Linux and UNIX systems: Installing and activating A-TAP in Zones and WPARs environment
Linux and UNIX systems: Deactivate and uninstall A-TAP in Zones and WPARs environment
Linux and UNIX systems: Installing and activating A-TAP in Zones and WPARs environment
About this task
Procedure
1. Install STAP/KTAP on the master/global Zone/WPAR by the normal method.
2. For Solaris Zones, for each sub-zone where Oracle is installed, make sure the Guardium device is mapped:
zoneadm -z <zonename> halt
zonecfg -z <zonename>
<zonename>> add device
<zonename>device> set match=/dev/ktap_xxx (for Solaris 10; ktap_xxx is the filename)
<zonename>device> set match=/dev/guard_ktap (for Solaris 11)
<zonename>device> end
<zonename>> verify
<zonename>> exit
zoneadm -z <zonename> boot
3. With multiple KTAP devices, repeat the steps for each KTAP device by using the name, ktap_xxxx (Solaris 10) or guard_ktap_x (Solaris 11).
4. Copy the entire A-TAP installation directory to a sub-Zone/sub-WPAR. Assuming Guardium software is installed on the master Zone/WPAR under
/usr/local/guardium, and there exists a writable directory /usr/local with enough free space on the sub-Zone/sub-WPAR: On the master/global Zone/WPAR: cd
/usr/local; tar -cvf - guardium | ssh root@subzonehost 'cd /usr/local && tar -xvf -'
5. Copy the A-TAP libraries to each sub-Zone/sub-WPAR, and activate it.
If an A-TAP is to be activated on the master Zone/WPAR, activate it normally using guardctl.
Note: Activation must be done using guardctl; it cannot be done by enabling the Encryption checkbox in the inspection engine section of the GUI or by
setting encryption=1 in the guard_tap.ini file.
If A-TAP will not be used on the master Zone/WPAR, use guardctl to prepare the libraries for use. On the master Zone/WPAR:
/usr/local/guardium/bin/guardctl --db_instance=<instance-name> --db_type=<database-type> --db_version=<database-
version> prepare-libs
Note: After A-TAP activation, if the database indicates that libguard-xxx.so cannot be found, re-check this step.
6. Install and activate A-TAP for database instances using 1 through 5 on each desired sub-Zone/sub-WPAR.
Note: A-TAP (guardctl) activation may complain and issue warnings about the following:
errors installing libraries under /usr/lib (since that directory belongs to the global/master zone)
not being able to change the guard_tap.ini to monitor oracle-guard instead of oracle (since the file is on the global zone)
not being able to restart S-TAP (since it is running only on the master zone)
7. Adjust the guard_tap.ini file in the master/global Zone/WPAR by manually editing it.
Change the appropriate db_exec_path line:
For Oracle on Solaris: set db_exec_path to oracle-guard-original instead of oracle
For Oracle on AIX: set db_exec_path to oracle-guard-instrumented instead of oracle
Change the files and directories referenced in the IE definitions ( db_install_dir and db_exec_file) so they are relative to the root directory of the WPAR and
not the global partition. (IE order, tap_identifier string, etc, should be identical in all the guard_tap.ini files.)
8. Restart S-TAP.
9. For Solaris, verify the guard_ktap link and permissions on each sub-Zone. This must be performed as root from the global/master Zone.
a. cd to the sub-zone device directory, for example: cd /export/home2/zones/iris3/dev
b. Verify that the KTAP device exists (if it does not, there was a problem with the installation in step 2): ls -l ktap_*
c. Verify that the guard_ktap symbolic link exists: ls -l guard_ktap
d. If it does not exist, create it (where ktap_xxxx is the device just listed): ln -fs ktap_xxxx guard_ktap
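Steps 9b through 9d can be combined into an idempotent check. The sketch below demonstrates the logic in a temporary directory so it is safe to run anywhere; on a real system, DEVDIR would be the sub-zone device directory (for example /export/home2/zones/iris3/dev) and the ktap_* file would be the actual device:

```shell
#!/bin/sh
# Demonstration: ensure guard_ktap is a symlink to the ktap_* device.
DEVDIR=$(mktemp -d)              # stand-in for the sub-zone /dev directory
touch "$DEVDIR/ktap_1234"        # stand-in for the real K-TAP device node

cd "$DEVDIR" || exit 1
dev=$(ls ktap_* | head -n 1)     # step 9b: find the ktap device
ln -fs "$dev" guard_ktap         # step 9d: (re)create the link; -f is idempotent
readlink guard_ktap              # verify: prints ktap_1234
```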
Note: ATAP, WPAR and encrypted traffic: with a WPAR/Zone, encrypted traffic and decrypted traffic have different IPs when this traffic goes to the analyzer. Thus
the db_user in WPAR/Zones is meaningless.
Parent topic: Linux and UNIX systems: Configuring and Activating A-TAP in Special Environments
Linux and UNIX systems: Deactivate and uninstall A-TAP in Zones and WPARs environment
About this task
Procedure
1. On every sub-Zone/sub-WPAR with A-TAP installed/active:
a. Deactivate (and deinstrument if necessary, for Oracle on AIX) all A-TAPs using guardctl following the steps in Linux and UNIX systems: Deactivating A-TAP.
b. Manually remove (rm -rf) the installation directory
c. Manually remove the ATAP libraries: find /usr/lib -type f -name 'libguard-*.so' | xargs rm -f
Note: Removing the libraries may give errors; these can be ignored.
2. Uninstall STAP/KTAP using the normal method
a. Remove the libraries: find /usr/lib -type f -name 'libguard-*.so' | xargs rm -f
b. On Solaris, remove the ktap device from each zone’s configuration:
c. Remove the ktap device file and link from each sub-Zone/sub-WPAR device directory, for example:
/export/home2/zones/iris3/dev cd /export/home2/zones/iris3/dev
rm -f ktap_xxxx guard_ktap
d. With multiple KTAP devices, repeat the steps for each KTAP device by using the name ktap_xxxx (Solaris 10) or guard_ktap_x (Solaris 11).
Parent topic: Linux and UNIX systems: Configuring and Activating A-TAP in Special Environments
Linux and UNIX systems: Upgrading A-TAP in Zones and WPARs environment
Procedure
1. For Solaris Zone:
a. On the master/global-zone, remove the previously installed K-TAP device.
<zonename>> verify
<zonename>> exit
zoneadm -z <zonename> boot
c. For Solaris sub-zones, remove the previous K-TAP device file and link from sub-zone device directory. Go to the sub-zone device directory, for example
/export/home2/zones/iris3/dev.
cd /export/home2/zones/iris3/dev
rm -f ktap_xxxx guard_ktap
<zonename>device> end
<zonename>> verify
<zonename>> exit
zoneadm -z <zonename> boot
b. Add the guard_ktap link and change permissions. Go to the sub-zone device directory, for example /export/home2/zones/iris3/dev:
cd /export/home2/zones/iris3/dev
ln -fs ktap_xxxx guard_ktap
chmod 0666 ktap_xxxx
chmod 0666 guard_ktap
c. Since there are multiple ktap devices, repeat the steps for each K-TAP device by using the name ktap_xxxx (Solaris 10) or guard_ktap_x (Solaris 11).
3. For AIX WPARs: on WPARs, change permissions on the K-TAP devices. Go to the WPAR device directory, for example /wpars/odin3/dev.
Parent topic: Linux and UNIX systems: Configuring and Activating A-TAP in Special Environments
Linux and UNIX systems: Configure and activate A-TAP steps for Teradata database
Step 1: Determine the user running gtwgateway and the path
For Example:
Path to gtwgateway is /usr/tgtw/bin/gtwgateway. This is the default value for the parameter tdc_gtwgateway and as such does not need to be specified.
For Example:
su11u1x64-tera:~ # ls -l /proc/4608/exe
/opt/teradata/tdat/pde/15h.00.00.07/bin/pdemain
Checking the inodes for this file and /usr/pde/bin/pdemain, we see that they are the same.
/opt/teradata/tdat/pde/15h.00.00.07/bin/pdemain
/usr/pde/bin/pdemain
Since the inodes are the same and the default value is --db-home=/usr/pde, the parameter in this case does not need to be specified. Otherwise, you can specify
--db-home=/opt/teradata/tdat/pde/15h.00.00.07 or --db-home=/usr/pde, since bin/pdemain in both paths is the same hardlinked file in this case.
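The inode comparison used above can be checked generically: two paths name the same file when their inode (and device) numbers match. A self-contained sketch using a temporary hardlink; on a Teradata host you would compare /usr/pde/bin/pdemain with the versioned path instead:

```shell
#!/bin/sh
# Return 0 if two paths have the same inode number (same underlying file).
same_inode() {
    [ "$(ls -i "$1" | awk '{print $1}')" = "$(ls -i "$2" | awk '{print $1}')" ]
}

# Demonstrate with a hardlink in a temp directory (stand-ins for the two
# pdemain paths, which are hardlinked on a typical Teradata install).
d=$(mktemp -d)
touch "$d/pdemain"
ln "$d/pdemain" "$d/pdemain_alias"

same_inode "$d/pdemain" "$d/pdemain_alias" && echo "same file"
```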
Step 5: Store the configuration for A-TAP using the parameters determined in steps 1 and 2.
For Example:
/usr/local/guardium/guard_stap/guardctl --db-instance=teradata
--tdc_gtwgateway=/usr/tgtw/bin/gtwgateway --db-type=teradata
Parent topic: Linux and UNIX systems: Configuring and Activating A-TAP in Special Environments
However, in case A-TAP was not properly deactivated prior to Oracle patch installation, DO NOT try to deactivate it after patch installation. Instead follow these steps:
a. If ATAP IS OK is displayed, the A-TAP is still active and there is no need to do anything.
b. If ATAP IS OK is NOT displayed, remove $ORACLE_HOME/bin/oracle-guard and activate the A-TAP.
Remove $ORACLE_HOME/bin/oracle-guard
Run relink all
For 'BEQUEATH' access from a user other than the one that installed the database, the permissions have to be set manually:
Add the user running sqlplus to the group 'guardium'.
Grant read permissions (chmod a+rx) on the following two directories:
/usr/local/guardium/xxx/etc/guard
/usr/local/guardium/xxx/etc/guard/executor
Make sure that the SUID and SGID bits are set on ${ORACLE_HOME}/bin/oracle.
If not, run the command chmod ug+s ${ORACLE_HOME}/bin/oracle
If the UID or EUID is not a member of the owner group GID, the reason for 'permission denied' is that the user matching the UID or EUID does not belong to the
group matching the owner GID.
To make this easier, without having to handle different OS syntaxes for adding users and groups while the automatic addition to the guardium group is disabled,
two commands are available within guardctl that can be used regardless of the method you use to activate A-TAP (that is, guardctl or guard_tap.ini):
#/path/to/guardium/bin/guardctl is-user-authorized
#/path/to/guardium/bin/guardctl authorize-user ...
Note: Group Guardium can be removed on most OS's with groupdel guardium. However, after removal, only the guard_ktap_loader parameter can correctly re-create it
and change the K-TAP device permissions.
Parent topic: Linux and UNIX systems: Configuring and Activating A-TAP in Special Environments
Table 1. Oracle Common Mistakes
Symptom: Activation command fails. Mistake: Wrong db_home parameter. Platform: All. How to avoid: Always specify the value of $ORACLE_HOME as the db_home parameter.
Symptom: Activation command fails. Mistake: OS user logged in. Platform: All. How to avoid: Always make sure the OS user is not logged in. Use the w command to see which users are logged in.
Symptom: Database does not start. Mistake: Wrong instance name. Platform: All. Error message: Failed to execute oracleon1jumbo-guard: No such file or directory ERROR: ORA-12547: TNS:lost contact. How to avoid: Always specify the value of $ORACLE_SID as the db_instance parameter.
Symptom: Traffic is not logged. Mistake: Wrong or missing db_version. Platform: AIX. How to avoid: Always specify a numeric version (for example, 10.2 or 9.2). The version number can have only one digit after the decimal point.
Symptom: Fails to activate. Mistake: Missing oracle-guard-instrumented. Platform: AIX. Error message: Missing oracle-guard-instrumented. How to avoid: The instrument command must be run first to create a re-linked instrumented Oracle executable.
Symptom: Error during A-TAP activation; install exits. Mistake: Insufficient disk space. Error message: Matching module found - oracle is supported by /ngs/lpp/guardium/modules/ATAP/current/files/lib/libguard-atap-oraclestatic-any Testing for disk space... cp : 0653-447 Requested a write of 131072 bytes, but wrote only 126976. Insufficient disk space - please delete some files and try again. How to avoid: Clean Oracle files and retry. Change db_space=8 to db_space=1.
Symptom: guard_stap log shows that guard-atap-ctl failed. Mistake: GIM_ROOT_DIR not set to the absolute path to the modules, for example /usr/local/guardium/modules. How to avoid: When activating A-TAP through the guard_tap.ini file, encryption=1 silently fails. This is especially important when running guard_stap manually: be sure you have defined this environment variable when running guard_stap.
Table 2. DB2 Common Mistakes
Symptom: Traffic is not logged. Mistake: Wrong or missing db2_* parameter. Platform: Linux. How to avoid: See how to determine DB2 parameters in Linux and UNIX systems: Inspection engine parameters.
Table 3. Informix Common Mistakes
Symptom: Traffic is not logged properly. Mistake: Wrong or missing db_version. Platform: Linux. How to avoid: Always specify a numeric version (for example, 7 or 11).
Parent topic: Linux and UNIX systems: A-TAP management
The DB2 exit library is a dynamically linked library that the DB2 database loads during database start.
DB2 exit supports firewall (from STAP 10.1.2, also requires DB2 version 10.1 or later), terminate, and UID chain.
If there is no other Inspection Engine (IE) on the S-TAP that requires K-TAP, then you don't need to load K-TAP: set ktap_installed=0 in guard_tap.ini, or with GIM set
ktap_enabled to no, in the GIM dialog for that STAP. You can upgrade the Linux OS and the STAP without being concerned about K-TAP module compatibility. However, if
there is another IE in the S-TAP that requires the K-TAP module, you must ensure that a compatible K-TAP module is available when you upgrade your Linux version.
The Guardium installer provides two versions of the DB2 exit library: 32-bit and 64-bit. Use the one that matches your installed DB2. Both versions are in the lib subdirectory of the Guardium installation directory. On Linux servers, the 64-bit version is in lib64.
Library names
libguard_db2_exit_32.so
libguard_db2_exit_64.so
Procedure
1. Determine whether your DB2 installation is 32- or 64-bit. Log in as root and run db2level. The output is similar to:
DB21085I Instance db2inst1 uses 64 bits and DB2 code release SQL09070, with level identifier 08010107
2. Locate the communication buffer exit library location (DB2PATH).
a. Log in as the DB2 user (trip in this example).
b. In the DB2 CLP, run db2 get database manager configuration
c. In the output, look for the default database path:
Default database path (DFTDBPATH) = /DB2/trip
DFTDBPATH is the value you need for the environment parameter DB2PATH.
3. Set up the DB2 exit library.
a. Log in as user root.
b. Set the environment parameter: # export DB2PATH=/DB2/trip
c. Create the directory by entering one of these commands. (This is done only the first time the library is installed, as the directory does not yet exist.)
mkdir $DB2PATH/sqllib/security/plugin/commexit
mkdir $DB2PATH/sqllib/security64/plugin/commexit
d. Change ownership: # chown ${DB2 user}:${DB2 group} $DB2PATH/sqllib/security64/plugin/commexit
e. Copy Guardium's libguard file to commexit by entering one of:
# cp /opt/IBM/guardium/module/modules/STAP/libguard_db2_exit_64.so $DB2PATH/sqllib/security64/plugin/commexit
# cp /opt/IBM/guardium/module/modules/STAP/libguard_db2_exit_32.so $DB2PATH/sqllib/security/plugin/commexit
If the copy fails with the error ....: Text file busy, remove the file from the target directory, then copy again.
4. Add the DB2 instance user to the guardium group. The guardium group is created during S-TAP installation. This requirement increases the security of shared memory regions that are created by the S-TAP.
a. If the DB2 user is trip, verify whether trip has already been authorized. Use guardctl under the A-TAP folder.
5. Enable DB2 exit in DB2 so that it sends SQL traffic to the S-TAP.
a. Log in as the DB2 user and enable it with the DB2 CLP command:
db2 UPDATE DBM CFG USING COMM_EXIT_LIST libguard_db2_exit_64
b. Once enabled, DB2 sends SQL traffic to the S-TAP. Verify that DB2 exit is successfully enabled.
6. Restart DB2:
db2 restart
If the restart is unsuccessful, stop DB2 exit to clear any warnings in DB2.
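The setup steps above can be summarized as a dry-run shell sketch that only prints the commands. DB2PATH and the bitness are taken from the examples above; db2inst1:db2iadm1 is a placeholder instance user and group, not a value from this document.

```shell
# Dry-run sketch of the DB2 exit setup; prints the commands instead of running them.
# DB2PATH, bits, and db2inst1:db2iadm1 are example placeholders -- substitute the
# values reported by `db2level` and `db2 get database manager configuration`.
DB2PATH=/DB2/trip
bits=64
commexit_dir="$DB2PATH/sqllib/security${bits}/plugin/commexit"

echo "mkdir -p $commexit_dir"
echo "chown db2inst1:db2iadm1 $commexit_dir"
echo "cp /opt/IBM/guardium/module/modules/STAP/libguard_db2_exit_${bits}.so $commexit_dir"
echo "db2 UPDATE DBM CFG USING COMM_EXIT_LIST libguard_db2_exit_${bits}"
```

For a 32-bit instance, set bits=32; the target directory then drops the 64 suffix, matching step 3c above.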
cd /usr/local
tar -cvf - guardium | ssh root@subzonehost 'cd /usr/local && tar -xvf -'
Linux and UNIX systems: Informix Exit integration with UNIX S-TAP
The Informix exit utility, ifxguard (Informix 12.10 and higher), monitors connections to your Informix databases.
A shared library, Informix exit, is part of the Guardium UNIX S-TAP installation. The S-TAP includes both a 32-bit and a 64-bit .so file, located under <guardium_installation_directory>/guard_stap, for example:
/usr/local/guardium/guard_stap/libguard_informix_exit_32.so
/usr/local/guardium/guard_stap/libguard_informix_exit_64.so
Procedure
1. Log in as user informix on the database server and locate the instance name (INFORMIXSERVER) and the installation directory (INFORMIXDIR) by running these UNIX commands:
$ echo $INFORMIXSERVER
INFORMIXSERVER=test117
$ echo $INFORMIXDIR
INFORMIXDIR=/home/informix
2. Install and start up the S-TAP in the db host. See Linux and UNIX systems: Install the S-TAP agent.
3. As user root, make sure the user informix is in the guardium group, for example:
/usr/local/guardium/bin/guardctl authorize-user informix
or, on AIX only:
# chgroup users=informix guardium
4. Log in as user informix and verify the group membership:
$ id
uid=501(informix) gid=205(informix) groups=215(guardium)
5. As user informix, copy the correct informix exit library from the guard_stap directory to the informix user's lib directory, for example,
cp /usr/local/guardium/guard_stap/libguard_informix_exit_64.so
$INFORMIXDIR/lib/libguard_informix.so
6. Set up ifxguard. Create a config file under $INFORMIXDIR/etc/ifxguard.$INFORMIXSERVER with these lines:
NAME ol_informix1210
WORKERS 2
LIBPATH /home/informix/12.10.FC6/lib/libguard_informix.so
DEBUG 1
LOGFILE /home/informix/12.10.FC6/etc/ifxguard.msg.txt
Note: INFORMIXDIR=/home/informix/12.10.FC6
7. Bring up ifxguard as user informix
a. Make sure the Informix database server is online (onstat -):
$ id
uid=501(informix) gid=205(informix) groups=215(guardium)
$ onstat -
IBM Informix Dynamic Server Version 12.10.FC6 -- On-Line -- Up 6 days 00:22:25 -- 253104 Kbytes
b. If the ifxguard config file is set up as described above, bring up ifxguard with:
$ ifxguard
15:20:17 ifxguard set instance name ol_informix1210
Starting ifxguard ol_informix1210 ...
check log file: /home/informix/12.10.FC6/etc/ifxguard.msg.txt
You should not see any errors. In case of an error, check the file indicated in LOGFILE.
c. If the ifxguard config file is not under $INFORMIXDIR/etc, specify the file's full path with the -c option, for example:
$ ifxguard -c /mnt/conf/ifxguard.ol_informix1210
d. If the ifxguard config file is not set up at all, you can still bring up the agent, but you must specify the full path of the .so library with the -p option and the message log file with the -l option.
You can ignore the password file error; it is a debug message. You can define a password file and run onpassword to encrypt it. Ifxguard reads the informix user's password from the encrypted file and connects to Informix Dynamic Server (IDS). If the password file is not defined, ifxguard connects to IDS as a trusted-host connection (no password).
9. Add the INFX_EXIT inspection engine, either via GRDAPI (create_stap_inspection_engine) or the GUI (Manage > Activity Monitoring > S-TAP Control), with these specific Informix values:
Parameter in GUI | Parameter in GRDAPI | Value
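For orientation only, a corresponding inspection-engine section in guard_tap.ini might look like the sketch below. The db_type value (infx_exit, by analogy with the trd_exit value shown in the Teradata example later in this section) and all paths are illustrative assumptions, not confirmed values; verify them for your release.

```ini
; Hypothetical [DB_n] section for an Informix exit inspection engine.
; db_type, paths, and tap_identifier are illustrative assumptions.
[DB_0]
db_type=infx_exit
db_install_dir=/home/informix
db_exec_file=/home/informix/bin/oninit
connect_to_ip=127.0.0.1
networks=0.0.0.0/0.0.0.0
exclude_networks=
tap_identifier=ol_informix1210
```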
Teradata exit embeds a Guardium library into the Teradata database via the exit module. The exit module communicates directly with the Guardium S-TAP to forward all Teradata traffic.
Teradata exit supports terminate and firewall. It does not support UID chain or redaction.
The location of libguard_teradata_exit_64.so and other Guardium files varies depending on the installation method and the directory chosen.
Procedure
1. Stop the Teradata service:
/etc/init.d/tpa stop
/etc/init.d/tgtw stop
[DB_0]
connect_to_ip=127.0.0.1
db_exec_file=/opt/teradata/tdat/tgtw/16.00.00.05sks/bin/gtwgateway
db_install_dir=/root
db_type=trd_exit
intercept_types=NULL
tap_identifier=NULL
networks=0.0.0.0/0.0.0.0
exclude_networks=
6. On the DB, load the Exit library into the Teradata database: /usr/tgtw/bin/gtwcontrol --monitorlib load=yes
7. Start the Teradata service:
/etc/init.d/tpa start
/etc/init.d/tgtw start
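The stop/load/start sequence in steps 1, 6, and 7 can be summarized as a dry-run script that only echoes each command, using exactly the paths given above:

```shell
# Dry-run of the Teradata exit enable sequence; echoes each command rather
# than executing it, so it is safe to run anywhere.
for cmd in \
    "/etc/init.d/tpa stop" \
    "/etc/init.d/tgtw stop" \
    "/usr/tgtw/bin/gtwcontrol --monitorlib load=yes" \
    "/etc/init.d/tpa start" \
    "/etc/init.d/tgtw start"
do
    echo "would run: $cmd"
done
```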
You can modify some parameters in the GUI. See Linux and UNIX systems: Configure S-TAP from the GUI.
GIM is an easy method for modifying parameters if the S-TAP bundle was installed with GIM. See the instructions for v10.1.4 and higher: Set up by Client; and for v10.1-10.1.3: GIM user interfaces.
If it is necessary to modify the configuration file from the database server, follow the procedure described in this section. The guard_tap.ini file contains comments that explain many of the parameters.
The S-TAP must be restarted after you modify guard_tap.ini. If you are using GIM, it restarts the S-TAP automatically.
CAUTION:
Parameters must be added to their relevant section: [TAP], [SQLGuard], [DB_<name>].
connection_pool_size (GUI: Pool size; default: 0)
    The number of connections to open between the S-TAP and the sniffer process on a Guardium host. Increasing the value provides additional throughput that may be required when enabling encryption such as TLS. The maximum number of pooled connections is 50. The total is the sum of (connection_pool_size x num_main_threads) in all of the [SQLGuard_n] sections in guard_tap.ini.
    Valid values: 0 (disable pooling); 1-10 for each defined host. Default = 0.

num_main_threads (GUI: Main threads; default: 1)
    The number of threads used between the S-TAP and one or more Guardium hosts. Valid values: 1-510 (maximum total of 510 for all defined Guardium hosts; until V10.1.3 the maximum was 5). Default = 1.
    Note: Enterprise load balancing does not support using multiple threads for a single managed unit. When using enterprise load balancing, set this parameter to 1.

primary (GUI: a checkmark indicates the primary host)
    Indicates the primary Guardium system for this S-TAP. In guard_tap.ini: 1=primary, 2=secondary, 3=tertiary, and so on.

sqlguard_port (default: 16016)
    Read only. The port the S-TAP uses to connect to the Guardium system.

sqlguard_ip (GUI: Guardium Host; GIM: STAP_SQLGUARD_IP; default: NULL)
    IP address or hostname of the Guardium system that acts as the host for the S-TAP. You can define multiple hosts by adding [SQLGuard_1], [SQLGuard_2], and so on.
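For illustration, a guard_tap.ini with a primary and a secondary Guardium host and connection pooling enabled might contain sections like the following sketch. The IP addresses are placeholders, and whether the first section is named [SQLGuard] or [SQLGuard_0] can vary; check your generated guard_tap.ini.

```ini
; Hypothetical failover pair: the first section is primary, the second secondary.
[SQLGuard]
sqlguard_ip=10.0.0.10
primary=1
connection_pool_size=2

[SQLGuard_1]
sqlguard_ip=10.0.0.11
primary=2
connection_pool_size=2
```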
Parent topic: Linux and UNIX systems: Editing the S-TAP configuration parameters
These parameters are stored in the [TAP] section of the S-TAP properties file.
Table 1. S-TAP configuration parameters in the [TAP] section

tap_version (GUI: Version)
    Read only. The S-TAP version that is installed on the DB server (stap=UNIX, ztap=z/OS), added to the file during installation or upgrade only.

tap_ip (GUI: S-TAP Host; GIM: STAP_TAP_IP)
    Read only. IP address or hostname of the database server system on which the S-TAP is installed.

devices (GUI: Devices; GIM: STAP_DEVICES; default: none)
    Which interfaces to listen on. Use ifconfig to find the correct interface.

all_can_control (GUI: All can control; GIM: STAP_ALL_CAN_CONTROL; default: 0)
    0=the S-TAP can be controlled only from the primary Guardium system. 1=the S-TAP can be controlled from any Guardium system. Use the primary parameter in the [SQLGuard] section to specify primary, secondary, and subsequent servers. If this parameter is set to 0 and you have more than one Guardium system monitoring traffic, the non-primary Guardium systems are available for failover.
    Note: Guardium does not support failover with a v10.x S-TAP and a v9.x collector.

connection_timeout_sec (default: 10)
    Number of seconds after which the S-TAP considers a Guardium server to be unavailable. Can be any integer value.

use_tls (GUI: TLS Use; GIM: STAP_USE_TLS; default: 0)
    1=use SSL to encrypt traffic between the agent and the Guardium system. 0=do not encrypt.
    Warning: when set to 0, the traffic between the agent and the Guardium system is in clear text. Guardium recommends encrypting network traffic between the S-TAP and the collector whenever possible; disable encryption only when performance is a higher priority than security.
    Decrypting login packets is not supported when TLS is enabled. This means that DB_USER is not populated and failed logins are not associated with an access.
wait_for_db_exec (GIM: STAP_WAIT_FOR_DB_EXEC; default: -1)
    Specifies how the S-TAP starts monitoring its databases after a restart.
    1 and greater: When the S-TAP restarts, either from a system reboot or from user-initiated S-TAP stop/start commands, the S-TAP polls all databases that are configured to be monitored and begins monitoring them when they are available. Configuration anomalies (on the database side or the S-TAP side) that limit the S-TAP's ability to monitor one database do not prevent it from monitoring other databases with valid configurations. Instead, the S-TAP starts successfully, monitors all valid configurations, and continues to poll the other databases until they become available, then starts monitoring them as well. It is recommended to use existing alerts and reports to monitor and report on any failed S-TAP status. For example, after relinking Oracle, Oracle BEQ traffic is not logged for 15 minutes; this is the time it takes for the S-TAP's periodic check to detect that an Oracle device node has changed.
    0 and less: The S-TAP exits with an error message if it cannot access db_install_dir. If the S-TAP has multiple IEs, it exits at the first occurrence of an unreachable DB.

tap_run_as_root (GIM: STAP_RUN_AS_ROOT; default: TAPUSER)
    Allows the S-TAP to run as a regular user. 0=runs as the guardium user, 1=runs as root. In some cases you need to run the S-TAP as guardium (and not root). This can cause other issues and should be used only when necessary: running the S-TAP as the guardium user can cause a database or protocol to stop working because of permission levels. Verify that the database path or exec file gives the guardium user read permission. Depending on your environment, typical limitations are:
    - wait_for_db_exec might not work. For clusters, check the database path or exec file for guardium-user read permission.
    - Databases on AIX WPARs and Solaris Zones may not work; check the permission to access the install path or exec file.
    - For Oracle BEQ, restart the S-TAP after starting or restarting the database.
    - For Informix shared memory, restart the S-TAP after starting or restarting the database.
    - For DB2 shared memory, if shmctl fails because of a permission issue, then in most cases the S-TAP should be changed to run as root. If the shared memory segment has read permission by group, make sure the DB2 instance has been added to the (guardium) user group; even then, only one DB2 configuration per server can be supported. If the shared memory segment has read permission by the db2 user only, the S-TAP has to run as root. (Open a DB2 shared memory session, run the command ipcs -ma, and check MODE in the output.)

alternate_ips (GUI: Alternate ips; GIM: STAP_ALTERNATE_IPS; default: NULL)
    Comma-separated list of alternate or virtual IP addresses used to connect to this database server. Used only when your server has multiple network cards with multiple IPs, or virtual IPs. The S-TAP monitors traffic only when the destination IP matches either the S-TAP Host IP defined for this S-TAP or one of the alternate IPs listed here, so it is recommended that you list all virtual IPs here.
tee_msg_buf_len (default: 128)
    Size of the buffer for Tee, in MB. Can be any integer value.

buffer_file_size (GIM: STAP_BUFFER_FILE_SIZE; default: 50)
    Advanced. Size in MB of the buffer allocated for the packet queue. If the buffer size is set too large, the S-TAP might not be able to start. Files larger than 2560 MB are known to cause this problem.

tracefiles_dir (GUI: Trace files dir)
    The directory in which access-tracer files are stored. The default is INSTALLDIR.

tap_min_heartbeat_interval (default: 180)
    Number of seconds after which the S-TAP should fail over.

msg_aggregate_timeout (default: 100)
    Time, in milliseconds, at which K-TAP sends the packets accumulated in its buffer to the S-TAP. Can be any integer value.

msg_count_watermark (default: 64)
    Number of packets at which K-TAP sends the packets accumulated in its buffer to the S-TAP. Can be any integer value.

log_program_name (default: 0)
    To boost performance, you may consider disabling collection of the source program name; in doing so, you cannot tell which program name was using the connection (but all other connection information, such as user and client address, remains available). 0=do not send the source_program name to the Guardium system, 1=send the source_program name to the Guardium system.

max_server_write_size (default: 16384)
    The maximum number of bytes that the S-TAP sends to the Guardium system at once. Can be any integer value.

sqlguard_cert_cn (default: NULL)
    The common name to expect from the Sqlguard certificate.

guardium_crl_path (default: NULL)
    The path to the certificate revocation list file or directory.

tap_failover_session_size (default: 1024)
    The maximum number of failover sessions in the list per Guardium system. 0=the failover feature is disabled. Can be any integer value.

tap_failover_session_quiesce (default: 60)
    The number of minutes after S-TAP failover when unused sessions in the failover list from the previous active servers are removed from the current active server. This includes cleaning the session's policy and removing the session from the firewalled and scrubbed lists.

db_ignore_response (GIM: STAP_DB_IGNORE_RESPONSE; default: NULL)
    Comma-separated list of DB types to be response-ignored. If set to none, no response is ignored; if set to all, the responses from all DBs are ignored. Note: If you use db_ignore_response=all to ignore (not capture) Oracle database responses to reduce traffic load, be aware that more than just database server responses are involved: responses can also contain important database protocol metadata that the application uses to interpret subsequent database requests.

stap_statistic (GIM: STAP_STATISTIC; default: 0)
    Interval at which the S-TAP sends statistics about S-TAP/K-TAP to the sniffer; 0=do not send. Specify a positive integer for hours or a negative integer for minutes. (0 - Guardium V9.)
upload_feature (GIM: STAP_UPLOAD_FEATURE; default: 1)
    If 1, when a new K-TAP is built, it is uploaded automatically to the Guardium system to which this S-TAP reports.

add_to_verification_schedule (default: 0)
    Add the inspection engines defined in guard_tap.ini to the S-TAP verification schedule. S-TAP verification tests traffic capture. 0=OFF, 1=ON; the default is 0.

db_ignore_response_bypass_bytes (GIM: STAP_DB_IGNORE_BYPASS_BYTES; default: 4096)
    Size of the result set, in bytes; when a result set is greater than this size, the response is ignored.

db_ignore_response_filter (GIM: STAP_DB_IGNORE_RESPONSE_FILTER; default: 0.0.0.0/0.0.0.0)
    Comma-separated list of IP/MASKs to be response-ignored; by default it filters all traffic. Any DB responses of the type specified by db_ignore_response to the specified IP/MASKs are ignored. 0=no filtering of responses occurs. 0.0.0.0/0.0.0.0=all IPs are filtered.

db_ignore_response_local (GIM: STAP_DB_IGNORE_RESPONSE_LOCAL; default: 1)
    Filtering of local DB responses. TCP traffic is not considered local traffic for this parameter. 0=no, 1=yes.

debug_snapshot (default: 0)
    Advanced. Collects a debug dump from an S-TAP. Should be triggered from the GUI (S-TAP Control > S-TAP commands). After triggering a dump from the GUI, the parameter reverts to its default of 0.
debug_snapshot_level (default: 1)
    Advanced. The value of tap_debug_output_level that is used for the debug dump. 1=basic debug, 4=verbose debug.

debug_snapshot_time (default: 60)
    Advanced. The time interval, in seconds, for which the diagnostic runs. Can be any integer value.

force_log_limited (default: 0)
    Controls sending certain types of information to the collector. Useful when you are concerned about the possibility of storing private data on the Guardium collector. 0=disable (unrestricted; the default). 1=enable, for local TCP/IP connections (including Solaris zones and AIX WPARs), or for remote TCP/IP connections when appserver_installed=1.

load_balancer_ip (GUI: Load Balancer IP; GIM: STAP_LOAD_BALANCER_IP)
    IP address of the load balancer unit. If not defined, the S-TAP does not use enterprise load balancing.

load_balancer_num_mus (GUI: Managed Units; GIM: STAP_LOAD_BALANCER_NUM_MUS; default: 1)
    Number of managed units to request from the load balancer.

merge_with_template (default: 0)
    Specifies whether the configuration from the collector is merged with the template config file when it is pushed to the S-TAP. 0=no, 1=yes.

shmid_blacklist (default: NULL)
    Comma-separated list of shared memory IDs that K-TAP filters.

shmid_blacklist_wait (default: 0)
    Wait to activate interception until the shmid_blacklist items are discovered. 0=no, 1=yes.

blacklist_shmem_ops_by_proc (default: NULL)
    K-TAP uses blacklist_shmem_ops_by_proc to filter shared-memory interception for the specified processes (comma-separated list). In GIM installations from v10.1.4 the default is disabled; in earlier versions it is enabled by default. In shell installations, the default is enabled in all 10.0 and 10.1 versions.

uid_chain_sshd_ip (GUI: Include client IP in UID chain for SSH daemon; GIM: STAP_UID_CHAIN_TRACE; default: 0)
    Introduced in v10.1.4. Encodes the client IP into the UID chain when ssh is identified as one of the processes in the chain. 0=disabled, 1=enabled.
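Pulling a few of the parameters above together, a minimal [TAP] section might look like the following sketch. The hostname, interface, and IP values are placeholders, not recommendations.

```ini
; Illustrative [TAP] fragment only -- values are placeholders.
[TAP]
tap_ip=dbserver.example.com
devices=eth0
use_tls=1
buffer_file_size=50
alternate_ips=10.0.0.21,10.0.0.22
log_program_name=1
```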
Parent topic: Linux and UNIX systems: Editing the S-TAP configuration parameters
These parameters are stored in the individual [DB_<name>] inspection engine sections of the S-TAP properties file, where <name> identifies a data repository. There can be multiple such sections in a properties file, each describing one inspection engine used by this S-TAP.

port_range_start (GUI: Port range)
    Starting port of the range specific to the database instance. Together with port_range_end, it defines the range of ports monitored for this database instance. There is usually only a single port in the range. For a Kerberos inspection engine, set the start and end values to 88-88. If a range is used, do not include extra ports in the range, as this could result in excessive resource consumption while the S-TAP attempts to analyze unwanted traffic.

real_db_port (GUI: KTAP DB Real Port; default: 4100)
    Used only when the K-TAP monitoring mechanism is used. Identifies the database port to be monitored by the K-TAP mechanism.

networks (GUI: Client Ip/Mask)
    Identifies the clients to be monitored, using a list of addresses in IP address/mask format: n.n.n.n/m.m.m.m. If an improper IP address/mask is entered, the S-TAP does not start. Valid values: null=select all clients; 127.0.0.1/255.255.255.255=local traffic only. Client Ip/Mask (networks) and Exclude Client Ip/Mask (exclude_networks) cannot be specified simultaneously. If the IP address is the same as the IP address of the database server and a mask of 255.255.255.255 is used, only local traffic is monitored. An address/mask value of 1.1.1.1/0.0.0.0 monitors all clients.

exclude_networks (GUI: Exclude Client Ip/Mask)
    A list of client IP addresses and corresponding masks that are excluded from monitoring. This option allows you to configure the S-TAP to monitor all clients except for a certain client or subnet (or a collection of these). Client Ip/Mask (networks) and Exclude Client Ip/Mask (exclude_networks) cannot be specified simultaneously.

tee_listen_port (GUI: TEE Listen Port - Real Port; default: 12344)
    Deprecated; replaced by real_db_port when the K-TAP monitoring mechanism is used. It was required with the TEE monitoring mechanism: the listen port is the port on which the S-TAP listens for and accepts local database traffic, and the real port is the port to which the S-TAP forwards traffic.

connect_to_ip (GUI: Connect To Ip; default: 127.0.0.1)
    IP address the S-TAP uses to connect to the database. Some databases accept local connections only on the real IP address of the machine, and not on the default (127.0.0.1). When K-TAP is enabled, this parameter is used for Solaris zones and AIX WPARs, and it should be the zone IP address in order to capture traffic.

db_install_dir (GUI: DB Install Dir; default: NULL)
    DB2, Informix, or Oracle: enter the full path of the database installation directory, for example /home/oracle10. For all other database types, enter NULL. For DB2 exit and Informix exit, db_install_dir must be exactly the same as the $HOME value in the database (or $DB2_HOME for DB2 exit); otherwise tap_identifier does not function properly.

db_exec_file (GUI: Process Name; default: NULL)
    For a DB2, Oracle, or Informix database, enter the full path of the database executable. For example:
    Oracle: there is no standard path; it depends on the directory where the database is installed.
    Informix: /INFORMIXTMP/.inf.sqlexec (applies to all Informix platforms except Linux).
    Informix on Linux, for example: /home/informix11/bin/oninit
    MySQL: mysql
    All other database types: NULL
encryption (GUI: Encryption; default: 0)
    Activates monitoring of ASO- or SSL-encrypted traffic for Oracle (versions 11 and 12) and Sybase on Solaris, HP-UX, and AIX. For Oracle 12 SSL, instrument on all platforms. For Oracle 11 SSL, instrument on AIX. For any Oracle that requires instrumentation, if you are using encryption=1 in guard_tap.ini (which is not supported on Linux), you must instrument before setting that parameter.

load_balanced (default: 1)
    1=database traffic participates in load balancing. 0=database traffic does not participate in load balancing.

intercept_types (GUI: Intercept Types; default: NULL)
    Protocol types that are intercepted by the IE. Valid values: NULL=automatically intercept all protocols the database supports; a comma-separated list=the IE intercepts only these protocol types.

tap_identifier (GUI: Identifier; default: NULL)
    Optional. Used to distinguish inspection engines from one another. If you do not provide a value for this field, Guardium auto-populates it with a unique name that uses the database type and the GUI display sequence number.

unix_domain_socket_marker (GUI: Unix Socket Marker; default: NULL)
    Specifies the UNIX domain socket marker for Oracle, MySQL, and Postgres. Usually the default value is correct, but when named pipe or UNIX domain socket traffic does not work, make sure this value is set correctly. For example, for Oracle, unix_domain_socket_marker should be set to the KEY of the IPC entry defined in tnsnames.ora. If it is NULL or not set, the S-TAP uses these default markers: MySQL "mysql.sock", Oracle "/.oracle/", Postgres ".s.PGSQL.5432".
db2_fix_pack_adjustment (GUI: DB2 Shared Mem. Adjust.; default: 20)
    Required when DB2 is selected as the database type and shared memory connections are monitored. The offset to the server's portion of the shared memory area, that is, the offset to the beginning of the DB2 shared memory packet. Depends on the DB2 version: 32 in pre-8.2.1, and 80 in 8.2.1 and higher.

db2_shmem_client_position (GUI: DB2 Sh. Mem. Client Pos.; default: 61440)
    Required when DB2 is selected as the database type and shared memory connections are monitored. The offset to the client's portion of the shared memory area, calculated by taking the DB2 database configuration value ASLHEAPSZ and multiplying it by 4096. To get the value of ASLHEAPSZ, run the DB2 command db2 get dbm cfg and look for ASLHEAPSZ. The value is typically 15, which yields the 61440 default; if it is not 15, multiply the actual value by 4096 to get the appropriate client offset.
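The client-offset arithmetic described above is simple enough to check directly in the shell; 15 is the typical ASLHEAPSZ value reported by db2 get dbm cfg:

```shell
# db2_shmem_client_position = ASLHEAPSZ * 4096
aslheapsz=15
client_position=$((aslheapsz * 4096))
echo "db2_shmem_client_position=$client_position"   # prints 61440 for ASLHEAPSZ=15
```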
db2bp_path (default: NULL)
    Used only with A-TAP on DB2. If the program db2bp (part of DB2) is in the standard location, this parameter does not need to be set. If the location is non-standard, this parameter points to it. The value should be the full path of the relevant db2bp as seen from the global zone/WPAR. For example, if the file is /data/db2inst1/sqllib/bin/db2bp and the zone is installed in /data/zones/oracle2nd/root/, then db2bp_path should be set to /data/zones/oracle2nd/root/data/db2inst1/sqllib/bin/db2bp.

db2_shmem_size (GUI: DB2 Shared Mem. Size; default: 131072)
    DB2 shared memory segment size. Required when DB2 is selected as the database type and shared memory connections are monitored.
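As a sketch, an Oracle inspection engine section combining the parameters above could look like the following. The db_type value, paths, and port are illustrative placeholders, not recommended values.

```ini
; Hypothetical Oracle [DB_n] section -- db_type, paths, and ports are placeholders.
[DB_0]
db_type=oracle
db_install_dir=/home/oracle10
db_exec_file=/home/oracle10/bin/oracle
port_range_start=1521
port_range_end=1521
real_db_port=1521
connect_to_ip=127.0.0.1
networks=
exclude_networks=
intercept_types=NULL
tap_identifier=NULL
```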
Parent topic: Linux and UNIX systems: Editing the S-TAP configuration parameters
CAUTION:
These are advanced parameters and are usually modified by IBM Technical Support only.

firewall_timeout (GIM: STAP_FIREWALL_TIMEOUT; default: 10)
    Time, in seconds, to wait for a verdict from the Guardium system. If the firewall times out, the firewall_fail_close value determines whether the connection is blocked or allowed. Can be any integer value.

firewall_fail_close (GIM: STAP_FIREWALL_FAIL_CLOSE; default: 0)
    If the verdict does not come back from the Guardium system and firewall_timeout expires: if firewall_fail_close=0 the connection goes through; if firewall_fail_close=1 the connection is blocked.

firewall_default_state (GIM: STAP_FIREWALL_DEFAULT_STATE; default: 0)
    0=an event triggers traffic in a session to be watched and checked for firewall policy violations. 1=all traffic is watched by default for firewall policy violations.

firewall_force_watch (GIM: STAP_FIREWALL_FORCE_WATCH; default: NULL)
    When the firewall feature is enabled and firewall_default_state is 0, a session is watched automatically when its client IP matches one of this list of IP/MASK values. The list is comma-separated, for example 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2

firewall_force_unwatch (GIM: STAP_FIREWALL_FORCE_UNWATCH; default: NULL)
    When the firewall feature is enabled and firewall_default_state is 1, a session is unwatched automatically when its client IP matches one of this list of IP/MASK values. The list is comma-separated, for example 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2
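Under the semantics above, a default-off firewall configuration with two forced-watch subnets that fails open on timeout could be sketched as follows. The IP/MASK values are placeholders, and the firewall enable flag is not described in this section, so treat the fragment as illustrative only.

```ini
; Illustrative firewall settings in the [TAP] section (IP/MASKs are placeholders).
firewall_timeout=10
firewall_fail_close=0
firewall_default_state=0
firewall_force_watch=10.1.1.0/255.255.255.0,10.2.2.0/255.255.255.0
```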
Parent topic: Linux and UNIX systems: Editing the S-TAP configuration parameters
These parameters are stored in the [TAP] section of the S-TAP properties file.
CAUTION:
These are advanced parameters and are usually modified by IBM Technical Support only.
qrw_installed (GIM: STAP_QRW_INSTALLED; default: 0)
    Enables or disables the Dynamic Data Masking for Databases feature. When set to 0, all other parameters in this group are ignored. 0=No, 1=Yes.

qrw_default_state (GIM: STAP_QRW_DEFAULT_STATE; default: 0)
    Sets the query rewrite activation trigger. Must be 0 if firewall_default_state=1. 0=QRW activated per session when triggered by a rule in the installed policy. 1=QRW activated for every session regardless of the installed policy.

qrw_force_watch (GIM: STAP_QRW_FORCE_WATCH; default: NULL)
    Comma-separated list of client IP/MASKs (for example, 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2) to watch automatically. Valid when qrw_default_state is 0. Cannot be configured to the same range as firewall_force_watch.

qrw_force_unwatch (GIM: STAP_QRW_FORCE_UNWATCH; default: NULL)
    Comma-separated list of client IP/MASKs (for example, 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2) to exclude from watching. Valid when firewall_default_state is 1. Cannot be configured to the same range as firewall_force_unwatch.
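For example, enabling query rewrite only for sessions from one subnet, subject to the constraints above, could be sketched as follows (the IP/MASK value is a placeholder):

```ini
; Illustrative QRW settings; the IP/MASK value is a placeholder.
qrw_installed=1
qrw_default_state=0
qrw_force_watch=10.1.0.0/255.255.0.0
```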
Parent topic: Linux and UNIX systems: Editing the S-TAP configuration parameters
These parameters are stored in the [TAP] section of the S-TAP properties file.
CAUTION:
These are advanced parameters and are usually modified by IBM Technical Support only.
Parameter Default value Description
0=No
1=Yes
0=SSM activated per session when triggered by a rule in the installed policy
server_side_masking_force_watch (default: NULL)
    Comma-separated list of client IP/MASKs (for example, 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2) whose sessions are watched automatically. Valid when server_side_masking_installed=1 and qrw_default_state=0.

server_side_masking_force_unwatch (default: NULL)
    Comma-separated list of client IP/MASKs (for example, 1.1.1.1/1.1.1.1,2.2.2.2/2.2.2.2) whose sessions are not watched. Valid when server_side_masking_installed is 1 and firewall_default_state is 1.
CAUTION:
These are advanced parameters and are usually modified by IBM Technical Support only.
discovery_interval (GIM: STAP_DISCOVERY_INTERVAL; default: 24)
    The time interval, in hours, at which auto-discovery runs. Set to 0 to disable.

discovery_dbs (GIM: DISCOVERY_DBS; default: oracle:db2:informix:mysql:postgres:sybase:hadoop:teradata:netezza:memsql)
    Colon-separated (':') list of database types to discover.

discovery_port (GIM: STAP_DISCOVERY_PORT; default: 8443)
    The Guardium port that S-TAP discovery uses to connect to the Guardium system.
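A discovery configuration that scans once a day for only Oracle and DB2 instances might be sketched as:

```ini
; Illustrative discovery settings in guard_tap.ini.
discovery_interval=24
discovery_dbs=oracle:db2
discovery_port=8443
```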
Parent topic: Linux and UNIX systems: Editing the S-TAP configuration parameters
GUI label, GIM parameter (guard_tap.ini parameter), default value, and description:

STAP_APPSERVER_INSTALLED (appserver_installed), default 0: 0 is the default, and S-TAP acts as normal. 1 = S-TAP is set in 'client mode' and switches S2C and C2S packets to reflect S-TAP being installed on the client, not the database server. Also, if set to 1, S-TAP checks whether the other appserver_* parameters are filled in and, if so, examines HTTP packets on the supplied port to grab session information about the end user of the Java application that resides on the client system.

Ports: STAP_APPSERVER_PORTS (appserver_ports), default 8080: Comma-separated list of ports, or hyphenated inclusive ranges of ports, on which the Java application is accessed via web browser.

Login pattern: STAP_APPSERVER_LOGIN_PATTERN (appserver_login_pattern): Comma-separated list of strings specifying the login pattern passed to the application. This is the pattern that the Java application is passed to identify a user login.

Username prefix: STAP_APPSERVER_USERNAME_PREFIX (appserver_username_prefix): Comma-separated list of strings specifying the prefix to the username for a given session. This is the pattern the Java application uses to indicate the username of the given session.

Username postfix: STAP_APPSERVER_USERNAME_POSTFIX (appserver_username_postfix): Comma-separated list of strings specifying the postfix to the username for a given session. This is the pattern (or character) used by the Java application to indicate the end of the value for the variable that indicates the username.

Session pattern: STAP_APPSERVER_SESSION_PATTERN (appserver_session_pattern): Comma-separated list of strings specifying the start of an end-user session that uses a particular database session. This is the pattern specifying a [change of] end-user session for a given database connection.

Session prefix: STAP_APPSERVER_SESSION_PREFIX (appserver_session_prefix): Comma-separated list of strings specifying the session identifier.

Session postfix: STAP_APPSERVER_SESSION_POSTFIX (appserver_session_postfix): Comma-separated list of strings specifying where the session ID ends.

Session ID pattern: STAP_APPSERVER_USERSESS_PATTERN (appserver_usersess_pattern): Comma-separated list of strings specifying the identifier for marking which end-user session a given connection is continuing with.

Session ID prefix: STAP_APPSERVER_USERSESS_PREFIX (appserver_usersess_prefix): Comma-separated list of strings specifying what identifies/precedes the session ID in a given usersess indicator packet.

Session ID postfix: STAP_APPSERVER_USERSESS_POSTFIX (appserver_usersess_postfix): Comma-separated list of strings specifying where the session ID ends.
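As a sketch, a guard_tap.ini fragment enabling application-server user identification might look like the following. The port list and pattern strings are hypothetical examples, not defaults; real patterns depend on how the Java application formats its HTTP traffic.

```ini
[TAP]
; hypothetical example values -- patterns depend on the application
appserver_installed=1
appserver_ports=8080,8443
appserver_login_pattern=login=
appserver_username_prefix=user=
appserver_username_postfix=&
```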
Parent topic: Linux and UNIX systems: Editing the S-TAP configuration parameters
0 is disabled (default).
1 is enabled.
log4j_port Integer. The port where the Guardium S-TAP will listen for
Ranger audits.
Default = 5555.
log4j_listen_address IP address that Ranger plugins connect to.
0.0.0.0 (default) indicates any IP address of the system. The default value of 0.0.0.0 is recommended, as this enables the S-TAP to receive traffic from any host.
localhost indicates the loopback address of the system. Use localhost if configuring the system for high availability.
Default = 1
0 is disabled (default).
1 is enabled.
1 is enabled.
0 is disabled (default).
1 is enabled.
kafka_keytab NULL The path to the Kerberos keytab file on the S-TAP
server.
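As a sketch, the Hadoop/Ranger listener values from the rows above could appear in guard_tap.ini as follows. The port and listen address are the documented defaults; the keytab path is a hypothetical example (the default is NULL).

```ini
; Ranger audit listener (defaults shown)
log4j_port=5555
log4j_listen_address=0.0.0.0
; hypothetical keytab path; default is NULL
kafka_keytab=/etc/security/keytabs/guardium.keytab
```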
Parent topic: Linux and UNIX systems: Editing the S-TAP configuration parameters
Task checkpoint cas_task_checkpoint task_checkpoint Internal handle program machine state in case of host failure.
Client checkpoint cas_client_checkpoint client_checkpoint File used to restart processing. A series of files is created. Each version of the file ends
with a unique number. The default is task_checkpoint and client_checkpoint
Fail over file cas_fail_over_file fail_over_file Name of the outgoing messages buffer. The database writes to this file when the
Guardium system cannot be reached. During this time, the file can grow to the
maximum size specified. When the limit is reached, a second file is created, using the
same name with the digit 2 appended to the end of the name. (This is the point at which
CAS begins trying to connect to a secondary server.) If that file also reaches the
maximum size, the first file is overwritten. If the first file fills again, the second file is
overwritten. Thus, following an extended outage, you may lose data, but you will have
an amount of data up to twice the size of the Failover File Size Limit.
Fail over file size limit cas_fail_over_file_size_limit 50000 Failover file maximum size, in KB. There are two of these files, so the disk space requirement is twice what you specify here. If you specify -1, there is no limit on the file size, but it is recommended that the file size be capped.
Max rec. attempts cas_max_reconnect_attempts 5000 Number of reconnect attempts when the connection is lost. After losing a connection to the Guardium system, the maximum number of times CAS attempts to reconnect. Set this value to -1 to remove any maximum (CAS attempts to reconnect indefinitely). The default cas_max_reconnect_attempts and cas_reconnect_interval define an interval of about 3.5 days. After the maximum has been met, CAS continues to run, writing to the failover files, but it does not attempt to reconnect with a Guardium host.
Raw data limit cas_raw_data_limit 1000 Maximum number of kilobytes written for an item when the Keep data checkbox is
marked in the item template. If you specify -1, there is no limit.
Md5 data limit cas_md5_size_limit 1000 Maximum size of a data item, kilobytes, on which the MD5 checksum calculation is
performed. If you specify -1, there is no limit.
 cas_command_wait 300 Wait time in seconds before killing a long-running data collection process
 cas_server_failover_delay 60 Wait time in minutes before trying to connect to another Guardium system
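The failover-file sizing note above works out as follows: because CAS keeps two failover files, worst-case disk use is twice cas_fail_over_file_size_limit.

```shell
# Worst-case disk use for CAS failover files, using the default limit.
cas_fail_over_file_size_limit=50000   # KB (default)
max_disk_kb=$((cas_fail_over_file_size_limit * 2))
echo "worst case: ${max_disk_kb} KB"  # 100000 KB, about 100 MB
```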
Table 1. CAS deprecated parameters
guard_tap.ini: cas_task_baseline, cas_client_baseline
Parent topic: Linux and UNIX systems: Editing the S-TAP configuration parameters
CAUTION:
These are advanced parameters and are usually modified by IBM Technical Support only.
These parameters are in the [TAP] section on the guard_tap.ini file.
Table 1. S-TAP configuration parameters for debugging
GUI GIM guard_tap.ini Default value Description
Messages Syslog STAP_SYSLOG_MESSAGES syslog_messages 1 1=send messages to syslog. 0=do not send messages.
0: disable
1: basic debug
4: verbose debug
6: Appserver debug
10: Exit engine debug. Debug info is logged into both S-TAP log and
db2_exit log (db2diag.log).
11: exit engine debug. Debug info is only logged into db2_exit log
(db2diag.log).
Messages Remote STAP_REMOTE_MESSAGES remote_messages 1 Send messages to the active Guardium host.
0=Do not send messages
1=Send messages to the active Guardium system.
Parent topic: Linux and UNIX systems: Editing the S-TAP configuration parameters
These parameters are located in the [TAP] section of the S-TAP properties.
CAUTION:
These are advanced parameters and are usually modified by IBM Technical Support only.
Table 1. K-TAP configuration parameters
Default
guard_tap.ini value Description
ktap_installed 1 Is Kernel Monitor module installed: 0=NO, 1=YES. ktap_installed and tee_installed are mutually exclusive; only
one can be set to on.
ktap_request_timeout 5 The timeout, in seconds, for waiting for K-TAP reply. K-TAP sends ioctl to S-TAP to ask for some information, and
waits for the reply from S-TAP. It can have any value.
ktap_dbgev_ev_list 0 Enables the K-TAP trace log, either through the GUI or through the guard_tap.ini file: 0=disable, 1=enable. The K-TAP trace log is located under the /var/tmp directory.
ktap_dbgev_func_name all List of functions to log in the K-TAP trace log. all=all functions, or specify a particular function, such as accept, so that only accept calls are logged. If you specify a function that is not relevant to the K-TAP trace log, nothing is logged.
ktap_fast_file_verdict 1 For TLI connections, K-TAP sends an ioctl to S-TAP to confirm that the session is the database connection configured in the inspection engine by checking ports and IPs. When ktap_fast_file_verdict is set to 1, K-TAP does not send the request to S-TAP as long as the session's ports are in the range. Valid values are 0 and 1 (default 1).
ktap_buffer_size 4194304 Advanced. The size of the K-TAP buffer, in bytes. The range of values is 1 MB to 16 MB.
ktap_buffer_flush 0 Advanced. Controls how messages are sent from K-TAP to S-TAP. If ktap_buffer_flush=1, the S-TAP reads the entire K-TAP buffer and processes all the packets in the buffer. If ktap_buffer_flush=0, the S-TAP reads a fixed amount rather than the entire buffer.
ktap_local_tcp 0 1=only intercept local connections (although previously intercepted connections will still be captured) (this
parameter is used for TCP connections)
khash_table_length 24593 Number of sessions that can be stored in the Khash table. It is an integer and can have any value.
khash_max_entries 8192 Length of the table that contains all the information for the specific session. It is an integer and can have any value.
0=KTAP sends ioctl to the STAP to confirm that the session is the database connection configured in the IE
by checking the process ID
1= K-TAP does not send the request to S-TAP as long as session's db2_shmem_size matches the attached
shared memory segment.
atap_exec_location /var/guard Location of the executable that is used when activating A-TAP by enabling the encryption box in the inspection
engine section
pcap_read_timeout 0 PCAP traffic only (non-K-TAP): how long S-TAP should wait between PCAP sampling passes. Do not change this value without consulting Technical Support, after examining the problem and determining that the losses (not capturing all the traffic) are caused by a PCAP/S-TAP-related bottleneck.
pcap_dispatch_count 16 Optimization of PCAP capturing; number of packets to bundle (group) before reporting back to S-TAP. Grouping
the packets together can reduce the PCAP-to-S-TAP communication, and boost performance. Do not change this
value without consulting with Technical Support, after examining the problem and determining the losses (not
capturing all the traffic) are caused due to PCAP/S-TAP related bottleneck.
pcap_buffer_size -1 Size of the PCAP socket buffer. This parameter is used on Linux only. The default value of -1 means use the largest buffer possible; any other value is the buffer size in kilobytes. 0 is not legal (a value of 0 is treated as 60); otherwise, it can be any value up to 65535. A larger buffer makes losses less likely when there are bursts of high-volume traffic. The scenario: during a burst of high traffic, PCAP captures everything, but the S-TAP (or the PCAP-to-S-TAP flow) is not fast enough and cannot keep up with the traffic. To avoid losses, the yet-to-be-processed packets are buffered; the larger the buffer, the more resilient the system is against higher and longer bursts of high traffic. Do not change this value without consulting Technical Support, after examining the problem and determining that the losses (not capturing all the traffic) are caused by a PCAP/S-TAP-related bottleneck.
pcap_backup_ktap 1 When this parameter is enabled, PCAP always starts, regardless of whether ktap_installed is enabled, as long as a DB2 inspection engine is defined.
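Following the trace-log rows above, a minimal guard_tap.ini sketch that enables the K-TAP trace log for only the accept function (trace output then appears under /var/tmp):

```ini
; K-TAP trace sketch: log only accept calls
ktap_dbgev_ev_list=1
ktap_dbgev_func_name=accept
```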
This parameter controls the use of custom K-TAP module distribution via the GIM GUI.
GIM users: compile a custom-built K-TAP into a custom bundle and use it on other database servers.
Non-GIM users: no custom bundles are needed; a custom K-TAP can be compiled and copied between database servers manually.
Valid values: '1' = allow custom bundle installations; '0' = reject custom bundle installations.
Default value: 1
During a GIM scratch installation (on the database server), the user can specify a new optional installation parameter, --install_custom_bundles.
If specified, custom bundle installations (for example, a custom bundle S-TAP) are allowed (GIM_ALLOW_CUSTOMED_BUNDLES is set to '1') on that database server.
Otherwise, they are not allowed (GIM_ALLOW_CUSTOMED_BUNDLES is set to '0').
During a GIM upgrade (via the GIM GUI) from a GIM version that did not have this parameter, the default value is '1' (in order not to disable this functionality for customers who might have been using this feature until now).
This parameter cannot be set to '1' from the GUI if the previous value is '0'.
Note: This functionality is checked at installation time (on the database server) and not while you are assigning or scheduling a bundle installation or a parameter update (which is how all the other parameters are validated).
In v10:
1. The STAP_UPLOAD_FEATURE indicator is turned on (1) by default, so custom K-TAPs, when compiled, are automatically uploaded to the appliance.
2. To compile a custom GIM bundle that includes a new custom K-TAP, the user needs to run the grdapi make_bundle_with_uploaded_kernel_module command (the exact syntax of the command is required).
3. To use an already compiled custom bundle on any server, the customer needs to set the GIM_ALLOW_CUSTOM_BUNDLES indicator to 1 (for security reasons, this must be done manually on each database server). Turning the GIM_ALLOW_CUSTOM_BUNDLES indicator back off can be done from the appliance.
Parent topic: Linux and UNIX systems: Editing the S-TAP configuration parameters
You can use GIM to stop S-TAP without ever having to log into the database server. Complete the following steps to change the STAP_ENABLED parameter and schedule
the change on the database server.
Procedure
1. Click Manage > Module installation > Set up by Client (v10.1.4: Legacy) to open the Client Search Criteria.
2. Perform a filtered search of registered clients or click Search to view all of the registered clients.
3. Select the clients that are the target for the action (stopping S-TAP). If there are more than 20 clients, the list of clients spreads onto additional pages.
Note: Clicking Select All selects only the clients on the current page being viewed.
4. Click Next to open the Common Modules panel.
5. Select the Module for S-TAP.
6. Click Next to open the Module Parameters panel.
7. Select the client that is the target for the action (stopping S-TAP).
8. Change the STAP_ENABLED parameter to 0 (zero).
9. Click Apply to Clients to apply to the targeted clients.
10. Click Install/Update to schedule the update to the targeted clients. This update can be scheduled for NOW or some time in the future.
Parent topic: Linux and UNIX systems: S-TAP operation and performance
Procedure
1. Click Manage > Module installation > Set up by Client (v10.1.4: Legacy) to open the Client Search Criteria.
2. Perform a filtered search of registered clients or click Search to perform an unfiltered search of all registered clients.
3. Select the clients that are the target for the action (starting S-TAP). If there are more than 20 clients, then the list of clients spreads onto additional pages.
Note: Clicking Select All selects only the clients on the current page being viewed.
4. Click Next to open the Common Modules panel.
5. Select the Module for S-TAP.
6. Click Next to open the Module Parameters panel.
7. Select the client that is the target for the action (starting S-TAP).
8. Change the STAP_ENABLED parameter to 1 (one).
9. Click Apply to Clients to apply to the targeted clients.
10. Click Install/Update to schedule the update to the targeted clients. This update can be scheduled for NOW or some time in the future.
Parent topic: Linux and UNIX systems: S-TAP operation and performance
Procedure
1. Log on to the database server system by using the root account.
2. For Red Hat:
a. Find the S-TAP process ID by using ps -fe | grep guard_stap | grep -v grep
b. Kill that process by using the command kill <process_ID>
3. For Solaris:
4. From the Guardium system to which this S-TAP® reports, verify that the Status light in the S-TAP control panel is red.
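The manual stop above can be sketched as a small shell helper. The stap_pid function extracts the guard_stap process ID from ps output; the [g] in the pattern keeps grep from matching its own command line, which replaces the grep -v grep step.

```shell
# Print the PID of the guard_stap process from `ps -fe`-style input.
stap_pid() {
  grep '[g]uard_stap' | awk '{print $2}' | head -n 1
}

# As root on the database server, the stop would then be:
#   pid=$(ps -fe | stap_pid)
#   [ -n "$pid" ] && kill "$pid"
```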
Parent topic: Linux and UNIX systems: S-TAP operation and performance
Procedure
1. Log on to the database server system by using the root account.
2. For all non-Red Hat Enterprise Linux
a. Open the /etc/inittab file for editing.
b. Un-comment the following two statements by deleting the comment character (: for AIX®, # for all others) at the start of each line:
#utap:2345:respawn:/usr/local/guardium/guard_stap/guard_stap /usr/local/guardium/guard_stap/guard_tap.ini
c. Optional. If you are using the TEE monitoring mechanism, un-comment the following two statements by deleting the comment character (: for AIX, # for all
others) at the start of each line.
Note: These processes are not used in the default configuration and must not be started if you are using the K-Tap monitoring mechanism.
#utee:2345:respawn:/usr/local/guardium/guard_stap/guard_tee /usr/local/guardium/guard_stap/guard_tap.ini
#hsof:2345:respawn:/usr/local/guardium/guard_stap/guard_hnt
b. Start each of the agents by using the start <agent> command, where agent is the first entry in the list from step a. See the following example.
start gim_33264
start gsvr_33264
start guard_utap
Parent topic: Linux and UNIX systems: S-TAP operation and performance
Linux and UNIX systems: How S-TAP/GIM processes are initialized by different OS types/versions

OS version and initialization method:

Upstart servers
When using Upstart servers, the following are the start, stop, and status commands on the database server:
stop utap
start utap
stop gim_revision#
stop gsvr_revision#
start gim_revision#
start gsvr_revision#
initctl list
status utap

Systemd servers
When using systemd servers, the following are the commands on the database server:

Services servers
When using services servers, the following are the commands on the database server:
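As a sketch (not from the product documentation), the initialization method in use can be inferred from which control tool is present on the database server before issuing the commands above:

```shell
# Print which service facility this host appears to use.
init_style() {
  if command -v systemctl >/dev/null 2>&1; then
    echo systemd
  elif command -v initctl >/dev/null 2>&1; then
    echo upstart
  else
    echo services    # classic services/inittab-style control
  fi
}
init_style
```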
Parent topic: Linux and UNIX systems: S-TAP operation and performance
You can configure any S-TAP to create multiple threads to increase data throughput. If the S-TAP configuration file defines more than one Guardium system, a thread can be created for each Guardium system. In v10.1.4 and higher, S-TAP creates extra threads matching the number of Guardium systems, up to 10 threads. When the participate_in_load_balancing parameter is set to 4, the K-TAP creates a similar number of buffers matching the number of Guardium systems, up to 5. The K-TAP alternates between the buffers, placing entire packets in each buffer. Each S-TAP thread reads from a different K-TAP buffer and sends traffic data to a single Guardium system.
In this configuration, no single Guardium system receives all the data from the S-TAP. The distribution is similar to that used when participate_in_load_balancing is set to 1.
Attention: Prior to V10 GPU200, when a Guardium system becomes unavailable, no failover is provided. Data that was being sent to a Guardium system is lost until the
system becomes available or the configuration is changed.
Attention: Prior to V10 GPU300, if the S-TAP configuration file defines more than one Guardium system, a thread can be created for each Guardium system. This feature is
activated only when participate_in_load_balancing parameter is set to 4.
Encrypted and unencrypted A-TAP traffic cannot be sent to the same Guardium system. This is similar to the situation when participate_in_load_balancing is set to 1.
You can define new queries or reports on the Rogue Connections domain, and you can create alerts that are based on exceptions that are created by S-TAPs, but other domains that are used by S-TAP reports are system-private and cannot be accessed by users.
System View
S-TAP Status Monitor in the System Monitor window: For each S-TAP reporting to this Guardium system, this report identifies the S-TAP Host, S-TAP Version, DB Server
Type, Status (active or inactive), Last Response Received (date and time), Instance Name, Primary Host Name, and true/false indicators for: KTAP, TEE, MS SQL Server
Shared Memory, DB2® Shared Memory, Win TCP, Local TCP monitoring, Named Pipes Usage, Encryption, Firewall, DB install Dir, DB port Min and DB Port Max.
Note: The DB2 shared memory driver has been superseded by the DB2 Tap feature.
S-TAP Status Monitor: For each S-TAP reporting to this Guardium system, this report identifies the S-TAP Host, DB Server Type, S-TAP Version, Status (active or inactive),
Inspection Engine status, Last Response Received (date and time), Primary Host Name, and true/false indicators for: Firewall and Encrypted. Click the S-TAP Status and
the Inspection Engine status to see the Verification status on all Inspection Engines.
S-TAP Events: For each S-TAP reporting to this Guardium system, this report identifies the S-TAP Host, Timestamp, Event type (Success, Error Type, and so on), and Tap
Message.
If no messages display in the S-TAP Events panel, the production of event messages may have been disabled in the configuration file for that S-TAP®. If this is the case,
you may be able to locate S-TAP event messages on the host system in the syslog file.
Tap Monitor
Rogue Connections: This report is available only when the Hunter option is enabled. The Hunter option is only used when the Tee monitoring method is used. This report
lists all local processes that have circumvented S-TAP to connect to the database.
Primary Guardium® Host Change Log: Log of primary host changes for S-TAPs. The primary host is the Guardium system to which the S-TAP sends data. Each line of the
report lists the S-TAP Host, Guardium Host Name, Period Start, and Period End.
S-TAP Status: Displays status information about each inspection engine that is defined on each S-TAP Host. This report does not have From and To date parameters, since
it is reporting current status. Each row of the report lists the S-TAP Host, DB Server Type, Status, Last Response, Primary Host Name, Yes/No indicators for the following
attributes: K-TAP Installed, TEE Installed, Shared Memory Driver Installed, DB2 Shared Memory Driver Installed, Named Pipes Driver Installed, and App Server Installed.
In addition, it lists the Hunter DBS.
Inactive S-TAPs Since: Lists all inactive S-TAPs that are defined on the system. It has a single runtime parameter: QUERY_FROM_DATE, which is set to now -1 hour by
default. Use this parameter to control how you want to define inactive. This report contains the same columns of data as the S-TAP Status report, with the addition of a
count for each row of the report.
Parent topic: Linux and UNIX systems: S-TAP operation and performance
To access this information, use the GUI. You can create alerts based on the results.
The time interval is in hours (for example, 5 means every 5 hours). Use a negative value for a time interval of less than 1 hour.
Fields in Table
TIMESTAMP
SOFTWARE_TAP_HOST
TOTAL_BYTES_SO_FAR
TOTAL_BYTES_DROPPED_SO_FAR
TOTAL_BYTES_IGNORED
TOTAL_BUFFER_INIT
IOCTL_REQUESTS
TOTAL_RESPONSE_BYTES_IGNORED
System CPU%
System Idle%
STAP CPU%
Buffer recycled
Parent topic: Linux and UNIX systems: S-TAP operation and performance
Note: On HP-UX 11.11, the information about the process command is limited to 64 characters. This means that if the full path to the guard_stap binary is longer than 64 characters, the Guardium monitor cannot recognize it.
Monitoring covers:
CPU utilization: checked with the ps command or using cpu time from procfs
CPU responsiveness to polling: checked by sending the S-TAP process a console request and waiting for a response.
If S-TAP CPU utilization exceeds the configured threshold, or if S-TAP does not respond to the console request, the following actions can be taken:
Guard Monitor installs automatically at the end of the S-TAP installation. There are no user prompts and no install progress is shown. During S-TAP uninstall, Guard
Monitor is automatically uninstalled. The user no longer has the option to reboot in the installer and is instead just notified that a reboot is necessary to complete the
uninstall. This reboot is not critical, but it is necessary if the user intends to install S-TAP again on the system. If the user uninstalls, does not reboot, and then tries to reinstall, a popup blocks the installation, notifying the user that S-TAP is partially installed and the server must be rebooted.
The guard_monitor runs with its configuration file, guard_monitor.ini as its argument. The monitor is controlled by using the guard_monitor.ini file. For Shell installations,
you can make all configuration changes directly on the configuration file. For GIM, use the interface in the GUI to make any changes.
Guard_monitor is not enabled by default. In shell installations, enable it from inittab by uncommenting the "umon" line, or by using the services control facility for the particular operating system (initctl for RedHat 6, systemctl for RedHat 7, SMF for Solaris 10 and up). For GIM installations, guard_monitor is enabled by setting STAP-UTILS_START_MONITOR=y.
The default location for the S-TAP Monitor output is /var/tmp/monitor. This location can be configured from guard_monitor.ini (configuration file). See the example of the
guard_monitor.ini file at end of this topic.
After enabling guard_monitor, make sure the process is running on the database server.
Default thresholds are provided for each function. For example, you might want to monitor CPU usage, and set one threshold (75%) for gathering diagnostic
information and a higher threshold (85%) at which the S-TAP is killed. You would set auto_diag=1 to enable gathering of diagnostic information, and
diag_high_cpu_level=7500 to gather diagnostic information when CPU usage reaches 75%. Then set auto_kill_on_cpu_enable=1 to enable automatic killing of the
S-TAP process, and set auto_kill_on_cpu_level=8500 to kill the process when CPU usage reaches 85%.
But you may not want to keep killing the S-TAP process repeatedly, so you can set a limit on that as well. You can limit how many times the process can be killed
within one hour by setting kill_num_in_hour=5. Then specify what should happen when the limit is reached: code final_action=1 to disable the S-TAP, or
final_action=2 to allow it to continue running.
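The two-threshold example above can be sketched as a guard_monitor.ini fragment (the same values as in the text):

```ini
; gather diagnostics at 75% CPU, kill S-TAP at 85%
auto_diag=1
diag_high_cpu_level=7500
auto_kill_on_cpu_enable=1
auto_kill_on_cpu_level=8500
; at most 5 kills per hour, then disable S-TAP
kill_num_in_hour=5
final_action=1
```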
diags
trace
NULL
Auto-Diag action
If S-TAP CPU utilization exceeds the configured threshold, the most basic action guard_monitor takes is an automatic guard_diag.
By default, the output from the guard_diag is placed in /var/tmp. The file name is derived from the machine name, and the time/date run; it always starts with
diag.ustap.
Auto-Kill action
auto_kill_on_cpu_level STAP-UTILS_MONITOR_AUTO_KILL_ON_CPU_LEVEL 8500 The S-TAP CPU threshold at which guard_monitor kills S-TAP. Enter (%CPU threshold * 100). v10.1.4 and higher: when cpu_measurement_mode=1, the % can be higher than 100.
final_action STAP-UTILS_MONITOR_FINAL_ACTION Action taken when the maximum kills per hour is reached. 1 = Disable S-TAP. 2 = Stop killing S-TAP and let it continue.
Some S-TAP issues, such as when S-TAP gets stuck in a loop, require more information than provided in the guard_diag output.
The guard_monitor performs automatic core dumping of the S-TAP process. The guard_monitor core dumps S-TAP before killing the process (if S-TAP auto kill is
enabled).
sigsegv: The most portable of the options, but requires the SA to configure ulimit to enable core dumping.
gcore: The most useful, but requires gcore to be installed on the system. Linux platforms only.
pstack: Least useful of the options, but may be the only utility available on certain systems. Linux platforms only.
NULL: Disabled.
limitsexceeded (introduced in v10.1.4): Collect a core when S-TAP is killed due to exceeding a resource limit.
kill_oldcore_saved Integer. Specifies whether generated core dumps are saved. When set to non-zero,
guard_diag keeps all core dumps generated. Otherwise, it deletes the old core
dumps each time a new one is generated.
Example of guard_monitor.ini
The following section header is required for GIM to recognize this .ini file.
; otherwise, it serves no purpose
[TAP]
; output dir for monitor logs, diags, traces, etc.
monitor_output_dir=/var/tmp
; location of guardium installation (need not be where monitor is installed, for example, /usr/local)
stap_dir=/usr/local
; ip to connect to for downloading configuration file and uploading diags and trace output
; this is parsed out of the guard_tap.ini, but backup value here is kept in sync
sqlguard_ip=NULL
; polling interval to verify that server end is still alive (secs)
poll_server_interval=20
; polling interval to check CPU level (secs)
poll_cpu_interval=10
; polling interval to communicate with STAP (secs)
poll_stap_interval=10
; maximum file size of monitor log file (KB)
monitor_log_rotate_size=1024
; number of rotated monitor logs to keep
monitor_log_rotate_num_kept=5
; maximum file size of log files (KB)
log_rotate_size=4096
; number of rotated logs to keep
log_rotate_num_kept=5
; logs to rotate
logs_to_rotate=/tmp/guard_stap.stderr.txt,/tmp/guard_stap.stdout.txt,/usr/local/guardium/guard_stap/ktap/ktap_install.log,/us
r/local/guardium/guard_stap/guard_discovery.stderr.log
; maximum number of STAP kills per hour (doesn't count kills resulting from auto_kill_on_intercept)
kill_num_in_hour=5
; disable STAP when kills per hour limit hit or disable kills and let STAP continue
; disable STAP: 1; disable kill: 2
final_action=2
; automatic kill STAP on CPU level on/off (1/0)
auto_kill_on_cpu_enable=0
; CPU level for kill (% * 100)
auto_kill_on_cpu_level=8500
; snif timeout for kill (secs, 0 disabled)
auto_kill_on_snif_timeout=0
; KTAP timeout for kill (secs, 0 disabled)
auto_kill_on_ktap_timeout=0
; PCAP timeout for kill (secs, 0 disabled)
auto_kill_on_pcap_timeout=0
; TEE timeout for kill (secs, 0 disabled)
auto_kill_on_tee_timeout=0
; SHMEM timeout for kill (secs, 0 disabled)
auto_kill_on_shmem_timeout=0
; automatic diags on/off (1/0)
auto_diag=1
; number of diags runs
diag_num=2
; time between diags runs (mins)
diag_interval=2
Parent topic: Linux and UNIX systems: S-TAP operation and performance
From the GUI, the S-TAP version number is displayed in Manage > System View > S-TAP Status Monitor
Alternatively, you can display the S-TAP version number from the command line of the database server.
Run debug from the command line to quickly identify configuration issues
Use the syntax <stap_program> <parameter_file> <debug_level>, where 4 is the level for normal debug. (Other values do different things; not all of them are for debugging.)
For example: /usr/local/guardium/guard_stap/guard_stap /usr/local/guardium/guard_stap/guard_tap.ini 4
Verify the connection between the database server and the Guardium system
Verify that you can ping the Guardium system at sqlguard_ip from the database server.
If the ping is successful, verify that you can telnet to the following ports on the Guardium system: 16016/16018
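The connectivity checks above can be sketched as commands, parameterized on the appliance address (10.0.0.5 below is a hypothetical sqlguard_ip, not a real default):

```shell
# Print the ping and telnet checks for a given Guardium system address.
check_cmds() {
  ip="$1"
  echo "ping -c 3 $ip"
  for port in 16016 16018; do   # the two appliance ports named above
    echo "telnet $ip $port"
  done
}
check_cmds 10.0.0.5
```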
1. Click Manage > Activity Monitoring > S-TAP Control to open S-TAP Control.
2. Locate the S-TAP Host for the IP address that corresponds to your database server.
3. Expand the Guardium Hosts subsection, and verify that the active Guardium Host is correctly configured.
4. If necessary, click Modify to update the Guardium Hosts.
1. Click Manage > Activity Monitoring > S-TAP Certification to open S-TAP Certification.
2. Look at the S-TAP Approval Needed check box. If this box is checked, new S-TAPs can connect to this Guardium system only after they have been added to
the list of approved S-TAPs.
3. If S-TAP Approval is turned on, select Daily Monitor > Approved Tap Clients to view a list of approved S-TAPs. If the S-TAP that you are investigating is not on
this list, return to the S-TAP Certification pane, enter the IP address of the S-TAP in the Client Host field, and click Add.
The verification process attempts to log in to your database's STAP client with an erroneous user ID and password, to verify that this attempt is recognized and
communicated to the Guardium system. Your S-TAP could be configured in a way that prevents the inspection engine message from reaching the Guardium system
from which the request was made.
Load balancing: if the S-TAP is configured to return responses to more than one Guardium system, the error message could be sent to a different Guardium
system.
Failover: If secondary Guardium systems are configured for the S-TAP, the error message could be sent to a secondary Guardium system if the primary
Guardium system is too busy.
Db_ignore_response: if the S-TAP is configured to ignore all responses from the database, it does not send error messages to the Guardium system.
Client IP/mask: if any mask is defined that is not 0.0.0.0, it could prevent the error message from being sent.
Exclude IP/mask: if any mask is defined that is not 0.0.0.0, it could prevent the error message from being sent.
Related topics:
Parent topic: Linux and UNIX systems: S-TAP operation and performance
You can use information gathered by the Guardium DB2 for i S-TAP to create activity reports, help you meet auditing requirements, and generate alerts of unauthorized
activity. Detailed auditing information includes:
SQL Performance Monitor (otherwise known as database monitor) data for SQL applications
Audit entries from the QSYS/QAUDJRN audit journal for applications using non-SQL interfaces
Any SQL access whether it is initiated on the IBM i server or from a client
Any native access that is captured in the audit journal
The S-TAP sends this data to the Guardium system in real time.
For more information about the DB2 for i S-TAP and related topics, refer to these sources:
Using IBM Security Guardium for monitoring and auditing IBM DB2 for i database activity: this developerWorks article introduces IBM Guardium, the DB2 for i S-
TAP, and key related details.
IBM i on IBM Knowledge Center: look here for information about IBM i, audit journaling, and other related topics.
Note: i S-TAP TLS support and load balancing are supported only for IBM i 7.1 and 7.2.
Similar to UNIX S-TAPs, i S-TAP configuration parameters are saved in a guard_tap.ini file in the /usr/local/guardium directory on the IBM i server.
Administrators configure the S-TAP by using the same APIs and UI (S-TAP Control) as other UNIX S-TAPs. When the GUI or API is used to change the S-TAP configuration, the Guardium sniffer sends a message to the S-TAP, which backs up the old .ini file, saves the configuration to a new .ini file, and then restarts itself.
Administrators can use the S-TAP configuration controls to set up encrypted communication between the S-TAP and the appliance, as well as various load balancing options.
The failover and load balancing options for the i S-TAP are similar to what exists for UNIX S-TAPs. Use the participate_in_load_balancing parameter to determine
whether to use failover or load balancing behavior, and use the SQLGuard sections of your S-TAP to set up primary, secondary, and tertiary Guardium hosts.
One difference is that there is no need for participate_in_load_balancing=3; because of the way the i S-TAP communication is architected, complete session information is available in each message. This means that even before the enhancements delivered in this patch, you could have used hardware load balancing (such as F5) with participate_in_load_balancing=1 and a virtual IP address in the primary SQLGuard section of the configuration file.
In a failover configuration, the S-TAP is configured to register with multiple collectors, but only send traffic to one collector at a time
(participate_in_load_balancing=0). The S-TAP in this configuration sends all its traffic to one collector unless it encounters connectivity issues to that collector that
triggers a failover to a secondary collector.
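As a sketch, the failover setup described above might look like the following in guard_tap.ini. The section and parameter names follow the conventions discussed here, but the IP addresses are placeholders and the exact section layout can differ by S-TAP version, so verify against your release's configuration reference:

```ini
[TAP]
participate_in_load_balancing=0

[SQLGUARD_SECTION]
sqlguard_ip=10.10.9.240
primary=1

[SQLGUARD_SECTION]
sqlguard_ip=10.10.9.241
primary=2
```

Assuming the primary parameter ranks the hosts, primary=1 marks the primary collector and primary=2 the secondary; with participate_in_load_balancing=0, all traffic goes to the primary unless a connectivity failure triggers failover. Setting participate_in_load_balancing=1 instead would distribute traffic across the configured hosts.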
The first two bytes represent the CCSID of the encoding of the bytes that follow. For example, 0x04B8 stands for CCSID 1208. The bytes that follow must use this syntax:
SELECT
'GuardAppEvent:Start',
'GuardAppEventType:type',
'GuardAppEventUserName:name',
'GuardAppEventStrValue:string',
'GuardAppEventNumValue:number',
'GuardAppEventDateValue:date'
FROM DUAL
For more information about the type, name, string, number, and date values, see the GuardAppEvent API.
Monitoring strategy
Make your monitoring and auditing effective and efficient by developing a strategy that recognizes and fulfills your regulatory and other requirements.
Installing the S-TAP for IBM i
Follow these steps to install or uninstall the S-TAP.
Defining the S-TAP for IBM i
After you install the S-TAP, ensure that it can communicate with the Guardium system.
Monitoring strategy
Make your monitoring and auditing effective and efficient by developing a strategy that recognizes and fulfills your regulatory and other requirements.
After you know what data you need, develop a strategy for collecting it with as little extraneous data as possible. Monitoring and logging data that you do not need uses up
disk space and processing power, and generates extra network traffic. There are several areas where you can implement your strategy:
Database monitoring
The global SQL monitor captures SQL information and puts it into a queue for the S-TAP. You can use the filtering capabilities of the monitor to control which types
of users and objects are queued. By default, these types of entries are not forwarded from the S-TAP to the Guardium system:
SQL Abbreviation Meaning
AD ALLOCATE DESCRIPTOR
CL CLOSE
DA DEALLOCATE DESCRIPTOR
DE DESCRIBE
FE FETCH
FL FREE LOCATOR
GD GET DIAGNOSTICS
GS GET DESCRIPTOR
RE RELEASE
RG RESIGNAL
SC SET CONNECTION
SD SET DESCRIPTOR
SG SIGNAL
Audit journal
You can configure the system audit journal to capture only those entries that concern objects of interest or users of interest. By default, entries of these types are
sent from the S-TAP to the Guardium system:
Audit Journal Entry Type Meaning
ZR Read object
ZC Change object
CA Authority change
AD Auditing change
AF Authority failure
CO Create object
DO Delete object
OW Change owner
OR Object restored
Ignoring data after it has been sent over the network is inefficient. Wherever possible, filter out information that you do not need before it is queued for the S-TAP.
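On IBM i, this source-side filtering is typically done with CL commands. For example (the library, object, and user profile names below are placeholders, and the full set of auditing values is described in the IBM i audit journaling documentation):

```
CHGOBJAUD OBJ(PAYLIB/PAYROLL) OBJTYPE(*FILE) OBJAUD(*CHANGE)
CHGUSRAUD USRPRF(APPUSER) OBJAUD(*ALL)
```

The first command audits only changes to a specific file; the second audits all object access by a specific user profile, so that only entries of interest ever reach the audit journal and the S-TAP queue.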
Parent topic: DB2 for IBM i S-TAP
You must know the IP address of the Guardium system to which this S-TAP will connect.
When you download the S-TAP, be sure to filter for the IBM i platform, to ensure that you download the correct package.
You can use 5250 emulator software to connect to the IBM i system remotely.
Procedure
1. On the IBM i server, enter this command to open the PASE shell: call qp2term.
2. In the PASE shell environment, create a temporary directory to hold the S-TAP installation script, such as /tmp.
3. Use FTP to move the following S-TAP installation shell script to that temporary directory: guard-itap-9.0.0_rnnnnn-aix-5.3-aix-powerpc.sh
4. In the same directory, run this command:
Results
The S-TAP is installed in /usr/local/guardium. After the installation is complete, the S-TAP attempts to start the processes that enable activity monitoring and to connect to
the Guardium system by using the IP address that was specified with the installation command.
What to do next
To validate the successful installation and start of the audit process, log in to the IBM Guardium web console as an administrator, navigate to the System View tab, and
check the status of the S-TAP.
Procedure
1. Define DB2 for i as a recognized data source to IBM Guardium and test the connection.
2. Populate the Guardium system with information from the configuration file on IBM i that was created when you installed the DB2 for i S-TAP, using the Custom
Table Builder process.
3. Create a DB2 for i configuration report. It is from this report interface that you can invoke the Guardium APIs that enable you to start and stop the monitoring
process, get status information, and update configuration parameters, including filtering values.
Procedure
1. Click Setup > Tools and Views > Datasource Definitions to open the Datasource Builder. Select Custom Domain from the Application Selection box. Click Next.
2. In the Datasource Finder, click New, which opens the Datasource Builder.
3. Select DB2 for i as the Database Type and then add the appropriate information for the host, service name, and credentials. Click Apply.
4. Click Test Connection to ensure that the configuration succeeded.
5. Click Tools > Report Building.
6. Click Custom Table Builder. Select DB2 for i S-TAP Configuration and then click Upload Data. The Datasource Finder displays a list of DB2 for i S-TAPs.
7. Select your DB2 for i data source from the list. Click Add.
8. On the Import Data screen, ensure the DB2 for i data source appears. Click Apply and then click Run Once Now. You should see a message that the operation ended
successfully with one row inserted.
9. Click Customize in the Guardium title bar. Then click Add Pane.
10. Give the pane a new name, such as My New Reports, and then click Apply.
11. My New Reports appears in the Customize pane. Click the icon next to the name. In the Layout dropdown list, choose Menu Pane. Click Save. Your new pane
appears as a tab.
12. Click Report Building in the navigation pane.
13. From the query dropdown list, click DB2 for i S-TAP configuration, then click Search.
14. Select the DB2 for i S-TAP configuration and then click Add to My New Reports (or the name that you specified in step 10).
15. Open the My New Reports tab, which now displays the IBM i report row. Double-click a row in the report and select Invoke. A list of IBM Guardium APIs that you
can select is displayed.
16. Select update_istap_config.
17. When you select a Guardium API, the parameters for that API are displayed. You can change any values that you need to. Change the value of the start_monitor
parameter to 1. Click Invoke Now.
Results
Using the data that you have entered, the update_istap_config API performs these tasks:
Creates the message queue that will be used to send entries from the S-TAP to the Guardium system and starts a global database monitor using a view with an
INSTEAD OF trigger, which sends the entries to the message queue.
Starts PASE and the S-TAP.
Receives journal entries from QAUDJRN and adds them to the message queue.
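The same update_istap_config call can also be issued directly from the Guardium CLI as a grdapi command. A minimal sketch follows; only the start_monitor parameter is confirmed by this procedure, and any additional parameters required to identify the S-TAP are those shown in the Invoke dialog:

```
grdapi update_istap_config start_monitor=1
```

Setting start_monitor=0 in the same way would stop the monitoring process.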
The GIM component includes a GIM server, which is installed as part of the Guardium system, and a GIM client, which must be installed on servers that host databases or
file systems that you want to monitor. The GIM client is a set of Perl scripts that run on each managed server. After you install the GIM client, it works with the GIM server
to perform these tasks:
For example, you can use GIM to install your S-TAP modules and keep them up-to-date.
The GIM client uses port 8444 to communicate with the GIM server.
You can use the GIM server through the Guardium user interface or through the command-line interface (CLI).
The software modules that you can deploy by using GIM are packaged as GIM bundles. A bundle is a file of type gim that contains software that can be deployed by using
GIM.
If your environment includes a Guardium system that is configured as a central manager, you must decide which Guardium systems you want to use as GIM servers. You
can either manage all of your GIM clients, up to 4000, from a single Guardium system, such as the central manager, or you can manage them in groups from the different
Guardium systems. If you manage all of your GIM clients from a single Guardium system, then you can view the status of all the GIM clients and perform related tasks
from that one UI. If you choose to manage your GIM clients in groups from separate Guardium systems, then you can use each UI to work with the GIM clients that it
manages; no overall view is available.
If you upgrade to Version 10.0 from V9.0 GPU patch 50 or later, there is no change in how you can view information about GIM clients. If you upgrade from an older
version, these restrictions apply: After you upgrade your Central Manager, you can still view information about GIM clients that are assigned to other Guardium systems,
but you can no longer do provisioning to those GIM clients from the Central Manager. After you upgrade all your Guardium systems, you can view each GIM client only from
the Guardium system that is its GIM server.
To manage large numbers of GIM installations, you can create groups of GIM clients. Then, you can use the groups to install, update, and manage software bundles.
The GIM client monitors the processes that you install by using GIM. It checks the heartbeat of each process once each minute, and passes status changes for the
processes to the GIM server. The status of each process is displayed on the Process Monitoring panel. Changes are reflected within three minutes. Changes to the status
of the GIM client itself are reflected according to the interval at which the client polls the server and delivers its "alive message".
Note: When performing a system backup and restore from one server that has GIM defined to another server, you must configure GIM failover to the restore server. This GIM configuration applies to both a Backup Central Manager and a system backup and restore.
The deploy monitoring agents tool simplifies the process of establishing a Guardium deployment. Building on the existing Guardium Installation Manager (GIM) infrastructure, the deploy monitoring agents tool helps you quickly find database servers, install monitoring agents (S-TAPs), and configure inspection engines for your databases.
Before using the deploy monitoring agents tool to install S-TAPs and configure inspection engines on your database servers, verify the following prerequisites.
The target S-TAP installation directory must be empty or not exist. You cannot install an S-TAP into a directory that already contains any files.
Install GIM clients in listener mode
Install GIM clients in listener mode on one or more database servers in your environment. To install the GIM client in listener mode on Windows systems, omit the --host parameter. To install the GIM client in listener mode on systems such as AIX and Linux, omit the --sqlguardip parameter. For more information about GIM listener mode, see GIM Server Allocation.
Important: You may need to open a port between the GIM client on the database server and the Guardium system where you will run the deploy monitoring agents
tool. The default port 8445 is used unless you specify a different port when installing the GIM client.
Upload GIM S-TAP modules to the Guardium system
Run the deploy monitoring agents tool as an administrative user from any Guardium system that is not configured as an aggregator. Before you begin, use the
following procedure to upload GIM S-TAP modules to the Guardium system.
For information about S-TAP offerings and supported platforms, see System requirements and supported platforms for IBM Security Guardium.
Inspection engines can be automatically configured for some databases, including the following:
To allow the auto-configuration of inspection engines, verify that database servers are running before deploying monitoring agents.
For more information about automatically discovering database instances, see Discover database instances.
Procedure
1. Open the deploy monitoring agents tool by navigating to Setup > Quick Start > Deploy Monitoring Agents.
2. In the Identify database servers section, use the IP addresses field to specify a range of IP addresses to search for GIM clients in listener mode. Use the icon to
specify additional IP addresses. Include wildcard (*) or range (-) characters to expand the search. For example, 10.0.0-5.*. Use commas to separate complete
IP addresses or ranges. For example, 9.70.145.165,9.70.145-148.165,9.70.145.*.
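As an illustration of the range syntax (this is not a Guardium tool, just a sketch of what the range matches), the third-octet range in 9.70.145-148.165 expands to four addresses:

```shell
# Expand the third-octet range in "9.70.145-148.165" (illustrative only).
for i in $(seq 145 148); do
  echo "9.70.$i.165"
done
```

The output is 9.70.145.165 through 9.70.148.165, one address per line; a trailing wildcard such as 9.70.145.* would match all 256 addresses in that subnet.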
Important: Scanning a large number of IP addresses is time intensive and may time out before the scan completes. Use the IP addresses fields to define a narrow range of IP addresses where you expect to find GIM clients in listener mode.
However, it is possible to streamline the process by automatically installing S-TAPs on all compatible GIM clients that are discovered while scanning IP addresses.
To enable the automated mode, click to open the Customize settings dialog and select Automatically deploy agents on discovered database servers. When
using the automated mode, after specifying the IP addresses to scan, simply click the Discover and Deploy button.
4. In the Database server status section, select the database servers where you would like to deploy monitoring agents and click Deploy Agents to open the Configure
monitoring agents dialog.
5. From the Configure monitoring agents dialog, review and adjust the installation parameters. Click Deploy to begin installing monitoring agents.
The default parameters should work well for most new deployments. However, you may want to adjust the following settings for your specific environment.
Specify an installation directory for S-TAPs deployed on Windows database servers. The parameter is ignored and default installation paths are used when
deploying on other platforms. For more information about S-TAP installation parameters, see S-TAP command line and GIM installation parameters and S-
TAP install script parameters.
Select Use enterprise load balancing to automatically assign S-TAPs based on the relative load or availability of Guardium collectors in a centrally-managed
environment. For more information, see Enterprise load balancing.
6. In the Database server status section, use the S-TAP installation status column to monitor the progress of module installation. A status of Installed indicates
successful and complete installation.
What to do next
If the S-TAP installation status of a database server is marked Failed, click the icon to learn more about the problem. If a database server disappears from the
Database server status after attempting to deploy monitoring agents, click Error log to learn more about the problem.
Tip: The Error log captures issues related to the Deploy monitoring agents tool. For example, if Deploy monitoring agents cannot find a module required for installation, a
message is added to the Error log. Other errors are recorded in component-specific logs and made available for investigation by clicking the icon in the S-TAP
installation status column.
After successfully deploying monitoring agents, you are ready to monitor traffic on your database servers and begin meeting security compliance requirements. To
configure compliance monitoring, navigate to Setup > Quick Start > Compliance monitoring and see Quick start for compliance monitoring for more information.
Set up by Client
Quickly deploy S-TAPs and other software packages using the Guardium Installation Manager (GIM) Set up by Client tool.
GIM clients are installed on database servers and connected to the Guardium system.
Compatible GIM bundles are uploaded and imported to the Guardium system.
Procedure
1. Navigate to Manage > Module Installation > Set up by Client.
2. In the Choose clients section, select the database servers where you want to install or update software using GIM. Select individual clients using check boxes in the
table, or use the Select client group menu to select a group of clients. Click Next to continue.
Attention:
If you add new clients while using the Set up by Client tool, refresh the browser to see the new clients.
When creating or updating a group and editing the Client Name or Client IP address of GIM clients, the name and address must reflect valid values for a GIM
client connected to the Guardium system. If an invalid name or address is specified, the edited client will no longer appear as a member of the group.
3. In the Choose bundle section, use the Select a bundle menu to identify the software you want to install or update. Click Next to continue. After selecting a software
bundle, the Selected bundle action column indicates the action that will be performed for each client:
Install
The selected bundle will be installed on the client. This action indicates a first-time installation of the software on the client.
Tip:
Clear the Show only latest versions check box to view and work with earlier versions of a bundle.
Clear the Show only bundles check box to identify individual modules within a bundle.
Select the Show only compatible clients check box to hide clients that are not compatible with the selected bundle.
Attention:
By default, the Select a bundle menu shows only the latest uploaded bundle version regardless of platform or compatibility with selected clients. To install a
different bundle version for a specific platform or client, clear the Show only latest versions check box and select the required bundle.
If you upload and import new bundles while using the Set up by Client tool, refresh the browser to see the new bundles.
If you already have a bundle scheduled for installation, installing a new bundle removes the existing schedule.
4. In the Choose parameters section, specify values for required and optional parameters. Use the or icons to add or remove optional parameters. Use the
icon to search for parameters by name or description. Click Next to continue.
Important: Unless identified as a client-specific parameter, values provided in the Choose parameters section are applied to all clients where the software will be
installed, upgraded, or updated. For client-specific parameters, the value field is disabled and values are defined per-client in the Configure clients section.
5. In the Configure clients section, use the table to review and edit parameter values for each client. Editable parameters show a icon next to the parameter value.
What to do next
Use the Choose bundle section to monitor the software installation. Installation status is shown in the Status column. Use the icon to refresh the installation status.
Parent topic: Managing software with GIM
Users may also interact with GIM through the CLI. See GIM command line interface for information on installing and upgrading modules with GIM using CLI.
You can use the GUI of the Guardium Installation Manager (GIM) for these tasks:
Process Monitoring
Upload Module Package
Configure, Install, or Update Modules (by client)
Configure, Install, or Update Modules (by module)
Rollback Mechanism
Note: If A-TAP is being used, A-TAP must first be disabled on the database server before performing a GIM-based S-TAP® upgrade or uninstall.
Note: GIM does not support the installation of native S-TAP installers (rpm, deb, bff, and so on).
Note: Installation of modules on a specific client for the FIRST TIME using the GIM utility must be in the form of a BUNDLE. Future upgrades of specific modules that are part of the installed bundle can be applied either as single modules or as bundles.
Process Monitoring
Displays the status for GIM processes on servers.
Supervisor
The GIM Supervisor is a process whose main purpose is to supervise and monitor Guardium® processes. Specifically, it is responsible for starting and stopping all Guardium processes, making sure that they are running at all times, and restarting them if they fail.
Note: For Guardium V9.0, on Solaris 5.10/5.11, GIM and SUPERVISOR are now SMF services. They are not inittab entries anymore.
GIM
The GIM process is the GIM client process, which is responsible for duties such as registering with the GIM server, initiating requests to check for software updates, installing new software, updating module parameters, and uninstalling modules.
You can use this option to configure/install a module for any number of clients from packages already loaded.
The simplest, safest, and quickest way to install or uninstall modules is by using bundles. Using bundles guarantees automatic dependency and order resolution.
If you have already created groups of clients, you can use a group to specify the clients to be the target for the specified action. Otherwise use these steps to select a list
of clients.
1. Click Manage > Install Management > Set up by Client (Legacy) to open the Client Search Criteria.
2. Click the Search button to perform filtered search and display the Clients panel.
3. Select the clients that will be the target for the specified action.
If there are more than 20 clients, the list of clients is split onto additional pages.
Note: Clicking the Select All button selects only the clients on the current page being viewed.
4. From the Clients panel, two actions can be taken:
Configure/install common parameters
Configure/install module
Reset Clients - By clicking Reset Clients, you can disassociate modules from selected clients and remove the client definition from the Guardium system
database. Note: Resetting a client does NOT trigger module removal on the database server.
View installation state of this client - By clicking on the information icon you can open up the Installation Status panel and view the installation status of a
client. This panel displays all modules on the client which are installed or scheduled for update or uninstall. From this panel, you can use the Edit this module
icon to configure parameters for each module individually.
1. Click Manage > Module Installation > Set up by Module to open the Modules Search Criteria.
2. Click Search to perform filtered search and display the Modules panel showing all the available modules and bundles.
3. Select one or more modules and click Next to open the Clients panel.
4. Select the clients that are the target for the specified action.
Note: If there are more than 20 clients, the list of clients is split onto additional pages. Clicking the Select All button selects only the clients on the current page
being viewed.
5. From the Clients panel, these actions can be taken:
Install/update modules: Select one or more target clients and click Next, then click Install/Update
Modify module parameter configuration: Select one or more target clients and click Next, modify parameter values, select the target clients and click Apply to
Clients
Click Reset Clients to disassociate modules from selected clients and remove the client definition from the Guardium system database. Note: Resetting a
client does NOT trigger module removal on the database server.
View installation state of this client by clicking the icon to open the Installation Status panel and view the installation status of a client. This panel
displays all modules on the client which are installed or scheduled for update. From this panel, the Edit this module icon can be used to configure parameters
for each module individually.
Click Run Diagnostics: the diagnostic report is run the next time the client sends an alive message, and is recorded in the GIM Events List.
CAUTION:
There is no validation of input to this field.
For example, the following command line options skip the installation of CAS and Named Pipes support.
CAS=0 NamedPipes=0
If you are installing an S-TAP and you do not want it to automatically discover MSSQL databases, type START=0 in the WINSTAP_CMD_LINE column to prevent the S-TAP
from starting when it is installed. You can also specify this parameter for a single database server by using the GIM API:
Additional guard_tap.ini parameters may also be set at installation. An example is paramValue="START=1 !client_timeout_sec=120&use_tls=1!"
Note: When using GuardAPI commands, the WINSTAP_CMD_LINE paramValue should be quoted and each parameter separated by spaces, as in paramValue="START=1 CAS=0". Missing spaces can cause the subsequent installation to not complete as expected.
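As a sketch of such a GIM API call, something like the following might be used. The gim_update_client_params function name and its parameter names are assumptions here; verify them against the GuardAPI GIM Functions reference for your release:

```
grdapi gim_update_client_params clientIP=192.168.2.210 paramName=WINSTAP_CMD_LINE paramValue="START=0"
```

The clientIP value identifies the registered GIM client (database server) whose parameter is being changed.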
Configure/install module
1. If configuring, installing, or updating:
a. by client
i. Click Next to display the Common Modules panel, which lists all available common modules and bundles that can be installed on the selected
clients.
ii. Select a module or bundle to configure/install for the selected clients.
Note: The status of a module or bundle will be displayed only if its version matches either an installed version or a scheduled version.
iii. Click Next after selecting a module or bundle from the list.
b. by module
i. Click Next after selecting the clients from the list
2. Depending on the module or bundle selected, and possible dependencies, you will then see options based on the selection types:
Bundle
Clicking Next for a bundle takes you to the Module Parameters panel, which displays the parameters for all modules of the bundle. Modify any of the
listed module parameters within the Client Module Parameters section.
Note: A bundle is treated as a regular module.
module with no mandatory dependencies
Clicking Next for a module with no mandatory dependencies takes you to the Module Parameters panel, which displays the module's parameters. Modify
any of the listed module parameters within the Client Module Parameters section.
Rollback Mechanism
The purpose of GIM's rollback mechanism is to handle errors during installation and recover modules to their prior state. The rollback mechanism supports the following
recovery scenarios:
Linux : shutdown -r
SuSe : reboot
HP : shutdown -r
Solaris : shutdown -i [6|0] (Note : '0' can be used only if shutdown is done from the terminal server)
AIX : reboot
Tru64 : reboot
1. Click Manage > Install Management > Set up by Module to change the GIM server for a GIM client.
2. Select a GIM bundle that is installed on the clients that you want to reassign. Click Next.
3. Select the clients to be changed. You can click Select All or select clients individually. Click Next.
4. Click Select All.
5. For the GIM_URL parameter, enter the hostname or IP address of the GIM server (Guardium system) to which you want to reassign the selected GIM clients. Click
Apply to Selected.
6. On the same panel click Apply to Clients, then click Install/Update and schedule the update.
After the update has been processed, the GIM client will be managed by the new GIM server.
Parent topic: Managing software with GIM
The following examples are presented only to cover some of the more common scenarios. For more information and a complete list of all supported CLI commands refer
to GuardAPI GIM Functions.
1. Get the list of registered clients (that is, database servers on which the GIM client is installed and has registered with the GIM server):
grdapi gim_list_registered_clients
ID=0
####### ENTRY 0 #######
CLIENT_ID: 1
IP: 192.168.2.204
OS: HP-UX
OS_RELEASE: B.11.00
OS_VENDOR: hp
OS_VENDOR_VERSION: B.11.00
OS_BITS: 64
PROCESSOR: 9000
####### ENTRY 1 #######
CLIENT_ID: 2
IP: 192.168.2.210
OS: Linux
OS_RELEASE: 2.6.16.54-0.2.5-smp
OS_VENDOR: suse
OS_VENDOR_VERSION: 10.1
OS_BITS: 64
PROCESSOR: x86_64
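When the listing above is saved to a file or variable, the CLIENT_ID/IP pairs can be pulled out with a short awk script. A sketch using a pasted sample of the output:

```shell
# Sample lines copied from the gim_list_registered_clients output above.
clients='####### ENTRY 0 #######
CLIENT_ID: 1
IP: 192.168.2.204
OS: HP-UX
####### ENTRY 1 #######
CLIENT_ID: 2
IP: 192.168.2.210
OS: Linux'

# Remember each CLIENT_ID, then print it alongside the IP that follows it.
echo "$clients" | awk '/^CLIENT_ID:/ {id=$2} /^IP:/ {print id, $2}'
```

This prints one "id ip" pair per registered client, which is convenient input for scripting later grdapi calls against specific clientIP values.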
2. Assign the latest available bundle to a specific client (that is, prepare it for installation; this is NOT a request to actually install it on the client).
Note: In order to assign a specific bundle or module to a client, step 2 should be replaced with the following sequence:
GIM scheduling
All times are relative to Guardium system time. Now means right now as specified by the Guardium system. Now +30 minutes is the current Guardium system time plus 30
minutes. You can see this when looking at the installation status by clicking the small "i" next to a client, for example in Manage > Module Installation > Set up by
Client (Legacy). If the time on the database server has passed the install time specified on the Guardium system, then the install begins.
Example one, set up three clients (a) set for Guardium system time - 1 hour, (b) set for Guardium system time, and (c) set for Guardium system time + 1 hour.
Guardium system (a), which is already 30 minutes ahead of the time set for installation, will install immediately.
Guardium system (c) will take another hour after (b) to install.
Example two - Same setup as example one but this time specify "now".
Uninstalling a module/bundle
grdapi gim_uninstall_module clientIP=192.168.2.210 module=BUNDLE-STAP date=now
You can specify date=now or use the format YYYY-MM-DD HH:mm. The uninstallation takes place the next time the GIM client checks for updates (GIM_INTERVAL).
Installation Status
Additional information about the latest status that the client has sent can be retrieved by running the following command (the status message appears as an entry in the
GIM_EVENTS table, from which a report can be generated):
The general status message can be obtained by running the following CLI command:
ID=0
OK
BUNDLE-STAP-8.0_r2609_1 INSTALLED
STAP-UTILS-8.0_r2609_1 INSTALLED
COMPONENTS-8.0_r2609_1 INSTALLED
KTAP-8.0_r2609_1 INSTALLED
STAP-8.0_r2609_1 INSTALLED
TEE-8.0_r2609_1 INSTALLED
ATAP-8.0_r2609_1 INSTALLED
INSTALLED - Module is installed.
PENDING-INSTALL - Module is pending to be scheduled for installation.
PENDING-UNINSTALL - Module is pending to be scheduled for uninstallation.
PENDING-UPDATE - Module is pending to be scheduled for update.
IP - Module installation is in progress.
FAILED - Module's last operation failed.
IP-PR - Module requires a client reboot in order to complete the installation process. Prior to rebooting, deactivate all A-TAP instances. The reboot command differs per OS; any other way of rebooting the system will keep the pending modules in a pending state:
AIX: reboot
Linux: shutdown -r
SUSE: reboot
HP-UX: shutdown -r
Solaris: shutdown -i [6|0] (Note: '0' can be used only if shutdown is done from the terminal server)
Tru64: reboot
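The per-OS reboot commands above can be dispatched with a simple case statement. The command is echoed rather than executed, and the OS value is hard-coded where it would normally come from something like `uname -s`:

```shell
# Sketch: map the detected OS to its documented reboot command.
# The command is echoed, not executed.
os="Linux"   # normally derived from, e.g., uname -s
case "$os" in
  AIX|SUSE|Tru64) cmd="reboot" ;;
  Linux|HP-UX)    cmd="shutdown -r" ;;
  Solaris)        cmd="shutdown -i 6" ;;   # -i 0 only from the terminal server
  *)              cmd="unknown OS" ;;
esac
echo "$cmd"
```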
Output example
Overview
The following process (also called GIM Auto-Discovery) allows you to remotely connect to a pre-installed and inactive GIM agent and make it connect to a collector
without accessing the database server.
1. An inactive GIM client runs in listener mode and waits for a connection from any collector.
2. From the collector's graphic user interface (GUI) or the GuardAPI, you can send the IP address of any collector to the inactive GIM client.
3. The inactive GIM client accepts the collector's IP address and connects to it.
If GIM is installed without specifying a collector's IP address (--sqlguardip), it runs in server mode. When the GIM agent is running in server mode, it accepts messages only from verified collectors over SSL, with certificate authentication and shared-secret verification. If there are 30 or more consecutive authentication failures, the GIM agent stops listening for requests. This behavior prevents denial of service (DoS) attacks.
You can define your own certificates, shared secret, and port number. To use other certificates, specify the full path name of the certificate and key in the installation parameters --key_file and --cert_file. Load the certificates into the collector key store with the GuardAPI command store certificate gim.
To set a shared secret other than the default one, use the GuardAPI command grdapi gim_set_global_param paramName=gim_listener_default_shared_secret
paramValue=<password>. The format should be a string. The shared secret must be identical on the database server and collector.
Note: Do not specify the unencrypted shared secret in the command line.
To use a port other than the default one, specify the port in the installation parameter --listener_port. Set the GIM global parameter gim_listener_default_port with the
new port in the GIM Global Parameters.
Note: The default or user defined port must be enabled in the firewall.
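Because a blocked port is a common cause of listener-mode registration failures, a quick reachability probe from the collector side can save time. This sketch uses bash's /dev/tcp feature; the host is an illustrative placeholder and 8445 is the default listener port given in the Windows parameter reference later in this section:

```shell
# Sketch: probe whether the GIM listener port is reachable. Host is an
# illustrative placeholder; /dev/tcp is a bash-specific feature.
host=127.0.0.1
port=8445
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  result="port $port open"
else
  result="port $port closed or filtered"
fi
echo "$result"
```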
Parameters
The following list describes the GIM installation parameters:
--sqlguardip - Sets the collector IP address/hostname that the GIM client connects to. If it is not specified, the GIM client works in "Listener mode".
--ca_file - Full path to the Certificate Authority PEM file.
--key_file - Full path to the private key PEM file.
--cert_file - Full path to the certificate PEM file.
--shared_secret - Specifies a shared secret for verifying collectors.
--listener_port - Specifies a port number that is different from the default.
--no_listener - Prevents GIM from running in "Listener mode" even if --sqlguardip is not specified.
Any of the following requests causes the GIM agent to exit server mode and process the request:
update parameters
install modules
uninstall GIM directly on the database server
If the GIM client cannot connect to the designated collector, it returns to server mode. After the GIM agent is assigned to a valid collector's IP address or host name, you cannot set the GIM agent to run in server mode again. All new GIM agent server mode parameters appear as READ-ONLY.
Note: The following parameters must exist in the file system or the installation fails:
ca_file
key_file
cert_file
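Since installation fails when any of these files is absent, a pre-check before running the installer is a reasonable precaution. The paths below are illustrative placeholders, not Guardium defaults:

```shell
# Sketch: verify that the certificate-related files referenced by ca_file,
# key_file, and cert_file exist before installing. Paths are illustrative.
missing=0
for f in /etc/guardium/ca.pem /etc/guardium/key.pem /etc/guardium/cert.pem; do
  if [ ! -f "$f" ]; then
    echo "missing: $f"
    missing=$((missing + 1))
  fi
done
echo "$missing file(s) missing"
```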
GIM and Consolidated Installers for GIM have an additional command line parameter:
--allow_ip_hostname_combo <0|1>
Description: If this parameter is enabled and GIM_CLIENT_IP differs from the database server's hostname, GIM_CLIENTS.GIM_CLIENT_NAME is set to the combination <hostname>_<GIM_CLIENT_IP>. This keeps GIM clients unique across database servers that share a "common" hostname.
LIMITATION: You can NOT set GIM_CLIENT_IP to a "common" hostname. This is considered an attempt to register with a duplicate identifier.
grdapi gim_set_global_param
paramName=gim_listener_default_shared_secret
paramValue=<password>
This value is encrypted and stored in the database. It must be identical to the unencrypted shared secret that you specify when you install the GIM agent on the database server.
To set up a new default server mode GIM port, set the GIM global parameter gim_listener_default_port by using the grdapi gim_set_global_param command.
Note: If you use a different port or shared secret, you must specify the shared secret or port every time you connect the collector IP/hostname to the server mode GIM
agent.
Note: You must enter an IP address / host name or select a server group, but the GIM listener port and GIM listener password are optional. When you install the GIM client
in listener mode, the settings of the shared secret and certificates cannot be changed unless you reinstall the GIM client.
Note: If the "Collector IP" field in GIM Remote Activation is blank, the hostname of the collector is sent to the server. If IP is specified, this is sent instead.
1. To open the GIM Global Parameters, click Manage > Module Installation > GIM Global Parameters.
2. Select gim_listener_default_shared_secret to set the shared secret or gim_listener_default_port to set the port.
Installing the GIM client using an interactive installer: GIM client version 10.1.2 or older
A wizard is provided to help you install the GIM client on each database server.
Procedure
1. Place the GIM client installer on the database server, in any folder.
2. Run the setup.exe file to start the wizard that installs the GIM client. The setup.exe file is located in the Windows_GimClient folder.
3. Follow and answer the questions in the installation wizard.
What to do next
You can view the results of the installation in the log file at c:\guardiumstaplog.txt.
Installing the GIM client using an interactive installer: GIM client version 10.1.3 and newer
A wizard is provided to help you install the GIM client on each database server.
Procedure
1. Place the GIM client installer on the database server, in any folder.
2. Run the setup.exe file to start the wizard that installs the GIM client. The setup.exe file is located in the GIM-Installer-10.2* folder.
3. Follow and answer the questions in the installation wizard.
What to do next
You can view the results of the installation in the log file at C:\IBM Windows GIM.ctl.
Installing the GIM client using silent installation: GIM client version 10.1.2 or older
If you prefer, you can install the GIM client from the command line instead of using the wizard.
Procedure
1. Place the GIM client installer on the database server, in any folder.
2. Open a command prompt and navigate to the Windows_GimClient folder under the folder where you placed the installer.
3. Enter this command, with no line break:
setup.exe /s /z" --host=g10.guardium.com --path=c:\\program files (x86)\\guardium\\GIM --perl=c:\\perl\\bin --localip=192.168.1.100"
Include all the spaces and quotes exactly as in this example. Removing or adding spaces causes the installer to fail. The --perl= parameter indicates where Perl is installed on this computer. This parameter is optional. If you do not specify it, the installer installs a Perl instance.
Attention:
Omit the --host parameter to install the client in GIM listener mode. Listener mode makes the GIM client available for remote registration from a Guardium
system. Example of how to install as listener: setup.exe /s /z"--path=c:\program files (x86)\guardium\GIM --host=GIM_HOST" For more information, see
GIM Remote Activation and Create a GIM Auto-discovery Process.
What to do next
You can view the results of the installation in the log file at c:\guardiumstaplog.txt.
Installing the GIM client using silent installation: GIM client version 10.1.3 or newer
If you prefer, you can install the GIM client from the command line instead of using the wizard.
Procedure
1. Place the GIM client installer on the database server, in any folder.
2. Open a command prompt and navigate to the GIM_Installer* folder under the folder where you placed the installer.
3. Enter this command, with no line break:
setup.exe -UNATTENDED -INSTALLPATH "c:\Program Files (x86)\Guardium Installation Manager" -LOCALIP 10.9.876.543
Attention:
The UNATTENDED and LOCALIP parameters are required. APPLIANCE is optional and if not supplied, will trigger Listener Mode. If using parameter
AUTO_ASSIGN_IP, LOCALIP is not required.
Omit the -APPLIANCE parameter to install the client in GIM listener mode. Listener mode makes the GIM client available for remote registration from
a Guardium system. Example of how to install as listener: setup.exe -UNATTENDED -INSTALLPATH C:\program files (x86)\guardium\GIM -LOCALIP
10.9.876.543. For more information, see GIM Remote Activation and Create a GIM Auto-discovery Process.
When cloning database servers and establishing large deployments, use --auto_assign_ip=1 to allocate a random IP address from one of the valid IP
addresses of a database server. Do not specify both auto_assign_ip and localip when installing the GIM client. When updating the
GIM_AUTO_SET_CLIENT_IP parameter using Manage > Module Installation > Set up by Client or Set up by Module, you must restart the GIM client
service for the new setting to take effect.
Windows GIM command line installation reference
Parameters applicable to all .NET installers
Parameter - Description
-INSTALLPATH - The installation directory. The default install path is "C:\Program Files (x86)\Guardium\Guardium Installation Manager".
-APPLIANCE - Sets the appliance address that GIM connects to. If this parameter is absent, GIM installs in Listener Mode.
-SHARED_SECRET - Sets the shared secret for registration with the appliance when -APPLIANCE is not specified.
-LISTENER_PORT - Sets the listener port for registration with the appliance when -APPLIANCE is not specified. The default value is 8445.
-AUTO_ASSIGN_IP - When set to 1, a local IP is automatically assigned and must NOT also be specified with -LOCALIP. The default value is 0.
What to do next
You can view the results of the installation in the log file at C:\IBM Windows GIM.ctl.
Uninstalling the GIM client: GIM client version 10.1.2 or older
Procedure
1. Open a command prompt and navigate to the Windows_GimClient* folder under the folder where you installed the client.
2. Enter this command: For Installshield, use
Uninstalling the GIM client: GIM client version 10.1.3 and newer
Procedure
1. Open a command prompt and navigate to the GIM_Installer* folder under the folder where you installed the client.
2. Enter this command:
On Solaris, the GIM client and supervisor in each slave zone are controlled by the GIM supervisor process that runs in the master zone. If the supervisor process on the
master zone is shut down, all GIM processes on the slave zones are shut down as well.
Note: GIM requires a minimum of 300 MB of disk space, and 700 MB if the FAM module is also being installed.
Procedure
1. Place the GIM client installer on the database server in any folder.
2. Run the installer: ./<installer_name> [-- --dir <install_dir> --sqlguardip <Guardium system IP> --tapip <DB server IP address> --perl <perl dir> -q]
The installer name has the syntax guard-bundle-GIM-<release build>-<DB>-<OS>_<bit>.gim.sh, for example:
guard-bundle-GIM-10.5.0_r103224_v10_5_1-rhel-6-linux-x86_64.gim.sh
Attention:
Omit the --sqlguardip parameter to install the client in GIM listener mode. Listener mode makes the GIM client available for remote registration from a
Guardium system. For more information, see GIM Remote Activation and Create a GIM Auto-discovery Process.
When cloning database servers and establishing large deployments, use --auto_set_gim_tapip to allocate a random IP address from one of the valid IP
addresses of a database server. Do not specify both auto_set_gim_tapip and tapip when installing the GIM client. Update the GIM_AUTO_SET_CLIENT_IP
parameter after GIM client installation by using Manage > Module Installation > Set up by Client or Set up by Module.
3. On Red Hat Linux, version 6 or later, run these commands to verify that the files have been added:
ls -la /etc/init/gim*
ls -la /etc/gsvr*
ls /lib/svc/method/guard_g*
On all other platforms, run these commands to verify that the following new entries were added to /etc/inittab:
Where modules install dir is the directory where all GIM modules are installed, for example, /usr/local/guardium/modules.
4. Enter this command to verify that the GIM client, SUPERVISOR process, and modules are running:
5. Log in to the Guardium system and check the Process Monitoring status.
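The verification command in step 4 is not shown in this excerpt; a generic process check might look like the following. The process-name patterns are guesses for illustration, not documented names:

```shell
# Sketch: look for GIM-related processes. The grep patterns are illustrative
# guesses; the bracketed first character keeps grep from matching itself.
procs=$(ps -ef | grep -E '[g]im|[s]upervisor' || true)
if [ -n "$procs" ]; then
  msg="matching processes found"
else
  msg="no GIM processes found"
fi
echo "$msg"
```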
Procedure
1. To uninstall using the Guardium GUI.
a. Schedule an uninstall of the S-TAP bundle (Setup by Client).
b. Schedule an uninstall of the GIM bundle (Setup by Client).
c. Reboot the database server to remove K-TAP from the drivers.
2. Alternatively, uninstall on the DB server itself:
a. Uninstall both the GIM bundle and the S-TAP bundle by executing as root: /full/path/modules/GIM/current/uninstall.pl
b. Reboot the database server to remove K-TAP from the drivers.
Procedure
1. Upload the latest available BUNDLE-GIM.gim file to the Guardium system.
2. Use the GIM GUI to schedule the installation of the new BUNDLE-GIM.gim file.
Procedure
1. Click Setup > Tools and Views > Group Builder. In the Group Builder, create a new group. For the Group Type Description choose Client Hostname. The new group is
added to the list of existing groups.
2. Choose the new group in the Modify Existing Groups list and add members to the group. You can add them manually or populate the list from a query. To populate
the list from a query, click Populate from Query and note these requirements:
a. For Query select a report name that begins with GIM.
b. For Fetch Member from Column, select GIM Client Name.
c. In each Enter (Like) field, enter a value to be matched, or % if this field is not used to identify clients.
d. Save the group and run or schedule the query.
Results
You can use the group in the Manage > Module Installation > Set up by Client screen to work with this set of clients as a group rather than individually.
Parent topic: Guardium Installation Manager
Procedure
1. Use GIM to install the S-TAP on the Linux database server. The installer determines that a custom K-TAP module is required and builds it.
2. The custom K-TAP module, along with its sha256sum value, is uploaded automatically to the Guardium system for which the S-TAP is configured. Note that this
might not be the same Guardium system that you use as a GIM server.
3. On the Guardium system to which the K-TAP is uploaded, run this GuardAPI command: grdapi make_bundle_with_uploaded_kernel_module. This adds the newly built K-TAP module to the corresponding S-TAP bundle. There must be at least one S-TAP bundle whose build number and operating system attributes match those of the uploaded K-TAP module. Loaded bundles are stored in /var/gim_dist_packages. The script creates a new S-TAP bundle with _8XX appended to the build number. The new bundle is located in /var/dump. After running grdapi make_bundle_with_uploaded_kernel_module, you must load the new GIM bundle; otherwise it is not visible in the GIM GUI. If the command is successful, a message containing the name of the new S-TAP bundle is printed, for example: Created guard-bundle-STAP-9.0.0_r71327_v90_800-suse-11-linux-x86_64.gim with kernel ktap-71327-suse-11-linux-x86_64-xCUSTOMxeagle910-3.0.101-303.gefb7031-default-x86_64-SMP. Then run the GuardAPI command grdapi gim_load_package and supply the name of the new bundle printed in the previous step.
4. If the new bundle is on a Guardium system that is not your GIM server, copy the new bundle to the GIM server.
5. Use the GIM GUI or CLI to distribute the new bundle to other database servers that are running the same Linux distribution as the server where the custom K-TAP
was built. There are hundreds of Linux distributions available, and the list is growing. This means that there might not be a K-TAP already available for your Linux
distribution. If the correct K-TAP is not available, the S-TAP installation process can build it for you. When you build a new K-TAP module for a Linux database
server, you can copy that module to other database servers that run the same Linux distribution.
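The two GuardAPI calls from step 3 can be summarized in order; they are echoed here because grdapi runs only on a Guardium CLI, the bundle name is the example printed in step 3, and the exact gim_load_package argument syntax is not shown in this excerpt:

```shell
# Sketch of the step-3 GuardAPI sequence, echoed rather than executed.
# The bundle name is the example from step 3.
bundle="guard-bundle-STAP-9.0.0_r71327_v90_800-suse-11-linux-x86_64.gim"
echo "grdapi make_bundle_with_uploaded_kernel_module"
echo "grdapi gim_load_package   # then supply: $bundle"
```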
Each GIM client sends an "alive" message to its GIM server regularly, to check whether any updates are ready to be processed. This polling interval is calculated and
updated based on conditions at the GIM server. The interval is calculated regularly, and the new value is passed to the GIM client in response to its "alive" message. This
feature is enabled by default, but you can turn it off if you prefer a fixed interval.
If a GIM client fails to connect to its GIM server after five consecutive attempts, the GIM client automatically connects to a failover server if one is specified. The GIM client resumes connecting to its original GIM server when that server becomes available. The GIM server and failover server are configured using the GIM_URL and GIM_FAILOVER_URL parameters, respectively.
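The failover behavior described above can be sketched as a counting loop. Hostnames are illustrative, and `try_connect` stands in for the real GIM handshake, here simulating a primary server that is down:

```shell
# Sketch of the documented failover logic: after five consecutive failed
# attempts against the primary GIM server, switch to the failover server.
primary="gim1.example.com"
failover="gim2.example.com"
try_connect() { return 1; }   # simulate an unreachable primary
failures=0
server="$primary"
while [ "$failures" -lt 5 ]; do
  if try_connect "$server"; then
    break
  fi
  failures=$((failures + 1))
done
if [ "$failures" -ge 5 ]; then
  server="$failover"
fi
echo "connecting to $server"
```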
Dynamic updating is controlled by the Guardium API command gim_set_global_param, with this parameter:
dynamic_alive_enabled
For example:
When each GIM client sends its alive message to the server, the server responds with the new polling interval as well as any other updates that have been scheduled for
that client.
These parameters were valid in 10.0, and removed from 10.1 and higher:
dynamic_alive_default_load_factor
dynamic_alive_cpu_level1_threshold
dynamic_alive_cpu_level2_threshold
dynamic_alive_db_conn_level1_threshold
dynamic_alive_db_conn_level2_threshold
dynamic_alive_cpu_load_sample_time
Procedure
1. For each module that you have installed on your database server, locate the GIM bundle containing the latest version of this module that supports the new
operating-system version. The build number of each bundle must be the same or greater than the bundle that is currently installed. Load each bundle onto the GIM
server.
2. Use the gim_set_global_param command to set the value of the global parameter auto_install_on_db_server_os_upgrade to 1. This enables the automatic update
option on the GIM server.
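The call described in step 2 can be written out as follows. It is echoed here because grdapi exists only on a Guardium CLI; the paramValue=0 variant is the disabling step that the "What to do next" guidance recommends after the upgrade:

```shell
# The step-2 GuardAPI call (and its later disabling counterpart),
# echoed rather than executed.
on_cmd="grdapi gim_set_global_param paramName=auto_install_on_db_server_os_upgrade paramValue=1"
off_cmd="grdapi gim_set_global_param paramName=auto_install_on_db_server_os_upgrade paramValue=0"
echo "$on_cmd"
echo "$off_cmd"
```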
Results
At first boot after OS upgrade, the GIM client recognizes that the operating system has been upgraded and because the automatic update option is enabled, the client
takes these steps:
1. Changes the configuration files for all GIM-installed modules to support the new operating system attributes.
2. Re-registers all the modules to the GIM server with the updated attributes.
3. Records an alert in the GIM_EVENTS report saying that an OS upgrade has occurred and listing actions that should be taken.
When the modules are re-registered, the GIM server looks first for a bundle that has the same build number as the previously installed bundle, but is compatible with the
upgraded OS. If it does not find such a bundle, it looks for the latest bundles that support the new OS attributes. If the server cannot find appropriate bundles, it issues an
error message. If the server finds appropriate bundles, it schedules them for upgrade and runs the upgrade process immediately.
What to do next
Review the messages in the GIM_EVENTS report. If the GIM server reports that the modules have been upgraded successfully, verify the proper operation of the modules
as you would do after any update.
If error messages have been written to the GIM_EVENTS report, indicating that the upgrade was not successful, review the error messages for guidance.
After completing your planned OS upgrade, disable the automatic update option on the GIM server. This prevents a GIM client from erroneously starting an update
process.
You can re-enable the automatic update option when you perform another OS upgrade.
Parent topic: Guardium Installation Manager
The time required for distribution depends on the size of the bundles and network conditions. In a network with substantial latency, transfers can take several hours.
Procedure
1. Copy the bundles that you want to distribute into the /var/gim/dist_packages directory on your Central Manager. All files in this directory will be distributed; you
cannot select which bundles you want to distribute.
2. Choose the managed units to which you want to distribute the bundles.
3. Click Distribute GIM bundles. The bundles are copied to the selected managed units.
Results
You can install the bundles from each managed unit to the GIM clients that it manages.
Parent topic: Guardium Installation Manager
You can use two new Guardium API commands to identify and remove unused GIM bundles. Perform this procedure on each Guardium system that acts as a GIM server.
Procedure
1. Run the gim_list_unused_bundles command to identify unused bundles. Use the includeLatest parameter to indicate whether you want the list that
is returned by the command to include the latest version of each GIM bundle. You might have some bundles that you have not yet distributed, or you might want to
keep one older version so that you can reinstall it if needed. Set includeLatest to 0 to exclude the latest unused version of each bundle from the command results.
Set it to 1 to include all unused versions. This parameter is required and no default value is provided. For example:
grdapi gim_list_unused_bundles includeLatest=0
The command returns a list of GIM bundles that are found on the GIM server but are not installed on any database server whose GIM client works with this GIM
server.
2. If step 1 identifies some unused bundles, use the gim_remove_bundle command to remove each unwanted bundle. This command takes a single parameter,
bundlePackageName, which identifies the bundle to be removed. This parameter is required and no default value is provided. Use names that are returned by the
gim_list_unused_bundles command.
The named bundle is removed only if:
The name specified in bundlePackageName matches the name of one and only one specific GIM bundle.
There is no GIM bundle whose name matches bundlePackageName installed on any database server whose GIM client works with this GIM server.
For example:
grdapi gim_remove_bundle bundlePackageName=name
where name is a bundle name that was returned by the gim_list_unused_bundles command.
Results
GIM bundles that are not needed are removed from your GIM server.
Parent topic: Guardium Installation Manager
You can run GIM diagnostics either from the Guardium user interface or from the command line. To run from the command line, use this command:
The value of clientIP can be either an IP address or a hostname. You must run the command on the Guardium system that is the GIM server for this client.
Procedure
1. Use the check boxes next to each client to choose the clients for which you want to run GIM diagnostics.
Results
You can review the results in the GIM_EVENTS report.
Parent topic: Guardium Installation Manager
Procedure
1. Edit the GIM properties file: /opt/IBM/Guardium/tomcat/gimserver/ROOT/WEB-INF/conf/gimserver.log4j.properties.
2. Change the value ERROR to DEBUG.
3. Save the file.
Results
Debugging will be turned on in a few seconds and debug messages will be written to the daily debug log file in /var/log/guard/debug-logs/.
What to do next
When you have finished debugging, edit the file again and change DEBUG back to ERROR.
To enable debugging on the GIM client, change the parameter module_DEBUG to 1, where module is the name of the installed module whose operation you want to
debug. You can modify the parameter by using the CLI or the user interface. Set the value to 0 when you complete your debugging.
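The edit in steps 1-3 above can be sketched as a one-line sed substitution, demonstrated here on a temporary copy so that it runs anywhere; on the appliance you would edit the real properties path shown in step 1, and the property line below is an illustrative log4j entry, not the file's actual content:

```shell
# Sketch of steps 1-3: flip ERROR to DEBUG in a log4j properties file.
# Demonstrated on a temporary copy; the property line is illustrative.
f=$(mktemp)
echo "log4j.rootLogger=ERROR, R" > "$f"
sed -i 's/ERROR/DEBUG/' "$f"   # GNU sed assumed
result=$(cat "$f")
echo "$result"
rm -f "$f"
```

Remember to change DEBUG back to ERROR when you finish, as the section notes.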
Procedure
1. Stop the supervisor by running the command svcadm -v disable guard_gsvr.
2. Run the command svccfg delete -f guard_gsvr.
3. Restart the supervisor with the command svccfg import <gim install dir>/SUPERVISOR/current/guard_gsvr.xml where <gim install dir> is the file path to the GIM
installation directory.
Results
The supervisor is restarted for Solaris with SMF support.
Parent topic: Guardium Installation Manager
This document also provides information on how to customize the partitioning on the appliance and how to install on a remote drive (SAN).
1. Assemble configuration information and the hardware required before you begin.
2. Set up the physical appliance or the virtual appliance.
3. Install the Guardium® image.
4. Set up initial and basic configurations.
5. Verify successful installation.
Hardware offering – a fully configured software solution delivered on physical appliances provided by IBM®.
Software offering – the solution delivered as software images to be deployed by the customers on their own hardware either directly or as virtual appliances.
Operating modes
You can deploy a Guardium system in any of several operating modes.
License keys
Establishing a functional Guardium system requires both a base license and one or more append licenses.
Hardware Requirements
Detailed hardware requirements and sizing recommendations are available on the IBM Support Portal.
Guardium port requirements
Each Guardium system must have ports available for several types of communication. This table lists these connections and the default port numbers that are
assigned to them.
Step 1. Assemble the following before you begin
To prepare for the deployment of the Guardium system, the network administrator needs to supply the following information.
Step 2. Set up the physical or virtual appliance
The setup instructions in this section are different when installing to a physical appliance or a virtual appliance.
Step 3. Install the Guardium image
This section explains how to install the image and partition the disk.
Step 4. Set up initial and basic configuration
The initial step should be the network configuration, which must be done locally through the Command Line Interface (CLI) accessible through the serial port or the
system console.
Step 5. What to do next
This section details the steps of verifying the installation, installing license keys, and installing any available maintenance patches.
Creating the Virtual Image
Use this section to install the virtual image.
Custom Partitioning
If you customize the partitioning of the hard drive, you must make several choices.
How to partition with an encrypted LVM
If you want to use an encrypted disk, follow these steps to create an encrypted LVM volume that contains the / and /var logical volumes.
Example of SAN Configuration
This appendix details the steps involved in moving to a command prompt in order to pre-partition a hard drive (as is needed for SAN installation).
Operating modes
You can deploy a Guardium system in any of several operating modes.
As you plan your Guardium environment, you might deploy systems in any or all of these operating modes:
Collector
A collector receives data about database activities or file activities from agents that are deployed on database servers and file servers. The collector processes this
data and responds according to policies that are installed on the collector. A collector can export data to an aggregator.
Aggregator
An aggregator collects data from several collectors, to provide an aggregated view of the data. The aggregator is not connected directly to database servers and file
servers. You can allocate collectors to aggregators according to location or function. For example, you might want to connect the collectors that monitor your
human resources database servers to a single aggregator, so that you can view data that is related to all those servers in one location. If you want, you can
implement a second tier of aggregation by deploying an aggregator that collects data from all your other aggregators, rather than from collectors.
Note: If you plan to use the appliance as a central manager, you MUST select the Aggregator option.
Central manager
There is only one central manager in a Guardium environment, although you can designate another Guardium system as a backup central manager. You can use the
central manager to define policies and distribute them to all collectors, to perform other configuration tasks that affect all your Guardium systems, and to perform
various other administrative tasks from a single console. Your central manager can also function as an aggregator, collecting data from collectors or from other
aggregators. This model provides an enterprise-wide view of activities and enables you to view reports that are based on data that is aggregated from all your
Guardium systems.
The number of monitored database servers and file servers that you assign to a collector depends on the amount of data that flows from the servers to the collector. For
information about how many collectors and aggregators your environment requires, and how to locate your Guardium systems for best results, refer to the Deployment
Guide for IBM Guardium.
If you are using the Guardium Vulnerability Assessment component, you must decide where to run assessment tests. Some customers dedicate a separate Guardium
system for this function. You can also run tests from any Guardium system that is deployed as a collector, an aggregator, or a central manager.
License keys
Establishing a functional Guardium system requires both a base license and one or more append licenses.
Base license keys (also known as reset keys) reflect the machine type of the system. For example, establishing a collector system requires a collector base license.
Append license keys enable specific sets of features. For example, typical data activity monitoring features require a DAM Standard append license. Multiple
append licenses can be installed in combination to enable expanded Guardium functionality.
When applying a base license, the machine type is checked to verify compatibility. There are two types of base licenses:
Table 1. Base license types
Base License Type - License Description
Collector - Collector base licenses are valid for establishing a standalone system or a collector.
Aggregator - Aggregator base licenses are valid when establishing an aggregator or a central manager system.
Examples of append license types:
DAM Advanced - DAM Standard functionality plus fine-grained access control, masking, quarantine, and blocking (activity terminate).
VA Standard - Vulnerability assessment plus database protection service (DPS), change audit system (CAS), and database entitlement reporting.
For information about installing Guardium licenses, see Install license keys.
Hardware Requirements
Detailed hardware requirements and sizing recommendations are available on the IBM Support Portal.
For detailed hardware specifications and sizing recommendations, refer to the following: IBM Guardium V10.1 Software Appliance Technical Requirements.
Open ports
Ports used in/by the Guardium system.
TCP 16016 – Unix STAP, both directions, registration, heartbeat, and data (including IBM i S-TAP running in PASE)
TCP 16017 – Windows/Unix CAS, both directions, templates and data
TCP 16018 – Unix STAP (TLS), both directions, registration, heartbeat, and data
TCP 16019 – Windows/Unix CAS (TLS), both directions, templates and data
TCP 16020 – UNIX S-TAP connection pooling (clear), initiated from the S-TAP agent
TLS 16021 – UNIX S-TAP connection pooling (encrypted), initiated from the S-TAP agent
TCP 8081 – Guardium Installation Manager, both directions, database server to collector/Central Manager
TCP 9500 – Windows STAP, both directions, DB Server to Collector, STAP registration and data
TCP 9501 – Windows STAP (TLS), both directions, DB Server to Collector, STAP registration and data
TCP 3306 – MySQL, opened to specific sources (for instance, the Central Manager is open to all managed units; a managed unit is open to the Central Manager)
TLS 8447 - Used for remote messaging service infrastructure (and profile distribution infrastructure) for communication between Guardium systems in the federated
environment / centrally-managed environment. Configuration profiles allow the definition of configuration and scheduling settings from a Central Manager and
conveniently distribute those settings to managed unit groups without altering the configuration of the Central Manager itself.
TCP/TLS 16022/16023 – Universal Feed. 16022 (FAM monitoring, unencrypted) and 16023 (FAM monitoring, encrypted) both need to be open bidirectionally. The sniffer needs the port range 16016 through 16023 open bidirectionally.
8445 – GIM client listener, both directions. The GIM client listens on this port; any GIM server (on either the Central Manager or a collector) can reach out to the GIM client.
8446 – GIM authenticated TLS, both directions. Used between the GIM client and the GIM server (on the Central Manager or a collector). If GIM_USE_SSL is not disabled, the GIM client attempts to present its certificate over port 8446. If port 8446 is not open, it falls back to port 8444, but no certificate is passed (that is, TLS without verification).
8081 (TLS) – To use 8081 for the GIM client to connect to the GIM server, disable the GIM_USE_SSL parameter; it is ON by default. This parameter is part of the GIM common parameters in the GUI. If GIM_USE_SSL is not disabled, the same 8446/8444 fallback behavior applies.
TLS 8443 – S-TAP load balancer. This port is needed for UNIX/Linux S-TAPs to report instances to the collector, and it is also used for the Central Manager load balancer. If the installation is configured to use the Enterprise load balancer, the S-TAP initiates an HTTPS request to the Central Manager (load balancer) on port 8443. A proxy server can be used between the database server and the Central Manager if the customer does not want a port open directly from the database server to the Central Manager.
TCP 8443 – user to system, GUI connectivity (configurable), both directions
UDP/TCP 514 – remote syslog messages from/to other systems, typically a SIEM. Note: The local port is 514, but the remote port must be entered into the configuration. If encryption is used, the protocol must be TCP, not UDP.
TCP 16022 – connects S-TAP to DB2 z/OS, S-TAP IMS, S-TAP VSAM (S-TAP Data Set)
TCP 16023 - TLS connections, specifically IBM's Application Transparent Transport Layer Security (AT-TLS)
UDP 8075 – Windows S-TAP heartbeat signal (two-way traffic). Note: The UNIX S-TAP agent does not use UDP for heartbeat signals, so there is no corresponding UNIX port for this function.
TCP 8443 – Web browser access (https) to the Guardium user interface. Note: This port can be changed by the Guardium administrator, and is also used to register a managed unit to the Central Manager.
TCP 16022 – Connects to S-TAP for DB2 z/OS, S-TAP for IMS, S-TAP for Data Sets
TCP 16023 – TLS connections, specifically IBM's Application Transparent Transport Layer Security (AT-TLS)
TCP 41500 – Default starting port for internal message logging communications – LOG_PORT_SCAN_START
TCP 39987 – Default agent-specific communications port between the agent and the agent secondary address spaces – ADS_LISTENER_PORT
TCP 53 – DNS servers
TCP 636 – LDAP, for example, Active Directory or Sun One Directory over SSL (optional)
TCP user-defined – Database server listener ports, for example, 1521 for Oracle or 1433 for MS-SQL, for Guardium datasource access (optional). Use this port for S-TAP verification and Discovery. Note: If GIM_USE_SSL is not disabled, the GIM client attempts to present its certificate over port 8446; if port 8446 is not open, it falls back to port 8444, but no certificate is passed (that is, TLS without verification).
TLS 8447 – Used for the remote messaging service infrastructure (and profile distribution infrastructure) for communication between Guardium systems in a federated or centrally-managed environment. Configuration profiles allow configuration and scheduling settings to be defined on a Central Manager and conveniently distributed to managed unit groups without altering the configuration of the Central Manager itself.
TLS 8443 – S-TAP load balancer; also used for the Central Manager load balancer. If the installation uses the Enterprise load balancer, the S-TAP initiates an HTTPS request to the Central Manager on port 8443. A proxy server can be used between the database server and the Central Manager if the customer does not want a port open directly from the database server to the Central Manager.
TLS 8081 – To use 8081 for the GIM client to connect to the GIM server, disable the GIM_USE_SSL parameter (ON by default; part of the GIM common parameters in the GUI). If GIM_USE_SSL is not disabled, the GIM client attempts to present its certificate over port 8446; if port 8446 is not open, it falls back to port 8444, but no certificate is passed (that is, TLS without verification).
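As a quick connectivity aid, a script along these lines can probe a subset of the collector ports from a database server. This is a hedged sketch: the host and the port list are placeholders for illustration, so adjust both to the tables above and to your own topology.

```shell
#!/bin/bash
# Probe TCP connectivity from a database server to a Guardium collector.
# COLLECTOR_HOST and the port list below are placeholders, not a complete
# or authoritative set -- consult the port tables for your deployment.
COLLECTOR_HOST="${COLLECTOR_HOST:-127.0.0.1}"

check_port() {
    # Return 0 if a TCP connection to host $1, port $2 succeeds within 3 s.
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

for port in 16016 16018 8081 9500 9501; do
    if check_port "$COLLECTOR_HOST" "$port"; then
        echo "port $port: open"
    else
        echo "port $port: closed or filtered"
    fi
done
```

A "closed or filtered" result does not distinguish a firewall drop from a service that is simply not listening; use it only as a first-pass check before involving network staff.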
Physical Appliance
After the appliance has been loaded into the customer's rack, connect the appliance to the network in the following manner:
How to identify eth0 and other network ports
Use the following CLI commands to map the network ports.
Default passwords for physical appliances
Default passwords are supplied for predefined users.
Virtual appliance
The IBM Security Guardium Virtual Machine (VM) is a software-only solution licensed and installed on a guest virtual machine such as VMware ESX Server.
Physical Appliance
After the appliance has been loaded into the customer's rack, connect the appliance to the network in the following manner:
1. Find the power connections. Plug the appropriate power cord(s) into these connections.
2. Connect the network cable to the eth0 network port. Connect any optional secondary network cables.
3. Connect a Keyboard, Video and Mouse directly or through a KVM connection (either serial or through the USB port) to the system.
4. Power up the system.
When you receive a physical appliance from IBM, use these passwords for your initial configuration.
Note: Be sure to change all default passwords when you complete the installation.
Table 1. Default passwords for
predefined users
User Default password
accessmgr guard1accessmgr
admin guard1admin
cli guard1cli
Parent topic: Step 2. Set up the physical or virtual appliance
To install the Guardium VM, follow the steps in Creating the Virtual Image. The steps are:
After installing the VM, return to Step 4, Setup Initial and Basic Configuration, for further instructions on how to configure your Guardium system.
1. Make sure your UEFI/BIOS "boot sequence" settings are set to attempt startup from the removable media (the CD/DVD drive) before using the hard drive.
Note: Installation can take place from DVD. If needed, get the UEFI/BIOS password from Technical Support.
2. Load the Guardium image from the installation DVD.
3. The following two options appear:
Standard Installation: this is the default. Use this choice in most cases when partitioning the disk.
Custom Partition Installation: allows more customization of all partitions (locally or on a SAN disk). See Custom partitioning for further information on how to
implement this option.
Note:
The Standard Installation wipes the disk, repartitions and reformats the disk, and installs a new operating system.
On the first boot after installation, the user is asked to accept a licensing agreement. Use PgDn to read through the agreement or q to skip to the end; then press q to exit the agreement text and type yes to accept the terms. The user must enter yes to the agreement or the machine will not boot up.
4. The system boots up from DVD. It takes about 12 minutes for this installation.
The installation process now asks you to choose a collector or aggregator (set to "Collector" automatically after 10 seconds if no input is provided). See the Product Overview for an explanation of collector and aggregator. If you intended to choose aggregator but did not do so within 10 seconds, you must reinstall in order to get back to this choice.
Note: If you plan to use the appliance as a central manager you MUST select Aggregator option.
5. The system automatically reboots at this point to complete the installation. The first login after a reboot requires a changing of passwords.
In the following steps, you will supply various network parameters to integrate the Guardium system into your environment, using CLI commands.
In the CLI syntax, variables are indicated by angled brackets, for example: <ip_address>
Replace each variable with the appropriate value for your network and installation. Do not include the brackets.
The default network interface mask is 255.255.255.0. If this value is the correct mask for your network, you can skip the second command.
To assign a secondary IP address, use the CLI command store network interface secondary [on <interface> <ip> <mask> <gw> | off] to enable or disable the secondary interface.
Next you must restart the network by using the CLI command, restart network. Assigning a secondary IP address cannot be done by using the GUI, only through the CLI.
The remaining network interface cards on the appliance can be used to monitor database traffic, and do not have an assigned IP address.
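The network commands referenced in this section typically take the following form. This is a sketch with placeholder values; the exact command set can vary by release, so verify the syntax against the CLI reference for your system.

```shell
# Primary interface address and mask (the mask command can be skipped if the
# default 255.255.255.0 is correct for your network)
store network interface ip 192.0.2.100
store network interface mask 255.255.255.0
store network routes defaultroute 192.0.2.1
store network resolver 1 192.0.2.53
store system hostname guardium1
store system domain example.com

# Optional secondary IP address (CLI only; cannot be set from the GUI)
store network interface secondary on eth1 192.0.2.101 255.255.255.0 192.0.2.1

# Apply the changes
restart network
```

All addresses above are documentation placeholders (192.0.2.0/24); replace them with the values for your environment.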
SMTP Server
An SMTP server is required to send system alerts. Enter the following commands to set your SMTP server IP address, set a return address for messages, and enable SMTP
alerts on startup.
Note: You can also configure the SMTP server by using the user interface. Click Setup > Alerter.
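As a sketch, the SMTP setup commands look like the following. The values shown are placeholders, and the exact syntax should be verified in the CLI reference for your release.

```shell
store alerter smtp relay 192.0.2.25                  # SMTP server IP address
store alerter smtp returnaddr guardium@example.com   # return address for alerts
store alerter state startup on                       # start the alerter at startup
```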
Parent topic: Step 4. Set up initial and basic configuration
1. Set timezone
2. Set date and time - Option 1 - set ntp. Option 2 - store system clock datetime
Provide the details of an accessible NTP server and enable its use.
Note: When setting up a new timezone, internal services will restart and data monitoring will be disabled for a few minutes during this restart.
Note: Do not change the hostname and the time zone in the same CLI session.
store unit type standalone - use this command for all appliances.
Unit type standalone and unit type stap are set by default. Unit type manager (if needed) must be specified.
Note: Unit type settings can be done at a later stage, when the appliance is fully operational.
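The steps above map to CLI commands along the following lines. This is a sketch: the timezone name, NTP host, and exact subcommand forms are assumptions to be verified against your CLI reference.

```shell
# Time settings (option 1: NTP; option 2: set the clock manually)
store system clock timezone America/New_York          # placeholder timezone
store system ntp server ntp.example.com               # placeholder NTP server
store system ntp state on
# store system clock datetime 2016-06-01 12:00:00     # manual alternative

# Unit type (standalone is the default; manager only for a Central Manager)
store unit type standalone
# store unit type manager
```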
Parent topic: Step 4. Set up initial and basic configuration
Save the passkey in your documentation to allow future Technical Support root access. To see the current passkey, use the following CLI command:
Questions - How secure is the Guardium system root password? Who has access to it?
Guardium appliances are "black box" environments, with the end user having access only to limited operating system accounts, such as:
The Graphical User Interface user accounts (for example admin and accessmgr) are not defined by the Guardium system's operating system, but are application IDs
defined and managed via an application interface (accessmgr).
Being a secured server, root access is not readily available to anyone, but it is often required by Guardium support to gain access to Guardium appliances to troubleshoot and resolve issues. Guardium support does not use sudo, or any userid other than root, to gain access to Guardium appliances.
The root password is secured using a "joint password" mechanism. The customer holds the keys to the appliance in the form of an eight-digit numeric passkey. IBM holds the passkey decoder. Without both the passkey and the passkey decoder, neither IBM nor the customer can access the appliance as root.
The passkey is managed by the customer via the CLI interface. The customer can change the passkey at any time, without notifying IBM, by using the following CLI
command:
Anyone with CLI access can retrieve the passkey for root by using the following CLI command:
When involving Guardium support, on a remote desktop sharing session, the support analyst will request the root passkey for the Guardium appliance in question.
Once the passkey has been decoded, Guardium support will use the root password to gain access to the appliance as root. After the remote desktop sharing session
terminates, the customer can change the passkey using the above CLI command, thereby ensuring IBM no longer has the root password for this appliance.
Being an eight-digit numeric key, the passkey has a range of 10000000 to 99999999. This range provides 90,000,000 possible passkeys. All encoded passwords are hardened: they do not contain any common passwords or dictionary words, their length varies, and they contain national, special, alphabetic (upper and lower case), and/or numeric characters.
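Counting the inclusive range gives the number of possible passkeys:

```shell
# Eight-digit numeric passkeys run from 10000000 to 99999999 inclusive.
count=$((99999999 - 10000000 + 1))
echo "$count possible passkeys"   # prints: 90000000 possible passkeys
```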
Access to the passkey decoder is restricted to a select few IBM Guardium employees, such as Guardium R&D, Guardium QA, and Guardium support staff members. It is not available to other IBM staff.
The CLI userids mentioned above (cli, guardcli1, guardcli2, guardcli3, guardcli4, guardcli5) do not use the passkey mechanism and their passwords are 100%
governed by the customer with IBM having no access to their passwords. For this reason, IBM recommends keeping the root passkey in a password vault to ensure
the appliance is accessible even if the CLI account passwords have been forgotten or misplaced.
stop system
The system shuts down. Move the system to its final location, re-cable the system, and power the system back on. After the system is powered on, it is accessible (using
the CLI and GUI) through the network, using the provided IP address or host name.
Log in to the Guardium web-based interface and go to the embedded online help for more information on any of the following tasks.
Use the CLI command store unit type to set the type of each Guardium system.
Establishing a functional Guardium system requires installing a base license and at least one append license. The base license must be installed and accepted before installing and accepting any append licenses.
For more information about Guardium license keys, see License keys.
When upgrading a Guardium system, you will not need to apply licenses. License keys will be automatically generated based on your preexisting installation, but you will
need to review and accept the license agreements before you can begin using your Guardium system. To review and accept licenses on an upgraded system, navigate to
Setup > Tools and Views > License and click the Read and accept license link.
Procedure
1. Log in to your Guardium system as the admin user.
2. Verify that the Machine Type displayed in the Guardium banner is correct for the system you are licensing. The machine type will be one of the following:
Standalone
Central Manager
Aggregator
Attention: If you are setting up a central manager and the Machine Type indicates an aggregator, convert the system from an aggregator to a central manager using
the following CLI command: store unit type manager.
3. Install a base license.
a. Navigate to Setup > Tools and Views > License.
b. On the License page, enter the base key for your system in the License key field and click Apply to continue.
Attention: Depending on the system you are setting up, you will need to apply either a base collector key or a base aggregator key. A base aggregator key is
required when setting up a central manager system.
c. From the License Agreement dialog, review the license agreement associated with the base key and click Accept when you are ready to accept the terms.
The Guardium interface will automatically refresh after accepting the agreement, but there will be no change in available functionality after installing a base
license key.
4. Install one or more append licenses. Repeat the following steps for each append license you have purchased and want to install.
a. Navigate to Setup > Tools and Views > License.
b. On the License page, enter an append key in the License key field and click Apply to continue.
c. From the License Agreement dialog, review the license agreement associated with the append key and click Accept when you are ready to accept the terms.
The Guardium interface will automatically refresh after accepting the agreement, and any new functionality associated with the append license will become
available.
d. Repeat the steps in this section for each append license you want to install.
What to do next
In an environment with a central manager, you can distribute the new licenses by navigating to the Manage > Central Management > Central Management page and
clicking the icon to distribute licenses from the central manager to managed units.
In an environment with a central manager, the central manager and its managed units must use the same shared secret. Set the shared secret from the Setup > Tools and
Views > System page or by using the CLI command store system shared secret.
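As a sketch, setting the shared secret from the CLI looks like the following; the exact prompt behavior and argument form may vary by release, so confirm in your CLI reference.

```shell
# Run on the Central Manager and on every managed unit; the value must match
# across all systems (placeholder secret shown).
store system shared secret MySharedSecret123
```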
Note: In federated environments, maintenance patches can be applied to all of the appliances from the Central Manager.
There may not be any maintenance patches included with the installation materials. If any are included, follow these steps to apply them:
1. Log in to the Guardium® console, as the cli user, using the temporary cli password you defined in the previous installation procedure. You can do this by using an
ssh client.
2. Do one of the following:
If installing from a network location, enter the following command (selecting either ftp or scp):
And respond to the following prompts (be sure to supply the full path name to the patch file):
User on <hostname>
Password:
You will be prompted to select the patch to apply. Use wildcards in the pathname to get multiple patches. Also separate patch names by commas.
Patches are installed by a background process that may take a few minutes to complete.
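The network-installation command referenced in step 2 is not shown above; on most releases it takes roughly the following form. This is a hedged sketch, so confirm the exact syntax in the CLI reference for your release.

```shell
# Assumed form of the patch-installation CLI command (verify for your release).
# Choose ftp or scp; you are then prompted for the host, user, password, and
# the full path to the patch file.
store system patch install scp
```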
Note: To prevent the Guardium UI from displaying a mixture of languages, set the Central Manager and managed units to the same language.
Navigate to the Manage > System View, and click S-TAP Status Monitor from the menu. All active S-TAPs display with a green background. A red background indicates that
the S-TAP is not active.
Navigate to Manage > Activity Monitoring > S-TAP Control, and confirm that there is a green status light for this S-TAP.
The VMware ESX Server on which you can install the Guardium VM is one component of the VMware infrastructure. Although not all VMware Infrastructure components
are required to support the Guardium VM, you should be familiar with all components that are in use at your installation.
ESX Server: This component is used to configure and control VMware virtual machines on a physical host referred to as the ESX Server host. To install a Guardium VM, you first define a virtual machine on an ESX Server host, and then install and configure the Guardium VM image on that virtual machine. You can create multiple Guardium VMs on a single ESX Server.
VI Client (Virtual Infrastructure Client): This component is used to connect to a standalone ESX Server, or to a VirtualCenter Server. In the latter case, you can
administer multiple virtual machines created over multiple ESX Server hosts.
Web Browser: Use a Web browser to download and use the VI Client software from an ESX Server host or the VirtualCenter server.
VirtualCenter Management Server (Optional): This component runs on a remote Windows machine, and can be used to manage multiple virtual machines on multiple
ESX Server hosts. It offers a single point of control over all the ESX Server hosts.
Database (Optional): The VirtualCenter Server uses a database to store configuration information for the infrastructure. The database is not needed if the VirtualCenter
Server is not used.
License Server (Optional): Stores and manages the licenses needed to maintain a VMware Infrastructure.
For more information, go to www.vmware.com and search for "ESX Quick Start".
VM Installation Overview
If you are installing multiple Guardium VM systems in a VMware VirtualCenter Management Server environment, you can create a template system from the first Guardium
VM that you create, and then clone that template as necessary. Then, all you need to do is set the IP address on each cloned system. For more information, see the note
following Step 7.
Note: The ESX server is only supported on a specific set of hardware devices. For more information, see the VMware Virtual Infrastructure documentation.
The following table describes how the Guardium VM uses network interfaces. Refer to this table to make the appropriate connections before you configure the virtual
switches for use by the Guardium VM.
Proxy interface (eth0): This interface is the main gateway to the appliance, and is used for these purposes:
Graphical web-based User Interface (GUI) to manage, configure, and use the solution
Command Line Interface (CLI) for initial setup and basic configuration
Connections with external systems such as backup systems, database servers, and LDAP servers
Communication with other Guardium components such as other appliances (aggregator, central manager) and agents that are installed
on database or file servers such as S-TAP or CAS clients
Application server interface (eth1): This interface is required if you configure your Guardium system as a transparent proxy. It connects to the application servers whose content your Guardium system is configured to mask.
1. Open the VMware VI Client, and log on to either a VirtualCenter Server, or the ESX Server host on which you want to create a new virtual machine.
2. If you are logged in to a VirtualCenter Server, click Inventory in the navigation bar, and expand the inventory as needed to display the managed host or cluster on
which you plan to install a Guardium VM.
3. In the inventory display, click the host or cluster on which you plan to install a Guardium VM.
4. Click Configuration tab, click Networking in the Hardware box, and then click Add Networking.
This opens the Add Network Wizard, which is used for various purposes.
Use the Add Network Wizard to define a new virtual switch for the Guardium VM network interface. This is the connection over which you will access the Guardium
VM management console, and over which the Guardium VM will communicate with other Guardium components (S-TAPs, for example, which are software agents
that you will install later on one or more database servers).
5. In the Connection Types box, click Virtual Machine and click Next.
7. Optionally mark a second unclaimed network adapter if you want to use the VMware IP teaming capability to provide a secondary (failover) network interface. Later, you will designate this second adapter as a Standby Adapter (and of course, you must cable both NICs appropriately).
8. Click Next to continue to the Connection Settings page of the Add Network Wizard.
9. In the Network Label box, enter a name for the virtual machine port group, for example: GuardETH0, and click Next.
10. In the Summary page, click Finish. The new virtual switch is displayed in the Configuration tab.
11. Optional. If you have defined a second adapter for failover purposes: (a) Click Properties link for the virtual switch just created to open the virtual switch Properties
panel. (b) Click Ports tab and select the virtual port group just created (GuardETH0 in the example), and click Edit. (c) In the virtual port group Properties panel,
click NIC Teaming tab, mark the Override vSwitch Failover box, and then move the second adapter to the Standby Adapters list. (d) Click OK to close the virtual port
group Properties box, and click Close to close the virtual switch Properties box.
1. Open the VMware VI Client, and log on to either a VirtualCenter Server, or the ESX Server host on which you want to create a new virtual machine.
2. If you are logged in to a VirtualCenter Server, click Inventory in the navigation bar, expand the inventory as needed, and select the managed host or cluster to which
you want to add the new virtual machine.
3. From the File menu, click New – Virtual Machine to open the configuration Type panel of the New Virtual Machine wizard.
4. Click Typical as the configuration type, and click Next to continue with the Name and Folder panel.
5. On the Name and Folder panel:
Enter a name for the new virtual machine in the Virtual Machine Name field. This name appears in the VI Client inventory and is also used as the name of the virtual
machines files.
To set the inventory location for the new virtual machine, select a folder or the root location of a datacenter from the list under Virtual Machine Inventory Location.
Click Next.
6. If your host or cluster contains resource pools, the Resource Pool panel is displayed, and you must select the resource (host, cluster, or resource pool) in which you
want to run the virtual machine. Click Next.
7. On the Datastore panel, optionally select a datastore in which to store the new virtual machine files, and click Next.
8. In the Choose the Guest Operating System panel, choose the operating system that corresponds to the Guardium image that you are installing. Click Linux > RedHat Enterprise Linux 6, 64-bit from the Version box, and click Next.
The operating system is not installed now, but the OS type is needed to set appropriate default values for the virtual machine.
For VM minimum resources, refer to the Hardware Requirements in the Before you begin section.
9. On the Virtual CPUs panel, select the number of CPUs recommended for the type of Guardium VM being installed, and click Next.
10. On the Memory panel, select the amount of memory recommended for the type of Guardium VM being installed, and click Next. Important: the initial value must be
at least 16 GB. If customers want to work outside the required range, consult with Technical Support.
11. On the Network panel, click 1 as the number of ports that are required, and click Next.
12. For the selected port, use the Network pull-down menu to choose a port group configured for virtual network use. (You should have defined this port group in the
previous procedure.)
13. For the selected port group, mark the Connect at Power On check box (it should be marked by default), and click Next.
14. On the Virtual Disk Capacity panel, enter the amount of disk space to reserve for the new virtual machine in the Disk Size field.
15. On the Ready to Complete panel, verify your settings and click Finish.
This completes the definition of the new virtual machine. The operating system has not yet been installed, so if you attempt to start the virtual machine, that activity will
fail.
1. Open the VMware VI Client, and log on to either a VirtualCenter Server, or the ESX Server host on which you want to create a new virtual machine.
Datastore ISO File – Connect to the Guardium Installation ISO file on a datastore. If you have not already done so, copy the Guardium ISO files to a datastore
accessible from the ESX Server host on which the virtual machine is installed. Click Browse to select the file.
Caution: For the remaining options, you will place the Guardium Installation CD/DVD in a CD-ROM/DVD drive. If you reboot any system with a Guardium Installation CD/DVD in its CD-ROM/DVD drive, you will install Guardium on that system, wiping out the host operating system and files.
Client Device – Connect to a CD-ROM/DVD device on the system on which you are running the VI Client. If you select this option, insert the Guardium CD/DVD in
the CD-ROM/DVD drive of the system on which the VI Client is running.
Host Device – Connect to a CD-ROM/DVD device on the ESX Server host machine on which the virtual machine is installed. If you select this option, choose the
device from a drop-down menu, and insert the Guardium CD/DVD in the CD-ROM/DVD drive of the ESX Server host machine.
6. Click OK.
7. Click Power On to start the virtual machine.
8. If you selected Client Device as your CD/DVD Drive option, click Virtual CD-ROM (ide0:0) in the toolbar, and select the local CD-ROM device to connect to.
9. Click Console tab to display the virtual machine console. You will need to respond to several prompts during the installation process.
10. Skip this step if you are using the Guardium DVD.
When prompted for the second CD, depending on option you use in step 5 you need to either put the second CD in its drive or select the second CD ISO image.
Continue by pressing Enter. When prompted for the cli password, enter a temporary password for use when logging in to the Guardium CLI, which you will need to
do to set the IP configuration parameters for the appliance.
11. When you are prompted for the GUI admin password, enter a temporary password for use when logging in to the Guardium user interface as the admin user.
12. When asked if building a collector or aggregator, choose the appropriate type.
13. Click No to the Master Passkey prompt.
Caution: If a CD-ROM/DVD drive was used, the CD/DVD ejects when the installation completes. Be sure to remove the installation CD/DVD from that drive. If the ISO
file was used, be sure to remove the ISO CD ROM by changing the virtual CD/DVD back to a Client or Host Device. Otherwise, the next time it is rebooted, you will
install Guardium on the host machine, wiping out the host machine operating system and all files.
The machine will reboot automatically, and you will be prompted to log in as the CLI user.
14. At this point, return to Step 4, Set up Initial and Basic Configurations for complete instructions on configuration of the Guardium system.
1. Use the VMware virtual infrastructure server product to clone the first Guardium VM that you configured to a template.
2. From the template, create a clone for each additional Guardium VM to be configured.
3. For each clone, log in to the Guardium VM console as the cli user by using the temporary cli password and reset any of the IP configuration parameters that you set
in the previous procedure. Mandatory tasks: reset the IP address, reset the GLOBAL_ID (GID), and reset the host name. The UNIQUE_ID (UID) is set automatically
and does not require manual configuration. Be sure to review all of the IP configuration settings entered in the previous procedure.
restart network
Note: The unique ID (UID) of the appliance is recalculated every time the hostname changes in order to avoid having multiple appliances with the same unique ID.
Note: The global ID (GID) can be any number so long as it is unique and less than 9223372036854775808. During the cloning process this unique number is
necessary. Please obtain the global IDs from your other appliances and use a number that is unique for this clone.
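The GID constraints above can be checked before cloning with a small helper like the following. This is an illustrative sketch: EXISTING_GIDS is a placeholder list that you would fill with the global IDs collected from your other appliances.

```shell
# Sketch: validate a proposed global ID (GID) for a cloned appliance.
# The GID must be positive, unique among your appliances, and less than
# 9223372036854775808 (2^63). EXISTING_GIDS is a placeholder list.
GID_MAX=9223372036854775807    # largest allowed value (the 2^63 limit is exclusive)
EXISTING_GIDS="1001 1002 1003"

valid_gid() {
    local gid="$1"
    [ "$gid" -gt 0 ] 2>/dev/null || return 1   # numeric and positive
    [ "$gid" -le "$GID_MAX" ] || return 1      # within the documented limit
    for g in $EXISTING_GIDS; do
        [ "$g" = "$gid" ] && return 1          # must be unique
    done
    return 0
}
```

Usage: `valid_gid 424242 && echo "GID is usable"`.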
Procedure
1. Log into the Hyper-V server as an administrator.
2. Start the Hyper-V manager at Start menu > Administrative Tools > Hyper-V Manager.
3. Right-click on the Hyper-V server and select New > Virtual Machine.
What to do next
Verify that the virtual machine is functioning by pinging OTIS from the virtual machine and by logging into the virtual machine over SSH from a remote host.
If you do not replace the default network adapter with the legacy adapter, PXE boot is not available.
If you do not replace the legacy network adapter with the default network adapter, the Guardium system is left without network connectivity.
If you start the machine before changing the MAC address after replacing the legacy network adapter, a new MAC address and virtual adapter are generated on the virtual
machine. This must be remedied for the system to work: change the MAC address to your previously recorded MAC address and use the normal method to clean up
the ifcfg-eth0 file and 70-persistent-net.rules.
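The cleanup can be sketched in shell. This is a hedged sketch that demonstrates the edit on a scratch copy: the MAC value is a placeholder, and on the appliance the real files are the RHEL-style /etc/sysconfig/network-scripts/ifcfg-eth0 and /etc/udev/rules.d/70-persistent-net.rules.

```shell
# Previously recorded MAC from before the adapter swap (placeholder value).
mac="00:50:56:AA:BB:CC"

# Demonstrate the edit on a scratch copy of ifcfg-eth0.
printf 'DEVICE=eth0\nHWADDR=00:11:22:33:44:55\nONBOOT=yes\n' > /tmp/ifcfg-eth0

# Point HWADDR at the recorded MAC; on the appliance you would also
# remove the stale udev rules file so it is regenerated on next boot.
sed -i "s/^HWADDR=.*/HWADDR=${mac}/" /tmp/ifcfg-eth0
grep HWADDR /tmp/ifcfg-eth0
```

On the real system, run the same sed against the actual ifcfg-eth0 path and delete the persistent net rules file before rebooting.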
Custom Partitioning
If you customize the partitioning of the hard drive, you must make several choices.
Partition  Minimum size
/          25 GB
/boot      5 GB
All the available drives are also displayed on this screen. Choose the drive for the partitioning and then installation.
After the partitioning is finished, the Guardium® system software is installed automatically.
If values are created that exceed the space available on the disk, an error message appears.
Click OK to reboot the system and return to the beginning of Custom Partitioning.
See the Red Hat Enterprise Linux documentation for more information about how the Red Hat distribution handles partitioning.
Note: Non-default partitioned systems - Custom partitioned systems cannot be upgraded using an upgrade patch. Instead, you must use the backup, rebuild, and restore
method. If there is uncertainty regarding the partitioning of systems, download and install Health Check p9997. The resulting patch log contains information regarding system partitioning.
For the encrypted LVM installation, you are asked to enter an encryption key. Then, on every reboot, you must enter this key to unlock the LVM volume. This
means that you must have console access to the appliance, either physical or remote.
Important: The encryption key must be safeguarded and retained, as it is impossible to replace if lost.
Note: The boot loader, a special program that loads the operating system into memory, is part of a custom partitioning installation. An example of the password entry
screen is shown near the end of this topic.
The Bootloader configuration dialog is displayed. When a computer with Red Hat Enterprise Linux is turned on, the operating system is loaded into memory by a
special program that is called a boot loader. A boot loader usually exists on the system's primary hard disk (or other media device) and has the sole responsibility of
loading the Linux kernel with its required files or (in some cases) other operating systems into memory.
In most cases, the default options are acceptable, but depending on the situation, changing the default options may be necessary.
18. At this screen, click Next. This starts the encrypted installation.
During the installation and further re-boots, you are asked to enter the LUKS (Linux Unified Key Setup) passphrase for the LVM during boot. After you enter the LUKS
passphrase, the system completes the boot process.
First, partition the SAN storage device, and then install the IBM Security Guardium OS. Choose one hard disk for this installation.
Note: Depending on what SAN hardware is used, specific instructions may be different. Installation on a SAN is supported; installation on a NAS is not supported.
Summary of steps
1. Enter system setup (press F1 on IBM® servers during initial boot) and modify the Start Options to select the appropriate PCI slot to boot from (where the QLogic
Card is).
2. Modify the BIOS for the QLogic card by pressing Ctrl-Q, when the QLogic BIOS is loading, to enable it to be a boot device. Then select the LUN (logical unit number)
of the boot device.
3. Boot from the RedHat 5.8 DVD and enter Rescue mode in order to run fdisk and create partitions on the SAN device using the specifications listed here:
Table 1. Partitions on SAN device
Partitions Space
3 25 GB for /
Note:
In the SAN environment, the single LUN is presented to RedHat 5.8 as multiple devices because of redundant paths within the network switches on the SAN. (With SDD,
the storage appeared as eight devices.)
It is very important to edit only the existing partitions that the IBM Guardium installation sees, by adding the mount point and setting the file system (ext4 or swap)
without changing other settings (such as size), and to deselect all devices other than /dev/sda when selecting the device on which to load the OS.
1. Assuming SAN is the only storage attached to the server, type fdisk /dev/sda. Type y if a warning appears regarding working on the whole device.
2. Type n for a new partition.
3. Type p for a primary partition.
4. Type 1 for partition #1.
5. Press Enter to accept the default start location.
6. Type +512M to make partition 1 512 MB in size (this will be the /boot partition).
7. Type n for a new partition.
8. Type p for a primary partition.
9. Type 2 for partition #2.
10. Press Enter to accept the default start location.
11. Type +12288M to make partition 2 12GB in size (this assumes 8GB of physical RAM). The recommended size is physical RAM + 4GB (this will be the swap partition).
12. Type n for a new partition.
13. Type p for a primary partition.
14. Type 3 for partition #3.
15. Press Enter to accept the default start location.
16. Type +10240M to make partition 3 10 GB in size (this will be the / partition).
17. Type n for a new partition.
18. Type p for a primary partition (will default to partition #4).
19. Press Enter to accept the default start location.
20. Press Enter to fill to maximum size (this will be the /var partition).
21. Type w to write the partition table to the SAN.
22. Type exit to exit rescue mode and reboot to begin the Custom Partition Installation (Step 3, Install the IBM Security Guardium image).
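The interactive fdisk sequence above can be captured as a small script that derives the recommended swap size (physical RAM + 4 GB) and writes an equivalent sfdisk-style layout to a file for review. This is a sketch under stated assumptions: the device name /dev/sda, the 8 GB RAM figure, and the use of sfdisk script syntax (rather than interactive fdisk) are illustrative, not part of the Guardium procedure.

```shell
# Recommended swap size: physical RAM + 4 GB, in MiB (8 GB RAM assumed).
ram_mb=8192
swap_mb=$((ram_mb + 4096))

# Write an sfdisk-style description of the same four-partition layout:
# 1: /boot (512 MiB), 2: swap, 3: / (10 GiB), 4: /var (rest of the disk).
cat > /tmp/guardium-san.layout <<EOF
label: dos
/dev/sda1 : size=512MiB, type=83
/dev/sda2 : size=${swap_mb}MiB, type=82
/dev/sda3 : size=10240MiB, type=83
/dev/sda4 : type=83
EOF

echo "swap size: ${swap_mb} MiB"
```

Review the generated file before applying anything to real hardware; the documented Guardium procedure itself uses interactive fdisk in rescue mode.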
1. Modify the BIOS for the QLogic card by pressing Ctrl-Q when prompted, to enter the Configuration Setup Utility. The first screen presented after pressing Ctrl-Q
is for port selection; this is a two-port card, so select the appropriate port and press Enter.
4. Use your arrow keys to select Host Adapter BIOS and press Enter to toggle to Enabled.
7. Select the first Boot Port Name, LUN and press Enter to display a list of LUNs. If you are configuring the proper card/port, the LUN number(s) appear here. Select the
first one in the list.
8. Press Esc until you have backed out to the screen that says Reboot and select it to reboot the system. You are now ready to proceed with the IBM Security
Guardium installation.
Before beginning an upgrade, review the Planning an upgrade, Choosing an upgrade method, and Mixed-version environments during an upgrade sections.
In addition, the following resources are available to support your upgrade:
IBM Security Guardium high-level upgrade roadmap: contains an overview of the supported upgrade paths from various releases of Guardium.
IBM Guardium V10.1 Software Appliance Technical Requirements: describes the hardware requirements for both physical and virtual machine installations.
Hints and tips on upgrading to V10: provides videos with information about upgrade planning, execution, and troubleshooting.
Planning an upgrade
Learn about different upgrade scenarios and identify the correct approach for upgrading your Guardium systems with minimal downtime.
Common upgrade tasks
Tasks such as purging system data, monitoring installations, and cleaning up after an upgrade are common to all Guardium upgrade scenarios.
Upgrading a 32-bit environment
Upgrade your 32-bit Guardium environment without using a backup central manager.
Planning an upgrade
Learn about different upgrade scenarios and identify the correct approach for upgrading your Guardium systems with minimal downtime.
Determine your current Guardium version and patch level by clicking the icon in the main user interface and selecting About Guardium.
Upgrade to the latest version of Guardium using one of the following methods:
Upgrade patch
Use an upgrade patch to upgrade all systems in a managed environment. The upgrade patch preserves all data and configurations with the exception of UI
customizations due to a new UI architecture. Using an upgrade patch without defining a backup central manager is recommended for 64-bit environments with
default partitioning.
Backup, rebuild, restore
Use a backup, rebuild, and restore method. This requires taking a full system backup, rebuilding the system from the latest ISO, and restoring system data and
configuration from the backup. Using the backup, rebuild, and restore method with a backup central manager is recommended for 32-bit environments or systems
with custom partitioning.
Important: Custom partitioned systems cannot be upgraded to V10 using an upgrade patch. Instead, you must use the backup, rebuild, and restore method. If there is
uncertainty regarding the partitioning of systems, download and install Health Check p9997. The resulting patch log contains information regarding system partitioning.
Use the following tables to identify the best approach for upgrading your systems to the latest version of Guardium.
Table 2. Overview of V10 upgrade paths

Guardium level on current system | Backup V9, rebuild system to latest V10, restore from V9 backup | Apply latest V10 upgrade patch
V8.2 or earlier | No | No

Upgrade path to the latest V10 for V8.2: You cannot upgrade V8.2 systems directly to V10 systems. You must rebuild your appliances with the latest V9 (64-bit)
ISO and then install the latest V9 to V10 upgrade patch.
Since the upgrade process cannot be completed on all systems (central managers, aggregators, and collectors) and all S-TAPs simultaneously, your Guardium
environment will enter a mixed-version state during upgrade. For example, after upgrading a central manager to the latest V10, managed units will continue operating at
V9 GPU 600. Although mixed-version environments are supported, several limitations must be considered as part of any upgrade plan. For example, data collection, data
assessment, and policies (with some restrictions) will continue to work while in a mixed state, but functions with new or enhanced capabilities will not work in a mixed
environment.
Important: Upgrade your entire environment to the latest patch level of V10 as soon as possible. Be aware of the following while operating in a mixed-version environment
during upgrade:
Complete Guardium functionality will not be available until the entire environment has been upgraded to the latest V10.
Do not make configuration changes while operating in a mixed-version environment.
Guardium V10 does not support mixed environments with managed units below V9 GPU 600.
Policies cannot be distributed from a V10 central manager to V9 patch 600 managed units. Policies already installed on the managed units prior to the
upgrade remain unchanged.
Patch backup settings cannot be distributed from a V10 central manager to V9 patch 600 or later managed units. Patch backup settings defined before the
upgrade remain unchanged.
UI layout customization and distribution is not supported on a V10 central manager with V9 (patch 600 or later) managed units.
Managed units
You cannot register additional V9 patch 600 or later managed units after upgrading the central manager to V10. Units registered before the upgrade remain
registered after the upgrade.
Quick search
Quick search for enterprise works in a mixed environment that consists of a V10 central manager and V9 patch 530 or later managed units. The user interface must
be restarted in order to reinitialize quick search for enterprise. Managed units prior to GPU 500 are unable to take advantage of enterprise search, although local
quick search is still available.
If a central manager is upgraded from V9 to the latest V10 and the managed units remain on V9, quick search is disabled on the V9 managed units until the
managed units are upgraded to V10.
Reports
Some reports will result in SQL errors or may not display data correctly when viewed on V9 patch 600 or later managed units, including the following:
Aggregation/Archive Log
Connections Quarantined
Installed Patches
Inactive Inspection Engines
S-TAP Verification
Connection Profiling List
Replay Statistics
Replay Summary
With the exception of Enterprise Buffer Usage Monitor data, data from V9 patch 600 or later managed units is not accessible in the following reports on a V10 central
manager:
A top-down approach is necessary because an upgraded aggregator can aggregate data from older releases, but an older aggregator cannot aggregate data from newer
releases. Similarly, an upgraded central manager can manage units running older releases, but the managed units will not enjoy full functionality until they are upgraded to
match the central manager.
To avoid these issues, upgrade a central manager before upgrading any of its managed units. If you have multiple central managers, first upgrade one central manager and
then upgrade its managed units before going on to upgrade the next central manager and its managed units.
Similarly, upgrade an aggregator before upgrading any units that export data to it. If you have several aggregators, first upgrade one aggregator and then upgrade the
collectors that report to it before going on to upgrade the next aggregator and its collectors.
Finally, upgrade a collector before upgrading the S-TAPs registered to it. Upgrade one collector and all the S-TAPs registered to it before going on to upgrade the next
collector and its S-TAPs.
This approach provides compatible systems (from central managers to aggregators, collectors, and S-TAPs) in each branch of your environment more quickly than
upgrading all your central managers or aggregators before upgrading any collectors.
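The branch-at-a-time ordering can be illustrated with a short shell sketch. The unit names (agg1, col1, and so on) are placeholders, and upgrade is a stand-in for whatever patch-distribution step you use; only the order of the loops reflects the documented approach.

```shell
# Stand-in for the real upgrade action on one unit.
upgrade() { echo "upgrade $1"; }

# Central manager first.
upgrade central-manager

# Then one branch at a time: an aggregator, then the collectors that
# report to it (and each collector's S-TAPs), before the next branch.
for branch in "agg1:col1 col2" "agg2:col3"; do
  agg=${branch%%:*}
  collectors=${branch#*:}
  upgrade "$agg"
  for col in $collectors; do
    upgrade "$col"   # upgrade the S-TAPs registered to $col here as well
  done
done
```

Running the sketch prints the units in the documented order: the central manager, then agg1 with its collectors, then agg2 with its collector.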
Procedure
1. Open Manage > Data Management > Data Archive.
2. Click the Purge check box to define a purge operation.
Important: Changes made to the Data Archive purge configuration will also be applied to the Data Export purge configuration.
3. Define a Purge data older than time period. All data older than the specified period of days, weeks, or months will be purged from the system.
4. Click the Allow purge without archiving or exporting check box.
5. Click Save to save the configuration changes.
6. Click Run Once Now to execute the purge operation and purge old system data.
What to do next
Open Manage > Reports > Activity Monitoring > Scheduled Jobs to monitor the status of the data archive job.
Parent topic: Common upgrade tasks
Important: Patches downloaded in ZIP format must be unzipped outside the Guardium system before uploading and installing. Observe the following restrictions for any
patch with database structure changes:
Perform or schedule the patch installation during quiet time on the Guardium system to avoid conflicts with long-running processes such as heavy reports, audit
processes, backups, and imports.
The exact time required for patch installation depends on database utilization, data distribution, and other considerations.
Install patches in a top-down manner, first patching a central manager before patching aggregators and finally collectors.
To upload and install a patch using scp, issue the following CLI command: store system patch install scp
1. Initialize the fileserver using the following CLI command: fileserver [ip_address] where [ip_address] is the system being used to connect to the Guardium
system.
2. From a web browser, connect to the Guardium system.
a. Click Upload Patch.
b. Browse to select the patch file and then click Upload.
3. Issue the following CLI command to install the patch: store system patch install system.
Distribute a patch
To distribute a patch from a central manager to managed units, one of the following must have taken place:
Distribute the patch to managed units using the Central Management page on the central manager. Navigate to Manage > Central Management > Central Management and
click Patch Distribution.
Important: V9 patches will not be available after the Guardium system is upgraded to V10.
Parent topic: Common upgrade tasks
Procedure
1. Log in to the Guardium system CLI.
2. Issue the diag command.
3. From the diag command menu:
a. Select 1 Output management and click OK.
b. Select 3 Export recorded files and click OK.
c. Choose the log files you need and click OK.
d. Select 1 FTP or 2 SCP and click OK.
e. Input the host name that you want to upload to and click OK.
f. Input the user name and click OK.
g. Input the password and click OK.
Note: If 2 SCP is chosen, the destination path is asked for before the password.
h. Input the destination path and click OK.
i. Check the information and click OK. The file uploads to the target system.
j. Select OK to exit.
k. Select 3 Exit and click OK.
Note: Return to 3a if you need to upload another file; otherwise, proceed to the next step.
l. Select 5 Exit to CLI and click OK.
Procedure
1. If you upgraded using an upgrade patch, log in as the CLI user and issue the following command: show upgrade-status. The command will output detailed
status information from the upgrade process, and the last line of output should indicate INFO:Migration Complete.
2. If you upgraded a central manager, verify that managed units are listed on the Manage > Central Management > Central Management page.
3. Verify that custom reports created in previous versions of Guardium are available at Reports > My Custom Reports.
My Custom Reports should contain any new reports that you created as well as any predefined reports that you modified in a previous version of Guardium.
4. Refresh all managed units on the Central Management page so that you can distribute the licenses to the upgraded managed units.
5. You may need to update the Guardium DPS file after upgrade or restore procedures. Download the latest DPS file, then use the Harden > Vulnerability Assessment >
Customer Uploads tool to upload and import the new DPS file.
6. Company logos uploaded before upgrade or restore procedures may need to be reloaded. To reload a customer logo, follow these steps:
a. Log in as an admin user.
b. Navigate to Setup > Tools and Views > Global Profile.
c. Browse for the company logo file.
d. Upload the logo file.
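The show upgrade-status check in step 1 can be wrapped in a small helper that inspects the last line of captured output. The transcript below is simulated; in practice you would capture the command's output from the CLI session (for example, over SSH), which is an assumption of this sketch.

```shell
# Return success only if the last line of the captured status output
# indicates that migration finished.
upgrade_complete() {
  last=$(printf '%s\n' "$1" | tail -n 1)
  case "$last" in
    *"INFO:Migration Complete") return 0 ;;
    *) return 1 ;;
  esac
}

# Simulated `show upgrade-status` output for illustration.
status_log="4.0:INFO:Restarting database
5.0:INFO:Migration Complete"

if upgrade_complete "$status_log"; then
  echo "upgrade finished"
else
  echo "upgrade still in progress"
fi
```

The same helper works for the managed-unit verification later in this section, since the expected last line is identical.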
Before upgrading your 32-bit Guardium environment via the ISO without using a backup central manager, review the following checklist and complete each item before
attempting the upgrade.
Important: Before performing restore db on a V10 system, apply the latest maintenance patches after your system has been built to V10. If you are using a 32-bit
collector-based central manager, you must rebuild it to a 64-bit collector-based central manager before upgrading to V10.
Upgrade checklist
Download latest health check patch (p9997) from Fix Central. For more information, see: Guardium health check patch release notes.
Current systems must be at Guardium V9 and have 32-bit architecture.
Download the latest Guardium V9 release or get it later from Fix Central [optional].
Download the latest Guardium V10 ISO from Passport Advantage
Download all base and append licenses from Passport Advantage
Download the latest V10 GPU from Fix Central, if one is available
Record all network configuration parameters returned by the following Guardium CLI commands:
Procedure
1. Upgrade the system to V9 patch 600 or later.
2. Set the time to the local time zone and synchronize time across all Guardium systems using an NTP server.
3. Download and install the latest health check patch (p9997) and verify that the installation was successful. See Patch installation, distribution, and monitoring for
instructions.
4. Take a system backup of the central manager and verify that it was successful.
a. Navigate to Manage > Data Management > System Backup.
b. Configure the protocol based on your preferences and fill in all fields.
c. Back up both configuration and data.
Important: Create at least one valid backup before beginning the upgrade procedure.
5. Mount the latest Guardium V10 ISO.
a. Select a system type within the first five seconds of entering the Guardium installer. The default selection is Standard Installation (non CM) with a unit type of
standalone collector. When upgrading a central manager or an aggregator, select Aggregator.
b. Allow the installation to complete and the system to reboot.
6. Configure network parameters. Log into the Guardium CLI and issue the following commands:
7. Log into the Guardium user interface and validate the default components.
Note: If logging in for the first time, the default password is guardium.
a. Verify that only the Welcome and Setup navigation items are visible.
b. Navigate to Setup > Tools and Views > License or click the icon to verify that no licenses are installed on the system.
8. Install licenses.
a. Navigate to the license page by following the notification link or selecting Setup > Tools and Views > License.
b. Apply all relevant base and append licenses, and accept the license agreements.
What to do next
After successfully upgrading your 32-bit Guardium central manager, Upgrade 32-bit managed units.
Parent topic: Upgrading a 32-bit environment
Next topic: Upgrade 32-bit managed units
Important: You must upgrade your environment to V9 patch 600 or later before upgrading to the latest V10.
Procedure
1. Distribute the latest health check patch (p9997) to managed units and verify that it installed successfully. See Patch installation, distribution, and monitoring for
more information.
2. Take system backups of all managed units.
3. Rebuild the managed units using the following procedure:
a. Mount the latest Guardium V10 ISO image.
b. Select a system type within five seconds of entering the Guardium installer. Use the default selection of Standard Installation (non CM) with a unit type of
standalone collector, or allow for an automatic boot.
c. Allow the installation to complete and the system to reboot.
4. Configure network parameters. Log into the Guardium CLI and issue the following commands:
5. Log into the Guardium user interface and verify that no licenses are installed on the system.
Tip:
If logging in for the first time, the default password is guardium.
If you are working with a standalone system that will eventually become a managed unit, there is no need to install licenses.
a. On the main Guardium navigation, verify that only the Welcome and Setup navigation items are available.
b. Navigate to Setup > Tools and Views > License or click the icon to verify that no licenses are installed on the system.
6. Restore data and configuration on the managed units.
Note: When restoring a managed unit from a backup, any custom layouts for that managed unit will be lost if the central manager is down at the time of the restore.
a. Issue the following Guardium CLI command to import the backup files: import file.
b. Import the data and configuration files separately.
c. Perform the data and configuration restore by issuing the following CLI command: restore db-from-prev-version.
7. Once all managed units have been successfully upgraded, distribute licenses from the central manager to the managed units.
a. Log into the user interface of the central manager.
b. Navigate to Central Management > Manage > Central Management and verify that the managed units are listed.
c. Click the Select all check box to select all managed units.
d. Click the Refresh button to distribute licenses to the managed units.
e. Wait until the refresh process completes.
f. Log into the user interface of the managed units and navigate to Setup > Tools and Views > License to verify that the correct licenses have been installed.
When the correct licenses have been installed:
The expected navigation menu options will now be available on the managed units.
Reports on the managed units will be functional.
Reports will be accessible via remote data sources from the central manager.
8. If the latest Guardium V10 GPU (if newer than the latest V10 ISO) and maintenance patches were installed on the central manager, distribute the GPU and
maintenance patches to the managed units.
9. If you use VMware Tools, you must reinstall them after completing the upgrade. To reinstall VMware Tools, log into the Guardium CLI, issue the following command,
and follow the prompts: setup vmware_tools install.
Before upgrading your 64-bit Guardium environment without using a backup central manager, review the following checklist and complete each
item before attempting the upgrade.
Important: Before performing restore db on a V10 system, apply the latest maintenance patches after your system has been built to V10. If you are using a 64-bit
collector-based central manager, the upgrade patch will handle the upgrade and convert the system from a collector-based central manager to an aggregator-based
central manager.
Upgrade checklist
Current systems must be at V9 patch 600 or above and have 64-bit architecture
Download the latest Guardium V9 release or get it later from Fix Central [optional].
Download upgrade patch p10000
Download the latest maintenance patches from Fix Central
Download latest health check patch (p9997) from Fix Central. For more information, see: Guardium health check patch release notes.
Procedure
1. Upgrade the system to V9 patch 600 or later.
2. Set the time to the local time zone and synchronize time across all Guardium systems using an NTP server.
3. Download and install the latest health check patch (p9997) and verify that the installation was successful. See Patch installation, distribution, and monitoring for
instructions.
4. Take a system backup of the central manager and verify that it was successful.
a. Navigate to Manage > Data Management > System Backup.
b. Configure the protocol based on your preferences and fill in all fields.
c. Back up both configuration and data.
Important: Create at least one valid backup before beginning the upgrade procedure.
5. Install p10000 on the central manager and monitor its installation.
Important: After the patch installation completes, the upgrade process automatically begins and the system is rebooted. Do not reboot the system manually.
6. Allow the operating system installation to complete.
Installation time depends on the amount of data involved as well as system specifications and configuration.
Once the operating system installation has completed, the system reboots into the latest Guardium V10 for the first time.
Attention: After you successfully install the latest V10, the first boot into your system is followed by:
Network configuration, database data migration, database start up.
License upgrade, PSML upgrade, language setting.
Database restart, certificate and key migration, password migration, and file clean-up.
7. Confirm that the central manager has been successfully upgraded:
a. Log in to the Guardium CLI. If the CLI enters recovery mode, the upgrade is still in progress.
b. Issue the following CLI command: show upgrade-status. The command can also be issued from the CLI recovery mode.
c. Verify that the last line in the output reads: 5.0:INFO:Migration Complete
d. If you are still in the CLI recovery mode, exit the CLI and log back in to enter the normal Guardium CLI mode.
e. Issue the following CLI command: show system patch install
f. Verify that p10000 status is the following: Phase 5: Migration completed
8. Log into the Guardium user interface and accept license agreements to enable product features.
a. Navigate to Setup > Tools and Views > License.
b. Accept the base license agreement.
c. Accept all applicable append license agreements.
Note: Skipping this step prevents Guardium features from being enabled.
What to do next
After successfully upgrading your 64-bit Guardium central manager, Upgrade 64-bit managed units.
Parent topic: Upgrading a 64-bit environment
Next topic: Upgrade 64-bit managed units
Important: You must upgrade your environment to V9 patch 600 or later before upgrading to the latest V10.
Procedure
1. Distribute the latest health check patch (p9997) to managed units and verify that it installed successfully. See Patch installation, distribution, and monitoring for
more information.
2. Take system backups of all managed units.
3. Distribute the p10000 upgrade patch to all managed units and monitor the patch installation. Read Patch installation, distribution, and monitoring for more
information.
Attention: After the patch installation completes, the upgrade process automatically begins and the system is rebooted. Do not reboot the system manually.
The time required for upgrade depends on the amount of data involved as well as system specifications and configuration. When the upgrade is complete and the
system reboots, the first boot of the upgraded system is followed by:
Network configuration, database data migration, database start up.
License upgrade, PSML upgrade, language setting.
Database restart, certificate and key migration, password migration, and file clean-up.
During this process, you will be unable to log in to upgraded managed units until the database migration completes.
4. Verify that the upgrade process has completed successfully on each managed unit.
a. Log in to the Guardium CLI of the system being upgraded. If the CLI enters recovery mode, the upgrade is still in progress.
b. Issue the following CLI command: show upgrade-status. This command can also be issued from the CLI in recovery mode.
c. Verify that the last line of output reads: 5.0:INFO:Migration Complete.
d. If you are in CLI recovery mode, exit the CLI and log back in to enter the CLI mode.
e. Issue the following CLI command: show system patch install.
Attention: show system patch install will not return results until the upgrade completes after the first reboot.
f. Verify that the upgrade patch installation status reads: Phase 5: Migration completed.
5. Once all managed units have been successfully upgraded, distribute licenses from the central manager to the managed units.
a. Log into the user interface of the central manager.
b. Navigate to Central Management > Manage > Central Management and verify that the managed units are listed.
c. Click the Select all check box to select all managed units.
d. Click the Refresh button to distribute licenses to the managed units.
e. Wait until the refresh process completes.
f. Log into the user interface of the managed units and navigate to Setup > Tools and Views > License to verify that the correct licenses have been installed.
When the correct licenses have been installed:
The expected navigation menu options will now be available on the managed units.
Reports on the managed units will be functional.
Reports will be accessible via remote data sources from the central manager.
6. If the latest V10 GPU and maintenance patches were installed on the central manager, distribute the GPU and maintenance patches to the managed units.
7. If you use VMware Tools, you must reinstall them after completing the upgrade. To reinstall VMware Tools, log into the Guardium CLI, issue the following command,
and follow the prompts: setup vmware_tools install.
Results
You have successfully completed an upgrade of your 64-bit Guardium environment to the latest V10. Please verify the stability of your Guardium environment.
Parent topic: Upgrading a 64-bit environment
Previous topic: Upgrading a 64-bit central manager
Before upgrading your 32-bit Guardium environment using a backup central manager, review the following checklist and complete each item before attempting the
upgrade.
Important: Before performing restore db on a V10 system, apply the latest maintenance patches after your system has been built to V10. If you are using a 32-bit
collector-based central manager, you must rebuild it to a 64-bit collector-based central manager before upgrading to V10.
Upgrade checklist
Procedure
1. Upgrade the system to V9 patch 600 or later.
2. Set the time to the local time zone and synchronize time across all Guardium systems using an NTP server.
3. Download and install the latest health check patch (p9997) and verify that the installation was successful. See Patch installation, distribution, and monitoring for
instructions.
Important: You will need to install the latest health check patch (p9997) on both the primary central manager and backup central manager candidate before
designating a backup central manager.
4. Define a backup central manager.
a. Navigate to the Central Management page on the primary central manager.
b. Select a managed aggregator.
c. Verify that the primary central manager and the backup central manager candidate have the same patches installed.
d. Designate the aggregator as a backup central manager.
e. Verify that the cm_sync_file.tgz file has been created by checking the Aggregation/Archive Log on the primary central manager.
5. Take a system backup of the backup central manager and verify that it was successful.
a. Navigate to Manage > Data Management > System Backup.
b. Configure the protocol based on your preferences and fill in all fields.
c. Be sure to back up both configuration and data.
Important: Create at least one valid backup before beginning the upgrade procedure.
6. Rebuild the backup central manager using the latest V10 ISO.
a. Mount the latest V10 ISO.
b. Select a system type within the first five seconds of entering the Guardium installer. The default selection is Standard Installation (non CM) with a unit type of
standalone collector.
7. Allow the installation to complete and the system to reboot.
8. Configure network parameters. Log into the Guardium CLI and issue the following commands:
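The individual network commands are not listed here. As a hedged illustration only, a typical Guardium network setup session uses commands along these lines; the addresses and names are hypothetical, and the exact command set should be confirmed against the CLI reference for your release:

```
store network interface ip 10.0.0.5
store network interface mask 255.255.255.0
store network routes defaultroute 10.0.0.1
store network resolver 1 10.0.0.2
store system hostname guard-unit01
store system domain example.com
restart network
```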
9. Log into the Guardium user interface and validate the default components.
Note: If logging in for the first time, the default password is guardium.
a. Verify that only the Welcome and Setup navigation items are visible.
b. Navigate to Setup > Tools and Views > License or click the icon to verify that no licenses are installed on the system.
10. Install the license.
a. Navigate to the license page by either following the link in the notification or selecting Setup > Tools and Views > License.
b. Apply all relevant base and append licenses, and accept the license agreements.
11. Install the latest V10 GPU (if newer than the latest V10 ISO) and the latest maintenance patches on the central manager, and verify that they have installed
successfully.
12. Set the shared secret on the backup central manager by using either the CLI command store system shared secret or by navigating to Setup > Tools and
Views > System.
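As a sketch, setting the shared secret from the CLI is interactive; the value you enter must match the shared secret used on the primary central manager:

```
store system shared secret
# you are prompted to enter and confirm the shared secret value
```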
13. Restore data and configurations on the central manager.
Note: Until all managed units are upgraded, a managed unit might display the following message: The central manager version is lower than the version of this managed unit. Functionality is limited until the version mismatch is corrected.
What to do next
After successfully upgrading your backup central manager and transitioning managed units, proceed to Upgrade old 32-bit primary central manager.
Parent topic: Upgrading a 32-bit environment with a backup central manager
Next topic: Upgrade old 32-bit primary central manager
Procedure
1. Reconfigure the old primary central manager by issuing the following CLI command: delete unit type manager. Before continuing, verify that the old primary
central manager is now a standalone aggregator.
2. Take a system backup from the old primary central manager. Include both data and configuration in the backup.
3. Rebuild the old primary central manager using the following procedure:
a. Mount the latest Guardium V10 ISO image.
b. Select a system type within five seconds of entering the Guardium installer. When working with an old primary central manager, select Aggregator.
c. Allow the installation to complete and the system to reboot.
4. Configure network parameters. Log into the Guardium CLI and issue the following commands:
5. Log into the Guardium user interface and verify that no licenses are installed on the system.
Tip:
If logging in for the first time, the default password is guardium.
If you are working with a standalone system that will eventually become a managed unit, there is no need to install licenses.
a. On the main Guardium navigation, verify that only the Welcome and Setup navigation items are available.
b. Navigate to Setup > Tools and Views > License or click the icon to verify that no licenses are installed on the system.
6. If the latest V10 GPU (if newer than the latest V10 ISO) and maintenance patches were installed on the old backup central manager prior to converting it to a
primary central manager, install the same GPU and maintenance patches on the old primary central manager.
7. Restore data and configuration on the old primary central manager.
a. Issue the following Guardium CLI command to import the backup files: import file.
b. Import the data and configuration files separately.
What to do next
Now that you have upgraded your central manager and backup central manager, proceed to Upgrade 32-bit managed units.
Parent topic: Upgrading a 32-bit environment with a backup central manager
Previous topic: Upgrading a 32-bit backup central manager
Next topic: Upgrade 32-bit managed units
Important: You must upgrade your environment to V9 patch 600 or later before upgrading to the latest V10.
Procedure
1. Distribute the latest health check patch (p9997) to managed units and verify that it installed successfully. See Patch installation, distribution, and monitoring for
more information.
2. Take system backups of all managed units.
3. Rebuild the managed units using the following procedure:
a. Mount the latest Guardium V10 ISO image.
b. Select a system type within five seconds of entering the Guardium installer. Use the default selection of Standard Installation (non CM) with a unit type of
standalone collector, or allow the installer to boot automatically.
c. Allow the installation to complete and the system to reboot.
4. Configure network parameters. Log into the Guardium CLI and issue the following commands:
5. Log into the Guardium user interface and verify that no licenses are installed on the system.
Tip:
If logging in for the first time, the default password is guardium.
If you are working with a standalone system that will eventually become a managed unit, there is no need to install licenses.
a. On the main Guardium navigation, verify that only the Welcome and Setup navigation items are available.
b. Navigate to Setup > Tools and Views > License or click the icon to verify that no licenses are installed on the system.
6. Restore data and configuration on the managed units.
Note: When restoring a managed unit from a backup, any custom layouts for that managed unit will be lost if the central manager is down at the time of the restore.
a. Issue the following Guardium CLI command to import the backup files: import file.
b. Import the data and configuration files separately.
c. Perform the data and configuration restore by issuing the following CLI command: restore db-from-prev-version.
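Steps a through c can be sketched as the following CLI session (prompts abbreviated; import file asks for the transfer details of each backup file):

```
import file
# import the data backup file, then repeat for the configuration backup file
restore db-from-prev-version
# restores the imported data and configuration on the rebuilt managed unit
```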
7. Once all managed units have been successfully upgraded, distribute licenses from the central manager to the managed units.
a. Log into the user interface of the central manager.
b. Navigate to Central Management > Manage > Central Management and verify that the managed units are listed.
c. Click the Select all check box to select all managed units.
d. Click the Refresh button to distribute licenses to the managed units.
Results
You have successfully completed an upgrade of your 32-bit Guardium environment to the latest V10 using a backup central manager. Please verify the stability of your
Guardium environment.
Parent topic: Upgrading a 32-bit environment with a backup central manager
Previous topic: Upgrade old 32-bit primary central manager
Before upgrading your 64-bit Guardium environment using a backup central manager, review the following checklist and complete each item before attempting the
upgrade.
Important: Before performing restore db on a V10 system, apply the latest maintenance patches after your system has been built to V10. If you are using a 64-bit
collector-based central manager, the upgrade patch will handle the upgrade and convert the system from a collector-based central manager to an aggregator-based
central manager.
Upgrade checklist
Identify and record all managed units defined in the current environment.
Current systems must be at V9 patch 600 or above and have 64-bit architecture.
Download the latest Guardium V9 release or get it later from Fix Central [optional].
Download the upgrade patch p10000.
Download the latest maintenance patches from Fix Central.
Download the latest health check patch (p9997) from Fix Central. For more information, see Guardium health check patch release notes.
Procedure
1. Upgrade the system to V9 patch 600 or later.
2. Set the time to the local time zone and synchronize time across all Guardium systems using an NTP server.
3. Download and install the latest health check patch (p9997) and verify that the installation was successful. See Patch installation, distribution, and monitoring for
instructions.
Important: You will need to install the latest health check patch (p9997) on both the primary central manager and backup central manager candidate before
designating a backup central manager.
4. Define a backup central manager.
a. Navigate to the Central Management page on the primary central manager.
b. Select a managed aggregator.
c. Verify that the primary central manager and the backup central manager candidate have the same patches installed.
d. Designate the aggregator as a backup central manager.
e. Verify that the cm_sync_file.tgz file has been created by checking the Aggregation/Archive Log on the primary central manager.
5. Take a system backup of the backup central manager and verify that it was successful.
a. Navigate to Manage > Data Management > System Backup.
b. Configure the protocol based on your preferences and fill in all fields.
c. Be sure to back up both configuration and data.
Important: Create at least one valid backup before beginning the upgrade procedure.
6. Install p10000 on the central manager and monitor its installation.
Note: Until all managed units are upgraded, a managed unit might display the following message: The central manager version is lower than the version of this managed unit. Functionality is limited until the version mismatch is corrected.
What to do next
After successfully upgrading your backup central manager and transitioning managed units, proceed to Upgrade old 64-bit primary central manager.
Parent topic: Upgrading a 64-bit environment with a backup central manager
Next topic: Upgrade old 64-bit primary central manager
Procedure
1. Reconfigure the old primary central manager by issuing the following CLI command: delete unit type manager. Before continuing, verify that the old primary
central manager is now a standalone aggregator.
2. Take a system backup from the old primary central manager. Include both data and configuration in the backup.
3. Upgrade the old primary central manager using the p10000 upgrade patch and monitor the patch installation. Read Patch installation, distribution, and monitoring
for more information.
Attention: After the patch installation completes, the upgrade process automatically begins and the system is rebooted. Do not reboot the system manually.
The time required for upgrade depends on the amount of data involved as well as system specifications and configuration. When the upgrade is complete and the
system reboots, the first boot of the upgraded system is followed by:
Network configuration, database data migration, database start up.
License upgrade, PSML upgrade, language setting.
Database restart, certificate and key migration, password migration, and file clean-up.
During this process, you will be unable to log in to upgraded managed units until the database migration completes.
What to do next
Now that you have upgraded your central manager and backup central manager, proceed to Upgrade 64-bit managed units.
Parent topic: Upgrading a 64-bit environment with a backup central manager
Previous topic: Upgrading a 64-bit backup central manager
Next topic: Upgrade 64-bit managed units
Important: You must upgrade your environment to V9 patch 600 or later before upgrading to the latest V10.
Procedure
1. Distribute the latest health check patch (p9997) to managed units and verify that it installed successfully. See Patch installation, distribution, and monitoring for
more information.
2. Take system backups of all managed units.
3. Transfer the p10000 upgrade patch to the central manager and make it available to the managed units.
a. Transfer the upgrade patch to the central manager. Read Patch installation, distribution, and monitoring for more information.
b. Make the upgrade patch available to the managed units by issuing the following CLI command from the central manager: show system patch available.
4. Distribute the p10000 upgrade patch to all managed units and monitor the patch installation. Read Patch installation, distribution, and monitoring for more
information.
Attention: After the patch installation completes, the upgrade process automatically begins and the system is rebooted. Do not reboot the system manually.
The time required for upgrade depends on the amount of data involved as well as system specifications and configuration. When the upgrade is complete and the
system reboots, the first boot of the upgraded system is followed by:
Network configuration, database data migration, database start up.
License upgrade, PSML upgrade, language setting.
Database restart, certificate and key migration, password migration, and file clean-up.
During this process, you will be unable to log in to upgraded managed units until the database migration completes.
5. Verify that the upgrade process has completed successfully on each managed unit.
a. Log in to the Guardium CLI of the system being upgraded. If the CLI enters recovery mode, the upgrade is still in process.
b. Issue the following CLI command: show upgrade-status. This command can also be issued from the CLI in recovery mode.
c. Verify that the last line of output reads: 5.0:INFO:Migration Complete.
d. If you are in CLI recovery mode, exit the CLI and log back in to enter the CLI mode.
e. Issue the following CLI command: show system patch install.
Attention: show system patch install will not return results until the upgrade completes after the first reboot.
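Putting steps a through e together, a verification session might look like the following sketch (output abbreviated and illustrative, not verbatim):

```
show upgrade-status
# ...
# 5.0:INFO:Migration Complete
show system patch install
# lists installed patches; returns results only after the upgrade completes
```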
Results
You have successfully completed an upgrade of your 64-bit Guardium environment to the latest V10 using a backup central manager. Please verify the stability of your
Guardium environment.
Parent topic: Upgrading a 64-bit environment with a backup central manager
Previous topic: Upgrade old 64-bit primary central manager
CLI Overview
The Guardium command line interface (CLI) is an administrative tool that allows for configuration, troubleshooting, and management of the Guardium system.
GuardAPI Reference
GuardAPI provides access to Guardium functionality from the command line.
CLI Overview
The Guardium® command line interface (CLI) is an administrative tool that allows for configuration, troubleshooting, and management of the Guardium system.
Documentation Conventions
All CLI command examples are written in courier text (for example, show system clock).
To illustrate syntax rules, some command descriptions use dependency delimiters. Such delimiters indicate which command arguments are mandatory, and in what
context. Each syntax description shows the dependencies between the command arguments by using special characters:
PC keyboard and monitor – A PC video monitor can be attached to either the front panel video connector or the video connector on the back of the appliance.
A PC keyboard with a PS/2 style connector can be attached to the PS/2 connector on the back of the appliance. Alternatively, a USB keyboard can be connected to the USB
connectors located at the front or back of the appliance.
Serial port access – Using a NULL modem cable, connect a terminal or another computer to the 9-pin serial port at the back of the appliance. The terminal or a terminal
emulator on the attached computer should be set to communicate as 19200-N-1 (19200 baud, no parity, 1 stop bit).
The SSH client may ask you to accept the cryptographic fingerprint of the Guardium appliance. Accept the fingerprint to proceed to the password prompt.
Note: If, after the first connection, you are asked to accept a fingerprint again, someone might be attempting to redirect your login to a different machine.
CLI Login
Access to the CLI is either through the admin CLI account cli or one of the five CLI accounts (guardcli1,...,guardcli5). The five CLI accounts (guardcli1,...,guardcli5) exist to
aid in the separation of administrative duties.
Access to GuardAPI, a set of CLI commands that help automate repetitive tasks, requires that the access manager create a user (GUI username, guiuser) and grant that account either the admin or cli role. To use GuardAPI, log in with one of the five CLI
accounts (guardcli1,...,guardcli5) and then perform an additional login as the GUI user by issuing the set guiuser command. See GuardAPI Reference Overview or Set guiuser
Authentication for additional information.
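As a sketch of that login sequence (the GUI username jsmith is hypothetical, and the exact set guiuser syntax should be confirmed in Set guiuser Authentication):

```
ssh guardcli1@guardium-appliance.example.com
set guiuser jsmith
# GuardAPI commands are now available for this session
```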
Password Hardening
To meet various auditing and compliance requirements, the following password rules are enforced for CLI accounts:
For the cli account, either use the supplied cli password or set a strong password to protect the account. If you have just rebuilt the system from an
installation DVD, the Guardium cli user has a default password of guardium; change that password immediately.
An expiration period is enforced for the cli account and the five guardcli accounts; the default is 90 days. When a password expires, you are required to change it
during the login process.
Passwords must be a minimum of eight characters in length.
Passwords must contain at least one character from three of the following four classes:
Any upper-case letter
Any lower-case letter
Any numeric (0,1,2,...)
Any non-alphanumeric (special) character
Once access is granted through the use of a separate GUI username (guiuser), the CLI audit trail shows the CLI_USER+GUI_USER pair used for login.
CLI users cannot be authenticated through LDAP: these are administrative accounts and must be able to log in regardless of connectivity to an LDAP
server.
The welcome message will add further information if the internal database is down due to maintenance or during an upgrade.
If this is the case, the number of CLI commands available will be limited.
The internal database on the appliance is currently down and CLI will be working
in "recovery mode"; only a limited set of commands will be available.
The CLI commands that are available for use during recovery mode are as follows:
Syntax
Parameters
user@host:/path/filename For the file transfer operation, specifies a user, host, and full path name for the backup keys file. The user you specify must have the authority to
write to the specified directory.
Note: For more information about the shared secret use, see System Shared Secret.
Syntax
Note: For more information about the shared secret use, see System Shared Secret.
aggregator debug
Starts or stops writing debugging information relating to aggregation activities. Use these commands only when directed to do so by Guardium® Support, and be sure to
issue the stop command after you have gathered enough information.
Syntax
Syntax
Syntax
Parameters
Use the all option to move all files from the /var/dump directory ending with the suffix .decrypt_failed, or use the filename option to identify a single file to be moved.
Note: After moving the failed files, but before a restore or import operation runs, be sure that the system shared secret matches the shared secret used to encrypt the
exported or archived file.
Parameters
user@host:/path/filename For the file transfer operation, specifies a user, host, and full path name for the backup keys file.
Note: For more information about the shared secret use, see System Shared Secret.
Syntax
Use this CLI command to clean orphans on aggregators. The cleanup is scheduled to run on data older than 3 days and runs at the end of a purge.
The process is started by the user with this CLI command, so in the case of a large database, the user is aware of how long the process takes.
It covers all of the data on the aggregator, but runs entirely on a separate temporary database.
Note: On a collector, orphan cleanup is not changed: it runs with the small cleanup tactic and is invoked before export/archive.
store aggregator orphan_cleanup_flag <flag>, where flag is one of small, large, or analyze.
If set to small, large, or analyze, the orphan cleanup script is invoked after each run of the merge process.
Orphan cleanup on an aggregator does not remove orphan records from the last 3 days; it removes all orphans older than 3 days.
If small is specified, the process does not interfere with audit processes that can start after the merge is completed.
If large is specified, the process runs faster when there is a large number of orphans, but its run might interfere with audit processes: audit
processes will not start until orphan cleanup is complete.
If analyze is specified, the process first evaluates the number of orphans and uses the large tactic if more than 20% of the records are orphans. As with large, audit
processes will not start until orphan cleanup is complete.
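For example, to let the system choose between the small and large tactics based on the orphan count:

```
store aggregator orphan_cleanup_flag analyze
# uses the large tactic if more than 20% of the records are orphans
```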
Syntax
Show command
store archive_static_table
Use this CLI command to turn the archive static table behavior on or off.
Show command
show archive_static_table
store next_export_static
The aggregation software makes a distinction between two types of tables:
static tables - grow slowly over time; the data in these tables is not time dependent (GDM_OBJECT, GDM_FIELD, GDM_SENTENCE, GDM_CONSTRUCT, etc.).
dynamic tables - grow quickly over time; the data is time dependent (GDM_CONSTRUCT_INSTANCE, GDM_SESSION, GDM_CONSTRUCT_TEXT, etc.).
As stated previously, the data in static tables is not time dependent, while the time-dependent data in dynamic tables is linked to static data. Because static tables can grow
very large, the export/archive process does not archive the full static data every day: it archives the full static data the first time it runs and on the first day of each
month. On any other day, it archives only the static data that changed during that day. For this reason, when restoring data from any day, the first of the month must also
be restored; this ensures that the full static data is present and references are not broken.
Syntax
Show command
show next_export_static
store last_used
Use this CLI command during purging and aggregation.
Syntax
Show command
All Tables - 1
Only GDM_Object - 2
None - 0 (Default)
Note: Set the CLI command, last_used logging, prior to using this command.
When the LAST_USED column is updated by the Sniffer in Static tables, this column can be referenced when purging data from these tables or when archiving and
exporting data from these tables.
The value of this column can also be updated when importing data to an aggregator.
1. By default, the system behaves as it did in previous versions: the LAST_USED column is not considered in purge, archive, and export, and is not updated on import;
archive and export are done by TIMESTAMP.
2. LAST_USED_FOR_OBJECT_ONLY is considered only for GDM_OBJECT table.
3. LAST_USED is considered for GDM_CONSTRUCT, GDM_SENTENCE, GDM_OBJECT, GDM_FIELD, GDM_JOIN, GDM_JOIN_OBJECT
Note: Options 2 and 3 are only enabled when the sniffer is configured to collect and update this data.
Note: Validations performed only on a collector - If ADMINCONSOLE_PARAMETER.LAST_USED_LOGGING=0, then only TIMESTAMP is allowed. If
ADMINCONSOLE_PARAMETER.LAST_USED_LOGGING=1 then all parameters are allowed. If ADMINCONSOLE_PARAMETER.LAST_USED_LOGGING=2, then TIMESTAMP
and LAST_USED_FOR_OBJECT_ONLY are allowed. On an aggregator, all parameters are allowed.
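The command syntax is not shown above; based on the validation note, the command presumably takes the mode as its argument. Treat the following as an assumption and confirm with the CLI help:

```
store last_used TIMESTAMP
# default behavior from previous versions
store last_used LAST_USED_FOR_OBJECT_ONLY
# considered only for the GDM_OBJECT table
store last_used LAST_USED
# considered for all of the listed static tables
```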
Syntax
Show command
store archive_table_by_date
Use the CLI command store archive_table_by_date only on aggregators. Use this CLI command either to archive all static tables on a daily basis, or to archive static table
data only the first time the archive runs and on the first day of each month. By default, archive data on an aggregator runs with full static tables daily. If this CLI command is set
to ENABLE, static tables are archived only on the first day of the month or the first time archive data runs.
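For example, to archive static tables only on the first of the month:

```
store archive_table_by_date ENABLE
# static tables are now archived only on the first day of the month
# or the first time archive data runs
```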
store run_cleanup_orphans_daily
Use this CLI command to clean all of the old construct records that are no longer in use. This CLI command is relevant for collectors and aggregators and is enabled by
default.
store run_cleanup_orphans_daily
Show command
show run_cleanup_orphans_daily
Show command
show max_number_collector
store purge_age_period
Set the period of purge age.
Show command
show purge_age_period
The Alerter subsystem transmits messages that have been queued by other components - correlation alerts that have been queued by the Anomaly Detection subsystem,
or run-time alerts that have been generated by security policies, for example. The Alerter subsystem can be configured to send messages to both SMTP and SNMP servers.
Alerts can also be sent to syslog or custom alerting classes, but no special configuration is required for those two options, beyond starting the Alerter. There are four types
of Alerter commands. Use the links in the lists, or browse the commands, which are listed in alphabetical sequence following the lists.
stop alerter
restart alerter
store alerter state operational
store alerter state startup
store alerter poll
store anomaly-detection poll
store anomaly-detection state
restart alerter
Restarts the Alerter. You can perform the same function using the store alerter state operational command to stop and then start the alerter:
Syntax
restart alerter
stop alerter
Stops the Alerter.
You can perform the same function using the store alerter state operational command:
Syntax
stop alerter
Syntax
Syntax
Show Command
Syntax
Show Command
Syntax
Show Command
Syntax
Show Command
Syntax
auth: Username/password authentication. When used, set the user account and password using the following commands:
Syntax
Show Command
Syntax
Show Command
Syntax
Show Command
Syntax
Show Command
Syntax
Show Command
Syntax
Syntax
Show Command
store syslog-trap
Usage: store syslog-trap ON | OFF
Note: Guardium does not provide certificate authority (CA) services and does not ship systems with certificates other than the one installed by default. A customer who
wants their own certificate must contact a third-party CA (such as VeriSign or Entrust).
Certificate Expiration
Expired certificates will result in a loss of function. Run the show certificate warn_expire command periodically to check for expired certificates. The command displays
certificates that will expire within six months and certificates that have already expired. The user interface will also inform you of certificates that will expire. To see a
summary of all certificates, run the command show certificate summary.
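A periodic check from the CLI might look like:

```
show certificate summary
# summary of all certificates on the system
show certificate warn_expire
# lists certificates that have expired or that expire within six months
```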
New Certificates
Note: Do not perform this action until after the system network configuration parameters have been set.
create csr
Creates a Certificate Signing Request (CSR) for the Guardium system. Do not perform this action until after the system network configuration parameters are set. Within the
generated CSR, the common name (CN) is created automatically from the host and domain names assigned.
create csr gim creates a certificate request for gim (GIM Listener).
Syntax
restore certificate gim backup restores the gim certificate to the last saved sniffer gim certificate.
restore certificate gim default restores the gim certificate to the default gim certificate that was supplied with the system.
Syntax
restore certificate keystore backup restores the certificate keystore to the last saved certificate keystore.
restore certificate keystore default restores the certificate keystore to the default value that was supplied with the system.
Syntax
restore certificate mysql backup restores the last saved mysql certificate.
Syntax
restore certificate mysql backup client ca restores the last saved client certificate authority (CA) certificate.
restore certificate mysql backup client cert restores the last saved client certificate.
Syntax
restore certificate mysql backup server ca restores the last saved server certificate authority (CA) certificate.
restore certificate mysql backup server cert restores the last saved server certificate.
Syntax
restore certificate mysql default client ca restores the mysql client ca certificate to the default version that was supplied with the system.
Syntax
restore certificate mysql default server ca restores the mysql server ca certificate to the default version that was supplied with the system.
restore certificate mysql default server cert restores the mysql server certificate to the default version that was supplied with the system.
Syntax
restore certificate sniffer backup restores the sniffer certificate to the last saved sniffer certificate.
restore certificate sniffer default restores the sniffer certificate to the default sniffer certificate.
Syntax
restore cert_key mysql backup client restores the last saved mysql client cert key.
restore cert_key mysql backup server restores the last saved mysql server cert key.
Syntax
restore cert_key mysql default client restores the default mysql client cert key that was supplied with the system.
restore cert_key mysql default server restores the default mysql server cert key that was supplied with the system.
Syntax
show certificate
Displays the summary of all certificates, certificate information, alias list, certificates in the keystore, and expired or soon-to-expire certificates.
The authenticity of this certificate can be verified by a Guardium CA public key (contained in the CA certificate that is distributed with the client software). This certificate has
either a customer company-unique CN (Common Name; for example, acme.com) or a machine-specific CN (for example, x4.acme.com). This permits any client to
establish not only that the Guardium system has a valid certificate (it is a real Guardium system), but also that it is the specific Guardium system (or a member of a set of
Guardium systems) that the client is supposed to connect to.
show certificate gim displays all GIM certificate information (GIM Listener).
show certificate keystore displays all certificates in the keystore and an alias list for you to select which certificate to show.
show certificate mysql displays client and server mysql certificate information.
show certificate stap displays all S-TAP certificate information in the keystore.
show certificate warn_expired displays all expired certificates or certificates that expire in 6 months.
Syntax
show certificate keystore alias displays an alias list for you to select which certificate to show.
Syntax
Parameters
Syntax
store certificate
Stores a certificate. Paste your certificate in PEM format and include the BEGIN and END lines.
Parameter
store certificate alias stores a certificate in the keystore after a CSR has been generated. This CLI command supports the CLI command, create csr alias, which allows the
user to create an intermediate trusted certificate from scratch. Use both of these commands to create intermediate trusted certificates. These intermediate trusted
certificates can then be used to sign other certificates, if required.
store certificate gim will allow the custom gim certificate to be stored in keystore by prompting for certificate, key (optional) and CA certificate (GIM Listener).
store certificate gui stores the tomcat certificate in the keystore after a CSR has been generated.
store certificate keystore asks for a one-word alias to uniquely identify the trusted certificate and store it in the keystore.
Syntax
store certificate mysql client ca stores client certificate authority (CA) certificates.
Syntax
store certificate mysql server ca stores server certificate authority (CA) certificates.
Syntax
store cert_key
Stores the system certificate key and the certificate key of a mysql client and server.
store cert_key mysql stores the certificate key of a mysql client and server.
Syntax
store cert_key mysql client stores the certificate key of a mysql client.
store cert_key mysql server stores the certificate key of a mysql server.
Syntax
store cert_key sniffer console stores the sniffer certificate key by pasting the key into the console.
store cert_key sniffer import stores the sniffer certificate key by importing the key file.
Syntax
? (question mark)
When entering a command, enter a question mark at any point to display the arguments.
Syntax
<partial_command> ?
Example
ok
CLI>
Syntax
delete unit type [manager | standalone] [aggregated] [netinsp] [network routes static] [stap] [mainframe]
commands
Displays an alphabetical listing of all CLI commands.
Syntax
commands
debug
Syntax
eject
This command unmounts and ejects the CD-ROM, which is useful after upgrading or re-installing the system, or after installing patches that were distributed via CD-ROM.
Syntax
eject
delete scheduled-patch
To delete a patch install request, use the CLI command delete scheduled-patch
See the CLI command, store system patch install for further information on patch installation.
Syntax
Show Command
show support-email
iptraf
IPTraf is a network statistics utility distributed with the underlying operating system. It gathers a variety of information such as TCP connection packet and byte counts, interface statistics and activity indicators, TCP/UDP traffic breakdowns, and LAN station packet and byte counts. The IPTraf User Manual is available on the internet at the following location (it may be available at other locations if this link does not work):
http://iptraf.seul.org/2.7/manual.html
Syntax
iptraf
license check
Indicates if the installed license is valid. Use this command after installing a new product key.
Syntax
license check
ping
Sends ICMP ping packets to a remote host. This command is useful for checking network connectivity. The value of host can be an IP address or host name.
Syntax
ping <host>
quit
Exits the command line interface.
Syntax
quit
recover failed
Command to restore failed CSV/CEF/PDF transfer files, placing the files back into the export folder for another export attempt.
Syntax
register management
Registers the Guardium system for management by the specified Central Manager. The pre-registration configuration of this Guardium system is saved, and that
configuration will be restored later if the unit is unregistered.
Syntax
port is the port number used by the Central Manager (usually 8443).
restart gui
Restarts the IBM® Guardium® Web interface. To optionally schedule a restart of the GUI once a day or once a week, use additional parameters. HH is hours 01-24. MM is
minutes 01-60. W is the day of the week, 0-6, Sunday is 0. If HHMM is listed twice, only the last entry is used. The parameter clear deletes the scheduled time.
In order to restart the Classifier and Security Assessments processes, run the restart gui command from the CLI (not from the GUI).
Running restart GUI from the GUI only restarts the web services. It is necessary to run the restart GUI command from the CLI to fully restart all processes, including
Classifier and Security Assessments processes. It is necessary to run the restart GUI command from the CLI for each managed unit to restart the Classifier listener.
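The schedule arguments described above (HH 01-24, MM 01-60, W 0-6 with Sunday as 0) can be validated with a small sketch; the helper below is hypothetical, with the ranges taken from the text:

```shell
# Hypothetical validator for the restart gui schedule arguments:
# HH is hours 01-24, MM is minutes 01-60, W is day of week 0-6 (0 = Sunday).
valid_schedule() {
  hh=$1; mm=$2; w=$3
  [ "$hh" -ge 1 ] && [ "$hh" -le 24 ] &&
  [ "$mm" -ge 1 ] && [ "$mm" -le 60 ] &&
  [ "$w"  -ge 0 ] && [ "$w"  -le 6 ]
}

valid_schedule 23 30 0 && echo "schedule ok"
```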
Syntax
restart stopped_services
Use this CLI command to restart services previously stopped with the store auto_stop_services_when_full CLI command.
Syntax
restart stopped_services
restart system
Reboots the Guardium system. The system will completely shut down and restart, which means that the cli session will be terminated.
Syntax
restart system
show buffer
This command displays a report of buffer use for the inspection engine process. If you are experiencing load problems, IBM Technical Support may ask you to run this
command.
Syntax
show build
Displays build information for the installed software (build, release, snif version).
Syntax
show build
show defrag
Identifies fragmented packets and attempts to reconstruct them before they reach the network sniffing process. Defragmentation is relevant only for network sniffing through a SPAN or TAP device.
Syntax
show defrag
Parameters
Release level - The release level, specified as a number of seconds, up to a maximum of 2^31 (2147483648).
Delete command
show password
This CLI command displays password functions. Password disable [0|1] removes the use of a password by storing the value 1. Password expiration [CLI|GUI] [number of days] displays the number of days between required password changes; the default is 90 days. Password validation [ON|OFF] determines how strong the password must be.
Syntax
Syntax
Syntax
Syntax
Note: See show system key, store system key in Certificate CLI commands.
Syntax
stop gui
Stops the Web user interface.
Syntax
stop gui
stop system
Stops and powers down the appliance.
Syntax
stop system
store apply_user_hierarchy
Use this CLI command to apply user hierarchy to audit receiver.
If ON, the non-audit-group receiver (any receiver other than the audit group receiver, whether normal or role) will only see audit results with a group IP beneath the receiver's hierarchy, including the receiver.
Syntax
Show command
show apply_user_hierarchy
In order to run the simulation, the original traffic must be replayed through the rules engine (with the policy that needs to be tested). This requires some of the original SQL on the appliance to be saved with its values. Enabling or disabling allow_simulation instructs IBM Guardium to save, or not save, any SQL or values whatsoever.
Syntax
Show command
show allow_simulation
store alp_throttle
Use this CLI to regulate the amount of data that will be logged.
Default is 0.
Example
store analyzer
Ignore session: The current request and the remainder of the session will be ignored. This action does log a policy violation, but it stops the logging of constructs and will
not test for policy violations of any type for the remainder of the session. This action might be useful if, for example, the database includes a test region, and there is no
need to apply policy rules against that region of the database.
This command sets the value of the timeout of the ignore session and sets the duration of the ignore session.
Syntax
Show command
show analyzer
store auto_stop_services_when_full
When ON, internal services will stop if the database exceeds the 90% full threshold.
Inspection Engine, Classification, and other Collection-related services will stop. Also, Aggregation import/restore will not process any new files.
To remediate, use the various Support commands (support clean audit_task, support clean log_files, support clean DAM_data, support show large_files) to analyze and
manually purge large tables.
Syntax
Show command
show auto_stop_services_when_full
Syntax
Usage: store connect_oracle_parser [state], where state is ON/OFF. ON is connect and OFF is disconnect.
Show command
store csv_fetch_size
CSV_MAX_SIZE is used to control the size of the CSV download that is retrieved when clicking Download all records from the report export menu.
Note: csv_max_size requires a restart of the GUI for changes to take effect. csv_fetch_size does not require a restart of the GUI for changes to take effect.
Show command
Usage
store csv_max_size
CSV_FETCH_SIZE and CSV_MAX_SIZE are GLOBAL_PROFILE parameters that can only be modified via CLI.
CSV_MAX_SIZE is used to control the size of the CSV download that is retrieved when clicking Download all records from the report export menu.
Note: csv_max_size requires a restart of the GUI for changes to take effect. csv_fetch_size does not require a restart of the GUI for changes to take effect.
Show command
Usage
store default_queue_size
Use this CLI command to control the configuration parameter ADMINCONSOLE_PARAMETER.DEFAULT_QUEUE_SIZE. The default is 25. The range is 25-300.
Syntax
Show command
show default_queue_size
store defrag
Use this command to restore defragmentation defaults, or to set the defragmentation size. After entering this command, you must issue the restart inspection-core command for the changes to take effect. Defragmentation is relevant only for network sniffing through a SPAN or TAP device.
Syntax
store defrag [default | size <s> interval <i> trigger <t> release <r>]
Show command
show defrag
Parameters
r - The release level, specified as a number of seconds, up to a maximum of 2^31 (2147483648).
store delayed_firewall_correlation
Use this CLI command to hold a user connection until the decryption correlation has taken place.
Show command
show delayed_firewall_correlation
store full-bypass
This command is intended for emergency use only, when traffic is being unexpectedly blocked by the Guardium system. When on, all network traffic passes directly
through the system, and is not seen by the Guardium system.
When using this command, you will be prompted for the admin user password.
Syntax
store gdm_analyzer_rule
Analyzer rules - Certain rules can be applied at the analyzer level. Examples of analyzer rules are: user-defined character sets, source program changes, and firewall watch
or firewall unwatch modes. In previous releases, policies and rules were applied at the end of request processing on the logging state. In some cases, this meant a delay in
decisions based on these rules. Rules applied at the analyzer level means decisions can be made at an earlier stage.
Note: When applying analyzer rules on source program changes, if the source program does not match the exact pattern, add a .* at the end of the pattern to deal with the possibility that the source program has a trailing space (unseen by the user).
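The effect of the trailing space, and of appending .* to the pattern, can be reproduced with any regex tool (the program name SQLPLUS here is only an illustration, not a Guardium-specific value):

```shell
# A trailing space (invisible to the user) defeats an exact pattern;
# appending .* makes the pattern tolerate it.
prog='SQLPLUS '   # hypothetical source program name with a trailing space
echo "$prog" | grep -qE '^SQLPLUS$'   && echo exact-match || echo exact-miss
echo "$prog" | grep -qE '^SQLPLUS.*$' && echo wildcard-match
```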
Syntax
Use the CLI command, show gdm_analyzer_rule, to see a list of GDM analyzer rules.
Show command
show gdm_analyzer_rule
Use the Guardium CLI to add an analyzer rule for a direct regular expression to Mask UID Chain pattern.
Rule type:
3. Send verdict
4. HADOOP exclude
9. Transform string
ok
Usage
Show command
show gdm_http_session_template
Attempting to retrieve the template information. It may take time. Please wait.
This rule will be displayed ONLY if the following CLI command is executed:
Usage
Default is 60 seconds.
Show command
Syntax
Show command
Sets the TCP/IP port number on which the IBM Guardium appliance management interface accepts connections. The default is 8443. n must be a value in the range of
1024 to 65535. Be sure to avoid the use of any port that is required or in use for another purpose.
Set timeout of session - Sets the length of time (in seconds) with no activity before timeout. After the no-activity-timeout has been reached, it is necessary to log on again
to IBM Guardium. The default length is 900 seconds (15-minutes).
Set Cross-Site Request Forgery (CSRF) protection (ON | OFF) - See the section CSRF and 403 Permission Errors in the Getting Started with GUI help topic. The default value is enabled on an upgraded system. Trying to use certain web browser functions (for example, F5/CTRL-R/Refresh/Reload, Back/Forward) will result in a 403 Permission Error message.
The new session timeout value will take effect only after the next GUI restart.
Syntax
Show command
Displays the GUI port number, state, session timeout (in seconds) and/or CSRF status.
Syntax
The response is
Restarting gui
Stopping.......
Safekeeping xregs
ok
The act of changing the cache setting will automatically restart the Guardium web server.
For Firefox, in order for the setting to take effect, the browser cache has to be cleared.
Syntax
Show command
Syntax
Show command
Syntax
Show command
Syntax
Show command
Syntax
Show command
Syntax
Show Command
store keep_psmls
Use this CLI command to retain the current layouts/profiles/portlets created by the users of the Guardium application. Set this CLI command to ON before an upgrade, and the psmls from the previous version will be retained.
Syntax
show keep_psmls
store ldap-mapping
Store LDAP mapping parameters - allow a custom mapping for the LDAP server schema. This command permits customized mapping to the LDAP server schema for email,
firstname and lastname attributes. The paging parameter is used to facilitate transfer between any LDAP server type (Active Directory, Novell Directory, Open LDAP, Sun
One Directory, Tivoli® Directory). If the paging parameter is set to on, but paging is not supported by the server, the search is performed without paging.
Example for paging: If the CLI command ldap-mapping paging is set to ON, then Microsoft Active Directory will download the maximum number of users defined by the limit value on the LDAP Import configuration screen. If ldap-mapping paging is set to OFF, then Active Directory will download only up to 1000 users no matter what the limit value is set to. All other LDAP server configurations must use the CLI command ldap-mapping paging off in order to download users up to the set limit value.
Note: Each time you change the CLI ldap-mapping attributes you also need to select Override Existing Changes on the LDAP Import configuration screen in IBM Guardium
GUI before updating. This action must occur each time you change the CLI ldap-mapping email, firstname or lastname attributes and import LDAP users.
Show commands
A restart of the GUI from the CLI is required for new parameters to take effect.
Examples
If the attributes are written as follows, the mapping process will use the first attribute it finds. If this is not what you want, use one of the examples to map to specific
attributes.
store license
This command applies a new license key to the appliance.
A license key may be one of two kinds: override type or append type. An override-type license replaces the currently installed license, while an append-type license is appended to the currently installed license. Append-type licenses can only add functionality: new functions may be enabled and, when relevant, expiration dates are updated, the remaining number of scans and datasources is increased, and certain numeric fields in the license, such as the number of managed units, are replaced.
Syntax
store license
Show Command
show license
Example
When using the store license command, you will be prompted to paste the new product key:
Paste the string received from IBM Guardium and then press Enter.
Copy and paste the new product key at the cursor location, and then press Enter. The product key contains no line breaks or white space characters, and it always ends
with (and includes) a trailing equal sign. A series of messages will display, ending with:
We recommend that the machine be rebooted at the earliest opportunity in order to complete the license updating process.
ok
CLI>
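The stated key format (a single string with no line breaks or white space, ending in an equal sign) suggests base64; a sanity check along those lines could look like the sketch below. Both the helper and the sample key are our own illustrations, not a documented Guardium validator or a real key:

```shell
# Assumed sanity check for a pasted product key: one base64-style token,
# no whitespace, ending with '='. The sample key below is made up.
key_looks_valid() {
  printf '%s' "$1" | grep -qE '^[A-Za-z0-9+/]+=+$'
}

key_looks_valid 'QUJDREVGMTIzNDU2Nzg5MA==' && echo "key format ok"
```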
Syntax
Show command
Syntax
Note: A restart of the inspection engine is required after the store command is issued to apply change.
Show command
A join table is a way of implementing many-to-many relationships. Use join entity to join tables in a SELECT SQL statement.
Syntax
Show command
Show command
Syntax
Show command
Syntax
Show command
store max_audit_reporting
Displays the audit report threshold. The default is 32. When defining reports in an Audit Process, the number of days of the report (defined by the FROM-TO fields) should not exceed a certain threshold (one month by default). See the Workflow Process, Central Management and Aggregation section of the Compliance Workflow Automation help topic for further information on using this CLI command.
Syntax
store max_audit_reporting
Show command
show max_audit_reporting
store max_result_set_size
Stores the max_result_set_size; the default value is 100 (range 1 to 65535). This aids in tuning the inspection engine when observing returned data. This command sets the limit for total result set size. This parameter works for any type of database. If the value is beyond the defined threshold, the analyzer will not retrieve data to calculate the records affected value.
Syntax
Show command
show max_result_set_size
store max_result_set_packet_size
Stores the max_result_set_packet_size; the default value is 32 (range 1 to 65535). This aids in tuning the inspection engine when observing returned data. This command sets the limit for packet size in a response. This parameter works for any type of database. If the value is beyond the defined threshold, the analyzer will not retrieve data to calculate the records affected value.
Syntax
Show command
show max_result_set_packet_size
store max_tds_response_packets
Stores the max_tds_response_packets; the default value is 5 (range 1 to 65535). This aids in tuning the inspection engine when observing returned data. This command sets the limit for the number of packets in a response. This parameter works for MS SQL only. If the value is beyond the defined threshold, the analyzer will not retrieve data to calculate the records affected value.
Syntax
Note: max_tds_response_packets (Tabular Data Stream) is only applicable for MS SQL Server and Sybase.
show max_tds_response_packets
Syntax
Show Command
Use the CLI command, store monitor custom_db_usage to set the state to on and to specify a time to run this job.
Syntax
Use the CLI command, store monitor gdm_statistics to get information about the Unit Utilization. Default is 1 (run the script every hour).
Syntax
Show Commands
store mysql_utf8mb4
Enable support for 4-byte UTF-8 encoding (utf8mb4).
This command modifies Guardium sniffer processes and internal databases to correctly capture and store 4-byte UTF-8 characters. Enabling utf8mb4 may be useful if
datasources in your environment contain 4-byte characters, for example as used for Chinese, Japanese, and Korean ideographs.
The additional processing required to capture and store 4-byte characters will negatively impact the performance of your Guardium system. For this reason, do not
enable utf8mb4 unless you require 4-byte character support in your environment.
If support for 4-byte UTF-8 encoding is required in an aggregated or centrally managed environment, utf8mb4 should be enabled on all Guardium systems in the
environment. Enabling utf8mb4 on only some systems in the environment may create problems, such as failed aggregation or incorrectly displayed reports.
Data collected or aggregated before enabling utf8mb4 will still be available and function correctly after enabling utf8mb4.
CAUTION:
Once 4-byte UTF-8 support has been enabled using the store mysql_utf8mb4 command, the change cannot be undone or reversed. After enabling utf8mb4 on a Guardium system, the only way to remove support for 4-byte UTF-8 characters is to completely rebuild the system.
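Whether your data actually contains 4-byte UTF-8 characters can be checked outside Guardium by counting bytes: characters above U+FFFF (such as some rare CJK ideographs) take 4 bytes, while most common CJK characters take only 3 and do not require utf8mb4. A small illustration:

```shell
# 'abc' (3 bytes) plus U+2000B, a CJK ideograph encoded as the 4 bytes
# F0 A0 80 8B, gives 7 bytes in total.
bytes=$(printf 'abc\360\240\200\213' | wc -c)
echo $bytes
```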
Syntax
store mysql_utf8mb4
Show Command
show mysql_utf8mb4
Example
Syntax
Show Command
store pdf-config
Use this command to change the PDF font size and orientation of the PDF body content (excluding header/footer).
The orientation value is 1 (for landscape orientation) or 2 (for portrait). The default value is 1.
The change takes effect immediately after typing the CLI command and pressing the Enter key.
Syntax
Show Command
Syntax
1 Default
2 Multi-language
Please select the option (1,2, or q to quit)
Show command
store populate_from_query_maxrecs
Sets the maximum number of records that can be used to populate groups and aliases from a query.
Use caution when setting a maximum records value via this CLI command. Setting it too high may result in incomplete populate group from query processes. The
maximum threshold is dynamic and dependent on the system load and memory utilization. This CLI command is limited to a high value of 200000.
Syntax
Show command
show populate_from_query_maxrecs
Syntax
Show Command
Note: The value of number of days will be set to the default (90 days) when the unit type changes between managed unit/Manager/standalone unit.
Syntax
Show Command
Example
Assume you want to keep an Event Log for 30 days. First issue the show purge objects age command to determine the index (do not use the table; your list may be
different). Then enter the store purge object command.
4. Assessment Tests, 7
8. Comment History, 60
...
ok
ok
store quartz_thread_num
This CLI command is for use by Technical Support.
The Java™ Virtual Machine allows the application to have multiple threads. A thread is a piece of program execution.
Use the store quartz_thread_num CLI command to set the number of threads that can run at the same time.
Use this command to ease conflict between too many threads running at the same time.
The show quartz_thread_num CLI command displays the number of Quartz scheduler threads that run at the same time.
Syntax
USAGE: store quartz_thread_num <number>, where number is in range 3 to 15 with default value = 5.
Show command
show quartz_thread_num
org.quartz.threadPoll.threadCount= 5
store remotelog
Controls the use of remote logging. In addition to system messages, statistical alerts and policy rule violation messages can be written to syslog (optionally). For each
facility.priority combination, messages can be directed to a specific host. This command can also control the use of remote logging through an optional port number and
can designate a mandatory protocol (UDP or TCP). This command works with any syslog implementation that supports TCP.
If you enable remote logging, be sure that the receiving host has enabled this capability (see the note).
Syntax
store remotelog [help|add|clear] facility.priority host [optional port number:mandatory protocol (UDP or TCP)]
add Adds the specified facility.priority combination to the list of messages to be sent to the specified remote host.
clear Clears the specified facility.priority combination from the list of messages being sent to the specified host.
facility Use daemon. The majority of messages issued by the IBM Guardium appliance will be from the daemon facility.
priority May be one of the following: alert, all, crit, debug, emerg, err, info, notice, warning.
The standard IBM Guardium severity codes for alerts and violations map as follows:
INFO / info
LOW / warning
MED / err
HIGH / alert
optional port number
Some SIEM products may process the IETF RFC 5424 style syslog messages better than the default. This command changes the format. If the
format is changed 'restart rsyslog' must be run for this to take effect.
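The difference between the default BSD-style format and the IETF RFC 5424 style can be sketched as follows. The hostnames, app names, and timestamps are illustrative, not actual Guardium output; the priority value 30 encodes facility daemon, severity info:

```shell
# BSD-style (RFC 3164) syslog line: <PRI>TIMESTAMP HOST TAG: MSG
printf '<%d>%s %s %s: %s\n' 30 'Oct 11 22:14:15' guard1 guard_sender 'alert text'
# RFC 5424 line: <PRI>VERSION TIMESTAMP HOST APP PROCID MSGID SD MSG
printf '<%d>1 %s %s %s - - - %s\n' 30 '2003-10-11T22:14:15.003Z' guard1 guard_sender 'alert text'
```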
To configure the receiving system to accept remote logging, edit /etc/sysconfig/syslog on that system to include the -r option. For example:
SYSLOGD_OPTIONS=-r -m 0
/etc/init.d/syslog restart
/var/log/messages
Common Criteria requires that all communications from the Guardium system to a remote syslog server be encrypted. Communications to the remote syslog server cannot be in clear text.
CLI commands
show remotelog
store remotelog ?
Possible facilities: all auth authpriv cron daemon ftp kern local0 local1 local2 local3 local4 local5 local6 local7 lpr mail mark news security syslog user uucp
Possible priorities: alert all crit debug emerg err info notice warning
Note:
If you want to send encrypted remote log messages to the server, the rsyslog configuration on the server needs to accept encrypted messages.
When switching from one mode to the other on the same remote server, modify the configuration file to match the designated mode and restart the remote service.
Example
Use the example to store the certificate as ca.pem in /etc/pki/rsyslog/. This opens a new prompt and asks the user to paste the certificate.
Alerts and other messages can be forwarded to a remote syslog receiver, such as a SIEM system. This message traffic can be encrypted from the collector or
aggregator to the remote syslog receiver.
Note: Encryption only works in TCP mode. By default, syslog forwarding uses UDP, so if encryption is required, specify TCP for the CLI command, store remotelog.
The procedure documented here must be repeated on every collector or aggregator that is sending traffic to the encrypted host.
The certificate used by the remote syslog receiver is needed. Store that certificate on the Guardium system.
1. Have available the public certificate from the CA (Certificate Authority), from Verisign, Thawte, Geotrust, GoDaddy, Comodo, in-house, etc.
2. Log into the CLI on the individual Guardium system from which to send the encrypted syslog. Before executing the command, obtain the appropriate
certificate (in PEM format) from the CA, and copy the certificate, including the Begin and End lines, to your clipboard.
3. Enter the following CLI command:store remotelog add encrypted daemon.all <IP address of encrypted remote host>:<port number of remote host> tcp
Note: This example uses daemon because Guardium sends its application events using daemon.
4. The following instructions will be displayed:
Please paste your CA certificate, in PEM format. Include the BEGIN and END lines, and then press CTRL-D.
Paste the PEM-format certificate to the command line, then press CTRL-D. Guardium will take this input and store it as /etc/pki/rsyslog/ca.pem.
A message will follow, informing you of the success or failure of the store operation.
When successful, Guardium can send encrypted traffic to the remote system with the correct key.
5. Repeat the procedure for each collector and aggregator that is sending syslog traffic to the encrypted host.
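The paste-until-CTRL-D step can be sketched as follows. This is an assumption about the mechanics, with /tmp/ca.pem standing in for /etc/pki/rsyslog/ca.pem and an abbreviated placeholder certificate:

```shell
# Sketch: reading a pasted certificate until EOF (CTRL-D) and saving it.
read_cert() {
  cat > "$1"    # returns when the paste ends (CTRL-D on a terminal)
}

printf -- '-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n' |
  read_cert /tmp/ca.pem
grep -c 'CERTIFICATE' /tmp/ca.pem    # both the BEGIN and END lines were stored
```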
store s2c
Sets several configurable parameters for ADMINCONSOLE. These parameters are used for throttling server-to-client (S2C) traffic.
Note: Use this CLI command only when directed by IBM Guardium Technical Services.
ANALYZER_S2C_IGNORE = {0,1,2,3}
Syntax
store s2c
where 0<=I<=3 (level), 0<=M<=2147483647 (K/sec), and 1<=T<=2147483647 (seconds), OR store throttle default
The new configuration will be effective once the CLI command restart inspection-core is executed.
Show command
show s2c
         Ignore:         0
-------------------
ANALYZER_S2C_IGNORE (0,1,2,3) - Switch s2c throttling mechanisms on/off based on scenarios. This flag is based on bits. 0 = the s2c throttling mechanism is OFF. 1 =
turns on the function described in scenario 1, 2 = turns on the function described by scenario 2. 3 = turns both on.
MAX_S2C_VELOCITY - maximum rate (K bytes/sec). If this rate is exceeded, the analyzer sends an ignore session or ignore session reply request to S-TAP® or the sniffer.
MAX_S2C_INTERVAL - time interval in seconds (default 30 sec.) between possible CLI commands, ignore session, or ignore session reply, requests.
Scenario 1
Scenario 2
If the incoming traffic has a high S2C rate (>MAX_S2C_VELOCITY), then the throttling mechanism sends an ignore session reply request to S-TAP for local database connections. If for some reason S-TAP was not affected, the analyzer will send the ignore session reply request again after MAX_S2C_INTERVAL seconds. To switch this throttling mechanism on, set the ANALYZER_S2C_IGNORE flag to 2.
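The scenario-2 decision rule reduces to a simple rate comparison, which can be modeled as follows (toy values; the real thresholds are the ADMINCONSOLE parameters described above):

```shell
# Toy model of the scenario-2 decision: when the observed S2C rate exceeds
# MAX_S2C_VELOCITY, an "ignore session reply" request would be issued.
MAX_S2C_VELOCITY=1024   # K bytes/sec (hypothetical setting)
observed=2048           # K bytes/sec (hypothetical measurement)
if [ "$observed" -gt "$MAX_S2C_VELOCITY" ]; then
  echo "ignore session reply"
fi
```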
store sender_encoding
Use this CLI command to encode outgoing messages (email and SNMP traps) in different encoding schemes; previously, everything was encoded in UTF8.
For example, a Guardium customer wanted to encode all outgoing SNMP messages in SJIS, an alternative Japanese encoding.
Note: If the conversion fails, either because (a) the specified encoding scheme is invalid, or (b) the characters to be encoded cannot be represented in the requested encoding scheme, then the message will be sent using UTF8, which is the default encoding scheme.
Syntax
Show command
show sender_encoding
If ON, then STAPs cannot connect until they are specifically approved.
If an unapproved STAP connects, it is immediately disconnected until the IP address of that STAP is specifically authorized.
There is a pre-defined report for approved clients, Approved TAP clients; it is available on the Daily Monitor tab.
Note:
The CLI command, store stap approval, does not work within an environment where there is an IP load balancer.
Within a Centrally Managed environment, after adding the IPs to approved STAPs, there is a wait time associated with synchronization that might take up to an hour. After synchronization is complete, the approved STAPs' status will appear green in the GUI.
Syntax
Show command
GuardAPI command
grdapi store_stap_approval
The new configuration will be effective after running the CLI command, restart inspection-core.
Syntax
If you have not done so already, copy the server certificate to your clipboard. Paste the PEM-format certificate to the command line, then press CTRL-D. You will be informed of the success or failure of the store operation.
When you are done, use the restart gui command to restart the IBM Guardium GUI.
Syntax
If the number is set higher, the S-TAP verification process will become slower.
Show command
store set_partitions_for_queries
Use this CLI command to enable/disable partition selection on queries.
Usage:
store storage-system
store storage-system
Syntax
Show Command
show storage-system
Example
Assume you are currently using Centera for system backups, but want to switch to a TSM system. You must turn off the Centera backup option (unless you want to leave
that as another option), and turn on the TSM backup option. The commands to do this are highlighted in the example. The show commands are not necessary, but are for
illustration only.
NETWORK :
CENTERA : backing-up
TSM     :
ok
ok
ok
NETWORK :
CENTERA :
ok
CLI>
Show Command
store throttle
This CLI command stores the throttle parameters. After entering this command, you must issue the CLI command, restart inspection-core for the changes to take effect.
This command is used to filter out (ignore) large packets. Throttling has two modes. Thresholds, per session: ignore sessions when identifying a long enough burst (duration configurable) of large packets (size configurable), and stop ignoring the session when traffic goes under a certain threshold (also configurable). Overall: ignore all packets larger than a certain size (configurable) in all sessions. This throttling mode completely ignores long and excessive non-database packets smaller than a predefined size (useful for VNC clients and other types of white-noise traffic). Use for network traffic through a SPAN port or hardware TAP; for S-TAP traffic, only network TCP traffic picked up by PCAP. See also the CLI command store s2c.
Syntax
store throttle [default | size <s> interval <i> trigger <t> release <r>]
Show Command
show throttle
Throttle parameters:
Parameters
default - Enter the keyword default to restore the system defaults (no other parameters are used). The default throttling behavior is to never throttle.
Note: To restore the throttle defaults, use the CLI command, store throttle default.
store timeout
Sets the timeout value of a CLI session and/or fileserver session. The default value is 600 seconds. A timeout will also close the CLI session.
If the fileserver is stopped because of a timeout, a message will appear, Warning : Fileserver stopped because of timeout. The file upload may not be
complete. Stopping the process.
Use the CLI commands, show timeout db_connection, to show the socketTimeout value in the conf file, and store timeout db_connection <value>, to set the value of the
timeout. The value should be greater than 0. The default value is 25000 seconds. These CLI commands are used in managing the communications between the Central
Manager and the managed unit when DNS is not configured.
Syntax
Show command
store transfer-method
Syntax
Show Command
show transfer-method
Note: Files sent from one IBM Guardium appliance to another (from a collector to an aggregator, for example) are always sent using SCP.
store uid_chain_polling_interval
Set the interval for UID Chain polling with this CLI command. UID chain is a mechanism which allows S-TAP (by way of K-Tap) to track the chain of users that occurred
prior to a database connection.
Set the interval to 0 to turn off UID Chain processing and improve database performance. If UID Chain processing is turned off, calculating the UID
Chain and updating child sessions are skipped.
Note: For any database, the UID chain may not be logged for sessions that are very short.
Syntax
Show command
show uid_chain_polling_interval
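For example, a hedged sketch (the 60-second interval is hypothetical; 0 disables UID Chain processing as described above):

```
CLI> store uid_chain_polling_interval 60
ok
CLI> show uid_chain_polling_interval
CLI> store uid_chain_polling_interval 0
ok
```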
store upd_session_end
This CLI command adds an option to skip the update for the session_end time.
Syntax
Show command
show upd_session_end
store unit type
Use store unit type sink to switch collected DRDA traffic timestamp granularity from 1 millisecond to 1 microsecond.
Syntax
Show Command
Note: Some attributes listed are set using the store unit type command, and cleared using the delete unit type command. The aggregator attribute can only be set during
installation of the IBM Guardium software, and cannot be modified except by re-installing the IBM Guardium software.
network route static Removes one line from the static routing table
stap The unit can receive data from and manage S-TAP and CAS agents.
unregister management
Syntax
unregister management
Notes:
This command is intended for emergency use only, when the Central Manager is not available.
After unregistering using this command, you should also unregister from the Central Manager (from the Administration Console), since that is the only way the
count of managed units will be reduced. The count of managed units is authorized by the product key.
There are no functions that you would perform with this command on a regular basis. Each main menu entry is described in a separate topic (see Main Menu Commands).
Aggregator Fix Schema – brings all imported tables that have an older schema than the aggregator up to the schema of the latest patch level of the aggregator
(runs in the background and may take several hours to complete). Note: There may be scenarios in which (a) the aggregator does not have the latest patch level or (b)
some of the imported tables are already at the latest patch level, resulting in not all imported tables having the latest patch level.
Aggregator Maintenance – full analysis and recovery of the aggregator. This utility collects AGG-related logs and places them in the diag export folder, calls
Aggregator Fix Schema to sync the schema of all databases, cleans the AGG workspace, and restarts the merge process to ensure full analysis of all imported tables (runs
in the background and may take several hours to complete).
Clean Static Orphans on an Aggregator – this option should be used only by Technical Support, and only in cases where static tables grow too large and
need to be cleaned. This utility cleans all the old construct records that are no longer in use.
1. At the command line prompt, log into the Guardium® appliance with CLI.
The Guardium user attempting to use the diag command must have an assigned CLI or admin role. The only user who has a CLI role by default is admin. The user
with a CLI or admin role is permitted to enter the diag command, use the unlock admin and unlock accessmgr CLI commands, and use the export audit-data CLI
command without restrictions. The user with a CLI role does not have to enter user name and password required of a GUI login and does not go through any further
role check.
If the Guardium user attempting to use CLI does not have a CLI or admin role, CLI will not start. The accessmgr assigns CLI and admin roles.
2. After starting CLI, enter the diag command (with no arguments) at the command line prompt.
3. The Guardium user attempting to use the diag command must have an assigned diag role on the Guardium system. By default, only admin has this assigned role.
Access to diag is allowed or disallowed based on the role assignment of this user (access to diag is permitted only if this user has the diag role). The accessmgr
assigns diag roles.
4. You are presented with the main command menu. Do one of the following to move the option selection cursor (which is selecting the first item in the example):
Type the desired entry number (the selection cursor moves to the selected entry).
Use the Up or Down arrow key to select the desired entry.
5. Press the Spacebar, the Left arrow key, or the Right arrow key to move the command selection cursor in the display (which is selecting the OK command in the
example).
6. Perform an action by selecting the appropriate option in the display area and then doing one of the following:
Select the appropriate command with the command selection cursor, then press the Enter key
Click on the appropriate action command.
.../guard/diag/current
.../guard/diag/depot
This output is accessed through the fileserver CLI command. See fileserver for further information.
.../guard/diag/current Directory
Most output from the diag commands is written in text format to the current directory. For most commands, this directory contains a separate output file. Each time you
run the same command, output is appended to the single file for that command. For a smaller number of commands, a separate file is created for each execution, usually
incorporating a date and time stamp in the filename.
The files in the current directory are easy to identify since the names are created from menu and command names. For example, after you use the File Summary command
from the System Interactive Queries menu, a file named interactive_filesummary.txt is created in the current directory.
If you look at the current directory while in the process of using a command, you may see a hidden temporary file with the same name as the one that will contain the
output for that command. The temporary file will be removed when the output is appended to the command output file.
.../guard/diag/depot Directory
When you pack the diag output files in the current directory to a compressed file (to send to Guardium Technical Support, for example), it is stored in the depot directory.
The filename is in the format diag_session_<dd_mm_hhmm>.tgz, where the variable portion of the name indicates when the file was created. For example, a file
created at 12:15 PM on May 20th would be named as follows: diag_session_20_5_1215.tgz.
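The <dd_mm_hhmm> portion of a depot filename can be pulled apart with standard shell parameter expansion. A minimal sketch, using the example filename above:

```shell
#!/bin/sh
# Parse a depot filename of the form diag_session_<dd_mm_hhmm>.tgz
name="diag_session_20_5_1215.tgz"

stamp="${name#diag_session_}"   # strip prefix  -> 20_5_1215.tgz
stamp="${stamp%.tgz}"           # strip suffix  -> 20_5_1215
day="${stamp%%_*}"              # day of month  -> 20
rest="${stamp#*_}"
month="${rest%%_*}"             # month         -> 5
hhmm="${rest#*_}"               # time (hhmm)   -> 1215

echo "created on day $day of month $month at $hhmm"
```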
After exporting files (see the Export recorded files topic), you can remove them from the depot directory using the Delete recordings command of the Output Management
menu.
1 Output Management
The Output Management commands control what is done with the output produced by the diag command. Each Output Management command is described separately.
You can navigate the directories using the Up and Down arrow keys and pressing Enter. For example, selecting ../ and pressing Enter moves the selection up one level in
the directory structure.
You could then select the current directory and press Enter to navigate down to that folder and delete individual command output files. Note that you can navigate to
other directories, but you cannot delete files except from the current and depot directories.
When you have selected the file you want to delete, press Enter.
1. Select Export recorded files from the Output Management menu. The depot directory displays.
2. Select the file to be sent or use the ../ and ./ entries to navigate up or down in the directory structure. (However, keep in mind that you can only export files from the
depot directory.)
3. With the file to be transmitted selected, press Enter.
4. You are prompted to select FTP or exit. Select FTP and press Enter.
5. You are prompted to supply a host name. Enter the host name of the receiving system (or its IP address), and press Enter.
6. You are prompted for a user name. Enter a user account name for the receiving system, and press Enter.
7. You are prompted for a password. Enter the password for the user on the receiving system.
8. You are prompted to identify a directory to receive the sent file on the receiving system. Enter the path relative to the ftp root of the directory to contain the file on
the receiving system and press Enter.
9. You are prompted to confirm the details of the transfer (the file to be sent and its destination). Press Enter to perform the transfer, or select Cancel and press Enter
to start over.
10. You are informed of the success (or failure) of the operation.
1.5 Exit
Use the Exit command to return to the main menu.
1. Select System Static Reports from the Main Menu. You are informed that the process is running.
2. After the report has been created, it displays in the viewing area. Note that this report is lengthy and may be easier to view using a text editor, after exporting it to a
desktop computer.
Use the Up and Down arrow keys to scroll up or down in the report. When you are done viewing the report, press Enter to return to the Main Menu.
Current uptime:
  09:03:43  up 6 days, 17:34,  1 user,  load average: 0.44, 0.50, 0.41
System nameservers:
192.168.3.20
DB nameservers:
192.168.3.20
Gateway: 192.168.3.1 (system) 192.168.3.1 (def)
This is followed by information about the mail and SNMP servers configured:
The final section of the system configuration section describes the network configuration for the unit: IP address, host and domain names, etc:
============================================================================
Currently defined Tomcat port is 8443.
The TOMCAT daemon is running and listening on port(s): 8005 8443.
Currently OPEN ports
java run by tomcat on port *:8443
============================================================================
This is the SNIF (pid: 13036) command line: 13036 /opt/IBM/guardium/bin/snif.
This is the SNIF status:
Name: snif
State: R (running)
Tgid: 13036
============================================================================
IP Tables Information
The next major section contains information about the IP tables:
===========================================================================
IPTABLES:
-------------
       tcp  --  192.168.2.0/24       192.168.1.0/24      tcp spts:1521:60000  set 0x23
       tcp  --  192.168.1.0/24       192.168.2.0/24      tcp dpts:1521:60000  set 0x22
< lines deleted… >
S-TAP Information
The next major section contains S-TAP® information:
============================================================================
STAP:
----
    0     0 ACCEPT     tcp  --  *      *  0.0.0.0/0      0.0.0.0/0          tcp spt:9500
    0     0 ACCEPT     tcp  --  *      *  0.0.0.0/0      0.0.0.0/0          tcp dpt:9500
 2696  148K ACCEPT     tcp  --  *      *  0.0.0.0/0      0.0.0.0/0          tcp spt:16016
 2835  175K ACCEPT     tcp  --  *      *  0.0.0.0/0      0.0.0.0/0          tcp dpt:16016
IP Traffic Information
The next major section contains IP traffic information:
IP traffic statistics.
OUTPUT OF ETH0
Fri May 20 11:57:04 2012; ******** Detailed interface statistics started ********
*** Detailed statistics for interface eth0, generated Fri May 20 11:58:04 2009
OUTPUT OF ETH1
Fri May 20 11:57:04 2012; ******** Detailed interface statistics started ********
*** Detailed statistics for interface eth1, generated Fri May 20 11:58:04 2009
Snif  STDERR:
Snif STDOUT:
Fri_20-May-2009_04:04:35 : Guardium Engine Monitor starting
Fri_20-May-2009_04:14:37 : Guardium Engine Monitor starting
Fri_20-May-2009_04:24:38 : Guardium Engine Monitor starting
============================================================================
These are the aggregator's last activities:
Audit Report
This section lists the following summary information (see example):
============================================================================
Range of time in logs: 01/14/10 13:12:26.348 - 01/18/10 12:48:01.073
Selected time for report: 01/14/10 13:12:26 - 01/18/10 12:48:01.073
Number of changes in configuration: 4 - changes to the audit configuration
Number of changes to accounts, groups, or roles: 0
Number of logins: 22 - logins into the machine - ssh and console
Number of failed logins: 114
Number of authentications: 22 - "su", etc.
Number of failed authentications: 5
Number of users: 2
Number of terminals: 18
Number of host names: 9
Number of executables: 7
Number of files: 0
Number of AVC's: 0
Number of MAC events: 0
Number of failed syscalls: 0
Number of anomaly events: 3
Number of responses to anomaly events: 0
Number of crypto events: 0
Number of keys: 0
Number of process IDs: 9173
Number of events: 98669
============================================================================
Anomaly Report
This section lists the following (see example):
============================================================================
# Date Time Type Exe Term Host AUID Event
============================================================================
1. 01/14/10 13:16:02 ANOM_PROMISCUOUS /usr/sbin/brctl (none) ? -1 8 - this is expected
to appear - it means the bridge is listening to all traffic
Authentication Report
This section lists the following (see example):
============================================================================
# Date Time Type Exe Term Host AUID Event
============================================================================
1. 01/14/10 13:13:22 tomcat ? console /bin/su yes 4
2. 01/14/10 13:16:44 tomcat ? console /bin/su yes 11
3. 01/14/10 13:16:44 tomcat ? console /bin/su yes 17
4. 01/14/10 13:16:45 tomcat ? console /bin/su yes 23
5. 01/14/10 13:16:48 tomcat ? console /bin/su yes 29
6. 01/14/10 13:22:29 tomcat ? ? /bin/su yes 155
7. 01/14/10 13:28:10 ? ? tty1 /bin/login no 252
8. 01/14/10 13:28:20 ? ? tty1 /bin/login no 254
Login Report
This section lists the following (see example):
============================================================================
# Date Time Type Exe Term Host AUID Event
============================================================================
1. 01/14/10 13:22:15 root 192.168.2.9 sshd /usr/sbin/sshd no 142
3 Interactive Queries
Select System Interactive Queries from the main menu to open the Interactive Queries menu. (Use the Down arrow key to scroll past the tenth item to see all items on this
menu.)
In addition to displaying the requested information, each interactive query command creates output in a separate text file in the current directory. See the Overview topic
for more information about the files created.
1. Select Files Changed from the Interactive Queries menu. You are prompted to enter a number of days. Type a number and press Enter.
2. You are asked if you are interested in the files changed before or after that number of days. Select 1 or 2 and press Enter.
3. The full directory path for each changed file is displayed. If not all data fits in the display area, use the Up and Down arrow keys to scroll through the data.
The current position in the file is indicated by the number in the display, and a plus sign on the white bars in the display area indicates that more data is present.
1. Select Summarize Folder from the Interactive Queries menu. There are no prompts. You are presented with a display of disk use for various directories.
2. Use the Up and Down arrow keys to scroll through the directories.
3. Press Enter or click Exit when you are done.
Be aware that when the Summary Style is used, variables are replaced by the pound sign character (#). For some log data containing variables such as IP addresses
or dates, the replacements can be extensive.
User written reports are listed following the pre-defined reports, beginning with number 20001 (for version 3.6.1).
1. Select Watch Buffer from the Interactive Queries menu. The display is updated every second.
2. Press Ctrl-C to close the display.
The variable portions of the file names are date and time stamps. For example, apks.txt.Fri_20-May-2011_08.52.00.789.
(m) to dump STAP packets (select how long to run, wait for completion, and then check the msg-dump file under /var/log/guard/diag/current/tap/)
3. Regardless of your selection, you will be prompted to select the time period for the activity. Select a time period and press Enter.
4. You are notified that the program will run for the specified time and prompted to press Enter. Press Enter and wait.
5. When processing completes, a message will be displayed. You can use the File Summary command to display the output of this command. Because this command
can produce a large amount of data, you will probably want to export the file to another system, where you can view the contents using a text editor. (Pack the
current session data, and export the recordings as described earlier in this section.)
1. Select Show Generate GDM_ERROR dump from the Interactive Queries menu.
2. Press OK and then enter password. Press Enter.
3. Use the Up and Down arrows to scroll through the display, and press Exit when you are done.
1. Select Prepare Tomcat Memory dump from the Interactive Queries menu.
2. Press OK.
3. Use the Up and Down arrows to scroll through the display, and press Exit when you are done.
Example
SYSTEM_NETMASK1: 255.255.255.0
SYSTEM_DOMAIN:
SYSTEM_DEFAULT_ROUTE:
SYSTEM_DNS1:
SYSTEM_DNS2:
SYSTEM_DNS3:
TOMCAT_IP:
MANAGER_IP:
HOST_MAC_ADDRESS:
SECOND_DEVICE:
For Generate TCP dump in rotation, enter a filter IP address (enter blank for all IPs) and a filter port number. For the question How long to run?, if the TCP dump in rotation
is already running, choose the option "Rotation OFF" or "Rotation ON". If Rotation is selected, add a file size.
1. Select TURBINE optimize ( index cardinality ) from the Maintenance menu. A progress bar displays while the operation is running. When the operation completes,
you are returned to the Maintenance menu.
1. Select Clean disk space from the Maintenance menu. You will be prompted to select a directory.
2. Select the directory from which you want to remove files. The contents of the directory will be listed, and you will be prompted to confirm that you want to remove
all files.
3. When the operation completes, you are returned to the Maintenance menu.
Choose Classifier to select debug level options: ERROR, WARN, INFO, DEBUG, ALL.
Choose DLS (data level security), Workflow, or Other (text input) to select debug level options: ERROR, WARN, INFO, DEBUG, ALL.
If Other is chosen (text input separated by ','), enter valid components (dls, workflow, audit, customtable, gui, other, job).
5 Exit to CLI
Select Exit to CLI on the Main Menu. Press Enter to close the diag command and return to the command line interface.
<daysequence>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc
daysequence is a number representing the date of the archived data, expressed as the number of days since year 0. The same date appears in yyyy-mm-dd format in the
data_date portion of the name.
hostname.domain is the host name of the Guardium appliance on which the archive was created, followed by a dot character and the domain name.
run_datestamp is the date that the data was archived or exported, in yyyymmdd.hhmmss format.
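The fields described above can be recovered from an archive file name with shell parameter expansion. A sketch; the sample name is hypothetical, assembled from this naming convention:

```shell
#!/bin/sh
# Split <daysequence>-<hostname.domain>-w<run_datestamp>-d<data_date>.dbdump.enc
name="732570-supp2.guardium.com-w20050919110317-d2005-09-16.dbdump.enc"

dayseq="${name%%-*}"                  # daysequence: days since year 0
rest="${name#*-}"
host="${rest%%-w*}"                   # hostname.domain
rest="${rest#*-w}"
run="${rest%%-d*}"                    # run_datestamp
datadate="${rest#*-d}"
datadate="${datadate%.dbdump.enc}"    # data_date in yyyy-mm-dd

echo "$dayseq $host $run $datadate"
```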
backup config
These commands back up and restore configuration information from the internal administration tables. The backup config command stores data in the /media/backup
directory. The backup config command removes license and other machine-specific information. The backup system command provides a more comprehensive backup of
the configuration and the entire system.
Syntax
backup config
restore config
backup system
This topic applies to backup and restore operations for the Guardium internal database. You can back up or restore either configuration information only, or the entire
system (data plus configuration information, except for the shared secret key files, which are backed up and restored separately, see the aggregator backup keys file and
aggregator restore keys file commands). These commands stop all inspection engines and web services and restart them after the operation completes.
Before restoring a file, be sure that the appliance has the system shared secret of the system that created that file (otherwise, it will not be able to decrypt the
information). See About the System Shared Secret in the Guardium Administrator Guide.
Note: System restore must be done at the same patch level as the system backup. For example, if a customer backed up the appliance when it was on Version 7.0, Patch
7 and then wishes to restore this backup into a newly-built appliance, first install Version 7.0, Patches 1 to 7 on the appliance, and only then
restore the file.
There are two commands involved in the restore process:
For all backup, import and restore commands, you will receive a series of prompts to supply some combination of the following items, depending on which storage
systems are configured, and the type of restore operation. Respond to each prompt as appropriate for your operation. The following table describes the information for
which you may be prompted.
Note:
One copy of the SCP/FTP/TSM/Centera file transfer is saved, regardless of whether the transfer succeeded or failed. As certain files may take hours to regenerate (for example,
a system backup), having a readily available copy (in particular if the file transfer failed) is of value to the user. Only one copy of each type of file is retained (archive, system
backup, configuration backup, and so on).
Backup system will copy the current license, metering and number of datasources, and then backup the data. Restore system will restore the data and then restore the
license, metering and number of datasources. This sequence applies to the regular restore system. Restore from a previous system will require re-configuring license,
metering and number of datasources.
When configuring backups, a port number of zero ('0') indicates that the default port for that protocol is being used and does not need to be changed.
SCP, FTP, TSM, Centera, Snapshot Select the method to use to transfer the file. TSM and Centera are displayed only if those storage methods
have been enabled (see the store storage-method command)
Data or Configuration Select Configuration to back up definitions and configuration information only, or select Data to back up data in
addition to configuration information.
restore from archive or restore from backup Select restore from archive to restore archived data, or select restore from backup to restore configuration
information.
normal or upgrade If restoring from the same software version of Guardium, select normal. If restoring configuration information
following a software upgrade of the Guardium appliance, select upgrade.
remote directory The directory for the backup file. For FTP, the directory is relative to the FTP root directory for the FTP user account
used. For SSH, the directory path is a full directory path. For Windows SSH servers, use Unix-style path names with
forward slashes, rather than Windows-style backslashes.
username The user account name to use for the operation (for backup operations, this user must have write/execute permission
for the directory specified).
Note: For Windows, a domain user is accepted with the format of domain\user
file name The file name for the archive or backup file. See Archived Data Names.
A user can select multiple files by using the wildcard character * in the file name. The wildcard character * is supported with the FTP, SCP,
and Snapshot transfer methods; it is not supported with the TSM or Centera transfer methods.
Centera server Enter the Centera server name. If using PEA files, use the format <Host name/IP>?<full PEA file name>,
for example:
128.221.200.56?/var/centera/us_profile_rwqe.pea.txt
Centera clipID For a Centera restore operation, the Content Address returned from the backup operation. For example:
6M4B15U4JM4LBeDGKCPF9VQO3UA
After you have supplied all of the information required for the backup or restore operation, a series of messages will be displayed informing you of the results of the
operation. For example, for a restore system operation the messages should look something like this (depending on the type of restore and storage method used):
gpg: Signature made Thu Feb 22 11:38:01 2009 EST using DSA key ID 2348FF9E
gpg: Good signature from "Backup Signer <support@guardium.com>"
Proceeding to shutdown services
Proceeding to startup services
Safekeeping admin.xreg
Safekeeping client.xreg
Safekeeping controllers.xreg
Safekeeping controls.xreg
Safekeeping guardium-portlets.xreg
Safekeeping local-portlets.xreg
Safekeeping local-security.xreg
Safekeeping local-skins.xreg
Safekeeping media.xreg
Safekeeping portlets.xreg
Safekeeping security.xreg
Safekeeping skins.xreg
guard_sniffer.pl -reorder
Recovery procedure was successful.
ok
The archive process will check the size of the static tables and make sure there is room in /var to create the archive.
An error is logged in the log file and the GUI if /var backup space usage is over 50%.
Example:
ERROR: /var backup space is at 60% used. Insufficient disk space for backup.
CLI> backup system
1. DATA
2. CONFIGURATION
Please enter the number of your choice: (q to quit) 1
1. SCP
2. CONFIGURED DESTINATION
Enter the number of your choice: (q to quit) 2
Make sure destination is configured in the GUI under the System Backup option
Please wait, this may take some time.
backup profile
Use this command to maintain the backup profile data (patch mechanism).
The backup file will be copied to the destination according to the backup profile. If the parameter indicating whether to keep the backup file is "1" AND there is
enough disk space, the backup file will be kept within the system; otherwise it is removed.
Syntax
Example
patch backup flag is 1
patch backup automatic recovery flag is 1
patch backup dest host is
patch backup dest dir is
patch backup dest user is
patch backup dest pass is
ok
Syntax
Example
Do you want to set up for automatic recovery? (y/n)
Enter the patch backup destination host:
Enter the patch backup destination directory:
Enter the patch backup destination user:
Enter the patch backup destination password:
export audit-data
Exports audit data from the specified date (yyyy-mm-dd) from various internal Guardium tables to a compressed archive file. The data from a specified date will be stored
in a compressed archive file, in the /var/dump directory. The file created will be identified in the messages produced by the system. See the example. Use this command
only under the direction of Guardium Support.
Note: Only users with the admin role may run this command.
Syntax
Example
If you enter the export audit-data command for the date 2005-09-16, a set of messages similar to the following will be created:
CLI> export audit-data 2005-09-16
2005-09-16
Extracting GDM_ACCESS Data ...
Extracting GDM_CONSTRUCT Data ...
Extracting GDM_SENTENCE Data ...
Extracting GDM_OBJECT Data ...
Extracting GDM_FIELD Data ...
Extracting GDM_CONSTRUCT_TEXT Data ...
Extracting GDM_SESSION Data ...
Extracting GDM_EXCEPTION Data ...
Extracting GDM_POLICY_VIOLATIONS_LOG Data ...
Extracting GDM_CONSTRUCT_INSTANCE Data ...
Generating tar file ...
/var/csvGenerationTmp ~
GDM_ACCESS.txt GDM_CONSTRUCT.txt GDM_CONSTRUCT_INSTANCE.txt GDM_CONSTRUCT_TEXT.txt GDM_EXCEPTION.txt GDM_FIELD.txt GDM_OBJECT.txt
GDM_POLICY_VIOLATIONS_LOG.txt GDM_SENTENCE.txt GDM_SESSION.txt ~
Generation completed, CSV Files saved to /var/dump/732570-supp2.guardium.com-w20050919110317-d2005-09-16.exp.tgz
ok
The data from each of the named internal database tables is written to a text file, in CSV format. The name of the archive file ends with exp.tgz and the remainder of the
name is formed as described in About Archived Data File Names.
You can use the export file command to transfer this file to another system.
delete audit-data
Use this command only under the direction of Guardium Support. This command is used to remove compressed audit data files. You will be prompted to enter an index
number to identify the file to be removed. See Archived Data File Names, for information about how archived data file names are formed.
Syntax
delete audit-data
show audit-data
Use this command to display any files that were created by executing the CLI command, export audit-data. For more information about audit data files, see export audit-
data.
Syntax
export file
This command exports a single file named filename from the /var/IBM/Guardium/data/dump, /var/log or /var/IBM/Guardium/data/importdir directory.
Use this command only under the direction of Guardium Support. To export Guardium data to an aggregator or to archive data, use the appropriate menu commands on
the Administration Console panel.
Syntax
fileserver
Use this command to start an HTTPS-based file server running on the Guardium appliance. This facility is intended to ease the task of uploading patches to the unit or
downloading debugging information from the unit. Each time this facility starts, it deletes any files in the directory to which it uploads patches.
Note: Any operation that generates a file that the fileserver will access should finish before the fileserver is started (so that the file is available for the fileserver).
ip address is an optional parameter that allows access to the fileserver from the indicated IP address. By default (without the parameter), access is restricted to the IP
address of the SSH client that started the fileserver.
duration is an optional parameter that specifies the number of seconds that the fileserver is active. After the specified number of seconds, the fileserver shuts down
automatically. The duration can be any number of seconds from 60 to 3600.
In the case of a security setup where browser sessions are redirected through a proxy server, the IP address of the fileserver client will not be the same as that of the SSH
client that started the fileserver. Instead, the fileserver client will have the IP address of the proxy server, and this address must be passed in the optional ip address
parameter. To find the proxy IP address, check your browser settings or the client IP addresses shown in the Logins to Guardium report in the Guardium Monitor interface.
Example
When you are done, return to the CLI session and press Enter to terminate the session.
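A hedged sketch of starting the fileserver, assuming the two optional parameters are given in the order described above (both values here are hypothetical):

```
CLI> fileserver 192.168.2.9 3600
```

With these values, access would be permitted from 192.168.2.9 and the fileserver would shut down after 3600 seconds.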
Instructions
Vulnerability Assessment:
Entitlements:
import file
See backup config and restore config.
In the import file CLI command, you can use the wildcard * in the file name with the scp, ftp, and snapshot methods.
Syntax
import file
You will be prompted for a password for the user account on the specified host.
Syntax
Parameters
Note: When setting up TSM on each collector, the initial configuration may fail with a notification error saying that the test file could not be sent. Log in to the collector as
root and run a dsmc archive command to the TSM server with the same credentials; the transfer then succeeds. Return to the GUI and configure with the
same options used before; the configuration then succeeds as well.
If the TSM configuration has passwordaccess=generate, the password stored in a local file is sought. The root user needs to run the dsmc command once to create this local
password file.
After uploading the TSM configuration file, if the configuration specifies passwordaccess generate, you are prompted:
Would you like to run a dsmc command now to ensure the password is set locally (y/n)? If the answer is y, a "dsmc query
options>>/dev/null" command is run, which prompts the user for the password.
Syntax
restore config
These commands back up and restore configuration information from the internal administration tables. The backup config command stores data in the /media/backup
directory. The backup config command removes license and other machine-specific information. The backup system command provides a more comprehensive backup of
the configuration and the entire system.
When restoring a configuration, you must restore a backup that is of the same version and patch level as the original appliance where the backup was created.
Syntax
backup config
restore config
restore db-from-prev-version
This command takes a backup from the immediate past system (backup data must be provided, configuration backup is optional) and performs a restore on a newer
system. It includes upgrading the data, portlets, etc.
Perform a full system backup prior to upgrading your Guardium system. If for some reason the upgrade fails and leaves the machine in an unusable state, instead
of trying to fix and re-run the upgrade, rebuild the machine as the latest system, setting up this latest system with only the basic network information (IP, resolver, route,
system hostname, and domain).
The result will be the latest system with the data and customization (if configuration file is provided) from the previous system.
First, try a regular upgrade from the previous system to the latest system. If this is not successful, then use the backup as an alternative way to upgrade from the previous
system to the latest system.
Note: Older data being restored to an aggregator (not to investigation center), and outside the merge period, will not be visible until the merge period is changed and the
merge process rerun.
To run this command, back up the current server for both data and configuration. Once the backup is complete, install the latest release onto the same server. Next, import
both the data and configuration file from CLI via the import file command. Then after the two backup files are imported, run, again from CLI, the command restore db-
from-prev-version. This restores the backup files (data and configuration) from the older version to the newly installed server.
Note: If you are using Guardium in a non-English language, the restore CLI command sets some strings, including report headers, to English. To view these strings in the
non-English language, run the store language CLI command after you run the restore CLI command.
The optional parameter "override" is applicable only to a restore of a Central Manager appliance from backup.
By default, when a user executes the "restore db-from-prev-version" command on a Central Manager appliance, Guardium preserves the existing configuration information on this
Central Manager that links to the Managed Units that it manages.
When the user adds "override" to the restore command, the existing Central Manager /Managed Units configuration is overridden by the Central Manager /Managed Units
configuration from the backup data.
Syntax
Examples
restore db-from-prev-version
Note: Managed units and S-TAP associations in "Associate S-TAPs and Managed Units" are not restored when using this CLI command. The user will have to define
associations again.
Syntax
restore db-from-prev-version
This procedure will restore and upgrade a previous backup on a newly-installed latest system. If the older files are currently
located on a remote system, use the "import file" cli command to transfer them locally prior to running this procedure. The
imported files will be put in the /var/dump/ directory. Continue (y/n)?
Note:
Answering Y (yes) to the following questions during the execution of the CLI command, restore db-from-prev-version, consolidates all non-canned/customized reports and
panes into one pane named v.x.0 Custom Reports.
Answering N (no) to the same questions restores all panes to what they were in the previous version.
Update portal layout (panes and menus structure) to the new v8 default (current instances of custom reports will be copied to the
new layout, as well as parameter changes on predefined reports) for the user admin? (y/n) n Update portal layout (panes and menus
structure) to the new v8 default (current instances of custom reports will be copied to the new layout, as well as parameter
changes on predefined reports) for all other users? (y/n)
Use this command to restore certificates and private keys used by the web servlet container environment (Tomcat).
Syntax
restore keystore
restore pre-patch-backup
Use this command only under direction from Technical Support.
Use this command to recover the pre-patch-backup when the appliance database is up or down.
Syntax
restore pre-patchbackup
Please enter the information to retrieve the file:
Is the file in the local system? (y/n) n
Start to recover with the backup profile parameters. Please check the recovery status in the log /var/log/guard/diag/depot/patch_installer.log
ok
If the answer is 'n', the operation is aborted. If the answer is 'y', the file name must be entered.
restore system
This topic applies to backup and restore operations for the Guardium internal database. You can back up or restore either configuration information only, or the entire
system (data plus configuration information, except for the shared secret key files, which are backed up and restored separately, see the aggregator backup keys file and
aggregator restore keys file commands). These commands stop all inspection engines and web services and restart them after the operation completes.
Before restoring a file, be sure that the appliance has the system shared secret of the system that created that file (otherwise, it will not be able to decrypt the
information). See About the System Shared Secret in the Guardium Administrator Guide.
Note: A system restore must be done to the same patch level as the system backup.
There are two commands involved in the restore process:
For all backup, import and restore commands, you will receive a series of prompts to supply some combination of the following items, depending on which storage
systems are configured, and the type of restore operation. Respond to each prompt as appropriate for your operation. The following table describes the information for
which you may be prompted.
Note:
One copy of the SCP/FTP/TSM/Centera file transfer is saved, regardless of whether the transfer succeeded or failed. As certain files may take hours to regenerate (for example, a
system backup), having a readily available copy (particularly if the file transfer failed) is of value to the user. Only one copy of each type of file is retained (archive, system
backup, configuration backup, etc.).
Backup system will copy the current license, metering and number of datasources, and then backup the data. Restore system will restore the data and then restore the
license, metering and number of datasources. This sequence applies to the regular restore system. Restore from a previous system will require re-configuring license,
metering and number of datasources.
SCP, FTP, TSM, Centera, Snapshot Select the method to use to transfer the file. TSM and Centera are displayed only if those storage methods have
been enabled (see the store storage-method command).
Data or Configuration Select Configuration to back up definitions and configuration information only, or select Data to back up data in addition to
configuration information.
restore from archive or restore from backup Select restore from archive to restore archived data, or select restore from backup to restore configuration information.
normal or upgrade If restoring from the same software version of Guardium, select normal. If restoring configuration information following a
software upgrade of the Guardium appliance, select upgrade.
remote directory The directory for the backup file. For FTP, the directory is relative to the FTP root directory for the FTP user account used.
For SSH, the directory path is a full directory path. For Windows SSH servers, use Unix-style path names with forward
slashes, rather than Windows-style backslashes.
username The user account name to use for the operation (for backup operations, this user must have write/execute permission for
the directory specified).
Note: For Windows, a domain user is accepted with the format of domain\user
file name The file name for the archive or backup file. See Archived Data files names.
A user can select multiple files by using the wildcard character * in the file name. Support of the wildcard character * is
permitted when using transfer methods FTP, SCP and Snapshot. Support of the wildcard character * is not permitted on
transfer methods TSM or Centera.
Centera server Enter the Centera server name. If using PEA files, use the following format: <Host name/IP>?<full PEA file name>, for
example:
128.221.200.56?/var/centera/us_profile_rwqe.pea.txt
Note the ? between the server IP and the PEA file name.
This IP address and the .PEA file comes from EMC Centera. The question mark is required when configuring the path. The
.../var/centera/... path name is important as the backup may fail if the path name is not followed. The .PEA file gives
permissions, username and password authentication per Centera backup request.
Centera clipID For a Centera restore operation, the Content Address returned from the backup operation. For example:
6M4B15U4JM4LBeDGKCPF9VQO3UA
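As a minimal sketch of the wildcard selection described for the file name prompt (file names here are invented for illustration; this is not the Guardium implementation):

```python
from fnmatch import fnmatch

def select_backups(available, pattern):
    """Return the files whose names match a shell-style pattern.
    '*' matches any run of characters, as in the CLI file name prompt."""
    return [name for name in available if fnmatch(name, pattern)]

files = [
    "backup-config-2016-03-01.tgz",   # hypothetical backup file names
    "backup-config-2016-03-02.tgz",
    "backup-data-2016-03-01.tgz",
]
print(select_backups(files, "backup-config-*"))
```

As the parameter table notes, such patterns are honored only for the FTP, SCP and Snapshot transfer methods, not for TSM or Centera.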
After you have supplied all of the information required for the backup or restore operation, a series of messages will be displayed informing you of the results of the
operation. For example, for a restore system operation the messages should look something like this (depending on the type of restore and storage method used):
gpg: Signature made Thu Feb 22 11:38:01 2009 EST using DSA key ID 2348FF9E gpg: Good signature from "Backup Signer
<support@guardium.com>" Proceeding to shutdown services Proceeding to startup services Safekeeping admin.xreg Safekeeping
client.xreg Safekeeping controllers.xreg Safekeeping controls.xreg Safekeeping guardium-portlets.xreg Safekeeping local-
portlets.xreg Safekeeping local-security.xreg Safekeeping local-skins.xreg Safekeeping media.xreg Safekeeping portlets.xreg
Safekeeping security.xreg Safekeeping skins.xreg guard_sniffer.pl -reorder Recovery procedure was successful. ok
Syntax
ANS1708E Backup operation failed. Only a root user can do this operation
Syntax
Show command
This CLI command displays on if non-root Guardium users are authorized to perform backup and archive (that is, backupinitiationroot is set to ON on TSM servers).
Otherwise, it displays off.
store language
Use this CLI command to change from the baseline English and convert the database to the desired language. Installation of Guardium is always in English. A Guardium
system can be changed to Japanese, Chinese (Traditional or Simplified), French, Spanish, German or Portuguese after an installation.
The CLI command, store language, is considered a setup of the appliance and is intended to be run during the initial setup of the appliance.
Running this CLI command, after deployment of the appliance in a specific language, can change the information already captured, stored, customized, archived or
exported.
Note: After switching from English to a desired language, it is not possible to revert back to English, using this CLI command. The Guardium system must be reinstalled in
English.
Syntax
CLI> store language [English | Japanese | SimplifiedChinese | TraditionalChinese | French | German | Spanish | Portuguese]
Show command
show language
Step 1: Open the VM client/console and select the VM instance that contains the IBM Guardium appliance. Right-click the instance, select (from the popup menu) Guest
=> Install/upgrade VMware tools. This enables the instance to access the VMware tools via a mount point.
Step 2: Run the CLI command (from within the VM client/console), setup vmware_tools install, to install VM tools.
To correct this situation, VMware recommends: Install update 2 on ESX4.1 or Set CPU/MMU virtualization to Use software only instruction set and MMU Virtualization. This
option is found under Settings/ Options/ CPU/MMU Use software for instruction set and MMU Virtualization.
An inspection engine monitors the traffic between a set of one or more servers and a set of one or more clients using a specific database protocol (Oracle or Sybase, for
example). The inspection engine extracts SQL from network packets; compiles parse trees that identify sentences, requests, commands, objects, and fields; and logs
detailed information about that traffic to an internal database.
add inspection-engines
Adds an inspection engine configuration to the end of the inspection engine list. The parameters are described. You can re-order your list of inspection engines after
adding a new one by using the reorder inspection-engines command. Adding an inspection engine does not start it running; to start it running, use the start inspection-
engines command.
Syntax
Parameters
name - The new inspection engine name; must be unique on the unit.
protocol - The protocol monitored, which must be one of the following: Aster, Cassandra, CouchDB, DB2, DB2 Exit, exclude IE, FTP, GreenPlumDB, Hadoop, HIVE, HTTP,
HUE, IBM ISERIES, IMPALA, Informix, Informix Exit, KERBEROS, MariaDB, MongoDB, MS SQL, MySQL, Named Pipes, Netezza, Oracle, PostgreSQL, SAP Hana, Sybase,
Teradata, WebHDFS or Windows File Share.
fromIP/mask - A list of clients, identified by IP addresses and subnet masks. Separate each IP address from its mask with a slash, and multiple entries by commas. An
address and mask of all zeroes is a wild card. If the exclude client list option is Y, the inspection engine monitors traffic from all clients except for those in this list. If the
exclude client list option is N, the inspection engine monitors traffic from only the clients in this list.
port - The port or range of ports over which traffic between the specified clients and database servers will be monitored. To specify a range, separate the two numbers
with a hyphen.
toIP/mask - The list of database servers, identified by IP addresses and subnet masks, whose traffic will be monitored. Separate each IP address from its mask with a
slash, and multiple entries by commas. An address and mask of all zeroes is a wildcard.
exclude client list - A Y/N value; defaults to N. If Y, the inspection engine monitors traffic from all clients except for those identified in the client list. If N, the inspection
engine monitors traffic from only the clients listed in the client list.
active on startup - A Y/N value; defaults to N. If Y, the inspection engine is activated on system startup.
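The interplay between the client list and the exclude flag can be sketched as follows (the matching semantics are taken from the parameter descriptions above; the code itself is only an illustration, not the inspection engine's implementation):

```python
import ipaddress

def is_monitored(client_ip, client_list, exclude=False):
    """client_list holds (ip, mask) pairs; an all-zeroes entry is a wildcard.
    exclude=True: monitor traffic from all clients EXCEPT those listed.
    exclude=False: monitor traffic from ONLY the listed clients."""
    addr = ipaddress.ip_address(client_ip)
    in_list = any(
        addr in ipaddress.ip_network(f"{ip}/{mask}", strict=False)
        for ip, mask in client_list
    )
    return not in_list if exclude else in_list

clients = [("192.168.1.0", "255.255.255.0")]   # hypothetical client subnet
print(is_monitored("192.168.1.7", clients))             # in the list -> True
print(is_monitored("10.0.0.5", clients, exclude=True))  # not listed, exclude mode -> True
```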
delete inspection-engines
Removes the single inspection engine identified by its name. The name can include only letters, numbers and blanks. If the inspection engine name contains any special
characters, use the administrator portal GUI to remove it.
Syntax
reorder inspection-engines
Specifies a new order for the inspection engines, using index values from the list produced by the list inspection-engines command.
Syntax
Example
If the displayed indices are 1, 2, 3, and 4, the following command will reverse order of the engines:
Syntax
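The reordering can be pictured with this sketch (engine names are hypothetical; indices are 1-based, as produced by the list inspection-engines command):

```python
def reorder(engines, new_order):
    """new_order lists 1-based indices from `list inspection-engines`;
    the engines are rearranged into that order."""
    return [engines[i - 1] for i in new_order]

# Hypothetical engine names at displayed indices 1..4
engines = ["oracle-ie", "sybase-ie", "mssql-ie", "db2-ie"]
print(reorder(engines, [4, 3, 2, 1]))  # reversed order
```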
restart inspection-core
Note: To restart the collection of traffic for one or more specific inspection engines, follow this command with one or more start inspection engine commands.
Alternatively, to restart the collection of traffic for all inspection engines, use the restart inspection-engines command.
restart inspection-engines
Restarts the database inspection engine core and all inspection engines. The collection of database traffic stops temporarily while this occurs and restarts only when
database connections re-initiate.
Syntax
restart inspection-engines
show inspection-engines
Displays inspection engine configuration information, as follows:
configuration <index> - Only the inspection engine identified by the specified index, which is from the list inspection-engines command.
type <db_type> - Displays configurations of a specific database type, which must be one of the supported monitored protocol types: Aster, Cassandra, CouchDB, DB2, DB2
Exit, exclude IE, FTP, GreenPlumDB, Hadoop, HIVE, HTTP, HUE, IBM ISERIES, IMPALA, Informix, Informix Exit, KERBEROS, MariaDB, MongoDB, MS SQL, MySQL, Named
Pipes, Netezza, Oracle, PostgreSQL, SAP Hana, Sybase, Teradata, WebHDFS or Windows File Share.
Syntax
show inspection-engines <all | configuration <index> | log sqlstrings | type <type> >
Note: Use the CLI command, show inspection-engines all, to display non-STAP Inspection Engines like SPAN ports. The CLI command, list_inspection_engines, will
display inspection engines created by STAP.
start inspection-core
Starts the inspection-engine core.
Syntax
start inspection-core
start inspection-engines
Starts one or more inspection engines identified using index values from the list produced by the list inspection-engines command.
Syntax
Syntax
start inspection-engines id
Usage: start inspection-engines id <n>, where n is a numeric sniffer id.
Syntax
stop inspection-engines id
Usage: stop inspection-engines id <n>, where n is a numeric sniffer id.
stop inspection-core
Stops the inspection-engine core.
Syntax
stop inspection-core
stop inspection-engines
Syntax
Syntax
stop inspection-engines id
Stops one or more inspection engines identified using index values from the list produced by the list inspection-engines command.
Syntax
Syntax
Example
Show Command
Note: Deep search uses 10x (ten times) the time_allowed value.
Parent topic: CLI Overview
Identify a connector on the back of the machine (show network interface port)
Reset networking after installing or moving a network card (store network interface inventory)
Set IP addresses (store network interface ip, store network interface mask, store network resolver, store network routes defaultroute)
Enable or disable high-availability (store network interface high-availability)
Configure the network card if the switch it attaches to will not auto-negotiate the settings (store network interface auto-negotiation, store network interface speed,
store network interface duplex)
restart network
Restarts just the network configuration. For example, change the IP address, then run this CLI command.
Syntax
restart network
Syntax
Syntax
Example
ok
CLI>
Syntax
Syntax
Show Command
Syntax
Show Command
The two ports used (ETH0 and a second interface) must be connected to the same network. There is a slight delay, caused by the switch re-learning the port configuration.
The default setting is off.
The port used for the primary IP address is always ETH0. When the high-availability option is enabled, the Guardium system automatically fails over, as needed, to the
specified second interface, in effect transferring the primary IP address to the second interface.
Note: IP Teaming and Secondary Interface cannot be used at the same time.
Syntax:
Note: The store network interface inventory command will detect on-board NIC cards within the Guardium appliance and assign these cards as eth0 and eth1. This
command should only be run if specifically instructed to by Guardium Support as it can rearrange the NIC cards.
Syntax
Use the show command to display the port names and MAC addresses of all installed network interfaces.
Example
eth0| 00:50:56:3b:c3:73|
eth1| 00:50:56:8a:0d:fa|
eth2| 00:50:56:8a:0d:fb|
eth3| 00:50:56:8a:00:c1|
Note: The “Member of” field will show which NICs are in the bond pair, if a bond exists.
Syntax
Show Command
Syntax
Show Command
Syntax
Syntax
Show command
eth0 1500
Syntax
Example
Syntax
Syntax
Note: IP Teaming and Secondary Interface cannot be used at the same time.
Syntax:
store network interface secondary [on <NIC> <ip> <mask> <gateway> | off ]
Show command
Syntax
Show Command
Syntax
Example
ok
CLI>
Syntax
eth0| 00:50:56:3b:c3:73|
eth1| 00:50:56:8a:0d:fa|
eth2| 00:50:56:8a:0d:fb|
eth3| 00:50:56:8a:00:c1|
Note: The “Member of” field will show which NICs are in the bond pair, if a bond exists.
ok
Syntax
Show Command
Syntax
Show Commands
Syntax
Show Command
List the current static routes, with IDs - Device, Index, Address, Netmask, Gateway
Delete command
Syntax
Show Command
Syntax
Show Command
These commands assist Technical Support in analyzing the status of the machine, troubleshooting common issues and correcting some common problems. There are
no functions that you would perform with these commands on a regular basis.
This command provides a way to manually purge audit results; use it only when absolutely necessary to deal with audit tasks that produce a high number of records
and take up too much disk space.
It is strongly advised to consult with Technical Support before running this command.
A Warning message is presented and a confirmation step is needed when running this command.
This command will list the audit processes and tasks information.
It will present the number of rows, ordered from the largest result set to the smallest. The number of report results is greater than or equal to the input value.
Next, after the report is presented, the user can select a line number to purge the results of the audit process corresponding to that line number. Selecting that
line number deletes the audit data for the selected process name.
Syntax
Input parameters
Note: On a system with a great many audit tasks, the completion of this command can take some time.
This CLI command deletes the specified file after the user confirms the deletion. If it cannot find the file, it lists files larger than 10MB in /var/log, and the user can delete
a large file from the list. A warning message is presented and a confirmation step is included.
Syntax
This command provides a way to manually purge database activity monitoring data; use it only when absolutely necessary.
It is strongly advised to consult with Technical Support before running this command.
Syntax
Input parameters
purge_type options: agg, exceptions, full_details, msgs, constructs, access, policy_violations, parser_errors, flat_log
start_date: YYYY-mm-dd
end_date: YYYY-mm-dd
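A sketch of validating these inputs (the purge_type values and YYYY-mm-dd date format listed above; the validation logic itself is illustrative, not the CLI's internal check):

```python
from datetime import datetime

# purge_type options as documented for this command
VALID_TYPES = {"agg", "exceptions", "full_details", "msgs",
               "constructs", "access", "policy_violations",
               "parser_errors", "flat_log"}

def validate_purge(purge_type, start_date, end_date):
    """Return True if the type is known and the dates form a valid range."""
    if purge_type not in VALID_TYPES:
        return False
    try:
        start = datetime.strptime(start_date, "%Y-%m-%d")
        end = datetime.strptime(end_date, "%Y-%m-%d")
    except ValueError:
        return False          # malformed date
    return start <= end

print(validate_purge("exceptions", "2016-01-01", "2016-02-01"))  # True
```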
Guardium archives/backups stored within Centera have a deletion date marker attached to them by Guardium; however, there is no subsequent facility to invoke the
deletion. Centera does not have a GUI for maintaining its own files; it relies on API invocations from client applications.
Use the CLI command, support clean centera_files, to delete marked files within Centera.
 USAGE: support clean hosts <IP address> <fully qualified domain name>
Use this CLI command to delete generated Java™ servlets and their classes.
This utility is designed to provide Guardium Advanced Support with the ability to assist with remote diagnostics and support when direct remote access is not
available or permitted.
Support Execute is not a replacement for direct remote connections, but it allows Guardium Support at least some level of root access in a secure way without
direct access.
The commands provided by Guardium Advanced Support can be SQL statements, O/S commands, shell scripts or SQL scripts. These are provided to the
customer along with a Secure Key that allows the command to run via CLI. The Secure Key is tied to the system that Guardium Support is working on with the customer,
and is not valid for any other system. The command can be run only the number of times permitted by Guardium Support and is valid for only seven days from the
agreed date.
The feature is disabled by default. Enable via CLI command in both normal and recovery mode:
In order to permit the Guardium Advanced Support team to generate a Secure Key, the MAC address of the system in question must be provided for eth0. Here is an
example of the interfaces and MAC addresses:
# Show eth0 MAC address, root passkey & other system information
Syntax
Parameters
8-digit key number used to generate the new password. Keep this key number to provide to Technical Support to receive the new accessmgr account password. The
selection Random will generate an 8-digit random number.
Note: System will attempt to send notification to the accessmgr account email, if it is setup.
This command will reset root password on the IBM® Guardium® appliance.
Syntax
Parameters
8-digit key number used to generate the new password. Keep this key number to provide to Technical Support. The choice Random will generate an 8-digit random
number.
This command also requires that the user provide a secret keyword in order to change the root password. Contact Technical Support if there is a need to change the
root password.
Note: Do not reset root password unless absolutely required by business rules.
This command will list all the db processes sorted by running time.
Syntax
Parameters:
Where
This command will display all the structure differences found during the aggregation process.
Syntax
This command will list the 20 biggest database tables sorted by size, and a list of tables sorted by percentage of free table space for those tables that use more than
80% free space. It allows filtering by table name. All table sizes are displayed in MB, and free space usage in percent.
Syntax
Parameters
will list the biggest tables matching the criteria, where the filter can be any portion of the table name
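The listing and filtering behavior can be pictured as follows (table names and sizes are invented for illustration; this is a sketch, not the command's internals):

```python
def biggest_tables(tables, name_filter="", top=20):
    """tables: {name: size_mb}. Keep names containing the filter substring,
    sorted largest first, capped at `top` entries."""
    matching = [(n, s) for n, s in tables.items() if name_filter in n]
    return sorted(matching, key=lambda t: t[1], reverse=True)[:top]

# Hypothetical internal-table sizes in MB
sizes = {"GDM_ACCESS": 5120, "GDM_CONSTRUCT": 20480, "AUDIT_RESULTS": 1024}
print(biggest_tables(sizes, name_filter="GDM"))
```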
Syntax
This command uses a script to collect hardware information and place this collected information in a directory for retrieval.
After running this CLI command, the following message will appear:
Then run the CLI command, fileserver, to retrieve this .tar file from the server.
Syntax
Parameters
[diff | list] - parameter controlling normal iptables output presentation versus displaying only the differences/delta
[accept | full] - parameter that filters the output to accept rows versus an unfiltered list
This command will list all files larger than a given size (in MB) and older than a given number of days in the /var, /tmp and /root folders.
Usage
Input parameters:
Syntax:
Parameters
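The selection rule (files larger than a size in MB and older than a number of days) can be sketched as follows (paths and thresholds are generic; this illustrates the rule, not the command's internals):

```python
import os
import time

def large_old_files(roots, min_mb, min_days):
    """Yield paths under the given roots that exceed min_mb megabytes
    and were last modified more than min_days days ago."""
    cutoff = time.time() - min_days * 86400
    for root in roots:
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue  # file vanished or unreadable
                if st.st_size > min_mb * 1024 * 1024 and st.st_mtime < cutoff:
                    yield path
```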
This command will display the output of the system netstat command. It allows filtering of the output by content using the grep parameter.
Syntax
Parameters
This command is similar to using telnet to detect an open TCP port locally or on a remote host.
If the connection succeeds, you will see a message like: Connection to 127.0.0.1 8443 port [tcp/*] succeeded!
If the connection fails, you will see a message like: connect to 127.0.0.1 port 1 (tcp) failed: Connection refused
This command will display the output of the system top command sorted by CPU, memory or running time. It has a configurable number of iterations (default 1) and
number of displayed rows (default 10).
Syntax
Parameters
Without any parameter, this command checks all tables in the TURBINE database with a 3-minute timeout for each check. Checks run in parallel, so the overall time will
vary. The command shows progress in percent. If any check runs for more than 3 minutes, it is terminated. All tables whose checks were terminated by timeout
are listed on the screen after the command completes. Any errors that occur during the command's operation are reported to the log file
/var/log/guard/<dbname>_check_tables/errors.<date>.log, where <date> is the current date and <dbname> is the name of the database.
Errors found for each table check operation are reported in /var/log/guard/<dbname>_check_tables/check_table_child.<tablename>.<date>.log files, where
<date> is the current date, <dbname> is the name of the database and <tablename> is the name of the table checked. Files for healthy tables are not created.
With dbname specified as the first parameter, the command checks all tables in the specified DB with the same timeout (3 minutes). With no parameters specified, it
checks all of TURBINE's tables.
With dbname and tablename specified as the parameters, the command checks the specified table in the specified DB without a timeout, until the check operation is
complete. This allows manually checking tables whose checks did not finish in 3 minutes. You can use masks in the tablename parameter using the percent sign (%).
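The percent-sign mask behaves like a SQL LIKE wildcard; a sketch of the matching (illustrative only, not the internal implementation):

```python
import re

def matches_mask(table_name, mask):
    """SQL-style mask: '%' matches any run of characters."""
    pattern = "^" + ".*".join(re.escape(part) for part in mask.split("%")) + "$"
    return re.match(pattern, table_name) is not None

# Hypothetical table names
print(matches_mask("GDM_ACCESS", "GDM%"))       # True
print(matches_mask("AUDIT_RESULTS", "GDM%"))    # False
```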
stops mysql
starts mysql
Use this CLI command to troubleshoot MySQL issues: to check what is happening at runtime with MySQL tables, and to determine whether long check times with MySQL
tables are due to record locks or table locks.
Main thread process no. 7959, id 139923805550336, state: sleeping Number of rows inserted 6894, updated 6934, deleted 93, read 24787 0.33 inserts/s, 0.00
updates/s, 0.00 deletes/s, 0.67 reads/s
----------------------------
Use this CLI command to analyze content of static tables by sorting them based on the largest group per value length and value occurrence.
There are some simple must_gather commands that can be run by user CLI that generate specific information about the state of any Guardium system. This
information can be uploaded from the appliance and sent to Guardium Technical Support whenever a PMR (Problem Management Record) is logged.
Once the correct patch is installed, the must_gather commands can be run at any time by user CLI as follows.
3. Depending on the type of issue you are facing, paste the relevant must_gather commands into the CLI prompt. More than one must_gather command may
be needed in order to diagnose the problem.
For the following commands, you will be prompted for a time in minutes for how long you want the debugger running while you reproduce the problem.
Output is written to the must_gather directory with filename(s) along the lines of this example, must_gather/system_logs/.tgz
By using fileserver, you can upload the tgz files and send them to Support.
Send via email or upload to ECUREP using - for example - the standard data upload specifying the PMR number and file to upload.
Turns on Guardium for z/OS traffic diagnostics. This includes collection of TCPDUMP and SLON; collection stops once the corresponding files reach 2 GB in size. Once
completed, the results files tcpdump.tar.gz and slon_all.tar.gz can be found via the fileserver command. The /var partition must have at least 15GB of free space.
Turns off Guardium for z/OS traffic diagnostics. Results files tcpdump.tar.gz and slon_all.tar.gz can be downloaded using the CLI command, fileserver.
Turns on the SLON utility, which captures packets received by the sniffer, for debugging. Results files slon_packets.tar.gz, slon_messages.tar.gz or slon_all.tar.gz can be found via
fileserver. The /var partition must have at least 15GB of free space.
Turns off SLON utility. Results files slon_packets.tar.gz, slon_messages.tar.gz or slon_all.tar.gz can be found via fileserver.
packets, stop dumping packets, logging secure parameters, S-GATE debug info and sniffer SQL activities (default)
support store tcpdump on <type> <period> <loglimit> [interface] [IP] [port] [protocol]
Turns on TCPDUMP utility. After period ends, results file tcpdump.tar.gz can be found via fileserver. The /var partition must have at least 15GB of free space.
Where:
<type> - dump type, 'headers' (only headers captured) or 'raw' (whole packets captured)
<period> - dump period, NUMBER[SUFFIX], where optional SUFFIX may be 's' for seconds, 'm' for minutes (default)
[IP] - IP address
[port] - port
'icmp6'
Example
This command will run TCPDUMP, saving packet headers for 10 minutes with a 1GB log file size limit.
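The NUMBER[SUFFIX] period form described above ('s' for seconds, 'm' for minutes, minutes by default) can be parsed as in this sketch (an illustration, not the CLI's parser):

```python
def period_seconds(period):
    """Parse NUMBER[SUFFIX]: 's' = seconds, 'm' = minutes (default)."""
    if period.endswith("s"):
        return int(period[:-1])
    if period.endswith("m"):
        return int(period[:-1]) * 60
    return int(period) * 60  # bare number defaults to minutes

print(period_seconds("10"), period_seconds("90s"), period_seconds("2m"))  # 600 90 120
```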
Turns off TCPDUMP utility. After stop, results file tcpdump.tar.gz can be found via fileserver.
Collects necessary diagnostic information for Outliers, Quick search and Datamart functionality. Information includes dumps of corresponding internal tables,
necessary logs, state of corresponding processes and standard must_gather diagnostics (general system and internal DB info).
support must_gather network_issues [--host=<HOST>], where optional parameter <HOST> is hostname or IP address.
The command gathers all network information from the appliance and polls hosts that Guardium interacts with by using ping, traceroute, corresponding port
probing and other measures. If the optional parameter is specified, then it polls only the host that was specified (if Guardium is configured to do any activity on this
host).
store antlr3_max
Use this CLI command to help control the data flow between the Parser and the Logger. The CLI command, store antlr3_max, is an advanced parameter geared toward expert
users and Customer Support to help control the data flow between the Parser and Logger components of the Sniffer for Oracle, DB2, MySQL, and MS SQL.
This value (default 20,000) will change the number of concurrent parsed SQL statements that the Logger is able to hold in queue.
The issues that this could potentially help remedy are Sniffer running out of memory and restarting, or Sniffer not utilizing enough memory.
If you notice the sniffer is running out of memory and restarting, lowering the context cap may help to alleviate this. Alternatively, if the Sniffer isn't using enough of
the available system memory, raising the context cap can allow it to use more.
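The trade-off above can be pictured with a toy bounded queue; this is a rough model of the described behavior, not Guardium's actual Sniffer internals:

```python
from collections import deque
from typing import Optional

class ParsedStatementQueue:
    """Toy model of the Parser-to-Logger queue capped by antlr3_max.

    A lower cap bounds memory use (the parser is back-pressured sooner);
    a higher cap lets the component use more of the available memory.
    """
    def __init__(self, antlr3_max: int = 20000):
        self.antlr3_max = antlr3_max
        self._queue = deque()

    def offer(self, statement: str) -> bool:
        # Parser side: refuse once the cap is reached (back-pressure).
        if len(self._queue) >= self.antlr3_max:
            return False
        self._queue.append(statement)
        return True

    def drain(self) -> Optional[str]:
        # Logger side: consume one parsed statement, if any.
        return self._queue.popleft() if self._queue else None

q = ParsedStatementQueue(antlr3_max=2)
print(q.offer("SELECT 1"), q.offer("SELECT 2"), q.offer("SELECT 3"))
# the third offer is refused because the cap (2) is reached
```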
store active_parser_engine
This CLI command controls which parser engine the Sniffer uses. It is only applicable to database types supported by ANTLR3 parsers (Oracle, DB2, MS SQL, MySQL).
USAGE: store active_parser_engine <num>
where <num> is
1: ANTLR3 parser errors reparsed by ANTLR2 (default)
start ecosystem
Use this command to restart the entire set of ecosystem processes. This is necessary after patching, upgrades and some other operations.
Syntax
start ecosystem
stop ecosystem
Use this command to temporarily and gracefully stop the entire set of ecosystem processes. This is necessary for patching, upgrades and some other operations.
Syntax
stop ecosystem
Sets the minimum charge percent (0-100) before powering down, or the number of seconds to run on battery power before powering down. The defaults are 25 and zero,
respectively.
There are also commands to start and stop the apc process. The apc process is disabled by default.
Syntax
Show Command
Example:
To create a banner (warning about unauthorized access, etc. or a welcome message) at the CLI login, use the CLI command, store system banner [message | clear].
Syntax
store system banner clear - use this CLI command to remove an existing banner message.
store system banner message - use this CLI command to create a banner message. Enter the banner message and then press CTRL-D.
Show command
show system banner - use this CLI command to view an existing banner message.
Syntax
Show Command
Example
IBM® Guardium® also logs the local timezone in the standard audit trail, to address cases where data is used in (or aggregated with) data collected in other time zones.
Note: The timezone setting is not updated automatically when Daylight Saving Time occurs. To update the machine, reset the timezone: set a new timezone, different from the current one, and then set it back to the correct timezone. Simply resetting the timezone to the same value will not work, and gives the message No change for the timezone.
Syntax
Show Command
Example
Use the command first with the list option to display all time zones. Then enter the command a second time with the appropriate zone.
Timezone:                 Description:
---------                 -----------
Africa/Abidjan:
Africa/Accra:
Africa/Addis_Ababa:
...
...output deleted
...
Syntax
Show command
Use this CLI command to set the appropriate CPU scaling policy for your needs:
Syntax
Show command
Syntax
Show command
Syntax
Show Command
Syntax
Show Command
The CLI command store system issue message receives input from the console until Ctrl-D and writes it to /etc/motd after removing any $, \, \ followed by a single letter, and ` characters from the input. This is a way to enter messages that make the system compliant with customers' security policies.
The CLI command, store system issue clear, will restore /etc/motd to the default version.
The version comes from /etc/guardium-release. For example, SG70 -> 7.0, SG80 -> 8.0. If SG is not found in /etc/guardium-release, the default version is an empty string.
Syntax
Show command
Syntax
Example
Sets the host name of up to three NTP (Network Time Protocol) servers. Note that to enable the use of an NTP server, you must use the store system ntp state on command. To define a single NTP server, enter its host name or IP address. To define multiple NTP servers, enter the command with no arguments, and you will be prompted to supply the NTP server host names.
Syntax
Show Command
Delete command
delete ntp-server
Syntax
Show Command
The last option (sys) is for use when installing a second or subsequent patch from a compressed file that has been copied to the IBM Guardium appliance using this
command previously.
To display a complete list of applied patches, see the Installed Patches report on either Manage > Reports > Install Management > Installed Patches, Manage >
Maintenance > General > Installed Patches, or Reports > Guardium Operational Reports > Installed Patches.
In the store system patch install CLI command, you can choose multiple patches from the list.
Syntax
<date> and <time> are the patch installation request time, date is formatted as YYYY-mm-dd, and time is formatted as hh:mm:ss
If no date and time is entered or if NOW is entered, the installation request time is NOW.
Parameters
Regardless of the option selected, you will be prompted to select a patch to apply:
cd - To install a patch from a CD, insert the CD into the IBM Guardium CD ROM drive before executing this command. A list of patches contained on the CD will be
displayed.
ftp or scp - To install a patch from a compressed patch file located somewhere on the network, use the ftp or scp option, and respond to the prompts shown. Be sure to supply the full path name for the patch, including the filename:
User on hostname:
Password:
In the store system patch install scp CLI command, you can use the wildcard * for the patch file name.
The compressed patch file will be copied to the IBM Guardium appliance, and a list of patches contained in the file will be displayed.
sys - Use this option to apply a second or subsequent patch from a patch file that has been copied to the IBM Guardium appliance by a previous store system patch
execution.
The store system patch install command does not delete the patch file from the IBM Guardium appliance after the install. There is no real need to remove the patch file, since the same patches can be reinstalled over existing patches and keeping patch files can aid in analyzing various problems; however, you may remove patch files by hand or use the CLI command diag. (Note: the CLI command diag is restricted to certain users and roles.)
To delete a patch install request, use the CLI command delete scheduled-patch
Syntax
Syntax
Show command
Syntax
Show command
Reports either:
Use store system scheduler restart_interval [5 to 1440 or -1] to restart the timing function after 5 to 1440 minutes. The default is -1, which means the timing restart mechanism is not installed.
Use store system scheduler wait_for_shutdown [ON | OFF] to restart the scheduler after all jobs currently running finish. The parameters are ON or OFF.
Syntax
Show command
Make sure the collector's shared secret and the aggregator's shared secret are exactly the same; otherwise the SCP transfer from the collector to the aggregator will fail. (This is a requirement for managed units and aggregators, collectors and aggregators, and the export setup screen.) The shared secret can be set both from the CLI and from the System pane in the Admin Console tab.
Syntax
Syntax
facility is one of: daemon ftp local0 local1 local2 local3 local4 local5 local6 local7 lpr user
Show command
The new configuration will be effective once the CLI command, restart inspection-core, is executed.
Syntax
Show command
The new configuration will be effective once the CLI command, restart inspection-core, is executed.
Syntax
Show command
Syntax
Show Command
Syntax
Show Command
Show Command
The use of the guardcli1 ... guardcli5 accounts requires the setting of a local password. Use the CLI command set guiuser to reset the guardcli1 ... guardcli5 accounts and then add a local password, as shown in the Syntax.
Certain CLI commands are dependent on the role of the guiuser. For example, the role of the guiuser (marked when creating a new user from accessmgr view) must be
accessmgr in order to access grdapi create_user, grdapi set_user_roles, and grdapi update_user
Syntax
Example
$ ssh guardcli1@a1.corp.com
guardcli1@a1.corp.com's password:
================================================================
================================================================
ok
a1.corp.com>
create_user
Examples
userName=john disabled=0
ID=20000
roles="dba,diag,cas,user"
ID=20000
Failed to add role (diag). Diag must have one of these roles: cli or admin.
roles="dba,diag,cas,user,cli"
ID=20000
email="john.smith@gmail.com"
ID=20000
ID=0
Username: accessmgr
Email:
Disabled: false
Username: admin
Email:
Disabled: false
Username: anon
Email:
Disabled: false
Username: john
Email: john.smith@gmail.com
Disabled: false
Username: bill
Email:
Disabled: true
set_user_roles
set_user_roles
Each time that you execute set_user_roles, you reset the roles of a user; you do not append to the existing roles.
When you create a user using GuardAPI, the user is created with the user role. When you set roles, you must specify all of the user's roles; this is done to enable deletion of existing roles and addition of new roles.
The GUI likewise displays all roles, which you can check or uncheck; when you save, it saves everything that you checked.
In the following example, GuardAPI gives user kevin only the role inv, but any user must have one of these roles: user, cli, admin, or accessmgr.
Example
ok
ID=20000
ok
set_user_roles:
ERR=3700
User must have one of these roles: user, cli, admin, or accessmgr.
ok
roles="user,inv"
ID=20000
Failed to add role (inv). Sorry, before assigning the inv role the user's Last Name must be set to the name of one of the three investigation databases -
ok
roles="dba,diag,cas,user"
ID=20000
Failed to add role (diag). Diag must have one of these roles: cli or admin.
ok
>
show guiuser
This displays the user (by role) of GUI.
Show command
show guiuser
store password disable - Set the number of days after which an inactive account will be disabled.
store password expiration - Set the number of days after which a password will expire.
store password validation - Enable or disable the hardened password validation rules.
After a Guardium user account has been disabled, it can be enabled from the Guardium portal, and only by users with the accessmgr role, or the admin user.
Example
Note:
If the admin user account is locked, use the unlock admin command to unlock it.
If account lockout is enabled, setting the strike count or strike max to zero does NOT disable that type of check. On the contrary, it means that after just one failure the
user account will be disabled!
Syntax
Show Command
Syntax
Show Command
Syntax
Show Command
Syntax
Show Command
Syntax
Show Command
Syntax
Show Command
When password validation is enabled, the password must be eight or more characters in length, and must include at least one uppercase alphabetic character (A-Z), one
lowercase alphabetic character (a-z), one digit (0-9), and one special character from the table. When disabled (not recommended), any length or combination of
characters is allowed.
Syntax
Show Command
@ Commercial at sign
# Number sign
$ Dollar sign
% Percent sign
& Ampersand
; Semicolon
! Exclamation mark
- Hyphen (minus)
+ Plus sign
= Equals sign
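The validation rules above can be expressed as a short check; this is an illustrative sketch (the helper name is not part of Guardium), with the special-character set taken from the table above:

```python
import re

SPECIAL_CHARS = set("@#$%&;!-+=")  # the special characters from the table above

def is_valid_guardium_password(password: str) -> bool:
    """Check a password against the hardened validation rules described
    above: eight or more characters, at least one uppercase letter, one
    lowercase letter, one digit, and one special character from the table."""
    return (
        len(password) >= 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and any(c in SPECIAL_CHARS for c in password)
    )

print(is_valid_guardium_password("Guard1um!"))  # True
print(is_valid_guardium_password("guardium1"))  # False: no uppercase, no special character
```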
Syntax
You will be prompted to enter the current password, and then the new password (twice). None of the password values you enter on the keyboard will display on the
screen.
The cli user password requirements differ from the requirements for user passwords. The cli user password must be at least six characters in length, and must contain at
least one each of the following types of characters:
Digits (0-9)
Lowercase alphabetic characters (a-z)
Uppercase alphabetic characters (A-Z)
Running this CLI command will also update the change-time record in the password expiration file.
unlock accessmgr
Use this command to enable the Guardium accessmgr user account after it has been disabled. This command does not reset the accessmgr user account password.
Note: Only users with admin role are allowed to run this CLI command.
Syntax
unlock accessmgr
restart gui
unlock admin
Use this command to enable the Guardium admin user account after it has been disabled. This command does not reset the admin user account password.
Syntax
unlock admin
restart gui
Authentication commands
The following commands display or control the type of authentication used.
store auth
Use this command to reset the type of authentication used for login to the Guardium appliance, to SQL_GUARD (i.e. Local Guardium authentication, the default).
Optional authentication methods (LDAP or Radius, for example) can be configured and enabled from the administrator portal, but not from the CLI. See Configure
Authentication for more information.
Syntax
Show Command
show auth
GuardAPI Reference
GuardAPI provides access to Guardium® functionality from the command line.
This allows for the automation of repetitive tasks, which is especially valuable in larger implementations. Calling these GuardAPI functions enables a user to quickly
perform operations such as create datasources, maintain user hierarchies, or maintain the Guardium features such as S-TAP® just to name a few.
Proper login to the CLI for the purpose of using GuardAPI requires the login with one of the five CLI accounts (guardcli1,...,guardcli5) and an additional login (issuing the
'set guiuser' command) with a user (GUI username/guiuser) that has been created by access manager and given either the admin or cli role. See Set guiuser
Authentication for more information.
GuardAPI is a set of CLI commands, all of which begin with the keyword grdapi.
To list all GuardAPI commands available, enter the grdapi command with no arguments or use the 'grdapi commands' command with no search argument. For
example:
CLI> grdapi
or
CLI> grdapi commands
To display the parameters for a particular command, enter the command followed by '--help=true'. For example:
To search for GuardAPI commands given a search string use the CLI command, grdapi commands <search-string>. For example:
To display a values list for a parameter, enter the command followed by '--get_param_values=<parameter>;'. For example:
Case Sensitivity
Both the keyword and value components of parameters are case sensitive.
For example:
If, for example, you wanted to clear out a group from a policy rule, you would instead set that group to a space (" ") and not an empty string (""). Using an empty string ("") signals GuardAPI to ignore that group and not change that group selection.
Return Codes
Regardless of the outcome of the GuardAPI command, a return code is always returned in the first line of output, in the following format:
ID=identifier - Successful. The identifier is the ID of the object operated upon; for example, the ID of a group that has just been defined.
ERR=error_code - Error. The error_code identifies the error, and one or more additional lines provide a text description of the error.
There is a table of common errors in the Overview and a complete listing of error codes in GuardAPI Error Codes.
For example, if we use the create_group command to successfully define an objects group named agroup, the ID of that group is returned:
We could use that ID in the list_group_by_id command to display the group definition
For an unsuccessful execution, an error code is returned. For example, if we enter the list_group_by_id command again with an invalid ID, we receive the following
message:
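A script that drives GuardAPI can dispatch on that first line; parse_grdapi_result is a hypothetical helper, shown only to illustrate the convention:

```python
def parse_grdapi_result(first_line: str):
    """Parse the first line of GuardAPI output into ('ID'|'ERR', value).

    Per the convention above, success returns ID=<identifier> and
    failure returns ERR=<error_code>.
    """
    key, sep, value = first_line.strip().partition("=")
    if sep != "=" or key not in ("ID", "ERR"):
        raise ValueError(f"unexpected GuardAPI result line: {first_line!r}")
    return key, int(value)

print(parse_grdapi_result("ID=20000"))   # ('ID', 20000)
print(parse_grdapi_result("ERR=3700"))   # ('ERR', 3700)
```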
To see a complete list of GuardAPI error codes, type grdapi-errors, at the CLI command prompt.
2 Could not retrieve requested function - check function name. To list all functions, type either the CLI command, grdapi, or grdapi commands, with no
arguments.
To search, by function name, given a search string, use the CLI command, grdapi commands <search-string>
3 Too many arguments. To get the list of parameters for this function call the function --help=true
4 Missing required parameter. To get the list of parameters for this function call the function with --help=true
5 Could not decrypt parameter, check if encrypted with the correct shared secret.
6 Wrong parameter format, specify a function name followed by a list of parameters using <name=value> format.
21 --username and --source-host are grdapi reserved words and cannot be passed on the command line.
22 A parameter name cannot be specified more than once, please check the command line for duplicate parameters.
25 Not a valid parameter format - parameters should be specified as <name=value>, spaces are not allowed.
All grdapi activity will be attributed to the cli user. Double-click on the cli row in that report, and select the Detailed Guardium User Activity drill-down report. Every
command entered will be listed, along with any and all changes made. In addition, the IP address from which the command was issued is listed.
Encrypted Parameter
GuardAPI is intended to be invoked by scripts, which may contain sensitive information, such as passwords for datasources. To ensure that sensitive information is kept
encrypted at all times, the grdapi command supports passing of one encrypted parameter to an API Function. This encryption is done using the System Shared Secret
which is set by the administrator and can be shared by many systems, and between all units of a central management and/or aggregation cluster; enabling scripts with
encrypted parameters to run on machines that have the same shared secret.
Note: Trying to run an API call with encrypted parameter on a system where shared secret was not set results in an error message of
For Guard API scripts generated through the GUI, if encryption is required it is done using the shared secret of the system where script generation is performed.
The optional parameter encryptedParam is available on every grdapi call. This parameter can be used to pass an encrypted value for another parameter.
The encrypt_value API accepts a value to encrypt and the target system's shared secret (key) and then prints out the encrypted value. If the key is not the system's
shared secret it will print out a warning.
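The round-trip property described above (the same shared secret must be configured on the encrypting and executing systems) can be illustrated with a toy construction; this XOR keystream scheme is purely illustrative and is not Guardium's actual encryption algorithm:

```python
import hashlib

def _keystream(secret: str, length: int) -> bytes:
    # Derive a deterministic keystream from the shared secret (toy construction).
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(f"{secret}:{counter}".encode()).digest()
        counter += 1
    return stream[:length]

def toy_encrypt(value: str, shared_secret: str) -> bytes:
    data = value.encode()
    return bytes(a ^ b for a, b in zip(data, _keystream(shared_secret, len(data))))

def toy_decrypt(blob: bytes, shared_secret: str) -> str:
    return bytes(a ^ b for a, b in zip(blob, _keystream(shared_secret, len(blob)))).decode()

secret = "system-shared-secret"
blob = toy_encrypt("db2inst1-password", secret)
print(toy_decrypt(blob, secret))  # round-trips only with the same secret
```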
api_target_host In a central management configuration only, allows the user to specify a target host where the API will execute. On a Central
Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
2. Copy the generated content and embed within your cli script.
There are some entities (like FULL SQL) that have large numbers of attributes in them.
By default, all attributes will show up for all users (admin and non-admin).
Two GuardAPI commands have been added to display or hide certain attributes for certain users.
These GuardAPI commands disable or enable ONLY specific groups of attributes in Full SQL: VSAM, IMS, MapReduce, APEX, Hive, and BigInsights.
The valid values for this parameter are: VSAM, IMS, MapReduce, APEX, Hive, BI (BigInsights), IMS/VSAM, DB2 i, F5 (not case sensitive).
Each GuardAPI command enables (or disables) all the corresponding attributes for the group; for example, VSAM enables (or disables) the following attributes:
VSAM records
VSAM records deleted
VSAM records inserted
VSAM records retrieved
VSAM records updated
VSAM User Group ID
Hive command
Hive database
Hive error
Hive parsed SQL
Hive table name
Hive user
Note: The attributes will still be displayed if the user has the admin role; enabling or disabling these attributes applies ONLY to non-admin users (with no admin role).
Note: The GUI does not have to be restarted for the change to take effect, with this exception: if a report with the attributes of group F5 has been created and added to My New Reports, then even though the attributes have been enabled, the non-admin user does not have the privilege to view the report. The GUI needs to be restarted to see the report fields.
newExpDate (string): Required. The new expiration date for the day restored.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
grdapi list_expiration_dates_for_restored_days
newExpDate (string): Required. The new expiration date for the day restored.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
where restoredDay can be of the format of a real day yyyy-mm-dd hh:mi:ss or relative day such as NOW -10 day.
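The two accepted date forms can be sketched as follows; parse_restored_day is an illustrative helper, and the relative-day grammar is assumed to be NOW plus or minus N days as in the examples above:

```python
import re
from datetime import datetime, timedelta
from typing import Optional

def parse_restored_day(value: str, now: Optional[datetime] = None) -> datetime:
    """Parse a day given either as an absolute timestamp
    ('yyyy-mm-dd hh:mi:ss') or as a relative day such as 'NOW -10 day'."""
    now = now or datetime.now()
    m = re.fullmatch(r"NOW\s*([+-]\d+)\s*day", value.strip(), re.IGNORECASE)
    if m:
        return now + timedelta(days=int(m.group(1)))
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")

base = datetime(2016, 5, 1, 12, 0, 0)
print(parse_restored_day("NOW -10 day", now=base))   # 2016-04-21 12:00:00
print(parse_restored_day("2016-05-01 00:00:00"))     # 2016-05-01 00:00:00
```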
set_expiration_date_for_restored_day
Set the expiration date for a given restored day.
newExpDate (string): Required. The new expiration date for the day restored.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
where newExpDate and restoredDay can be in the format of a real day (yyyy-mm-dd hh:mi:ss) or a relative day such as NOW -10 day.
set_import
Start or stop import of Aggregation data.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
configure_export
Configure the export of Aggregation data.
aggSecHost (string)
exportValues (integer): Required. 0 or 1
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
configure_archive
Configure the archive of Aggregation data.
archiveValues (integer): Required. 0 or 1
bucketName (string)
passwd (string): Password
secretKey (string)
targetDir (string)
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
create_assessment
Use this GuardAPI command to add a security assessment.
Table 1. create_assessment
assessmentDescription (string): Required. Free text, unique; must ensure there is no previous assessment with the same description. If there is one, then ERROR.
fromDate: Valid date or relative date. Not mandatory. Default: NOW -1 DAY
Action: If all parameters are validated, create a new record in the SECURITY_ASSESSMENT table (leave MODIFIED_FLAG at its default, 0).
Example
add_assessment_datasource
Use this GuardAPI command to add a datasource to a security assessment.
Table 2. add_assessment_datasource
assessmentDescription (string): Required. Free text, unique; must ensure there is no previous assessment with the same description. If there is one, then ERROR.
datasourceName (string): Required. Free text; must be the name of an existing datasource. If no such datasource is present, then ERROR.
Action: If all parameters are validated, add a record to ASSESSMENT_DATASOURCE using the ASSESSMENT ID and DATASOURCE ID for the assessment and datasource with the names provided.
Example
add_assessment_test
Use this GuardAPI command to add a test to an existing security assessment.
assessmentDescription (string): Required. Free text, unique; must ensure there is no previous assessment with the same description. If there is one, then ERROR.
testDescription (string): Required. Free text; must match the TEST_DESC of an existing test in AVAILABLE_TEST. If no such test is present, then ERROR.
severity (string): Validated against the SEVERITY_DESC table (using DESCRIPTION). Not mandatory. The default value is INFO.
thresholdValue: If the threshold value required in AVAILABLE_TEST = 0, then IGNORE this parameter. Else (THRESHOLD value required in AVAILABLE_TEST = 1), the parameter must be an integer.
If 0 (exceptions group not supported for this test): if the parameter is provided, then ERROR (cannot provide an exception group for this test); if the parameter is NOT provided, then use -1 to populate.
Else (exception group supported for the test): if the parameter is NOT provided, then use -1 to populate; if the parameter is provided, validate the group and use the group ID.
To validate the group, select from GROUP_DESC where GROUP_DESCRIPTION = the description provided, and check whether the record exists and the GROUP_TYPE_ID.
If there is no such group, then ERROR: the exception group does not exist.
If there is such a group and the GROUP_TYPE_ID != 55, then ERROR: the exception group must be of the type "VA Exceptions".
If the group is present and the type = 55, then use the GROUP_ID.
Additional validation: Check whether there is already a record in ASSESSMENT_TEST for the ASSESSMENT_ID and TEST_ID. If there is such a record: ERROR, this test is already present in the assessment and cannot be added again.
Action: If all parameters are validated, add a record to ASSESSMENT_TEST (note: SEVERITY must be populated with the DESCRIPTION).
Example
delete_assessment
Use this GuardAPI command to delete a security assessment.
assessmentDescription (string): Required. Free text, unique; must ensure there is no previous assessment with the same description. If there is one, then ERROR.
Additional validation: Must ensure there are no results for the assessment to be deleted by:
Example
delete_assessment_datasource
Use this GuardAPI command to delete a datasource from a security assessment.
assessmentDescription (string): Required. Free text, unique; must ensure there is no previous assessment with the same description. If there is one, then ERROR.
datasourceName (string): Required. Free text; must be the name of an existing datasource. If no such datasource is present, then ERROR.
Action: If all parameters are validated, check whether there is a record in ASSESSMENT_DATASOURCE for the assessment and datasource provided. If there is no such record, ERROR; otherwise delete the record.
Example
delete_assessment_test
Use this GuardAPI command to delete a test from an existing security assessment
assessmentDescription (string): Required. Free text, unique; must ensure there is no previous assessment with the same description. If there is one, then ERROR.
testDescription (string): Free text; must match the TEST_DESC of an existing test in AVAILABLE_TEST. If no such test is present, then ERROR.
Additional validation: Check whether there is a record in ASSESSMENT_TEST for the ASSESSMENT_ID and TEST_ID. If there is no such record: ERROR, this test is not present in the assessment.
Action: If all parameters are validated, delete the record from ASSESSMENT_TEST.
Example
list_assessments
Use this GuardAPI command to list the security assessments.
assessmentDescription (string): Required. Free text, unique; must ensure there is no previous assessment with the same description. If there is one, then ERROR.
Example
grdapi list_assessments
list_assessment_tests
Use this GuardAPI command to show the list of tests for the security assessment.
The output of list_available_tests is in the following format: TEST=[<test description>], DS_TYPE=[<datasource type>] (The actual values are encapsulated within the
brackets)
The output of list_assessment_tests is in the following format: TEST_DESC=[<available test description>], DS_TYPE=[<datasourcetype>]
The parameters of list_assessment_tests API command are non-mandatory and support filtering.
Example
grdapi list_assessment_tests
update_assessment
Use this GuardAPI command to update the record of the security assessment.
newAssessmentDescription (string): Free text. If empty, the description is not updated (the value from the previous parameter is used); otherwise it must be unique, ensuring there is no previous assessment with the same description. If there is one, then ERROR.
Action: If all parameters are validated (and a SECURITY_ASSESSMENT record with the description provided was identified), update the record with the values provided.
Example
add_autodetect_task
This command adds a task to the specified process.
hosts_list (string): Required. List of hosts. Space-separated list of IPs or IP ranges and wildcards, such as 192.168.0.1 192.168.1.*
ports_list (string): Required. List of ports. Comma-separated list of ports or port ranges, such as 22,23,1400-1600
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
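The ports_list grammar described above can be expanded as in this sketch; parse_ports_list is an illustrative helper, not part of GuardAPI:

```python
def parse_ports_list(ports_list: str) -> list:
    """Expand a comma-separated list of ports and port ranges,
    e.g. '22,23,1400-1600', into individual port numbers."""
    ports = []
    for item in ports_list.split(","):
        item = item.strip()
        if "-" in item:
            lo, hi = (int(p) for p in item.split("-", 1))
            ports.extend(range(lo, hi + 1))
        else:
            ports.append(int(item))
    return ports

expanded = parse_ports_list("22,23,1400-1600")
print(len(expanded))   # 2 single ports + 201 ports in the range = 203
print(expanded[:4])    # [22, 23, 1400, 1401]
```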
create_autodetect_process
This command creates an autodetect process.
use_dns: Required. Parameter to nmap (see note below). Values are 'R' or 'true' for always, 'n' or 'false' for never.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Note: nmap options are accessible from the API only, not from the GUI. For details of nmap parameters and their impact on scan performance, see man nmap.
Example
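A minimal sketch (the value is hypothetical; any parameters that name the new process are not shown in the table above):
grdapi create_autodetect_process use_dns=false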
modify_autodetect_process
This command modifies an autodetect process.
use_dns: Required. Parameter passed to nmap (see note). Values: 'R' or 'true' for always; 'n' or 'false' for never.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Note: nmap options are accessible from the API only, not from the GUI. For details of nmap parameters and their impact on scan performance, see man nmap.
Example
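A minimal sketch (the value is hypothetical; the parameter identifying which process to modify is not shown in the table above):
grdapi modify_autodetect_process use_dns=true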
delete_autodetect_scans_for_process
This command removes all the tasks for a process. It cannot run if the process is running, is scheduled, or has results.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
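A sketch of the invocation; the table above lists only api_target_host, so the parameter that identifies the target process is assumed and not shown:
grdapi delete_autodetect_scans_for_process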
list_autodetect_processes
This command lists all processes.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
grdapi list_autodetect_processes
list_autodetect_tasks_for_process
This command lists all tasks of a specified process.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
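A sketch of the invocation; the parameter that names the process whose tasks are listed is assumed and does not appear in the table above:
grdapi list_autodetect_tasks_for_process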
execute_autodetect_process
This command runs the specified process. It cannot run if no tasks are defined for the process or if the process is already running.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
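A sketch of the invocation; the parameter that identifies which process to run is assumed and does not appear in the table above:
grdapi execute_autodetect_process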
show_autodetect_process_status
This command shows process status and progress summary.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
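A sketch of the invocation; the parameter that identifies the process to report on is assumed and does not appear in the table above:
grdapi show_autodetect_process_status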
stop_autodetect_process
This command stops the run of a specific process.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
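A sketch of the invocation; the parameter that identifies the running process to stop is assumed and does not appear in the table above:
grdapi stop_autodetect_process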
create_entry_location
Adds a new archive entry to the internal catalog location table.
path (string): Required. For FTP, specify the directory relative to the FTP account home directory; for SCP, specify the directory as an absolute path.
retention (integer): Optional. The number of days this entry is to be kept in the catalog (the default is 365).
storageSystem (string): Required. Must be one of the following: EMC CENTERA, FTP, SCP, TSM.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
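A representative invocation (the values are hypothetical; parameters such as hostName and fileName that identify the archived file itself are referenced elsewhere in this topic but not listed in the table above):
grdapi create_entry_location path="/archive/guardium" retention=180 storageSystem=SCP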
list_entry_location
Lists one archive location if a fileName is specified, or lists multiple archive locations when the fileName is omitted.
fileName (string): Optional. Identifies the single file location to be listed. If omitted, all file locations on the specified hostName and path are listed.
path (string): Required. For FTP, specify the directory relative to the FTP account home directory; for SCP, specify the directory as an absolute path.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
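A representative invocation that lists all file locations on a hypothetical path (supply fileName to list a single location):
grdapi list_entry_location path="/archive/guardium"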
delete_entry_location
Removes one archive location if a fileName is specified, or removes multiple archive locations when the fileName is omitted.
fileName (string): Optional. Identifies the single file location to be removed. If omitted, all file locations on the specified hostName and path are removed.
path (string): Required. For FTP, specify the directory relative to the FTP account home directory; for SCP, specify the directory as an absolute path.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
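A representative invocation (hypothetical path); because fileName is omitted here, all file locations on the specified path are removed:
grdapi delete_entry_location path="/archive/guardium"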
update_entry_location
Updates one archive location if a fileName is specified, or updates multiple archive locations when the fileName is omitted.
fileName (string): Optional. Identifies the single file location to be updated. If omitted, all file locations on the specified hostName and path are updated.
path (string): Required. For FTP, specify the directory relative to the FTP account home directory; for SCP, specify the directory as an absolute path.
retention (integer): Optional. The number of days this entry is to be kept in the catalog (the default is 365).
storageSystem (string): Optional. Use one of the following: EMC CENTERA, FTP, SCP, TSM.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
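A representative invocation that shortens the retention of all entries on a hypothetical path:
grdapi update_entry_location path="/archive/guardium" retention=90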
For instructions on how to use GuardAPI commands, see the GuardAPI Reference Overview help topic.
create_classifier_action
Action types:
    add_to_group_object_fields
    create_access_rule
    create_privacy_set
    log_policy_violation
    action_send_alert
description (string)
objectGroup (string): Required.
policyName (string): Required.
ruleName (string): Required.
replaceGroupContent (boolean)
objectFieldGroup (string): Required.
accessPolicy (string): Required.
accessRuleAction (string): Required.
commandsGroup (string)
includeField (boolean)
includeServerIP (boolean)
receiver (string)
privacySet (string): Required.
severity (string)
notificationType (string)
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Examples
%/%.Name %/NAME
%/Full %/FULL
Change/%.Name CHANGE/NAME
Change/Full CHANGE/FULL
Read/%.Name READ/NAME
Read/Full READ/FULL
Example
grdapi create_group appid=Classifier type=OBJECTS desc="Classifier Group of Each Objects" owner=admin category=classifier
classification=classifier subtype=classifier
grdapi create_classifier_policy policyName="A Group Object Each Type Policy" category="Object Each Process"
classification="Object Each Process"
grdapi create_classifier_rule policyName="A Group Object Each Type Policy" category="Object Each Process"
classification="Object Each Process" ruleName=groupobjects1 ruleType=SEARCH_FOR_DATA dataTypes=TEXT continueOnMatch=1
tableNameLike="EMP_INFORMATION"
columnNameLike="PHONE" tableTypeTable=1
create_classifier_policy
category (string): Required.
classification (string): Required.
description (string)
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
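A representative invocation, following the same pattern as the create_classifier_rule example earlier in this topic (the policy, category, and classification names are hypothetical):
grdapi create_classifier_policy policyName="Find Sensitive Objects" category="Sensitive" classification="Sensitive Data" description="Locates sensitive objects"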
create_classifier_process
Note: Create a classification policy and datasource before calling this GuardAPI.
comprehensive (boolean)
datasourceNames (string): Required.
policyName (string): Required.
processName (string): Required.
sampleSize (integer)
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
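A representative invocation using the required parameters (the process, policy, and datasource names are hypothetical; the policy and datasource must already exist, per the note above):
grdapi create_classifier_process processName="Scan HR datasource" policyName="Find Sensitive Objects" datasourceNames="hr_oracle"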
create_classifier_rule
policyName (string): Required.
ruleName (string): Required.
ruleType (string): Required. The rule type that is selected determines which additional parameters are required. Valid rule types:
    catalog_search_add
    search_by_permissions_add
    search_for_data_add
    search_for_unstructured_data_add
category (string)
classification (string)
continueOnMatch (boolean)
description (string)
columnNameLike (string)
fireOnlyWithMarker (string)
tableNameLike (string)
tableTypeSynonym (boolean)
tableTypeSystemTable (boolean)
tableTypeTable (boolean)
tableTypeView (boolean)
grantTypes (string)
role (string)
roleGroup (string)
user (string)
userGroup (string)
withAdminOption (boolean)
compareToValuesInGroup (string)
compareToValuesInSQL (string)
dataTypes (string)
evaluationName (string)
hitPercentage (integer)
maxLength (integer)
minLength (integer)
searchExpression (string)
searchLike (string)
showUniqueValues (true or false)
uniqueValueMask (string)
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Examples
grdapi create_group appid=Classifier type=OBJECTS desc="AA Classifier ALL Values" owner=admin category=classifier
classification=classifier subtype=classifier
grdapi create_classifier_policy policyName="Search ALL DATA SEARCH smoke values" category="ALL" classification="ALL"
grdapi create_classifier_rule policyName="Search ALL DATA SEARCH smoke values" category="ALL" classification=ALL
ruleName=ALL1 ruleType=SEARCH_FOR_DATA dataTypes=TEXT,NUMBER continueOnMatch=1 tableNameLike="DEPT14%" minLength=1 maxLength=100
tableTypeSynonym=1 tableTypeSystemTable=1 tableTypeTable=1 tableTypeView=1 fireOnlyWithMarker=ACCT searchLike="A%"
searchExpression="^AA*" columnNameLike="DNAME" evaluationName="com.guardium.classifier.custom.RichardEvaluation" hitPercentage=10
compareToValuesInGroup="AA Classifier ALL Values" compareToValuesInSQL="select DNAME from SCOTT.DEPT where DNAME like 'A%G'"
showUniqueValues="true" uniqueValueMask="^AA*"
delete_classifier_action
actionName (string): Required.
policyName (string): Required.
Example
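A representative invocation (both names are hypothetical and must match an existing policy and action):
grdapi delete_classifier_action policyName="Find Sensitive Objects" actionName="Add to sensitive group"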
delete_classifier_policy
policyName (string): Required.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
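A representative invocation (the policy name is hypothetical):
grdapi delete_classifier_policy policyName="Find Sensitive Objects"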
delete_classifier_process
processName (string)
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
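A representative invocation (the process name is hypothetical):
grdapi delete_classifier_process processName="Scan HR datasource"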
delete_classifier_rule
policyName (string): Required.
ruleName (string): Required.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
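A representative invocation (the names are hypothetical and must match an existing policy and rule):
grdapi delete_classifier_rule policyName="Find Sensitive Objects" ruleName="groupobjects1"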
execute_cls_process
Execute (submit) a classification process
Runs a classification process. It is the equivalent of executing Run Once Now from the Classification Process Builder. It submits the job, which places the process on the Guardium® Job Queue, from which the appliance runs a single job at a time. Administrators can view the job status by selecting Guardium Monitor > Guardium Job Queue.
Note: Create a classification process before calling this API.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
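A sketch of the invocation; the parameter that identifies the classification process to submit is assumed and does not appear in the table above:
grdapi execute_cls_process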
Here is a list of the classifier functions and the parameters for each. Where a parameter has a set list of valid entries, the list is supplied.
list_classifier_policies
policyName (string): Required.
ruleName (string): Required.
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
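A representative pair of invocations: the first lists all policies; the second (with a hypothetical policy name) lists the rules and actions for that policy:
grdapi list_classifier_policies
grdapi list_classifier_policies policyName="Find Sensitive Objects"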
Note: Executing this function with no arguments lists all policies. Passing a policy name lists all rules and actions for that policy. Passing a policy and a rule lists all of the actions for the rule.
list_classifier_process
processName (string)
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
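A representative pair of invocations: the first with no arguments, the second restricted to a hypothetical process name:
grdapi list_classifier_process
grdapi list_classifier_process processName="Scan HR datasource"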
set_classification_concurrency_limit
The set_classification_concurrency_limit command defines the number of classifier processes that can run concurrently.
Syntax: grdapi set_classification_concurrency_limit limit=[value].
limit (integer: 1-100, depending on the hardware): Defines the number of classifier processes that can run concurrently. The maximum limit value is the lesser of 100 and twice the number of CPU cores installed on the Guardium system. For example, if a system has 8 CPU cores, the maximum limit value is 16; if a system has 64 CPU cores, the maximum limit value is 100. The default limit value is 1.
Example:
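Following the syntax shown above, this sets the concurrency limit to 16 (valid on a system with at least 8 CPU cores):
grdapi set_classification_concurrency_limit limit=16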
update_classifier_action
actionName (string): Required.
actualMemberContent (string): Required.
description (string)
objectGroup (string): Required.
policyName (string): Required.
replaceGroupContent (boolean)
objectFieldGroup (string): Required.
accessPolicy (string): Required.
accessRuleAction (string): Required.
commandsGroup (string)
includeField (boolean)
includeServerIP (boolean)
receiver (string)
privacySet (string): Required.
severity (string)
notificationType (string)
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
update_classifier_policy
policyName (string): Required.
category (string): Required.
classification (string): Required.
description (string)
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
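A representative invocation (the names are hypothetical; policyName must match an existing policy):
grdapi update_classifier_policy policyName="Find Sensitive Objects" category="Sensitive" classification="Sensitive Data"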
update_classifier_process
comprehensive (boolean)
datasourceNames (string): Required.
Note: To view and edit the databases and schemas affected by the includeInternalTables parameter, use the Group Builder to edit one of the predefined Excluded Classification groups.
newName (string)
policyName (string): Required.
processName (string): Required.
sampleSize (integer)
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example:
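A representative invocation (the names and sample size are hypothetical; processName must match an existing process):
grdapi update_classifier_process processName="Scan HR datasource" policyName="Find Sensitive Objects" datasourceNames="hr_oracle" sampleSize=2000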
update_classifier_rule
policyName (string): Required.
ruleName (string): Required.
category (string)
classification (string)
continueOnMatch (boolean)
description (string)
columnNameLike (string)
fireOnlyWithMarker (string)
tableNameLike (string)
tableTypeSynonym (boolean)
tableTypeSystemTable (boolean)
tableTypeTable (boolean)
tableTypeView (boolean)
grantTypes (string)
role (string)
roleGroup (string)
user (string)
userGroup (string)
withAdminOption (boolean)
compareToValuesInGroup (string)
compareToValuesInSQL (string)
dataTypes (string)
evaluationName (string)
hitPercentage (integer)
maxLength (integer)
minLength (integer)
searchExpression (string)
searchLike (string)
api_target_host (string): Optional. Specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: all managed units
    all: all managed units and the CM
    group:<group name>: where group name is a group of managed units
    from the CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Examples
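A representative invocation, modeled on the create_classifier_rule example earlier in this topic; the policy and rule names match that example, and the new hitPercentage value is hypothetical:
grdapi update_classifier_rule policyName="Search ALL DATA SEARCH smoke values" ruleName=ALL1 hitPercentage=20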
create_cloud_datasource
application String. See description Required. The application for which the datasource is being defined. One of the following:
Access Policy
Application User Translation
Audit Task
Change Audit System
Classifier
Custom Domain
Database Analyzer
Monitor Values
Security Assessment
Stap Verification
cloudTitle string. see Description Required. Name of cloud account already defined in Guardium
conProperty String Optional. Use only if additional connection properties must be included on the JDBC URL to
establish a JDBC connection with this datasource. The required format is property=value,
where each property and value pair is separated from the next by a comma.
customURL String Optional. Connection string to the datasource; otherwise connection is made using host, port,
instance, properties, etc. of the previously entered fields. This is useful, for example, when
creating Oracle Internet Directory (OID) connections.
dbInstanceAccount String Optional. Database Account Login Name that is used by CAS
dbInstanceDirectory String Optional. Directory where database software was installed is used by CAS
dbName String Optional. For a DB2® or Oracle datasource, enter the schema name. For others, enter the
database name.
importServerSSLcert Boolean Â
KerberosConfigName String Optional. Name of Kerberos configuration already defined in Guardium system
name String Required. A unique name for the datasource in the Guardium system
objectLimit 0, positive integer Required. The maximum number of sensitive objects found in the classification process that are
added automatically to the list of audited objects. Default = 20.
primaryCollector Integer The collector that extracts the audit data from the cloud database.
savePassword Boolean Saves and encrypts your authentication credentials on the Guardium appliance. Required if you
are defining a datasource with an application that runs as a scheduled task (as opposed to on
demand). When set to yes, login name and password are required.
serviceName String Optional. Required for Oracle, Informix®, DB2, and IBM® ISeries. For a DB2 datasource enter
the database name, for others enter the service name.
severity value list Optional. Severity Classification (or impact level) for the datasource. One of:
LOW
NONE
MED
HIGH
shared value list Optional. Set to True or Share to share with other applications. To share the datasource with
other users, you will have to assign roles from the GUI. Values:
Share
Not Shared
True
False
type value list Required. Identifies the datasource type. Valid values:
useKerberos Boolean Optional (boolean). Set to yes to use Kerberos authentication. If yes, KerberosConfigName
must be supplied.
user String Optional. User for the datasource. If used, password must also be used.
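As a sketch, an invocation from the Guardium CLI might look like the following; the account name, datasource name, and credentials are illustrative assumptions, and placeholders in angle brackets must be replaced with real values:

```shell
# Hypothetical invocation: cloudTitle must match a cloud account already
# defined in Guardium, and <type> must be a valid datasource type.
grdapi create_cloud_datasource application=Classifier cloudTitle=my_aws_account name=sales_rds type=<type> objectLimit=20 user=dbadmin password=<password> savePassword=true severity=MED shared=Share
```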
list_cloud_datasource_by_name
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
restart_cloud_instance
Restarts the specified cloud instance.
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
update_cloud_datasource
Updates the cloud datasource configuration.
conProperty String Optional. Use only if additional connection properties must be included on the JDBC URL to
establish a JDBC connection with this datasource. The required format is property=value,
where each property and value pair is separated from the next by a comma.
customURL String Optional. Connection string to the datasource; otherwise connection is made using host, port,
instance, properties, etc. of the previously entered fields. This is useful, for example, when
creating Oracle Internet Directory (OID) connections.
dbInstanceAccount String Optional. Database Account Login Name that is used by CAS
dbInstanceDirectory String Optional. Directory where database software was installed that will be used by CAS
dbName String Optional. For a DB2® or Oracle datasource, enter the schema name. For others, enter the
database name.
importServerSSLcert Boolean
name String Required. A unique name for the datasource in the Guardium system
newName String Optional. Provides a new name, which must be unique for a datasource on the system.
objectLimit integer: 0 and higher Required. The maximum number of sensitive objects found in the classification process that are
added automatically to the list of audited objects.
savePassword Boolean Saves and encrypts your authentication credentials on the Guardium appliance. Required if you
are defining a datasource with an application that runs as a scheduled task (as opposed to on
demand). When set to yes, login name and password are required.
serviceName String Optional. Required for Oracle, Informix®, DB2, and IBM® ISeries. For a DB2 datasource enter
the database name, for others enter the service name.
severity value list Optional. Severity Classification (or impact level) for the datasource. One of:
LOW
NONE
MED
HIGH
shared value list Optional. Set to True or Share to share with other applications. To share the datasource with
other users, you will have to assign roles from the GUI. Values:
Share
Not Shared
True
False
useKerberos Boolean Optional (boolean). Set to yes to use Kerberos authentication. If yes, KerberosConfigName
must be supplied.
user String Optional. User for the datasource. If used, password must also be used.
non_credential_scan
API for submitting jobs that scan the databases within the serversGroup for enabled default users in the usersGroup. Submitted jobs run under the Classifier Listener and can be tracked using the Classifier/Assessment Job Queue report. A submitted job can be canceled from the Classifier/Assessment Job Queue report by double-clicking the job and choosing Stop Job.
Note: If a server within the serversGroup cannot be reached, an exception of type Scheduled Job Exception is added and the server is not scanned.
Parameter Value type Description
databaseType value list Required. Must be one of the following: ORACLE, DB2®, SYBASE, MS SQL SERVER, MYSQL, TERADATA, POSTGRESQL, NETEZZA, IBM ISERIES, INFORMIX
serversGroup value list Required. Must be a valid group of servers (Server IP/Instance Name/Port) as defined with Group Builder.
usersGroup value list Required. Must be a valid group of users (DB User/DB Password) as defined with Group Builder. Default groups exist within Group Builder.
Example
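A sketch of a possible invocation; both group names are illustrative assumptions and must already exist in Group Builder:

```shell
# Scan all servers in the named group for enabled default Oracle users.
grdapi non_credential_scan databaseType=ORACLE serversGroup="Production Oracle Servers" usersGroup="Oracle Default Users"
```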
create_db_user_mapping
delete_db_user_mapping
list_db_user_mapping
create_db_user_mapping
Use of wildcards:
In the 'delete' and 'list' commands, all four parameters accept the wildcard character ('%').
In the 'create' command:
serverIp - the wildcard is valid; '%' can replace a number in the IP address format:
192.168.2.% - valid
192.%.2.% - valid
192.% - invalid
serviceName - wildcards (%) are allowed
dbUserName - no wildcards; '%' is accepted but treated as the literal character '%'
emailAddress - no wildcards; '%' is accepted but treated as the literal character '%'
emailAddress string Required (any string containing an '@' sign). Identifies the email address.
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
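A sketch of a possible invocation; the user name and email address are illustrative assumptions:

```shell
# serverIp may use '%' as a wildcard per the rules above.
grdapi create_db_user_mapping serverIp=192.168.2.% serviceName=% dbUserName=appuser emailAddress=jdoe@example.com
```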
delete_db_user_mapping
Use of wildcards:
In the 'delete' and 'list' commands, all four parameters accept the wildcard character ('%').
In the 'create' command:
serverIp - the wildcard is valid; '%' can replace a number in the IP address format:
192.168.2.% - valid
192.%.2.% - valid
192.% - invalid
serviceName - wildcards (%) are allowed
dbUserName - no wildcards; '%' is accepted but treated as the literal character '%'
emailAddress - no wildcards; '%' is accepted but treated as the literal character '%'
emailAddress string Required (any string containing an '@' sign). Identifies the email address.
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
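A sketch of a possible invocation; in the delete command all four parameters accept '%' wildcards, and the values below are illustrative:

```shell
# Delete every mapping for one server subnet, regardless of service, user, or email.
grdapi delete_db_user_mapping serverIp=192.168.2.% serviceName=% dbUserName=% emailAddress=%
```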
list_db_user_mapping
Use of wildcards:
In the 'delete' and 'list' commands, all four parameters accept the wildcard character ('%').
In the 'create' command:
serverIp - the wildcard is valid; '%' can replace a number in the IP address format:
192.168.2.% - valid
192.%.2.% - valid
192.% - invalid
serviceName - wildcards (%) are allowed
dbUserName - no wildcards; '%' is accepted but treated as the literal character '%'
emailAddress - no wildcards; '%' is accepted but treated as the literal character '%'
emailAddress string Required (any string containing an '@' sign). Identifies the email address.
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
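A sketch of a possible invocation that lists every mapping by wildcarding all four parameters:

```shell
grdapi list_db_user_mapping serverIp=% serviceName=% dbUserName=% emailAddress=%
```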
set_debug_level
Use this GuardAPI command to control IMS output.
If the IMS debug_level = 1, IMS debug fields such as mvs_is_plex, mvs_ipaddr, mvs_dlta_sign, and mvs_dlta_val are output to the internal database tables
GDM_CONSTRUCT_TEXT.FULL_SQL or GDM_EXCEPTION.FULL_SQL.
If the IMS debug level is 0, the IMS debug fields are not distributed.
create_datasource
Use this command to define a new datasource.
Note: In a Central Manager environment, datasources are defined on the Central Manager. GuardAPI will allow you to create datasources on a managed unit, but those
datasources cannot be seen or used.
Parameter Value type Description
application value list Required. Identifies the application for which the datasource is being defined. It must be one of the following:
Access_policy
Application User translation
AuditDatabase
AuditTask
ChangeAuditSystem
Classifier
CustomDomain
DatabaseAnalyzer
MonitorValues
SecurityAssessment
Stap_Verification
compatibilityMode Compatibility Mode: choices are Default or MSSQL 2000. Tells the processor what compatibility mode to use when monitoring a table.
conProperty comma-separated list of property=value Optional. Use only if additional connection properties must be included on the JDBC URL to establish a JDBC connection with this datasource.
For a Sybase database with a default character set of Roman8, enter the following property: charSet=utf8
customURL Optional. Connection string to the datasource; otherwise the connection is made using the host, port, instance, properties, etc. of the previously entered fields. This is useful, for example, for creating Oracle Internet Directory (OID) connections.
dbInstanceAccount string Optional. Database account login name that will be used by CAS
dbInstanceDirectory string Optional. Directory where the database software was installed that will be used by CAS
dbName string Optional. For a DB2® or Oracle datasource, enter the schema name. For others, enter the database name.
name string Required. Provides a unique name for the datasource on the system.
savePassword boolean Saves and encrypts your authentication credentials on the Guardium appliance. Required if you are defining a datasource with an application that runs as a scheduled task (as opposed to on demand). When set to yes, login name and password are required.
serviceName string Required for Oracle, Informix®, DB2, and IBM® ISeries. For a DB2 datasource enter the database name; for others enter the service name.
severity Optional. Severity classification (or impact level) for the datasource.
shared boolean Optional. Set to true to share with other applications. To share the datasource with other users, you will have to assign roles from the GUI.
type value list Required. Identifies the datasource type. Valid values:
MS SQL Server
MySQL
NA
Netezza
Oracle (DataDirect)
Oracle (SID)
PostgreSQL
Sybase
Sybase IQ
Teradata
TEXT
TEXT:FTP
TEXT:HTTP
TEXT:HTTPS
TEXT:SAMBA
useKerberos boolean Optional. Set to yes to use Kerberos authentication. If yes, KerberosConfigName must be supplied.
user string Optional. User for the datasource. If used, password must also be used.
Example
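A sketch of a possible invocation for an Oracle datasource; the host and port parameters are assumptions not shown in the table above, and all values are illustrative:

```shell
# Hypothetical Oracle datasource; serviceName is required for Oracle.
grdapi create_datasource type="Oracle (SID)" name=finance_ora application=Classifier host=<db host> port=1521 serviceName=ORCL user=guardium_scan password=<password> savePassword=true shared=true
```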
create_test_exception
Use this command to add records to the Tests Exceptions. This affects the behavior of vulnerability assessments: if a test on a specific datasource fails, the last record of the test exceptions table for that test/datasource is checked. If the execution date is contained within the from and to dates of that record, the test is set to PASS, the recommendation is set to the explanation (from the exceptions record), and the result text is set to:
Test passed, based on exception approved by: .... effective from date to date.
Note: The API only adds records. To remove an exception, create a new record with new dates according to your needs.
Parameter Value type Description
list_datasource_by_name
Displays a datasource definition identified by a name.
Parameter Value type Description
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
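A sketch of a possible invocation; the name parameter and its value are assumptions based on the command's description:

```shell
# Display the definition of a datasource named finance_ora (hypothetical name).
grdapi list_datasource_by_name name=finance_ora
```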
list_datasource_by_id
Displays a datasource definition identified by an ID key.
Parameter Value type Description
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
delete_datasource_by_name
Deletes the specified datasource definition, unless that datasource is being used by an application. This function removes the datasource, regardless of who created it.
Parameter Value type Description
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
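A sketch of a possible invocation; the name parameter and its value are assumptions based on the command's description:

```shell
# Delete the datasource named finance_ora (hypothetical name);
# fails if the datasource is in use by an application.
grdapi delete_datasource_by_name name=finance_ora
```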
delete_datasource_by_id
Deletes the specified datasource definition, unless that datasource is being used by an application. This function removes the datasource, regardless of who created it.
Parameter Value type Description
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
update_datasource_by_name
Updates a datasource definition.
Parameter Value type Description
newName string Optional. Provides a new name, which must be unique for a datasource on the system.
savePassword boolean Saves and encrypts your authentication credentials on the Guardium appliance. Required if you are defining a datasource with an application that runs as a scheduled task (as opposed to on demand). When set to yes, login name and password are required.
user string Optional. User for the datasource. If used, password must also be used.
password string Optional. Password for user. If used, user must also be used.
conProperty comma-separated list of property=value Optional. Use only if additional connection properties must be included on the JDBC URL to establish a JDBC connection with this datasource.
For a Sybase database with a default character set of Roman8, enter the following property: CHARSET=utf8
dbInstanceAccount string Optional. Database account login name that will be used by CAS
dbInstanceDirectory string Optional. Directory where the database software was installed that will be used by CAS
shared boolean Optional. Set to true to share with other applications. To share the datasource with other users, you will have to assign roles from the GUI.
customURL string Optional. Connection string to the datasource; otherwise the connection is made using the host, port, instance, properties, etc. of the previously entered fields. This is useful, for example, for creating Oracle Internet Directory (OID) connections.
severity Optional. Severity classification (or impact level) for the datasource.
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
useKerberos boolean Optional. Set to yes to use Kerberos authentication. If yes, KerberosConfigName must be supplied.
Example
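A sketch of a possible invocation; the name parameter identifying the existing datasource is an assumption, and all values are illustrative:

```shell
# Rename a datasource and rotate its stored credentials (hypothetical values).
grdapi update_datasource_by_name name=finance_ora newName=finance_ora_prod user=guardium_scan password=<new password> savePassword=true
```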
update_datasource_by_id
Updates a datasource definition.
Parameter Value type Description
newName string Optional. Provides a new name, which must be unique for a datasource on the system.
savePassword boolean Saves and encrypts your authentication credentials on the Guardium appliance. Required if you are defining a datasource with an application that runs as a scheduled task (as opposed to on demand). When set to yes, login name and password are required.
user string Optional. User for the datasource. If used, password must also be used.
password string Optional. Password for user. If used, user must also be used.
conProperty comma-separated list of property=value Optional. Use only if additional connection properties must be included on the JDBC URL to establish a JDBC connection with this datasource.
For a Sybase database with a default character set of Roman8, enter the following property: CHARSET=utf8
dbInstanceAccount string Optional. Database account login name that will be used by CAS
dbInstanceDirectory string Optional. Directory where the database software was installed that will be used by CAS
shared boolean Optional. Set to true to share with other applications. To share the datasource with other users, you will have to assign roles from the GUI.
customURL string Optional. Connection string to the datasource; otherwise the connection is made using the host, port, instance, properties, etc. of the previously entered fields. This is useful, for example, for creating Oracle Internet Directory (OID) connections.
severity Optional. Severity classification (or impact level) for the datasource.
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
useKerberos boolean Optional. Set to yes to use Kerberos authentication. If yes, KerberosConfigName must be supplied.
Example
list_db_drivers
List only the name of database drivers Oracle (DataDirect) and MS SQL SERVER (DataDirect) are now supported as datasource types.
list_db_drivers_by_details
Lists each database driver in more detail (name, class, driver class, URL, and datasource type ID).
create_datasourceRef_by_id
For a specific object of a specific application type (for example, a specific Classification process), creates a reference to a datasource.
Parameter Value type Description
objId integer Required. Identifies an instance of the appId type specified. For example, if appId=51, this would be the ID of a classification process.
Example
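A sketch of a possible invocation; appId=51 (Classifier) follows the appId list shown for the delete variant below, while the objId value and the datasource-identifying parameter name are hypothetical placeholders:

```shell
# Attach a datasource to classification process 20004 (hypothetical IDs).
grdapi create_datasourceRef_by_id appId=51 objId=20004 datasourceId=<datasource id>
```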
create_datasourceRef_by_name
For a specific object of a specific application type (for example, a specific Classification process), creates a reference to a datasource.
Table 1. create_datasourceRef_by_name
Parameter Value type Description
objName string Required. Identifies an instance of the application type specified. For example, if the application is Classifier, this would be the name of a specific classification process.
Example
list_datasourceRef_by_id
For a specific object of a specific application type (for example, a specific Classification process), lists all datasources referenced.
objID string Required. Identifies an instance of the application type specified. For example, if the application is Classifier, this would be the ID of a specific classification process.
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
list_datasourceRef_by_name
For a specific object of a specific application type (for example, a specific Classification process), lists all datasources referenced.
Parameter Value type Description
CustomTables
Classifier
objName string Required. Identifies an instance of the application type specified. For example, if the application is Classifier, this would be the name of a specific classification process.
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
delete_datasourceRef_by_id
For a specific object of a specific application type (for example, a specific Classification process), removes a datasource reference.
Parameter Value type Description
appId integer Required. Identifies the application. Must be from this list:
8 = SecurityAssessment
47 = CustomTables
51 = Classifier
objId integer Required. Identifies an instance of the appId type specified. For example, if appId=51, this would be the ID of a classification process.
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
delete_datasourceRef_by_name
For a specific object of a specific application type (for example, a specific Classification process), removes a datasource reference.
Parameter Value type Description
CustomTables
Classifier
objName string Required. Identifies an instance of the application type specified. For example, if the application is Classifier, this would be the name of a specific classification process.
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
create_user_hierarchy
Adds a relationship between a user and a parent in the user data security hierarchy.
Parameter Value type Description
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
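A sketch of a possible invocation; the parameter names (userName, parentUserName) are assumptions for illustration, since only api_target_host is documented above:

```shell
# Make analyst1 a child of manager1 in the hierarchy (hypothetical parameters).
grdapi create_user_hierarchy userName=analyst1 parentUserName=manager1
```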
Note: An error occurs if the insert is cyclic (a parent reports to a child).
list_user_hierarchy_by_parent_user
List relationships in the user data security hierarchy
Parameter Value type Description
create boolean If set (true or false), will or will not generate create statements for create_user_hierarchy API calls.
Use this parameter to get all the commands necessary to generate a batch file. This batch file can be used to move each parent and child pairing to another Guardium system.
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
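A sketch of a possible invocation; the parent-identifying parameter name is an assumption, while create is documented above:

```shell
# Emit create_user_hierarchy statements for manager1's children, suitable
# for replaying as a batch file on another Guardium system.
grdapi list_user_hierarchy_by_parent_user userName=manager1 create=true
```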
Note: Only lists immediate parent-child relationships; "grandchildren" are not displayed.
delete_user_hierarchy_by_entry_id
Deletes a relationship in the user data security hierarchy by entry id
Parameter Value type Description
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
delete_user_hierarchy_by_user
Deletes a relationship in the user data security hierarchy by user
Parameter Value type Description
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
create_allowed_db
Create a User-DB association
Parameter Value type Description
api_target_host string Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: for all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
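A sketch of a possible invocation; both parameter names are assumptions for illustration, since only api_target_host is documented above:

```shell
# Associate user analyst1 with database SALESDB (hypothetical parameters).
grdapi create_allowed_db userName=analyst1 dbName=SALESDB
```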
list_allowed_db_by_user
List User-DB associations by user
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
delete_allowed_db_by_entry_id
Delete a User-DB association by entry id
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
delete_allowed_db_by_user
Delete a User-DB association by user
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
update_user_db
Fully apply all recent changes to the active User-DB association map
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
grdapi update_user_db
Note: In a Central Management configuration, this command should be run on a Central Manager.
Parent topic: GuardAPI Reference
grdapi get_load_balancer_load_map
get_load_balancer_params
View the current load balancer configuration parameters.
grdapi get_load_balancer_params
set_load_balancer_param
Set load balancer configuration parameters.
See Enterprise load balancing configuration parameters for a list of available parameters and allowed values.
To get a Constant values list for a parameter, call the function with --get_param_values
assign_load_balancer_groups
Assign a managed unit group to an application or S-TAP group.
unassign_load_balancer_groups
Unassign a managed unit group from an application or S-TAP group.
enable_entitlement_optimization
Enables the entitlement optimization feature on this Collector.
grdapi enable_entitlement_optimization
disable_entitlement_optimization
Disables the entitlement optimization feature on this Collector.
grdapi disable_entitlement_optimization
add_datasource_to_entitlement_optimization
Adds the data from this source to the entitlement optimization data collection, and to individual tabs as specified.
grdapi add_datasource_to_entitlement_optimization
isEnabled (true or false): Datasource is enabled, or disabled, for entitlement optimization. Default = false.
userScope (one or more comma-separated Guardium user group IDs; groups must contain only users): Optional. Entitlement recommendation results are filtered by this group of users. Browse Entitlements results indicate whether users are included in this scope or not; user activity counts of users outside the scope are not presented. Default = NULL.
objectScope (one or more comma-separated Guardium object group IDs; groups must contain only objects): Optional. Entitlement recommendation results are filtered by this group of objects. Default = NULL.
extractActivity (true or false): Enables or disables extraction of datasource activity. Must be true for Browse Entitlements and What If. Default = false.
extractEntitlement (true or false): Enables or disables extraction of entitlement data. Must be true for What's New, Users and Roles, Recommendations, and Browse Entitlements. Default = false.
generateRoleClusters (true or false): Enables or disables extraction of behavioral role clustering from the data source, used in the What If tab. Must be true for What If. Default = false.
generateNews (true or false): Activity from this datasource is included in the What's New? tab. Default = false.
generateRecommendations (true or false): Activity from this datasource is included in the Recommendations. Default = false.
remove_datasource_from_entitlement_optimization
Removes all data from this source from the entitlement optimization data collection.
grdapi remove_datasource_from_entitlement_optimization
set_entitlement_datasource_parameter
Modifies parameters for data source that is already enabled for entitlement optimization. Uses the same parameters as add_datasource_to_entitlement_optimization.
grdapi set_entitlement_datasource_parameter
get_entitlement_datasource_parameter
Displays the parameter settings for each data source on this Collector.
grdapi get_entitlement_datasource_parameter
Example:
create_ef_mapping
This function creates a mapping and populates tables based on the name of the report specified by the reportName parameter. Each mapping has a name stored in
EF_MAP_TYPE_HDR.EF_TYPE_DESC, and that name will be identical to the value of reportName. The target table name will also be based on the reportName parameter,
with underscores added between the words. For example, "My Report" becomes MY_REPORT.
reportName (string): Name of the report to use for external feed mapping. This parameter also determines the name of the mapping and the target table name.
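Following the naming rule described above, creating a mapping from a report called "My Report" (a hypothetical report name, reused from the naming example) might look like:
grdapi create_ef_mapping reportName="My Report"
This would produce a mapping named "My Report" whose target table is MY_REPORT.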
modify_ef_mapping
Sometimes the names generated by create_ef_mapping are not suitable for a particular database, and modify_ef_mapping can be used to adjust the names to fit database
requirements. Only mappings with ID >= 20000 may be modified, in order to protect predefined Guardium mappings.
modifyObj: Specifies the database object to modify, either table or column. Existing values can be retrieved using the list_ef_mapping function.
delete_ef_mapping
This function allows you to delete existing mappings. Only mappings with ID >= 20000 may be deleted in order to protect predefined Guardium mappings.
list_ef_mapping
If run without any parameters, this function returns a list of all customer-created mappings. If run with the reportName parameter, this function returns details of the
specified mapping (such as the table and column names used by the external feed).
Table 1.
Use the GuardAPI command, grdapi create_policy, to create a FAM policy. After the policy is created, use FAM-specific GuardAPI commands.
For example:
grdapi create_fam_rule policyName='TEST' ruleName=r-test-sles11 actionName="Log As Violation and Audit" serverHost="9.70.144.98:FAM" filePath="/famtest/*"
enable_fam_crawler
Sets the Guardium system to process crawler results and file activity data. The results will be added automatically to quick search index files. Use the parameters to
schedule file quick search activity, entitlement extractions, and remote group population.
Note: The Investigation Dashboard must also be enabled with the command grdapi enable_quick_search schedule_interval=1.
extraction_start: Initial date/time from which data is extracted to file quick search. It is limited to 2 days in the past. The default is the current time. If the
unit is set to HOUR, then it is rounded to an hour. If it is set to DAY, then it is rounded to a day.
activity_schedule_interval (integer): Required. Sets the activity schedule interval. The recommended interval is 2 with the unit set to MINUTE.
activity_schedule_units (value list): Required. Sets the unit of the activity schedule. The values are either MINUTE or HOUR. The recommended unit is MINUTE.
entitlement_schedule_interval (integer): Required. Sets the entitlement schedule interval. The recommended interval is 1 with the unit set to DAY.
entitlement_schedule_units (value list): Required. Sets the unit of the entitlement schedule. The possible values are MINUTE, HOUR, and DAY. The recommended unit is DAY.
Example
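Using the recommended intervals and units from the parameter list above, an invocation might look like:
grdapi enable_fam_crawler activity_schedule_interval=2 activity_schedule_units=MINUTE entitlement_schedule_interval=1 entitlement_schedule_units=DAY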
disable_fam_crawler
Disables the file activity monitor. The file quick search activity and entitlement extractions scheduler are removed. This function also disables remote group population.
Example
grdapi disable_fam_crawler
get_fam_crawler_info
Shows the status of the file activity monitor. If it is enabled, the command shows the settings for the entitlement extraction and file quick search activity schedule.
Example
grdapi get_fam_crawler_info
list_policy_fam_rule
Lists all the rules in a FAM policy.
ruleName (string): Optional. If no ruleName is provided, all policy rules are shown with details. If a ruleName is provided, details are listed for
that rule.
create_fam_rule
Creates a new FAM rule.
notfilePath (boolean): Must be yes or no. Yes means apply this rule to all files except those in the specified path.
includeSubDirectory (boolean): Must be yes or no. Yes means include files in all subdirectories.
notOSUser (string): Must be yes or no. Yes means use all users except the specified osUser.
notCommand (string): Must be yes or no. Yes means use all commands except the specified command.
policy_fam_rule_delete
Deletes a rule from a FAM policy.
add_action_to_fam_rule
Adds an action to an existing FAM rule.
gim_list_registered_clients
Lists all the registered clients.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
grdapi gim_list_registered_clients
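To run the same command against every managed unit rather than the local unit, the documented api_target_host values can be added, for example:
grdapi gim_list_registered_clients api_target_host=all_managed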
gim_list_client_params
Lists all the (module) parameters assigned to a specific client.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_update_client_params
Updates a single module's parameters on a specific client.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_list_client_modules
Lists all the modules assigned to a specific client and their state
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_load_package
Loads all the modules within 'filename'.
Note: This command loads a file that resides on the local file system; therefore, the procedure (cmd='fileserver') for uploading the file to the CM/Guardium appliance must
precede this command.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_assign_bundle_or_module_to_client_by_version
Assigns a bundle/module to a client.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_schedule_install
Schedules for installation all the modules/bundles that were assigned to a client and haven't been installed yet (that is, PENDING). If the parameter module is
specified, only the requested module will be scheduled.
module (string): Optional. If module is not specified in the command, all the modules for the specified clientIP will be scheduled for install.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
gim_list_client_status
Displays the status of the latest operation executed for a specific client.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_uninstall_module
Uninstalls a module/bundle on a specific client.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_cancel_install
Cancels installation of a bundle/module on a specific client. Canceling installation is possible only if the module/bundle is not already in the process of being installed by a
client (STATE=IP or IP-PR).
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_list_bundles
Lists all the available bundles. A bundle is a group of modules that can be installed on a client.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
grdapi gim_list_bundles
gim_list_mandatory_params
Lists the mandatory parameters for a single module.
module (string): The name of the GIM module for which to display the mandatory parameters.
version (string): The version of the GIM module for which to display the mandatory parameters.
Example
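A sketch using hypothetical module and version values (the actual names come from your GIM inventory):
grdapi gim_list_mandatory_params module=BUNDLE-STAP version=10.1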
gim_assign_latest_bundle_or_module_to_client
Assigns the latest (that is, the highest-version) available bundle or module to a specific client.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_schedule_uninstall
Schedules uninstallation of all the modules/bundles that were assigned to a client and haven't been uninstalled yet (that is, PENDING). If the parameter 'module' is
specified, only the requested module will be scheduled.
module (string): Optional. If module is not specified in the command, all the modules for the specified clientIP will be scheduled for uninstall.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_cancel_uninstall
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_remove_bundle
The command deletes bundlePackageName from the database as well as from the file system (from /var/log/guard/gim_packages, and also
from /var/gim_dist_packages if the Guardium system is a central manager).
Parameters (required):
bundlePackageName
The parameter value takes the bundle package name as specified in the output of gim_list_unused_bundles. The command succeeds only if:
2.4 There is one and only one bundle that refers to the value of bundlePackageName
ALL the conditions (2.1 to 2.4) must be true in order to delete a bundle from the database/file system. Otherwise an error is generated.
Example
gim_unassign_client_module
Unassigns a module from a client. Unlike 'gim_remove_module', this command unties the connection between a module and a specific client on the CM/Guardium
appliance. It will NOT uninstall or remove the module on the actual DB-server machine. Use it only in cases of synchronization problems between
the DB-server (that is, client) information and the CM/Guardium appliance information regarding the current state of the modules.
module (string): Optional. If module is not specified in the command, all the modules for the specified clientIP are unassigned.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_get_purge_list
List old software packages (GIM files) that have previously been uploaded to the Guardium® appliance or CM.
olderThan (string): Required. Number of days. Files older than the specified number of days are listed for purging. Valid value is any number greater than or equal to 0.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
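For instance, to list GIM files older than 30 days (30 being an arbitrary threshold):
grdapi gim_get_purge_list olderThan=30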
gim_purge
Remove old software packages (GIM files) that have previously been uploaded to the Guardium appliance or CM.
olderThan (string): Required. Number of days. Files older than the specified number of days are purged. Valid value is any number greater than or equal to 0.
filename (string): Optional. A specific file to remove. If the specified file is a bundle (for example, it starts with 'guard-bundle'), the contents of the bundle are removed.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
Note:
Either the 'filename' parameter or (olderThan and/or excludeLatest) can be specified in the command.
GIM purge will not purge files that are currently scheduled for installation.
GIM purge will not allow the removal of any file (that is, the filename parameter) that includes the '/' character.
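For example, to purge GIM files older than 30 days (an arbitrary threshold), subject to the restrictions noted above:
grdapi gim_purge olderThan=30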
gim_get_available_modules
List the available modules / bundles available to install on a specific server.
gim_get_client_last_event
List the latest operation executed for a specific client.
gim_get_client_last_event is a GuardAPI command with limited functionality: it shows only the last event that occurred during the latest installation attempt. For example, if
there were errors during the latest installation of S-TAP, they will show up when running this grdapi command. However, if you manually fix the installation problem
directly on the database server, this grdapi command will still show the original error message (even though S-TAP is now running). This command should not be
used to evaluate S-TAP status after manual fixes on the database server.
Example
gim_get_modules_running_status
List the modules / bundles currently running on a specific server.
status (ON or OFF)
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
gim_list_unused_bundles
The command returns a list of unused bundles (not installed on any database server) and individual Windows modules that can be uploaded (for example, Windows CAS,
Windows FAM).
parameters (required):
If set to value 1, the returned list of unused bundles will include the latest unused bundle.
Example
gim_reset_client
Disassociate modules from selected client.
api_target_host (string): Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
all_managed: all managed units
all: all managed units and CM
group:<group name>: where group name is a group of managed units
from CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
from managed unit, the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
gim_set_diagnostics
Set diagnostics collection within GIM.

Parameters:
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
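A minimal invocation sketch for gim_set_diagnostics, shown here with the documented api_target_host parameter (the value chosen is illustrative):

```
grdapi gim_set_diagnostics api_target_host=all_managed
```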
gim_set_global_param
Set global parameters within GIM.

Parameters:
paramName (string) - Required. Name of the parameter within the API function to be mapped.
paramValue (string) - Required. Value of the parameter within the API function to be mapped.
sqlguardip (string) - Optional. IP address or host name of the collector this GIM agent will connect to.
ca_file (string) - Optional. Full file name path to the certificate authority PEM file.
key_file (string) - Optional. Full file name path to the private key PEM file.
cert_file (string) - Optional. Full file name path to the certificate PEM file.
gim_listener_default_port (string) - Optional. Set a different port for the GIM agent server mode.
gim_listener_default_shared_secret (string) - Optional. Set a shared secret to verify collectors that are sending requests to the new server mode GIM agent.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
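A sketch of a gim_set_global_param call using the documented paramName and paramValue parameters; the angle-bracket values are placeholders to be filled in:

```
grdapi gim_set_global_param paramName=<parameter name> paramValue=<value>
```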
gim_remote_activation
Connects the collector's IP address to a server mode GIM agent or group of GIM agents.

Parameters:
targetGroup (string) - Optional. The group name of all the database servers that the collector connects to. It cannot be specified with the targetHost parameter.
sharedSecret (string) - Optional. The shared secret that was configured during installation.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
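A sketch of a gim_remote_activation call using the documented targetGroup and sharedSecret parameters; the placeholder values are illustrative:

```
grdapi gim_remote_activation targetGroup=<group name> sharedSecret=<shared secret>
```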
Note: In a Central Management environment, all groups are defined on the Central Manager and sent to the managed units on a scheduled basis.
Group Functions
create_group
list_group_by_id
list_group_by_desc
delete_group_by_id
delete_group_by_desc
update_group_by_id
update_group_by_desc
flatten_hierarchical_groups
Member Functions
create_member_to_group_by_id
create_member_to_group_by_desc
list_group_members_by_id
list_group_members_by_desc
delete_member_from_group_by_id
delete_member_from_group_by_desc
create_group
Create a group definition.
type (value list) - Required. The group type. It must be one of the following values:
Application System ID
APPLICATION USER
Client Hostname
Client IP
Client OS
COMMANDS
Database Name
DB Error Codes
DB PROTOCOL
DB PROTOCOL VERSION
DB Role
DB User/Object/Privilege
DB Ver./Patches
EXCEPTION TYPE
FIELDS
Files Permissions
Global ID
Guardium Role
Guardium Users
NET PROTOCOL
Object/Command
Object/Field
Operation Type
OS User
PORT
Qualified Objects
Records Affected
SCHEMA
SENTENCE DEPTH
Server Description
Server Hostname
Server IP
Server OS
SERVER TYPE
Service Name
SOURCE PROGRAM
TTL
USERS
VA Tests Exception
WEEKDAY
YEAR
appid (value list) - Required. Identifies the application for the group. It must be one of the following values:
    Public
    Audit Process Builder
    Baseline Builder (Attention: The Baseline Builder and related functionality is deprecated starting with Guardium V10.1.4.)
    Classifier
    DB2_zOS groups
    Express Security
    Policy Builder
subtype (string) - Optional. A subtype is used to collect multiple groups of the same group type, where the membership of each group is exclusive. For example, assume that you have database servers located in three datacenters, and that you want to group the servers by location. You would define a separate group of database servers for each location, and define all three groups with the same subtype (datacenter, for example).
category (string) - Optional. A category is an optional label that is used to group policy violations and groups for reporting.
classification (string) - Optional. A classification is another optional label that is used to group policy violations and groups for reporting.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
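A sketch of a create_group call. The appid value (Public) is one of the documented values; the desc and type parameter names are assumptions here, as their rows do not appear in the table above:

```
grdapi create_group desc=<group name> type=<group type> appid=Public
```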
list_group_by_id
Display the properties of a specific group.

Parameters:
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
list_group_by_desc
Display the properties of a specific group.

Parameters:
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
delete_group_by_id
Delete the specified group.

Parameters:
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
delete_group_by_desc
Delete the specified group.

Parameters:
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
update_group_by_id
Update properties of the specified group.

Parameters:
subtype (string) - Optional. A subtype is used to collect multiple groups of the same group type, where the membership of each group is exclusive. For example, assume that you have database servers located in three datacenters, and that you want to group the servers by location. You would define a separate group of database servers for each location, and define all three groups with the same subtype (datacenter, for example).
category (string) - Optional. A category is an optional label that is used to group policy violations and groups for reporting.
classification (string) - Optional. A classification is another optional label that is used to group policy violations and groups for reporting.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
update_group_by_desc
Update properties of the specified group.

Parameters:
subtype (string) - Optional. A subtype is used to collect multiple groups of the same group type, where the membership of each group is exclusive. For example, assume that you have database servers located in three datacenters, and that you want to group the servers by location. You would define a separate group of database servers for each location, and define all three groups with the same subtype (datacenter, for example).
category (string) - Optional. A category is an optional label that is used to group policy violations and groups for reporting.
classification (string) - Optional. A classification is another optional label that is used to group policy violations and groups for reporting.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
flatten_hierarchical_groups
Update ALL hierarchical groups that exist in Group Builder.

Parameters:
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
grdapi flatten_hierarchical_groups
create_member_to_group_by_id
Add a member to a group specified by the group ID.

Parameters:
member (string) - Required. The new member name, which must be unique within the group.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
create_member_to_group_by_desc
Add a member to the named group.

Parameters:
desc (string) - Required. The name of the group to which the member is to be added.
member (string) - Required. The new member name, which must be unique within the group.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
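A sketch of a create_member_to_group_by_desc call using the documented desc and member parameters; the placeholder values are illustrative:

```
grdapi create_member_to_group_by_desc desc=<group name> member=<member name>
```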
create_hierarchical_member_to_group_by_desc
delete_hierarchical_member_from_group_by_desc
function parameters:
list_group_members_by_id
List the members of the specified group.

Parameters:
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
list_group_members_by_desc
List the members of the specified group.

Parameters:
desc (string) - Required. The name of the group whose members are to be listed.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
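A sketch of a list_group_members_by_desc call using the documented desc parameter; the placeholder value is illustrative:

```
grdapi list_group_members_by_desc desc=<group name>
```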
delete_member_from_group_by_id
Remove a member from a specified group.

Parameters:
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
delete_member_from_group_by_desc
Remove a member from a specified group.

Parameters:
desc (string) - Required. The name of the group from which the member is to be removed.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Example
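A sketch of a delete_member_from_group_by_desc call using the documented desc parameter together with a member name; the placeholder values are illustrative:

```
grdapi delete_member_from_group_by_desc desc=<group name> member=<member name>
```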
1. Double-click a row for drill-down to display an Invoke... option, and click it to display a list of APIs that are mapped to this report. Alternatively, click the Invoke... icon (within the report status line) to display the same list.
2. Click the API you would like to invoke; this brings up the API Call Form for the report and the invoked API function. Invoking an API call from a report for multiple rows produces an API Call Form that displays and enables the editing of all records that are displayed on the screen (dependent on the fetch size), to a maximum of 20 records.
3. Fill in the Required Parameters and any non-Required Parameters for the selected API call. Many of the parameters are pre-filled from the report but might be
changed to build a unique API call. For specific help in filling out required or non-required parameters, see the individual API function calls within the GuardAPI
Reference guide.
For multiple rows, use the set of parameters for the API (those with a button for each parameter) to enter a value for a parameter, and then click the down arrow button to populate that parameter for all records. Also, use the check boxes for each row to select or deselect a row from being included in the API call.
Note: Parameters with the name of 'password' are masked.
4. Use the drop-down list to select the Log level, where: 0 returns ID=identifier and ERR=error_code as defined in Return Codes; 1 displays additional information on screen; 2 writes information to the Guardium application debug logs; 3 does both 1 and 2.
5. Use the drop-down list to select a Parameter to encrypt.
Note: Parameter Encryption is enabled by setting the Shared Secret and is relevant only for invoking the API function through script generation.
6. Choose to Invoke Now or Generate Script.
a. If Invoke Now is selected, the API call runs immediately and displays an API Call Output screen showing the status of the API call.
b. If Generate Script is selected
i. Open the generated script with your favorite editor or optionally save to disk to edit and execute later.
Example Script
ii. Modify the script, replacing any of the empty parameter values (denoted by '< >').
Note: Empty parameters might remain in the script, as the API call ignores them.
# A template script for invoking Sqlguard API function delete_datasource_by_name seven times:
Example Call
$ ssh cli@a1.corp.com < c:/download/delete_datasource_by_name_api_call.txt
1. Go to any predefined report in the Daily Monitor tab, Guardium Monitor tab, or Tap Monitor tab.
2. Click the Invoke ... button.
3. Choose the Add API mapping selection.
4. At the new window, Add API mapping shows the name of the report, for example, Guardium Logins; a search/filter mechanism to find the appropriate GuardAPI
command; and, selection choices for API functions available under the Predefined Report. Choose the API function, and then click Map Report Attributes.
5. At the new window, API-Report Parameter Mapping, map the parameter name to the Report field. Sometimes there might be data that is not supplied with a
Guardium report. For these instances, a constant can be created, added to the report and used within the API parameter mappings.
Note: Save overrides the current mapping.
Note: If the Guardium report, with a constant added, is exported, the constant will not be exported.
To simplify the mapping between the GuardAPI parameters and Guardium attributes, Guardium created the predefined report Query Entities & Attributes, which lists all the Guardium attributes, giving users a GUI interface and allowing them to easily drill down from that report and create the linkages quickly.
Existing Guardium attributes or user-defined constants may be mapped to GuardAPI parameters, as described under Existing Attributes and Constants.
Note: When GuardAPI parameters are mapped to report attributes, if a report has more than one attribute that is mapped to the same GuardAPI parameter, the value
picked for the API call is the first of these attributes according to the order of display in the report.
Existing Attributes
1. Go to the Query Entities & Attributes report to add the API parameter mappings. (Guardium Monitor -> Query Entities & Attributes)
2. The Query Entities & Attributes report is long because it lists all the Guardium attributes. Narrow down the records you are interested in by using the
Customize button.
3. To create the mapping, double-click the attribute row you would like to assign to a parameter name.
4. Click the Invoke... option.
5. Select the create_api_parameter_mapping API function.
6. Fill in the functionName and parameterName in the API Call Form.
7. Click the Invoke now button to create the API-to-report parameter mapping.
See how-to topic, Using API Calls From Custom Reports, for a full scenario that maps GuardAPI parameters through the GUI.
Constants
Sometimes there may be data that is not supplied within a Guardium report. For these instances, a constant can be created, added to the report, and then used
within the API parameter mappings.
1. Go to the Query Entities & Attributes report to add the API parameter mappings. (Guardium Monitor -> Query Entities & Attributes)
2. The Query Entities & Attributes report is long because it lists all the Guardium attributes. Narrow down the records you are interested in by using the
Customize button.
3. To create a constant attribute, double-click any row for the entity you would like to create a constant attribute for.
4. Click the Invoke... option.
5. Select the create_constant_attribute API function.
See how-to topic, Using Constants within API Calls, for a full scenario that creates and maps a constant attribute through the GUI.
Note: If the Guardium report, with a constant added, is exported, the constant will not be exported.
Note: When using API mapping, table columns in a report appear in the report field as long as the table column is an attribute of an entity. Some columns, such as the count column, will not be displayed in the report field because they cannot be mapped.
This means a user that has the appropriate roles for Policy Builder is able to execute the GuardAPI command delete_rule on any policy, regardless of the roles assigned to that specific policy.
Role validation exists for the following Policy rules GuardAPI commands: change_rule_order; copy_rule; copy_rules, delete_rule; update_rule.
Role validation exists for the following Group Description GuardAPI commands: create_member_to_group_by_desc; create_member_to_group_by_id;
delete_group_by_desc; delete_group_by_id; delete_member_from_group_by_desc; delete_member_from_group_by_id; update_group_by_id; update_group_by_desc.
Role validation exists for the following Datasource GuardAPI commands: delete_datasource_by_id; delete_datasource_by_name; update_datasource_by_id;
update_datasource_by_name.
Role validation exists for the following Audit Process GuardAPI commands: stop_audit_process.
If such a process exists for the user, then the parameters are updated and the same process is used.
1 - If it is a new process, it creates one receiver per email in the list (if any) with a content type as indicated in the emailContentType parameter. It also creates a user receiver for the user that is logged in (invoking the API) if the includeUserReceiver parameter is true.
2 - If it is an existing process, all email receivers are removed and replaced with the emails from the new list (if any) with the content type as defined in the emailContentType parameter. If the list is empty, it removes all email address receivers. If there is already a receiver for the user, it will NOT be removed even if includeUserReceiver is false; however, if the parameter is true and there is no such receiver, then it is added.
Once the audit process is generated, it is automatically executed (similar to a Run Once Now) and users should expect an item on their to-do list for that audit process.
create_ad_hoc_audit_and_run_once
Parameters:
1 - reportId - The ID of the report to be used for the only task in the audit process.
2 - isForReportRunOnce - Boolean; indicates whether the process should be run once after it is created.
3 - changeParIfExist - Boolean; indicates whether the task parameters should be updated if the process exists.
4 - taskParameter - All task parameters and the value for each, concatenated with the characters ^^, for example: PAR1=Val1^^PAR2=Val2^^ and so on. It is valid to leave a parameter empty; for example, if PAR2 should remain empty it looks like: PAR1=VAL1^^PAR2=^^PAR3=VAL3^^...
5 - processNamePar - Name of the process; if empty, a process is created with a default name.
8 - includeUserReceiver - Boolean; indicates whether to create a receiver for the user that is logged in.
A GuardAPI can be invoked automatically from any report portlet. When the GuardAPI is invoked, it creates a new audit process report.
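The ^^-delimited taskParameter format described above can be assembled with a small helper. This is an illustrative sketch (the build_task_parameter function is not part of Guardium); it simply reproduces the PAR1=Val1^^PAR2=Val2^^ layout, leaving empty values as PAR=^^:

```python
def build_task_parameter(params):
    """Concatenate (name, value) pairs into Guardium's NAME=VALUE^^ format.

    An empty value produces 'NAME=^^', so the parameter is present but empty,
    matching the PAR1=VAL1^^PAR2=^^PAR3=VAL3^^ example above.
    """
    return "".join(f"{name}={value}^^" for name, value in params)

# PAR2 is deliberately left empty.
print(build_task_parameter([("PAR1", "Val1"), ("PAR2", ""), ("PAR3", "Val3")]))
# PAR1=Val1^^PAR2=^^PAR3=Val3^^
```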
Schedule APIs
modify_schedule - parameters: jobName, jobGroup, cronString, startTime (optional)
list_schedule
Note: Some job types for the grdapi schedule_job function do not require an object name. No validation is performed on the object name parameter, and users see the standard 'OK' prompt when the function is run with anything entered as the objectName parameter for the following job types: csvExportJob, systemBackupJob, dataArchiveJob, dataExportJob, dataImportJob, resultsArchiveJob, AppUserTranslation, IpHostToAlias.
grdapi set_purge_batch_size
Set the batch size that is used during purge. This aids purge performance; the default setting is 200,000. Note the trade-off between performance and disk space usage: a larger batch size increases the speed of the purge but consumes more disk space, while a smaller batch size decreases the speed of the purge but does not consume as much disk space.
function parameters: batchSize (required), api_target_host
Example:
vx29> grdapi set_purge_batch_size batchSize=200000
ID=0
ok
grdapi get_purge_batch_size
Gets the current setting for the purge batch size.
function parameters: api_target_host
Example:
vx29> grdapi get_purge_batch_size
ID=0
Purge Batch Size = 200000
ok
grdapi patch_install
function parameters: patch_date, patch_number (required)
grdapi populate_from_dependencies
function parameters: descOfEndingGroup (required), descOfStartingGroup (required), flattenNamespace, getFunctions, getJavaClasses, getPackages, getProcedures, getSynonyms, getTables, getTriggers, getViews, isAppend (required), isEndingGroupQualified, owner (required), reverseIt, selectedDataSourceName (required), api_target_host
create_computed_attribute
Use in Reports.

Parameters:
attributeLabel (string) - Required.
expression (string) - Required. Server IP. The user must specify the tableName.field in the expression.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
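A sketch of a create_computed_attribute call using the documented attributeLabel and expression parameters; both quoted values are placeholders:

```
grdapi create_computed_attribute attributeLabel="<label>" expression="<tableName.field expression>"
```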
delete_computed_attribute
Use in Reports.

Parameters:
attributeLabel (string) - Required.
entityLabel (string) - Required.
expression (string) - Required.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
update_computed_attribute
Use in Reports.

Parameters:
attributeLabel (string) - Required.
entityLabel (string) - Required.
expression (string) - Required.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
create_constant_attribute
Use in Reports.

Parameters:
attributeLabel (string) - Required.
entityLabel (string) - Required.
constant (string) - Required.
api_target_host (string) - Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values:
    all_managed: for all managed units
    all: all managed units and CM
    group:<group name>: where group name is a group of managed units
    from CM only: the host name or IP of any managed unit, for example, api_target_host=10.0.1.123
    from a managed unit: the host name or IP of the CM
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
delete_constant_attribute
Use in Reports.
attributeLabel (string): Required.
entityLabel (string): Required.
constant (string): Required.
api_target_host (string): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
update_constant_attribute
Use in Reports.
attributeLabel (string): Required.
entityLabel (string): Required.
constant (string): Required.
api_target_host (string): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
create_ad_hoc_audit_and_run_once
Use in Reports.
chnageParlfExist (Boolean): Required.
emailContentType (integer)
includeUserReceiver (Boolean)
isForReportRunOnce (Boolean): Required.
processNamePar (string)
reportID (integer): Required.
sendToEmails (string)
taskParameter (string)
api_target_host (string): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
REST API
The JSON (JavaScript Object Notation) output option exposes GuardAPI functions through REST APIs. REST (Representational State Transfer) relies on a stateless, client/server, cacheable communications protocol; in virtually all cases, HTTP is used. REST is an architectural style for designing networked applications: rather than using complex mechanisms such as CORBA, RPC, or SOAP to connect machines, simple HTTP calls are made between them. RESTful applications use HTTP requests to post data (create or update), read data (for example, make queries), and delete data, so REST covers all four Create/Read/Update/Delete operations. REST is a lightweight alternative to mechanisms such as RPC (Remote Procedure Calls) and Web Services (SOAP, WSDL).
I want the ability to dynamically get a small amount of audit data for a certain IP address without having to login to the Guardium GUI.
I want to populate an existing group, so I can update my policy to prevent unauthorized access to sensitive information.
I want to get a list of all users within a certain authorized access group.
I want my application development team to help identify what sensitive tables to monitor.
I want to script access to grdAPIs without using the "expect" scripting language, which requires me to code response text from the target system.
For internal REST API requests, there is a special ROLE and USER predefined in the system.
This user cannot be removed or modified through the accessmgr UI and cannot be used to log in to the UI.
This user's password never expires, but it is revoked if the client ID is revoked.
The internal (S-TAP, maybe others) client must secure the client secret and password.
Permissions for different functions can be assigned to the role through accessmgr UI.
GET = List
POST = Create
PUT = Update
DELETE = Delete
GuardAPIs
-X GET https://10.10.9.239:8443/restAPI/datasource/?name="MSSQL_1"
create_datasource
-X POST https://10.10.9.239:8443/restAPI/datasource
-X DELETE -d '{"id":20020}'
For further information, go to the Using the IBM Security Guardium REST API article on DeveloperWorks.
http://www.ibm.com/developerworks/data/library/techarticle/dm-1404guardrestapi/index.html
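The verb mapping and request fragments above can be condensed into a small helper that assembles a Guardium REST call. This is an illustrative sketch only: the host, port 8443, resource path, and token value are taken from the examples above or assumed, not a definitive client.

```python
from urllib.parse import urlencode

def build_rest_request(host, resource, token, method="GET", params=None):
    """Assemble (method, url, headers) for a Guardium REST call.

    The GuardAPI function is exposed as a resource under /restAPI/, the
    OAuth access token travels in the Authorization header, and the HTTP
    verb selects the operation (GET=List, POST=Create, PUT=Update,
    DELETE=Delete).
    """
    url = f"https://{host}:8443/restAPI/{resource}"
    if params and method == "GET":
        url += "?" + urlencode(params)          # e.g. ?name=MSSQL_1
    headers = {
        "Authorization": f"Bearer {token}",     # token from register_oauth_client
        "Content-Type": "application/json",
    }
    return method, url, headers

# List the datasource named MSSQL_1 (GET = List):
method, url, headers = build_rest_request(
    "10.10.9.239", "datasource", "my-token", params={"name": "MSSQL_1"})
```

A real call would then hand these pieces to any HTTP client, for example curl -X GET "<url>" with the Authorization header.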
register_oauth_client
Use this GuardAPI command to wrap supported GuardAPI functions in a RESTful API that uses JSON (JavaScript Object Notation) for input and output.
Use the GrdAPI command, grdapi register_oauth_client, to register the client and obtain the necessary access token to call the REST services.
function parameters:
grant_types - String - required. The only grant type that is supported is password.
fetchSize - String - optional. The default is 20 records, to retain backward compatibility; the maximum value is 30000.
sortColumn - optional. If specified, it must be the column title of one of the report fields.
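As a sketch of the password-grant exchange that follows registration: after grdapi register_oauth_client returns a client_id and client_secret, the access token is requested with a form-encoded password grant. The /oauth/token path and field names below follow the DeveloperWorks article referenced in this section; treat them as illustrative and verify them against your appliance.

```python
from urllib.parse import urlencode

def build_token_request(host, client_id, client_secret, username, password):
    """Assemble the URL and form body for the OAuth password-grant request."""
    url = f"https://{host}:8443/oauth/token"
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "password",   # the only supported grant type
        "username": username,
        "password": password,
    })
    return url, body

url, body = build_token_request(
    "10.10.9.239", "my-client-id", "my-client-secret", "admin", "changeme")
```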
Syntax
getOAuthTokenExpirationTime
Use this GuardAPI command to get the expiration time of the REST API token.
function parameters:
api_target_host - String
Syntax
disable_quick_search
Note that the Investigation Dashboard includes the Quick Search Results Table, in addition to the Activity Chart, and various other pre-defined charts.
grdapi disable_quick_search
all (true or false): In an environment with a Central Manager, use this parameter to disable search on all managed units. For example, all=true. This parameter is optional.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
enable_quick_search
Enable Investigation Dashboard functionality.
For example, the following command enables the Investigation Dashboard with a 2-minute data extraction interval: grdapi enable_quick_search
schedule_interval=2 schedule_units=MINUTE.
all (true or false): In an environment with a Central Manager, use this parameter to enable search on all managed units. For example, all=true. This parameter is optional.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
extraction_start (date): Define the date at which to start the extraction of audit data for search. If this parameter is omitted, extraction starts immediately. This parameter is optional.
includeViolations (true or false): Determine whether to include violations in the search indexes. Omitting violations can help reduce the size of search indexes. This parameter is optional.
schedule_start (date): Date on which to begin following the extraction interval defined by the schedule_interval and schedule_units parameters. This parameter is optional.
schedule_units (HOUR or MINUTE): Used with the schedule_interval parameter to define the interval for extracting audit data. For example, schedule_interval=2 schedule_units=MINUTE.
set_enterprise_search_options
Define the search mode for the Investigation Dashboard.
For example, the following command configures the Investigation Dashboard in all_machines mode to allow searching of data across the entire Guardium environment
from any Guardium machine in that environment: grdapi set_enterprise_search_options distributed_search=all_machines.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
add_ip_to_sg
Adds the specified Guardium IP to the cloud security group.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM.
add_objects_native_audit
Adds objects to the Object Audit (audit trail) on the specified datasource.
add_objects_native_audit parameter=value
objects (string): Comma-separated list of objects. View objects with get_native_audit_objects or in the GUI.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM.
disable_native_audit
Disables DB Audit (native audit) on the specified cloud datasource.
disable_native_audit parameter=value
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM.
enable_native_audit
Enable DB Audit (native audit) on the specified datasource.
enable_native_audit parameter=value
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM.
get_native_audit_collectors
Returns the name of the collector, in your environment, that is receiving data from the specified host, port, and service name.
get_native_audit_collectors parameter=value
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM.
get_native_audit_configurations
get_native_audit_configurations parameter=value
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM.
get_native_audit_objects
Returns all objects found by the classification process on the specified host, port, and service name.
get_native_audit_objects parameter=value
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM.
remove_objects_native_audit
Disable the object audit (audit trail) on the specified objects in the specified datasource.
remove_objects_native_audit parameter=value
objects (string): Comma-separated list of objects. View objects with get_native_audit_objects or in the GUI.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM.
grdapi enable_outliers_detection_agg
grdapi enable_outliers_detection_agg
grdapi disable_outliers_detection_agg
Run on a central manager to enable or disable sending export data from all collectors in the CM environment that send their data to the specified aggregator, except a
collector that is running outliers detection locally. Data is collected hourly and sent to the aggregator for outliers detection processing. A distributed report mechanism is
used to extract and send data to an aggregator.
grdapi enable_outliers_detection
grdapi enable_outliers_detection
grdapi disable_outliers_detection
execute_cls_process
Executes (submits) a classification process. It is the equivalent of executing Run Once Now from the Classification Process Builder. It submits the job, which places the process on the Guardium® Job Queue, from which the appliance runs a single job at a time. Administrators can view the job status by selecting Guardium Monitor > Guardium Job Queue.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
execute_assessment
Executes (submits) a security assessment. It is the equivalent of executing Run Once Now from the Security Assessment Finder. It submits the job, which places the process on the Guardium Job Queue, from which the appliance runs a single job at a time. Administrators can view the job status by selecting Guardium Monitor > Guardium Job Queue.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
execute_auditProcess
Runs the specified audit process. It is the equivalent of executing Run Once Now from the Audit Process Builder.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
stop_audit_process
The stop_audit_process API cannot be used through the GuardAPI command line. This function is only usable as an invocation through a drill down. See the sub-topic, Stop an audit process, in the Compliance Workload Automation help topic.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
stop_audit_process
execute_populateGroupFromQuery
Note: This grdapi can only be used for groups that have already been configured in the Populate Group From Query Set Up screen (a query should have been chosen and parameters should have been set).
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
grdapi execute_appUserTranslation
Execute an application user translation. Imports the user definitions for all applications configured in the Application User Translation Configuration screen. It is the equivalent of executing Run Once Now from the Application User Translation Configuration screen.
Note: To run this grdapi, you must define at least one Application User Detection in the Application User Translation Configuration screen. If not, a message is displayed.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
grdapi execute_appUserTranslation
execute_flatLogProcess
Merges the flat log information into the internal database. It is the equivalent of executing Run Once Now from the Flat Log Process screen.
Note: This grdapi can be executed only if Flat Log Process is configured as Process in the Flat Log Process screen. If not, an error message is displayed.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
grdapi execute_flatLogProcess
execute_incidentGenProcess
Executes the query that is defined for the selected incident generation process, using the processId, against the policy violations log, and generates incidents based on that query. It is the equivalent of executing Run Once Now from the Edit Incident Generation Process screen.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
execute_incidentGenProcess_byDetails
Executes the query that is defined for the selected incident generation process, using the query name, against the policy violations log, and generates incidents based on that query. It is the equivalent of executing Run Once Now from the Edit Incident Generation Process screen.
user: User
threshold: Threshold
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
upload_custom_data
Uploads data to the custom table specified by tableName. It is the equivalent of executing Upload from the Import Data screen of the Custom Table Builder. To run this grdapi, you must first configure the specified custom table in Import Table Structure of the Custom Table Builder. From the UI, go to Tools > Report Builder > Custom Table Builder, select a custom table, click Upload Data, and select a datasource.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
execute_ldap_user_import
Imports LDAP users. It imports Guardium user definitions from an LDAP server configured in the LDAP User Import screen. It is the equivalent of executing Run Once Now from the LDAP User Import screen (log in as accessmgr, then LDAP Import).
Note: LDAP must be configured. Otherwise, the system gives an error message.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
grdapi execute_ldap_user_import
policy_install
Install a policy or multiple policies. If multiple policies are to be installed, the policies must be delimited by a pipe character '|', in the order in which you want them installed. This is required even if only one policy has changed.
Install multiple policies with the grdapi policy_install command. Install by position by specifying the policies in the order that you want to install them.
Even in the UI, installing a policy after another installed policy reinstalls all of them, which is the same behavior as the grdapi policy_install command.
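The ordering rule above can be captured in a small helper that assembles the pipe-delimited argument. This is an illustrative sketch; the policy parameter name shown in the comment is an assumption, so check the command syntax on your appliance.

```python
def build_policy_install_arg(policies):
    """Join policy names, in installation order, with the '|' delimiter.

    Every policy that should remain installed must be listed, even if
    only one of them changed.
    """
    for name in policies:
        if "|" in name:
            raise ValueError(f"policy name may not contain '|': {name}")
    return "|".join(policies)

arg = build_policy_install_arg(["Base Policy", "PCI Policy", "Custom Policy"])
# e.g. grdapi policy_install policy="Base Policy|PCI Policy|Custom Policy"
```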
api_target_host: In a central management configuration only, allows the user to specify a target host where the API will execute. On a Central Manager (CM), the value is the host name or IP of any managed unit. On a managed unit, it is the host name or IP of the CM.
Examples
delete_policy
Use the delete_policy command to delete a policy specified by the policyDesc parameter.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
list_policy
Use the list_policy command to display a list of available policies or to display details about a single policy.
policyDesc: Policy name. If unspecified, the list_policy command returns a list of available policies.
detail: Accepts values of true or false. The default value is true and returns policy details. Specifying a value of false returns only policy names.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Examples
grdapi list_policy
copy_rule
Copy a rule <ruleDesc> of <fromPolicy> to the end of the <toPolicy> rule list.
Note: Both <fromPolicy> and <toPolicy> must be created before running this grdapi.
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
update_rule
Update a policy rule. Updates a rule <ruleDesc> of <fromPolicy> for a rule parameter.
See Policies for additional information on the following policy rule parameters, which can be altered with the update_rule API call.
clientIP: Client IP
serverIP: Server IP
command: Command
pattern: Pattern
severity: Severity
category: Category
classification: Classification
api_target_host (hostname or IP address): Optional parameter that specifies the target host(s) on which to execute the API. When not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units); all (all managed units and the CM); group:<group name>, where group name is a group of managed units; from the CM only, the host name or IP of any managed unit, for example, api_target_host=10.0.1.123; from a managed unit, the host name or IP of the CM. Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API executes. On a Central Manager (CM), the value is the host name or IP of any managed unit; on a managed unit, it is the host name or IP of the CM.
Example
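Every GuardAPI call shares the quoting and api_target_host conventions described above, so invocations can be assembled programmatically. A minimal sketch follows; build_grdapi_command is a hypothetical helper, not part of Guardium, and the parameter names in the demo call are taken from the update_rule table above.

```python
# Hypothetical helper (not part of Guardium): assemble a grdapi command
# line using the double-quoted value style shown in the documented examples.

def build_grdapi_command(api_name, api_target_host=None, **params):
    """Return a string such as:
    grdapi update_rule fromPolicy="P1" severity="HIGH" api_target_host=all_managed
    """
    parts = ["grdapi", api_name]
    for key, value in params.items():
        parts.append(f'{key}="{value}"')          # values are double-quoted
    if api_target_host is not None:
        # all_managed, all, group:<group name>, or a host name/IP
        parts.append(f"api_target_host={api_target_host}")
    return " ".join(parts)

print(build_grdapi_command("update_rule",
                           api_target_host="all_managed",
                           fromPolicy="Hadoop Policy",
                           severity="HIGH"))
# prints: grdapi update_rule fromPolicy="Hadoop Policy" severity="HIGH" api_target_host=all_managed
```

The resulting string would be run from the CLI console; the helper only illustrates the documented syntax.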
change_rule_order
Change policy rule order. Change the ordered position of a rule within a policy.
Parameters:
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example
list_policy_rules
List the rules for a policy.
Parameters:
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example
delete_rule
Remove a rule from a policy.
Parameters:
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example
uninstall_policy_rule
Use the uninstall_policy_rule command to uninstall the policy rule(s) specified by the policy and ruleName parameters.
Parameters:
ruleName - Rule name(s). Specify multiple policy rules using the pipe character, for example ruleName="rule1|rule2|rule3".
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Examples
grdapi uninstall_policy_rule policy="Hadoop Policy" ruleName="Low interest Objects: Allow|Low Interest Commands: Allow"
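The pipe-separated ruleName syntax lends itself to scripting. As a sketch, a hypothetical wrapper (uninstall_rules_command is not a Guardium function) can reproduce the example above from a list of rule names:

```python
# Hypothetical wrapper around the documented pipe-separated ruleName syntax.

def uninstall_rules_command(policy, rule_names):
    """Build a grdapi uninstall_policy_rule call for one or more rules."""
    joined = "|".join(rule_names)  # rule1|rule2|rule3
    return f'grdapi uninstall_policy_rule policy="{policy}" ruleName="{joined}"'

print(uninstall_rules_command(
    "Hadoop Policy",
    ["Low interest Objects: Allow", "Low Interest Commands: Allow"]))
# prints: grdapi uninstall_policy_rule policy="Hadoop Policy" ruleName="Low interest Objects: Allow|Low Interest Commands: Allow"
```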
reinstall_policy_rule
Use the reinstall_policy_rule command to reinstall the policy rule(s) specified by the policy and ruleName parameters.
Parameters:
ruleName - Rule name(s). Specify multiple policy rules using the pipe character, for example ruleName="rule1|rule2|rule3".
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Examples
grdapi reinstall_policy_rule policy="Hadoop Policy" ruleName="Low interest Objects: Allow|Low Interest Commands: Allow"
delete_Audit_process_result
Use this command to delete any audit process results.
Parameters:
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example
Note: The Mapping GuardAPI Parameters to Domain Entities and Attributes in GuardAPI Input Process Generation shows the domains, entities and attributes of the
system and has a GUI interface to invoke this API function.
Parameters:
domain - Any of the Guardium reporting domains, such as Access, Alert, Discovered Instances, Exceptions, Group Tracking, etc.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example
list_param_mapping_for_function
List the parameter mappings for an API function.
Note: The Mapping GuardAPI Parameters to Domain Entities and Attributes in GuardAPI Input Process Generation shows the domains, entities and attributes of the
system and has a GUI interface to invoke this API function.
Parameters:
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example
delete_api_parameter_mapping
Delete API Parameter Mappings for Domain Entities and Attributes. Remove the parameter mappings for an API function.
Note: The Mapping GuardAPI Parameters to Domain Entities and Attributes in GuardAPI Input Process Generation shows the domains, entities and attributes of the
system and has a GUI interface to invoke this API function.
Parameters:
domain - Any of the Guardium reporting domains, such as Access, Alert, Discovered Instances, Exceptions, Group Tracking, etc.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example
close_default_events
Parameters:
eventStatus - Required. Event status. Must be a valid status for the default event defined for the audit task, and must be a final status.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example
create_quarantine_allowed_until
Use in Policies.
Parameters:
allowedUntil - Required.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
create_quarantine_until
Use in Policies.
Parameters:
quarantineUntil - Required.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
delete_quarantine_until
Use in Policies.
Parameters:
quarantineUntil - Required.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
must_gather
Use the grdapi must_gather command to collect information on the state of the Guardium system for use by Guardium Support. See Basic information for IBM Support for further information on this topic.
Parameters:
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
restart_job_queue_listener
Use the restart_job_queue_listener command to restart the job queue listener if the job queue fails to start, does not run waiting jobs, or if a job appears stuck in running or stopping status for a prolonged period. Issuing this command immediately restarts the job queue; any currently executing jobs are halted and restarted.
Example:
grdapi restart_job_queue_listener
update_quarantine_allowed_until
Use in Policies.
Parameters:
allowedUntil - Required.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
update_quarantine_until
Parameters:
quarantineUntil - Required.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Parent topic: GuardAPI Reference
Note: If you create query rewrite definitions by using APIs, you can still use the UI to retrieve those definitions for testing with the Query Rewrite Builder.
assign_qr_condition_to_action
create_qr_action
create_qr_add_where
create_qr_add_where_by_id
create_qr_condition
create_qr_definition
create_qr_replace_element
create_qr_replace_element_byId
list_qr_action
list_qr_add_where
list_qr_add_where_by_id
list_qr_condition
list_qr_condition_to_action
list_qr_definitions
list_qr_replace_element
list_qr_replace_element_byId
remove_all_qr_replace_elements
remove_all_qr_replace_elements_byId
remove_qr_add_where_by_id
remove_qr_condition
remove_qr_definition
remove_qr_replace_element_byId
update_qr_action
update_qr_add_where_by_id
update_qr_condition
update_qr_definition
update_qr_replace_element_byId
assign_qr_condition_to_action
Create an association between a query rewrite condition and an associated action.
Parameters:
conditionName - Required. The name of the query rewrite condition to be associated with the specified action.
definitionName - Required. The name of the query rewrite definition that is associated with the specified condition and action.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
create_qr_action
Create a query rewrite action for a specified query rewrite definition.
Parameters:
definitionName - Required. The query rewrite definition that is associated with this action.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
create_qr_add_where
Associate a query rewrite function to add a WHERE condition to the specified query rewrite action.
Parameters:
definitionName - Required. The query rewrite definition that is associated with this action.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
create_qr_add_where_by_id
Associate a query rewrite function to add a WHERE condition to the specified query rewrite action.
Parameters:
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
create_qr_condition
Create a query rewrite condition.
Parameters:
definitionName - Required. The query rewrite definition that is associated with this condition.
depth - Integer that specifies the depth of the parsed SQL that this condition applies to (1 and higher). The default of -1 means that the query rewrite condition applies to matching SQL at any depth.
isForAllRuleObjects - True or false. Use this parameter to associate this condition with objects in a policy access rule. True indicates that the specified condition applies to all objects in the access rule's Object field or Object group for a fired rule. The default is false, which means the query condition uses the objects that are defined in this condition. Neither option affects rule triggering behavior.
isForAllRuleVerbs - True or false. Use this parameter to associate this condition with verbs in a policy access rule. True indicates that the specified condition applies to all verbs in the access rule's Verb field or Verb group for a fired rule. The default is false, which means the query condition uses the verbs that are defined in this condition. Neither option affects rule triggering behavior.
isObjectRegex - True or false. Indicates that the object is specified by using a regular expression. Default is false.
isVerbRegex - True or false. Indicates that the verb is specified by using a regular expression. Default is false.
object - An object (table, view). The default "*" means all objects. This can also be specified as a regular expression, in which case set isObjectRegex to true.
order - Specifies the order in which to assemble multiple related query rewrite conditions for complex SQL. Default is 1.
verb - A verb (select, insert, update, delete). The default "*" means all verbs.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
grdapi create_qr_condition definitionName="case 15" conditionName="qr cond15_3" verb=select isForAllRuleObjects=false object=* depth=2 order=3
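The defaults in the table above can be captured client-side before issuing the call. The QrCondition class below is illustrative only (not a Guardium API); it mirrors the documented parameter names and default values:

```python
# Illustrative only: the documented create_qr_condition parameters and
# their defaults, captured as a dataclass for client-side validation.
from dataclasses import dataclass

@dataclass
class QrCondition:
    definitionName: str
    conditionName: str           # shown in the example invocation above
    verb: str = "*"              # "*" = all verbs
    object: str = "*"            # "*" = all objects
    depth: int = -1              # -1 = match at any depth
    order: int = 1
    isVerbRegex: bool = False
    isObjectRegex: bool = False
    isForAllRuleVerbs: bool = False
    isForAllRuleObjects: bool = False

cond = QrCondition(definitionName="case 15", conditionName="qr cond15_3",
                   verb="select", depth=2, order=3)
print(cond.object, cond.depth)  # prints: * 2
```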
create_qr_definition
Create a query rewrite definition.
Parameters:
dataBaseType - Required. The type of database this query rewrite definition is associated with. Acceptable values are ORACLE or DB2.
definitionName - Required. A unique name for this query rewrite definition.
isNegateQrCond - Indicates whether there is a NOT flag on the set of query rewrite conditions that are associated with this definition.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
create_qr_replace_element
Create a replacement specification for a specified query rewrite action and query rewrite definition pair.
Parameters:
actionName - Required. The unique name of the query rewrite action this rewrite function is associated with.
definitionName - Required. The name of the query rewrite definition that is associated with this action.
isFromAllRuleElements - True or false. Indicates that this action applies to all FROM elements. Default is false.
isFromRegex - True or false. Indicates that the "from" element is specified by using a regular expression. Default is false.
isReplaceToFunction - True or false. Indicates that the "replace to" value is the name of a function, such as a user-defined function.
replaceFrom - The incoming string for a matching rule that is to be replaced. Use replaceType to indicate which element of the incoming query to examine: SELECT, VERB, OBJECT, SENTENCE, or SELECTLIST.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
create_qr_replace_element_byId
Create a replacement specification for a specified query rewrite action.
Parameters:
isFromAllRuleElements - True or false. Indicates that this action applies to all FROM elements. Default is false.
isFromRegex - True or false. Indicates that the "from" element is specified by using a regular expression. Default is false.
isReplaceToFunction - True or false. Indicates that the "replace to" value is the name of a function, such as a user-defined function.
replaceFrom - The incoming string for a matching rule that is to be replaced. Use replaceType to indicate which element of the incoming query to examine: SELECT, VERB, OBJECT, SENTENCE, or SELECTLIST.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
list_qr_action
Lists query actions for a specified query definition.
Parameters:
detail - True or false. The default is true, which lists all the associated attributes of the actions. Only the name is returned for false.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
Output:
ok
Example:
grdapi list_qr_action definitionName="case 2" detail=false
Output:
list_qr_add_where
Lists "add where" functions for a specified query action and query definition pair.
Parameters:
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
list_qr_add_where_by_id
Lists "add where" functions for a specified query action.
Parameters:
qrActionId - Required (integer). The unique identifier for the query rewrite action.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
list_qr_condition
Lists query rewrite conditions.
Parameters:
detail - True or false. The default is true, which lists all the associated attributes of the conditions. Only the name is returned for false.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
Output:
#######################################################################
qr condition id: 1
qr condition name: qr cond2
qr definition ID: 1
qr condition verb: *
qr condition object: *
qr condition dept: -1
is verb regex: false
is object regex: false
is action for all rule verbs: false
is action for all rule objects: false
qr condition order: 1
list_qr_condition_to_action
Lists the associations between a query rewrite condition and a query rewrite action for a particular query definition.
Parameters:
actionName - Required. The unique name of the query rewrite action.
detail - True or false. The default is true, which lists all the associated attributes of the conditions for the specified action and definition. Only the name is returned for false.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
Output:
#######################################################################
qr condition id: 1
qr condition name: qr cond2
qr definition ID: 1
qr condition verb: *
qr condition object: *
qr condition dept: -1
is verb regex: false
is object regex: false
is action for all rule verbs: false
is action for all rule objects: false
qr condition order: 1
list_qr_definitions
Lists query rewrite definitions.
Parameters:
detail - True or false. The default is true, which lists all the associated attributes of the definitions. Only the name is returned for false.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
grdapi list_qr_definitions
Output:
#######################################################################
qr definition ID: 1
qr definition name: case 2
qr definition description:
is negation set on qr conditions: false
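Output from the list_qr_* commands follows a simple "key: value" layout, so it can be post-processed in a script. The sketch below is a hypothetical helper (parse_grdapi_listing is not part of Guardium) that skips the "#" and "*" separator rules shown in the sample output:

```python
# Hypothetical helper: parse the "key: value" lines printed by the
# grdapi list_* commands into a dict, skipping "#"/"*" separator rules.

def parse_grdapi_listing(text):
    record = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or set(line) <= {"#", "*"}:
            continue                      # blank line or separator rule
        key, _, value = line.partition(":")
        record[key.strip()] = value.strip()
    return record

sample = """####################################
qr definition ID: 1
qr definition name: case 2
qr definition description:
is negation set on qr conditions: false"""
print(parse_grdapi_listing(sample)["qr definition name"])  # prints: case 2
```

A real listing can contain multiple records; splitting on the separator rules first would yield one dict per record.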
list_qr_replace_element
Lists replacements for a specified query rewrite action and query rewrite definition pair.
Parameters:
detail - True or false. The default is true, which lists all the associated attributes of the replacement elements for the specified action and definition. Only the names are returned for false.
replaceType - The element type to list: SELECT, VERB, OBJECT, SENTENCE, or SELECTLIST.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
Output:
#######################################################################
***********************************************************************
qr replace element ID: 2
qr replace type: selectList
qr replace from: Whole select list
qr replace to: EMPNO,SAL
qr is from regex: false
qr is from all rule elements: false
list_qr_replace_element_byId
Lists replacements for a specified query rewrite action.
Parameters:
detail - True or false. The default is true, which lists all the associated attributes of the replacement elements for the specified action and definition. Only the names are returned for false.
qrActionId - Required (integer). The unique identifier for the query rewrite action.
api_target_host (host name or IP address) - Optional. Specifies the target host(s) on which to execute the API; when not specified, it defaults to the unit on which the command is executed. Valid values: all_managed (all managed units), all (all managed units and the CM), group:<group name> (a group of managed units), or a host name/IP (from the CM: any managed unit, for example api_target_host=10.0.1.123; from a managed unit: the CM). In Guardium V10.1 and 10.1.2, this parameter applies in a central management configuration only.
Example:
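The following invocation is illustrative only; the qrActionId value 101 is a placeholder, not taken from the source:
grdapi list_qr_replace_element_byId qrActionId=101 detail=true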
remove_all_qr_replace_elements
Deletes query replacement specifications from the system.
Parameter  Description
definitionName  Required. The name of the query rewrite definition whose replacement specifications are to be deleted.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
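An illustrative invocation; the definition name is a placeholder reusing a name from the earlier output:
grdapi remove_all_qr_replace_elements definitionName="case 2"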
remove_all_qr_replace_elements_byId
Deletes query replacement specifications from the system.
Parameter  Description
If replaceType is not specified, then all replacements for the specified action and definition are deleted.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
remove_qr_action
Deletes a specified query rewrite action from the system.
Parameter  Description
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
remove_qr_add_where_by_id
Deletes a specified "add where" function from the system.
Parameter  Description
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
remove_qr_condition
Deletes a query rewrite condition from the system.
Parameter  Description
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
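A sketch only; the parameter names definitionName and conditionName are assumed by analogy with update_qr_condition and may differ, and the values are placeholders:
grdapi remove_qr_condition definitionName="case 16" conditionName="qr cond16_3"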
remove_qr_definition
Deletes a query rewrite definition from the system.
Parameter  Description
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
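A sketch only; the definitionName parameter is assumed by analogy with the other query rewrite commands, and the value is a placeholder:
grdapi remove_qr_definition definitionName="case 15"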
remove_qr_replace_element_byId
Deletes a specified query element replacement from the system.
Parameter  Description
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
grdapi remove_qr_replace_element_byId qrReplaceElementId=33333
update_qr_action
Updates an existing query rewrite action with a new name and optional description.
Parameter  Description
definitionName  Required. The query rewrite definition that is associated with this action.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
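A sketch only; the actionName and newName parameters are assumed by analogy with update_qr_condition and may differ, and all values are placeholders:
grdapi update_qr_action definitionName="case 16" actionName="qr action1" newName="qr action2"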
update_qr_add_where_by_id
Updates an existing "add where" function with new replacement text.
Parameter  Description
qrAddWhereId  Required (integer). The unique identifier for the query rewrite "add where" function.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
update_qr_condition
Update an existing query rewrite condition.
Parameter  Description
definitionName  Required. The query rewrite definition that is associated with this condition.
depth  Integer that specifies the depth of the parsed SQL that this condition applies to (1 and higher). The default -1 means that the query
rewrite condition applies to any matching SQL at any depth.
isForAllRuleObjects  True or false. Indicates that the specified condition applies to all objects for the fired rule. Default is false.
isForAllRuleVerbs  True or false. Indicates that the specified condition applies to all verbs for the fired rule. Default is false.
isObjectRegex  True or false. Indicates that the specified object is specified by using a regular expression. Default is false.
isVerbRegex  True or false. Indicates that the specified verb is specified by using a regular expression. Default is false.
Object  An object (table or view). The default "*" means all objects. This can also be specified as a regular expression, in which case set isObjectRegex to true.
Order  Used to specify the order in which to assemble multiple related query rewrite conditions for complex SQL. Default is 1.
Verb  A verb (select, insert, update, delete). The default "*" means all verbs.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
grdapi update_qr_condition definitionName="case 16" conditionName="qr cond15_3" newName="qr cond16_3" verb=select object=* dept=2 order=3
update_qr_definition
Update an existing query rewrite definition.
Parameter  Description
dataBaseType  Required. The type of database this query rewrite definition is associated with. Must be either ORACLE or DB2.
definitionName  Required. A unique name for this query rewrite definition.
isNegateQrCond  Indicates whether there is a NOT flag on the set of query rewrite conditions that are associated with this definition.
sampleSql  Optional. Specify a sample SQL statement. In most cases, you do not use this unless you want to use the entered sample SQL later in the UI.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
grdapi update_qr_definition dataBaseType="DB2" definitionName="case 15" sampleSql="select EMPNO from EMP where ENAME = (select
ENAME from EMP where SAL = (select SAL from EMP where HIREDATE = to_date('06/09/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS')))"
newName="DB2_case 15"
update_qr_replace_element_byId
Update an existing replacement specification for a specified query rewrite action.
isFromAllRuleElements  True or false. Indicates that the "from" element applies to all elements of the fired rule. Default is false.
isFromRegex  True or false. Indicates that the "from" element is specified by using a regular expression. Default is false.
isReplaceToFunction  True or false. Indicates that the "replace to" is the name of a function, such as user-defined function.
replaceFrom  The incoming string for a matching rule that is to be replaced. Use replaceType to indicate specifically which element of the incoming
query to examine.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
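An illustrative invocation; qrReplaceElementId and the replacement strings are placeholders, and the replaceTo parameter name is assumed from the "qr replace to" attribute shown in the earlier output:
grdapi update_qr_replace_element_byId qrReplaceElementId=33333 replaceFrom="Whole select list" replaceTo="EMPNO,SAL" isFromRegex=false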
Note: In a Central Management environment, the object to which you want to add a role may reside on the Central Manager or on a managed unit. See the Overview of the
Aggregation & Central Management help book, for more information.
grant_role_to_object_by_id
Add a role to the specified object - a Classification process, for example. Dependencies are checked before adding the role. For example, before adding a role to a
Classification process, that role must be assigned to all components contained by that Classification process (the classification policy and any datasources referenced).
Parameter  Description
objectTypeId  Required (integer). Identifies the type of object to which the role will be assigned. It must be one of the following integers:
1=Query
2=Report
3=Alert
4=Baseline
5=Policy
6=SecurityAssessment
7=PrivacySet
8=AuditProcess
12=CustomTable
13=Datasource
14=CustomDomain
15=ClassifierPolicy
16=ClassificationProcess
objectId  Required (integer). Identifies the object to which the role will be assigned.
roleId  Required (integer). Identifies the role to assign. This can be any existing role ID, or the special value -1, which allows access by all
roles.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
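An illustrative invocation granting all roles to a report; the objectId value is a placeholder (objectTypeId 2 = Report, roleId -1 = all roles, per the table above):
grdapi grant_role_to_object_by_id objectTypeId=2 objectId=1234 roleId=-1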
grant_role_to_object_by_Name
Add a role to the specified object - a Classification process, for example. Dependencies are checked before adding the role. For example, before adding a role to a
Classification process, that role must be assigned to all components contained by that Classification process (the classification policy and any datasources referenced).
Parameters
Parameter  Description
objectType  Required. Identifies the type of object to which the role will be assigned. It must be one of the following:
Query
Report
Alert
Baseline
Policy
SecurityAssessment
PrivacySet
AuditProcess
CustomTable
Datasource
CustomDomain
ClassifierPolicy
ClassificationProcess
objectName  Required. The name of the object (the query or report, for example) to which the role will be assigned.
role  Required. The name of the role to assign. This can be any existing role, or all_roles to allow access by all roles.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
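An illustrative invocation; the report name is a placeholder:
grdapi grant_role_to_object_by_Name objectType=Report objectName="My Report" role=all_roles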
list_roles_granted_to_object_by_id
Displays the roles assigned to the specified object - a Classification process, for example.
Parameter  Description
objectTypeId  Required (integer). Identifies the type of object to which the role will be assigned. It must be one of the following integers:
1=Query
2=Report
3=Alert
4=Baseline
5=Policy
6=SecurityAssessment
7=PrivacySet
8=AuditProcess
12=CustomTable
13=Datasource
14=CustomDomain
15=ClassifierPolicy
16=ClassificationProcess
objectId  Required (integer). Identifies the object to which the role will be assigned.
roleId  Required (integer). Identifies the role to assign. This can be any existing role ID, or the special value -1, which allows access by all
roles.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
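An illustrative invocation; the objectId value is a placeholder (objectTypeId 16 = ClassificationProcess, per the table above):
grdapi list_roles_granted_to_object_by_id objectTypeId=16 objectId=8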
list_roles_granted_to_object_by_Name
Displays the roles assigned to the specified object - a Classification process, for example.
Parameter  Description
objectType  Required. Identifies the type of object to which the role will be assigned. It must be one of the following:
Query
Report
Alert
Baseline
Policy
SecurityAssessment
PrivacySet
AuditProcess
CustomTable
Datasource
CustomDomain
ClassifierPolicy
ClassificationProcess
objectName  Required. The name of the object (the query or report, for example) to which the role will be assigned.
role  Required. The name of the role to assign. This can be any existing role, or all_roles to allow access by all roles.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
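An illustrative invocation; the process name is a placeholder:
grdapi list_roles_granted_to_object_by_Name objectType=ClassificationProcess objectName="PCI Discovery"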
revoke_role_from_object_by_id
Removes a role from the specified object - a Classification process, for example. Dependencies are handled automatically. For example, if the role foo is removed from a
specific query, the role foo will also be removed from any report based on that query.
Parameter  Description
objectTypeId  Required (integer). Identifies the type of object to which the role will be assigned. It must be one of the following integers:
1=Query
2=Report
3=Alert
4=Baseline
5=Policy
6=SecurityAssessment
7=PrivacySet
8=AuditProcess
12=CustomTable
13=Datasource
14=CustomDomain
15=ClassifierPolicy
16=ClassificationProcess
objectId  Required (integer). Identifies the object to which the role will be assigned.
roleId  Required (integer). Identifies the role to assign. This can be any existing role ID, or the special value -1, which allows access by all
roles.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
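An illustrative invocation; the objectId value is a placeholder (objectTypeId 2 = Report, roleId -1 = all roles, per the table above):
grdapi revoke_role_from_object_by_id objectTypeId=2 objectId=1234 roleId=-1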
revoke_role_from_object_by_Name
Removes a role from the specified object - a Classification process, for example. Dependencies are handled automatically. For example, if the role foo is removed from a
specific query, the role foo will also be removed from any report that uses that query.
Parameter  Description
objectType  Required. Identifies the type of object to which the role will be assigned. It must be one of the following:
Query
Report
Alert
Baseline
Policy
SecurityAssessment
PrivacySet
AuditProcess
CustomTable
Datasource
CustomDomain
ClassifierPolicy
ClassificationProcess
objectName  Required. The name of the object (the query or report, for example) to which the role will be assigned.
role  Required. The name of the role to assign. This can be any existing role, or all_roles to allow access by all roles.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
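An illustrative invocation; the report name is a placeholder:
grdapi revoke_role_from_object_by_Name objectType=Report objectName="My Report" role=all_roles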
create_stap_inspection_engine
Add an inspection engine to the specified S-TAP. S-TAP configurations can be modified only from the active Guardium® host for that S-TAP, and only when the S-TAP is
online.
Parameter  Description
stapHost  Required. The host name or IP address of the database server on which the S-TAP is installed.
protocol  Required. The database protocol, which must be one of these values:
DB2®
FTP
Informix®
Kerberos
Mysql
Netezza®
Oracle
PostgreSQL
Sybase
Teradata
exclude IE
MSSQL
named pipes
portMin  Required (integer). Starting port number of the range of listening ports that are configured for the database. (Do not use large
inclusive ranges, as this degrades the performance of the S-TAP.)
portMax  Required (integer). Ending port number of the range of listening ports for the database.
teeListenPort, teeRealPort  Optional (integer). Not used for Windows. Under UNIX, replaced by the KTAP DB Real Port when the K-TAP monitoring mechanism is used. Required when the TEE monitoring mechanism is used. The Listen Port is the port on which the S-TAP listens for and accepts local database traffic. The Real Port is the port onto which the S-TAP forwards traffic.
connectToIp  Optional. The IP address for the S-TAP to use to connect to the database. Some databases accept local connections only on the "real" IP address of the machine, and not on the default (127.0.0.1).
client  Required. A list of Client IP addresses and corresponding masks to specify which clients to monitor. If the IP address is the same as
the IP address for the database server, and a mask of 255.255.255.255 is used, only local traffic is monitored. A client address/mask
value of 1.1.1.1/0.0.0.0 monitors all clients. (See the example.)
encryption  Optional. Activate ASO encrypted traffic where encryption=0 (no) or encryption=1 (yes).
excludeClient  Optional. A list of Client IP addresses and corresponding masks to specify which clients to exclude. This option enables you to
configure the S-TAP to monitor all clients, except for a certain client or subnet (or a collection of these options).
procNames  For a Windows Server: For Oracle or MS SQL Server only, when named pipes are used. For Oracle, the list usually has two entries:
oracle.exe,tnslsnr.exe. For MS SQL Server, the list is usually just one entry: sqlservr.exe.
namedPipe  Windows only. Specifies the name of a named pipe. If a named pipe is used, but nothing is specified here, the S-TAP retrieves the
named pipe name from the registry.
ktapDbPort  Optional (integer). Not used for Windows. Under UNIX, used only when the K-TAP monitoring mechanism is used. Identifies the
database port to be monitored by the K-TAP mechanism.
dbInstallDir  UNIX only. Enter the full path name for the database installation directory. For example: /home/oracle10
procName  For a UNIX Server: For a DB2, Oracle, or Informix database, enter the full path name for the database executable. For example:
/home/oracle10/prod/10.2.0/db_1/bin/oracle
procNames  Optional
db2SharedMemAdjustment, db2SharedMemClientPosition  These three parameters are used for a DB2 inspection engine, only when the DB2 server is running under Linux. When these parameters are used, grdapi verifies only that the protocol is db2; it does not verify that the conditions have been met. See the DB2 Linux S-TAP Configuration Parameters topic for a detailed explanation of how to use these parameters.
instanceName  Optional (string). Used only for MSSQL or Oracle encrypted traffic. Either the MSSQL or ORACLE encryption flag must be turned on
before this parameter can be used.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
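An illustrative invocation for an Oracle inspection engine on a UNIX host; the host, ports, and paths are placeholders (the paths reuse the sample values from the table above), and the client value 1.1.1.1/0.0.0.0 monitors all clients as described above:
grdapi create_stap_inspection_engine stapHost=192.168.2.1 protocol=Oracle portMin=1521 portMax=1521 client=1.1.1.1/0.0.0.0 ktapDbPort=1521 dbInstallDir=/home/oracle10 procName=/home/oracle10/prod/10.2.0/db_1/bin/oracle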
Note:
Sometimes, when adding an inspection engine, a false message of Configuration rejected by S-TAP - see S-TAP event log for details, is displayed even though the configuration was not rejected and was installed correctly.
Client IP/mask is required for UNIX S-TAP, optional for Windows S-TAP.
list_inspection_engines
Display the properties of all S-TAPs on the specified host, optionally for a specific database type only.
Parameter  Description
stapHost  Required. The host name or IP address of a database server on which S-TAPs are installed (and configured to report to this Guardium
appliance).
type  Optional. If used, inspection engines for the specified database type only will be listed. Type must be one of the following:
db2
informix
mssql
mssql-np
oracle
sybase
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
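A command that might produce output like the following; the host is a placeholder:
grdapi list_inspection_engines stapHost=192.168.2.1 type=oracle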
ID=20162
         name =ORACLE2
         type =ORACLE
         connect to IP=127.0.0.1
         encrypted = no
                 client = 127.0.0.1/255.255.255.255
                 client = 192.168.0.0/255.255.0.0
         name =ORACLE3
         type =ORACLE
         connect to IP=127.0.0.1
         encrypted = no
ok
list_staps
Display the database servers from which S-TAPs report to this Guardium system, optionally listing only the servers that have S-TAPs for which this Guardium system is the
active host (that is, the one to which the S-TAP is sending data and the one from which the S-TAP configuration can be modified).
Parameter  Description
onlyActive  Optional (Boolean). Enter true, or omit this parameter, to list only those hosts having S-TAPs for which this Guardium system is the
active host. Enter false to list all hosts on which S-TAPs have been configured to use this Guardium system as either a primary or
secondary host.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
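A command that might produce output like the following, listing only hosts for which this system is the active host:
grdapi list_staps onlyActive=true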
ID=0
staps:
ok
delete_stap_inspection_engine
Remove an S-TAP inspection engine. This Guardium system must be the active host for the S-TAP from which the inspection engine will be removed.
Parameter  Description
stapHost  Required. The host name or IP address of the database server on which the S-TAP is installed.
type  Required. Identifies the type of inspection to be removed. Type must be one of the following:
Cassandra, CouchDB, DB2, DB2 Exit, FTP, GreenPlumDB, Hadoop, HTTP, iSERIES, Informix, KERBEROS, MongoDB, MS SQL, mssql-np,
Mysql, Named Pipes, Netezza, Oracle, PostgreSQL, SAP Hana, Sybase, Teradata, Teradata Exit (v10.1.3 and up), or Windows File Share
sequence  Required (integer). The sequence number of the inspection engine to be removed within the set of inspection engines of the specified
type. You can use the grdapi list_inspection_engines command with the type option first, to verify the sequence number of the
inspection engine to be removed.
waitForResponse  Optional. Specifies whether the API will wait for a response from the S-TAP. Valid values are 0 (do not wait) and 1 (wait for a
response). The default is 1 when stapHost is a single host name or IP address and 0 in all other cases.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
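An illustrative invocation; the host and sequence number are placeholders (use list_inspection_engines first to verify the sequence, as noted above):
grdapi delete_stap_inspection_engine stapHost=192.168.2.1 type=Oracle sequence=2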
Note: Sometimes, when deleting an inspection engine, a false message of Cannot remove Inspection Engine - the specified inspection engine is not found, is displayed
even though the removal was successful.
restart_stap
Restart an S-TAP inspection engine.
stapHost  Required. The host name or IP address of the database server on which the S-TAP is installed.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example
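An illustrative invocation; the host is a placeholder:
grdapi restart_stap stapHost=192.168.2.1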
set_stap_debug
Filter log content by database, protocol, or client information, instead of dumping all traffic to the log.
function parameters :
stapDebugInterval - required
stapDebugLevel - required
stapDebugOn - required
stapHost - required
api_target_host
store_stap_approval
Use this function to block unauthorized S-TAPs from connecting to the Guardium system.
If ON, S-TAPs cannot connect until they are specifically approved.
If an unapproved S-TAP connects, it is immediately disconnected until the IP address of that S-TAP is specifically authorized.
There is a pre-defined report for approved clients, Approved TAP clients. It is available on the Daily Monitor tab.
Note:
The store_stap_approval command does not work within an environment where there is an IP load balancer.
Within a Central Management environment, after adding the IP addresses of approved S-TAPs, synchronization can take up to an hour. After synchronization is complete, the status of the approved S-TAP appears green in the GUI.
Function: store_stap_approval
function parameters :
api_target_host - String
Syntax
CLI command
add_approved_stap_client
Use this GuardAPI command to add an approved S-TAP client.
Use of this GuardAPI command does not restart the sniffer and does not affect already connected S-TAPs. This command affects only new S-TAP connections.
Function: add_approved_stap_client
function parameters :
Syntax
list_approved_stap_client
Use this GuardAPI command to list approved S-TAP clients.
Function: list_approved_stap_client
Function parameters:
api_target_host - String
Syntax
grdapi list_approved_stap_client
list_stap_verification_results
Use this GuardAPI command to list S-TAP verification results.
function parameters:
stapHost - String. The host name or IP address of the database server on which the S-TAP is installed.
Syntax
delete_approved_stap_client
Use this GuardAPI command to remove an approved S-TAP client.
Use of this GuardAPI command does not restart the sniffer and does not affect other already connected S-TAPs. This command affects only the specified S-TAP
connections.
Function: delete_approved_stap_client
Function parameters:
api_target_host - String
Syntax
set_ktap_debug
function parameters :
ktapDebugInterval - required
ktapFunctionNames
stapHost - required
api_target_host
display_stap_config
Display all the properties of all S-TAPs on the specified host.
Parameter  Description
stapHost  Required. The host name or IP address of a database server on which S-TAPs are installed and configured to report to this Guardium
system, or a comma-separated list of host names or IP addresses. You can also use these values:
all_active
All S-TAPs that are configured to report to this Guardium system
all_windows_active
All S-TAPs that are configured to report to this Guardium system and are running on Windows machines
all_unix_active
All S-TAPs that are configured to report to this Guardium system and are running on UNIX machines
Examples:
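An illustrative invocation using the all_active value described above to display the configuration of every reporting S-TAP:
grdapi display_stap_config stapHost=all_active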
update_stap_config
Update properties of all S-TAPs on the specified host.
Parameter  Description
stapHost  Required. The host name or IP address of a database server on which S-TAPs are installed and configured to report to this Guardium system, or a comma-separated list of host names or IP addresses. You can also use these values:
all_active
All S-TAPs that are configured to report to this Guardium system
all_windows_active
All S-TAPs that are configured to report to this Guardium system and are running on Windows machines
all_unix_active
All S-TAPs that are configured to report to this Guardium system and are running on UNIX machines
updateValue  Required. One or more key-value pairs, in this format: section.parameter_name:new_value. section indicates the section of the
guard_tap.ini file in which the parameter is contained, and can be TAP or DB_x, where DB_x is a designation for an inspection engine
that appears as a section header in the file. You can specify new values for multiple parameters by separating the entries with an
ampersand (&) .
waitForResponse  Optional. Specifies whether the API will wait for a response from the S-TAP. Valid values are 0 (do not wait) and 1 (wait for a
response). The default is 1 when stapHost is a single host name or IP address and 0 in all other cases.
Examples:
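An illustrative invocation using the section.parameter_name:new_value format described above; the host, the sqlguard_ip parameter, and its value are placeholders:
grdapi update_stap_config stapHost=192.168.2.1 updateValue=TAP.sqlguard_ip:10.10.9.248 waitForResponse=1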
verify_stap_inspection_engine_with_sequence
Use this command to verify the S-TAP inspection engine.
Parameter  Description
addToSchedule  String. Constant values list; valid values are Yes and No.
datasourceName  String. If this parameter is specified, advanced verification is performed against the specified datasource. If this parameter is
omitted, standard verification is performed.
sequence  Required. Integer. The sequence number of the existing inspection engine for verification. You can use the grdapi
list_inspection_engines command with the type option first, to verify the sequence number of the inspection engine to be verified.
stapHost  Required. String. The host name or IP address of the database server on which the S-TAP is installed.
protocol  Required. The database protocol, which must be one of these values: DB2, DB2 Exit (DB2 version 10), FTP, Informix, Kerberos,
Mysql, Netezza, Oracle, PostgreSQL, Sybase, Teradata, Teradata Exit (v10.1.3 and up), exclude IE. Windows S-TAP hosts can also use
the following protocols: MSSQL, named pipes.
Example:
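A representative invocation (host and sequence values are illustrative; the sequence number would first be confirmed with list_inspection_engines):

```shell
grdapi verify_stap_inspection_engine_with_sequence stapHost=10.10.9.57 sequence=1 protocol=Oracle
```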
revoke_ignore_stap
This command revokes existing IGNORE S-TAP SESSION (REVOKABLE) policy rule actions that ignore S-TAP session traffic. This command only revokes soft ignore rules
(marked as REVOKABLE) and cannot revoke hard rules (not marked as REVOKABLE).
stapHost  Required. The host name or IP address of a database server on which S-TAPs are installed and configured to
report to this Guardium system, or a comma-separated list of host names or IP addresses. You can also use
these values:
all_active
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the
unit on which command is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API
will execute. On a Central Manager (CM) the value is the host name or IP of any managed units. On a managed
unit it is the host name or IP of the CM.
Example
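A representative invocation that revokes revokable ignore-session rule actions for all S-TAPs reporting to this system (illustrative):

```shell
grdapi revoke_ignore_stap stapHost=all_active
```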
set_ztap_logging_config
This command controls the logging parameters described below.
Parameter Value Description
log_db2z_target  Valid values: 0 (disable) and 1 (enable). The parameter is disabled by default. When enabled using log_db2z_target=1, targets in the
db2z protobuf message are logged to GDM_OBJECT in addition to objects from the parser.
log_zkey_to_full_sql  Valid values: 0 (disable) and 1 (enable). The parameter is disabled by default. When enabled using log_zkey_to_full_sql=1, VSAM or
IMS key values are logged in the full SQL statement for policies using "Log full details."
Example
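A representative invocation that enables logging of db2z targets (illustrative):

```shell
grdapi set_ztap_logging_config log_db2z_target=1
```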
enable_advanced_threat_scanning
Enables threat detection scanners on the collector.
Parameter Value Description
all  Optional. In a central management configuration only, enables all threat detection scanners on all managed units. Allowable values:
true, false.
schedule_start  Optional. Specifies the date and time to start running the processes. The accepted format is yyyy-mm-dd hh:mm:ss (24-hour clock).
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
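A representative invocation that enables the scanners on all managed units (the command name enable_advanced_threat_scanning is inferred from its disable counterpart; illustrative):

```shell
grdapi enable_advanced_threat_scanning all=true
```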
You will see the following message if threat analytics is enabled when outlier detection is not:
Warning - Enabling advance threat scanning (AKA Eagle Eye) when Analytic anomaly detection is disabled.
Advance threat scanning (AKA Eagle Eye) enabled.
ok
disable_advanced_threat_scanning
Disables threat detection scanners on the collector.
Parameter Value Description
all  In a central management configuration only, disables all threat detection scanners on all managed units.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
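A representative invocation on a single collector (illustrative):

```shell
grdapi disable_advanced_threat_scanning
```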
get_eagle_eye_info
Displays the current settings for threat detection parameters.
Parameter Value Description
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
grdapi get_eagle_eye_info
Eagle Eye Parameters Values:
EI_CASES_DISPLAY_LIMIT = 3
EI_CONFIDENCE_PCT_CHANGE_TO_REDISPLAY_CASE = 30
EI_EAGLE_EYE_ENABLED = 1
EI_PROCESSOR_TIMEOUT_SEC = 420
set_eagle_eye_parameter
Use under the direction of IBM personnel. Changes configuration parameters for threat detection. These parameters must be set explicitly using parameter_name and
parameter_value as follows:
Parameter Value Description
EI_CASES_DISPLAY_LIMIT  The number of cases to be displayed in the to-do list report. Default is 3.
EI_CONFIDENCE_PCT_CHANGE_TO_REDISPLAY_CASE  The percentage of confidence change that causes this case to be redisplayed in the to-do list report, even if it has already
appeared before. This can happen if Guardium detects another symptom or symptoms that raise the confidence by this percentage
value. Default is 30.
EI_PROCESSOR_TIMEOUT_SEC  Processors that run longer than this threshold are turned off. Default is 420 seconds.
EI_SCANNER_PATCH_DEF  To avoid false positives as a result of patch installation: if, in a single process run, the number of stored procedures created exceeds
this parameter, the process assumes a patch is installed and stops analyzing symptoms. Default is 10 stored procedure
creations detected in one run.
EI_SCANNER_TIMEOUT_SEC  Scanners that run longer than this threshold are turned off. Default is 300 seconds.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
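A representative invocation using the parameter_name/parameter_value form described above (the value shown is illustrative):

```shell
grdapi set_eagle_eye_parameter parameter_name=EI_CASES_DISPLAY_LIMIT parameter_value=5
```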
get_eagle_eye_scanners_info
Return scanner settings information.
Parameter Value Description
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Field Description
Status  I: in progress; D: done; K: killed
Enabled  True: enabled; False: disabled
Permanent disabled  If the scanner was disabled 3 times in 24 hours, then it is permanently disabled.
True: disabled; False: enabled
Example:
grdapi get_eagle_eye_scanners_info
ID=0
ID:1, Name:SQLInjectionExceptionsScanner, Status:D, Enabled:true, Permanent disabled:false
ID:2, Name:NumNewConstructScanner, Status:D, Enabled:true, Permanent disabled:false
ID:3, Name:SQLInjectionSuspiciousObjectScanner, Status:D, Enabled:true, Permanent disabled:false
ID:4, Name:SqliQueryScanner, Status:Unknown, Enabled:false, Permanent disabled:true
ID:5, Name:EagleEyeSTPCreateProcedureScanner, Status:D, Enabled:true, Permanent disabled:false
ID:6, Name:EagleEyeSTPCallProcedureScanner, Status:D, Enabled:true, Permanent disabled:false
ID:7, Name:EagleEyeSTPExceptionProcedureScanner, Status:D, Enabled:true, Permanent disabled:false
ID:8, Name:EagleEyePreviousStpUsageProcedureScanner, Status:D, Enabled:true, Permanent disabled:false
ID:9, Name:EagleEyeSTPViolationProcedureScanner, Status:D, Enabled:true, Permanent disabled:false
ID:10, Name:EagleEyeSTPUserOutlierScanner, Status:D, Enabled:true, Permanent disabled:false
ok
set_eagle_eye_scanner_parameter
Use under the direction of IBM personnel. Activate or deactivate a scanner. These parameters must be set explicitly using parameter_name and parameter_value as
follows:
Parameter Value Description
scanner_id  Required. The unique ID of the scanner, which you can get from get_eagle_eye_scanners_info GuardAPI command.
is_active  Defines whether the scanner should run. Used to start a scanner that was stopped automatically because it timed out.
0: the scanner is stopped
is_permanent_inactive  If the scanner was permanently disabled after it was disabled 3 times in 24 hours then it can only be enabled again using this
GuardAPI.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
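A representative invocation that re-enables a permanently disabled scanner (the scanner ID and the exact argument form are illustrative assumptions):

```shell
grdapi set_eagle_eye_scanner_parameter scanner_id=4 parameter_name=is_permanent_inactive parameter_value=0
```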
get_eagle_eye_symptom_period_hours
Show the value of the symptom period parameter in hours. The symptom period determines how far back the process looks when analyzing the collected symptoms
for one case.
Parameter Value Description
case_name  Required. The case type. The following values are allowed:
STP: malicious stored procedure case
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
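A representative invocation for the malicious stored procedure case type (illustrative):

```shell
grdapi get_eagle_eye_symptom_period_hours case_name=STP
```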
set_eagle_eye_symptom_period_hours
Set a value for the symptom period parameter in hours. The symptom period determines how far back the process looks when analyzing the collected symptoms for a
case.
Parameter Value Description
case_name  Required. The case type. The following values are allowed:
STP: malicious stored procedure case
symptom_period_hours  Required. Integer. The number of hours in the past to analyze symptoms for a case.
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
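A representative invocation (the 24-hour value is an illustrative placeholder):

```shell
grdapi set_eagle_eye_symptom_period_hours case_name=STP symptom_period_hours=24
```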
get_eagle_eye_debug_level
For use by IBM Service personnel. Displays current debug level:
1: on
0: off
Parameter Value Description
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
grdapi get_eagle_eye_debug_level
ID=0
component=EAGLE_EYE level=1
ok
set_eagle_eye_debug_level
For use by IBM Service personnel. Sets the debug level:
1: on
0: off
Parameter Value Description
api_target_host  Optional parameter that specifies the target host(s) to execute the API. When not specified, it defaults to the unit on which command
is executed. Valid values:
Guardium V10.1 and 10.1.2: In a central management configuration only, specifies a target host where the API will execute. On a
Central Manager (CM) the value is the host name or IP of any managed units. On a managed unit it is the host name or IP of the CM.
Example:
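A representative invocation (the debug_level parameter name is an assumption; the original parameter table was lost in extraction):

```shell
# debug_level is an assumed parameter name
grdapi set_eagle_eye_debug_level debug_level=1
```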
Use IBM Guardium S-TAP for Db2 to collect and correlate the following types of data to the Guardium system:
IBM Guardium S-TAP for Db2 uses Db2 data sharing to obtain audit information from all members of the data sharing group.
What's new in IBM Security Guardium S-TAP for Db2 on z/OS V10.1.3?
Version 10.1.3 of IBM Guardium S-TAP for Db2 provides speed and monitoring enhancements.
The IBM Security Guardium S-TAP for Db2 on z/OS installation environment
The IBM Guardium S-TAP for Db2 SQL Collector Agent collects data from an audited Db2 subsystem in accordance with the filtering policies you set with the
Guardium system.
Installation and operation requirements
Verify that you have the hardware and software that is required to install and operate IBM Guardium S-TAP for Db2.
What's new in IBM Security Guardium S-TAP for Db2 on z/OS V10.1.3?
Version 10.1.3 of IBM Guardium S-TAP for Db2 provides speed and monitoring enhancements.
New Simulation mode enables you to test policies without sending data to the appliance. Data is collected on z/OS.
Support for the collection of BIND/REBIND events
Support for the collection of CICS Unit of Work ID
Improved memory management
Support for blocking policies pushed-down from the appliance
Improved filtering of events
MODIFY command now collects more diagnostic information
Ability to exclude host variables
Support for initiating an appliance MUST GATHER command from z/OS
Support for S-TAP logging
Support for Internet Protocol version 6 (IPV6) introduced with PH16991
Parent topic: IBM Security Guardium S-TAP for Db2 on z/OS overview
The IBM Security Guardium S-TAP for Db2 on z/OS installation environment
The IBM Guardium S-TAP for Db2 collector agent runs as a started task and is responsible for the collection of audit data in an IBM Guardium S-TAP for Db2 environment.
As shown in the following diagram, SQL collector data is filtered and sent to the Guardium system, enabling you to view reports on your workstation.
Provides the user interface, which processes requests and displays the resulting information.
Enables you to create filtering policies, which specify the types of data to be collected by the agent.
Stores the collected data.
For more information about how Guardium system policies are interpreted and enabled by the S-TAP, see Policy pushdown.
With the Guardium system installed, configured, and running in your environment, you can test your connection from the z/OS platform to the Guardium system by
configuring and running the IBM Guardium S-TAP for Db2 sample library member, ADHTCPD. Consult your network security team to review the results and confirm that
connection from the z/OS platform to the Guardium system is available.
Parent topic: IBM Security Guardium S-TAP for Db2 on z/OS overview
FEC common code FMID H25F132 is required, and must be present on the system for the successful installation of this product.
Hardware requirements
Any hardware that is capable of running Db2 for z/OS (V11 or later, until end of service).
Parent topic: IBM Security Guardium S-TAP for Db2 on z/OS overview
To implement Db2 Query Monitor, your site must have the appropriate operating system, environment, hardware, software, and network requirements. For information
about installing and operating Db2 Query Monitor, refer to the IBM Db2 Query Monitor for z/OS Knowledge Center.
Where:
LPAR
The two product releases can coexist on the same LPAR (provided they use a different MASTER name), but cannot be active on the same Db2 subsystem.
Db2
The two product releases can coexist on the same LPAR and can both be active on the same Db2 subsystem/shared collector.
The collector agent user ID requires Db2 privileges. Grant the collector agent user ID SYSCTRL authority, and the authority to issue the SELECT statements on these
tables:
SYSIBM.SYSTABLES
SYSIBM.SYSTABLESPACE
SYSIBM.SYSINDEXES
OMVS segment
The collector agent uses UNIX System Services (USS) callable services as the network interface to the appliance. The USS callable services require that an OMVS segment
is defined in the RACF® profile for the user ID under which the collector agent job runs. The OMVS segment that is defined for the user ID must contain the following
minimum requirements:
To verify that the ID has an OMVS segment in its RACF profile, use the following command:
LU userid OMVS
To add an OMVS segment to the RACF profile of an ID, refer to this sample command:
ALTUSER userid
OMVS(UID(nnn) HOME('/u/userid')
PROGRAM('/bin/sh'))
Upgrading from IBM Guardium S-TAP for Db2 V9.0, V9.1, or V10.0
You can upgrade to IBM Guardium S-TAP for Db2 V10.1.3 from IBM Guardium S-TAP for Db2 V9.0, V9.1, or V10.0 by completing these steps.
Configuring IBM Security Guardium S-TAP for Db2 on z/OS
After installation, configure IBM Guardium S-TAP for Db2 by completing the steps that are described in this section.
Upgrading from IBM Guardium S-TAP for Db2 V9.0, V9.1, or V10.0
You can upgrade to IBM Guardium S-TAP for Db2 V10.1.3 from IBM Guardium S-TAP for Db2 V9.0, V9.1, or V10.0 by completing these steps.
Procedure
1. Complete the SMP/E installation of IBM Guardium S-TAP for Db2 V10.1.3.
2. APF-authorize the V10.1.3 SADHLOAD data set.
3. Customize and run the Db2 bind job in SADHSAMP(ADHBIND).
4. Customize and run the Db2 grant job in SADHSAMP(ADHGRANT).
5. Export and save your collection profiles.
(V8.1 collection profiles, or policies, were administered either with the InfoSphere® Guardium S-TAP for Db2 administration client, or the IBM Guardium system.)
6. Stop the previous version's collector agent and server address spaces.
7. Update the collector started task JCLs (ADHCssid) to:
Remove the previous version of the product SADHLOAD data sets.
Include the new V10.1.3 product SADHLOAD data sets in the STEPLIB DD concatenation members.
Note: ADHSssid and ADHAssid started tasks are not used in IBM Guardium S-TAP for Db2 V10.1.3.
8. Update the V10.1.3 collector configuration member (typically SADHSAMP(ADHCFGP)).
9. Install a collection policy on the IBM Guardium system.
If policy pushdown was used for V8.1 collection administration, follow the Guardium Policy Builder instructions for migrating policies for V8.1 to V10.1.3.
If the InfoSphere Guardium S-TAP for Db2 administration client was used for V8.1 collection administration, use the XML exported in Step 5 as a reference
for the Guardium Policy Builder to define collection policies for V10.1.3.
10. Start the collector address space by typing /S ADHCssid at the z/OS® command prompt.
What to do next
Now you can install policies on the z/OS host by using the IBM Guardium system interface. No additional configuration steps are required.
Parent topic: Configuring IBM Security Guardium S-TAP for Db2 on z/OS
Parent topic: Configuring IBM Security Guardium S-TAP for Db2 on z/OS
CEE.SCEERUN
CEE.SCEERUN2
Db2 EXIT data set (for example, DSN.VAR1.SDSNEXIT)
Db2 LOAD library data set (for example, DSN.VAR1.SDSNLOAD)
SYS1.LINKLIB
Refer to the z/OS® Knowledge Center for more information about how to APF authorize libraries.
Parent topic: Configuring IBM Security Guardium S-TAP for Db2 on z/OS
Procedure
Provide the user ID with ADD/UPDATE/DELETE authority.
For more information about how to enable the CSVDYLPA resource, see section 5.6.3 of the z/OS® V1R7.0 MVS™ Planning: Operations Guide (SA22-7601-06), section
Controlling/Adding A Module to LPA after IPL.
Parent topic: Configuring IBM Security Guardium S-TAP for Db2 on z/OS
Related tasks
Configuring the collector agent
Procedure
1. Copy member ADHEMAC1 from the adhhilvl.SADHSAMP to your site's CLIST library, and then edit the ADHEMAC1 macro with the appropriate variables.
2. After you copy the edit macro to your CLIST library, use it to edit each sample library member individually. You might need to update the macro between edits
depending on the member being edited and the context of the variable to be modified in the sample library.
3. To run the macro, type the ADHEMAC1 command to automatically update the appropriate variables in the member that you are editing.
Parent topic: Configuring IBM Security Guardium S-TAP for Db2 on z/OS
Related reference
ADHEMAC1 edit macro variables
Procedure
1. Edit SADHSAMP member ADHSJ000.
2. Add the appropriate job card to ADHSJ000.
3. In the DELETE instruction, change the data set name.
4. In the DEFINE CLUSTER instruction, change the following text within parentheses:
Data set NAME
VOLUMES
DATA NAME
INDEX NAME
5. In the REPRO instruction, change the name of the OUTDATASET.
6. Run ADHSJ000 to create the control file. The job steps must end with a return code of zero.
Parent topic: Configuring IBM Security Guardium S-TAP for Db2 on z/OS
Procedure
1. Edit SADHSAMP member ADHSJ001.
2. Add the appropriate job card to ADHSJ001.
3. Change ADH.V0A00.CONTROL to the name of the VSAM control data set that you created using member ADHSJ000.
4. Change #SADHLOAD to the name of the product LOADLIB used for IBM Guardium S-TAP for Db2.
5. Modify the SYSIN DD statements as instructed in the sample member. For more information, see Required statements for each subsystem.
Important: In a data-sharing environment, specify subsystem names (not group names) in ADHSJ001.
6. Run ADHSJ001.
Ensure that the update job steps of the product control file end with a return code of zero. If a non-zero return code occurs, review the job output for errors, correct
the problem, and resubmit the JCL.
Parent topic: Configuring IBM Security Guardium S-TAP for Db2 on z/OS
Parent topic: Configuring IBM Security Guardium S-TAP for Db2 on z/OS
Related reference
Service class considerations
Procedure
1. Customize and submit the JCL according to the instructions in the member.
2. Submit the ADHBIND JCL to bind the collector agent packages and plan on each Db2 subsystem on which you want to use IBM Guardium S-TAP for Db2.
Procedure
1. Customize and submit the JCL according to the instructions in the member.
2. Submit the ADHGRANT JCL to grant authorizations to the user ID and plan that are used by the collector agent for each Db2 subsystem on which you want to use
IBM Guardium S-TAP for Db2.
Note: The ADHGRANT job contains examples of the GRANTs that meet the minimal authorization requirements for the collector agent. Alternative authorizations
and, subsequently, GRANTs, can be used to meet the minimal authorization requirements for the collector agent.
Note: The AUDIT parameter is required. It instructs the collector agent to audit a specific Db2 subsystem. It supports only one Db2 subsystem.
Procedure
1. Copy ADHCFGP to the appropriate location (PARMLIB) on your system.
2. Verify that the parameters are valid for your environment. If necessary, edit the parameter file for your IBM Guardium S-TAP for Db2 objects.
3. Edit the ADHPARMS DD in the started task JCL to point to the ADHCFGP data set that you have customized.
Example
An example of the ADHCFGP member contents is as follows:
SUBSYS(#SSID)
AUDIT(#SSID)
MASTER_PROCNAME(ADHMST31)
APPLIANCE_SERVER(#APPSRVR)
READ access to the ADHCFGP data set in the RACF® DATASET class
UPDATE access to the DB2PARMS data set in the RACF DATASET class
The ability to connect to the Db2 subsystem that is monitored by the collector agent
The ability to read data from the following Db2 subsystem catalog tables:
SYSTABLES
SYSINDEXES
SYSDBRM
SYSPACKAGE
SYSPACKSTMT
SYSSTMT
Procedure
1. Using the sample library member ADHCSSID as a template, customize the member according to the directions contained in the sample JCL. Any valid member
name can be used for the started task name, but the suggested started task name is ADHCSSID, where SSID is the identifier of the Db2 subsystem that is to be
monitored.
2. Copy the customized JCL to an appropriate SYSPROC data set. The JCL must include definitions for the following data descriptions:
ADHPARMS
ADHPARMS must name the IBM Guardium S-TAP for Db2 collector agent configuration file.
DB2PARMS
DB2PARMS must name the IBM Guardium S-TAP for Db2 product control file (example: ADH.V0A00.CONTROL).
ADHPLCY
ADHPLCY enables policy persistence. For more information, see the Policy Persistence information provided in Policy pushdown.
If ADHPLCY is defined, it must point to a data set that is allocated with a record format of fixed blocked (RECFM=FB) and a record length (LRECL) greater than
or equal to 256.
The ADHPLCY data set should be allocated with a minimum of 50 primary tracks and 10 secondary tracks. The ADHPLCY data set can be sequential, PDS, or
PDS/E. If you use PDS or PDS/E, the space requirements might need to be increased in relation to the number of members that are contained within the data
set.
ADHLOG
ADHLOG is the SYSOUT data set to which IBM Guardium S-TAP for Db2 collector agent log messages will be written.
STEPLIB
STEPLIB must include the IBM Guardium S-TAP for Db2 SADHLOAD data set.
Note: Every data set allocated to STEPLIB must be APF-authorized.
SYSPRINT
SYSPRINT is the SYSOUT data set to which log messages will be written.
Related reference
Sample library members
READ access to ADHCFGx parameter data sets, Db2 catalogs, and VSAM control data sets
Access to the DSNR resource class in Db2
OMVS segment definition
GRANT authority for SYSCTRL Db2 to communicate with the agent started task user IDs on all Db2 subsystems to be audited
READ authority for the Db2 catalog tables
Authority to use the dynamic LPA facility CSVDYLPA
Procedure
1. For additional stand-alone Db2 subsystems, use the SADHSAMP member ADHBIND to bind IBM Guardium S-TAP for Db2 plans on each Db2 subsystem that is to
be audited.
For data sharing group members, use ADHBIND to bind one member of the data sharing group. The bind will apply to all additional group members.
When configuring the product control file for each member of the data sharing group, the PLAN value that is used in the ADHBIND job can also be used for
the ADHPLAN1 value in the SJ001 JCL job.
For the first member of the data sharing group, PACKAGES and PLANS that are used in the ADHBIND job will work for all members of the data sharing group.
2. For each data sharing group or additional stand-alone Db2 subsystem, grant EXECUTE permission for the agent started task ID to the ADHPLAN1 plan, as specified in
the PCF file for the Db2 subsystem. Refer to the JCL SADHSAMP member ADHGRANT for additional details on granting EXECUTE permission to the ADH plan.
3. Update the control file with the new SSID, or create a new S-TAP control file for each SSID by using the SADHSAMP member ADHSJ001.
4. Configure a new S-TAP agent configuration file.
5. Add the agent started task name to the z/OS® started task table.
6. Start the new S-TAP agent.
Note:
Dispatching priority must be the same as, or higher than, Db2.
After you start the agent, review the agent log and MVS log for any error messages. When an active collection policy is received, the agent starts collecting audit
data.
Parent topic: Configuring IBM Security Guardium S-TAP for Db2 on z/OS
A Support Services Address Space, also referred to as a Master Address Space, starts for each z/OS® image after the first instance of IBM Guardium S-TAP for Db2,
InfoSphere® Optim™ Query Workload Replay for Db2, or IBM Db2 Query Monitor for z/OS starts with a MASTER_PROCNAME value that is not yet in use on that z/OS
image.
The Master Address Space is a Service Address Space for all instances of IBM Guardium S-TAP for Db2, InfoSphere Optim Query Workload Replay for Db2, or IBM Db2
Query Monitor for z/OS that specify the same MASTER_PROCNAME parameter value that is running on the z/OS image. The Master Address Space acts as a placeholder
for shared collector resources, and is similar to other Master Address Spaces that are used throughout MVS. For example, MVS and Db2 both have Master Address
Spaces.
Important: During installation, do not stop or start the Master Address Space unless required by product maintenance or instructed to do so by IBM Software Support.
Parent topic: Configuring IBM Security Guardium S-TAP for Db2 on z/OS
To ensure product stability, the Master Address Space should only be stopped by using the sample job that is provided in SADHSAMP, member ADHMSTR. This job verifies
that no IBM Guardium S-TAP for Db2, InfoSphere® Optim™ Query Workload Replay for Db2, or IBM Db2 Query Monitor for z/OS® subsystems are using the Master
Address Space before it is stopped.
Procedure
1. Set the ATTACHSEC parameter to ATTACHSEC(IDENTIFY) for the user ID to be passed from the Terminal-Owning Region (TOR) to the Application-Owning Region
(AOR).
This makes the user ID available for collection.
2. Ensure that the CICS_USERID collector agent parameter is set to Y to enable reporting of the CICS login user ID. For more information, see Collector agent
parameters.
Results
The CICS Login User ID is reported in Guardium interface DB2 Client Info field for SQL Statements that are run in Db2 for CICS transactions.
Parent topic: Configuring IBM Security Guardium S-TAP for Db2 on z/OS
Data collection
IBM Guardium S-TAP for Db2 collects data from an audited Db2 subsystem, in accordance with the collection policies that you create through the IBM Guardium system.
Use a collection policy to specify filtering criteria that capture relevant data and filter out irrelevant data. The filtering criteria that you specify determine which data is
streamed to your IBM Guardium system.
You can define and manage data collection and filtering in the Guardium Policy Builder of the IBM Guardium system interface.
All reads and all changes (with collector agent based collection)
Host variables up to a maximum of 256 bytes per variable
Dynamic SQL text up to 2 million bytes per statement
Static SQL text up to 4000 bytes
Data collected from Db2 is filtered during the collection process, and non-relevant events are discarded. Specify filtering criteria by defining a collection policy so that only
relevant events are captured. This limits the amount of unnecessary data that is collected and stored by IBM Guardium S-TAP for Db2.
Collection policy
The collection policy is defined by the Guardium policy. It determines which events (SQL, commands, utilities, and so on) are streamed from the z/OS collector
agent to the Guardium appliance. The following methodology describes how the collection policy determines whether to stream events to the Guardium
appliance.
Collected event types
All event types are collected with the SQL Collection mechanism, which is not dependent on other SQL Trace information such as the Db2 Trace (IFI) or SMF data.
Filtering criteria is defined and managed through the IBM Guardium system interface. This table lists the types of events that can be collected.
The collection policy consists of one or more rules. Each rule includes a list of filtering criteria (fields), which is used to determine the events that are streamed. An
event is streamed to the appliance if the fields within the event match all of the fields defined within any rule of the collection policy. (Rules within the
collection policy are evaluated with OR logic.) For example, if a collection policy is composed of three rules (rule 1, rule 2, and rule 3), an event is streamed if it matches rule 1, or rule 2, or rule 3.
Each rule is made up of filter types and values (fields) that are used to determine whether an event should be collected. If the fields of the rule are equivalent to the
corresponding fields in the event, the rule evaluates the event as true, or a match, and the event is captured. A rule is considered true if one of each specified filter type
and value matches that of the event. (Filters within a rule are evaluated with AND logic.) For example:
If a rule consists of the filters DBUser=User1 and PLAN=DSNTEP2, an event is collected by the rule only if both DBUser=User1 and PLAN=DSNTEP2 are present in
the event. If only one of the filtering criteria is present, or neither is present, the event does not meet the conditions of the rule and is not
collected by the rule.
If a rule consists of the filters NET_PROTOCOL=TSO and OS_USER=User1, then only TSO workload events run by User1 are collected by the rule
(wherein User1 is Original Auth ID). Non-TSO workloads run by User1 are not collected by the rule, nor are TSO workloads run by User2.
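The OR-across-rules, AND-within-rule evaluation described above can be sketched as follows. This is a simplified illustration only, not the product's implementation; the field names and dictionary representation are hypothetical:

```python
# Sketch of collection-policy evaluation: a rule matches only when every
# filter in it matches the event (AND logic); the policy matches when
# any rule matches (OR logic). Illustration only.

def rule_matches(rule: dict, event: dict) -> bool:
    """All filter fields in the rule must match the event (AND logic)."""
    return all(event.get(field) == value for field, value in rule.items())

def policy_matches(rules: list, event: dict) -> bool:
    """The event is streamed if any rule in the policy matches (OR logic)."""
    return any(rule_matches(rule, event) for rule in rules)

# Example from the text: DBUser=User1 AND PLAN=DSNTEP2
rule = {"DBUser": "User1", "PLAN": "DSNTEP2"}
event_full = {"DBUser": "User1", "PLAN": "DSNTEP2", "OS_USER": "User1"}
event_partial = {"DBUser": "User1", "PLAN": "OTHERPLN"}

print(policy_matches([rule], event_full))     # True: both filters match
print(policy_matches([rule], event_partial))  # False: PLAN differs
```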
The following sections further describe how to filter the collector agent.
Set the STAP_UTILITY_TS_TO_TABLE parameter to Y to collect audit data for Db2 utilities. See Collector agent parameters for more information.
Audit data for Db2 utilities is collected according to the following rules:
When a single table is contained in the tablespace, the table information is reported.
When more than one table is contained in the tablespace, the product can be configured to report either:
No tables
The tablespace is reported, but no tables are reported.
All tables in the tablespace
Utility operations are reported against the accessed table.
This option can result in false positives being reported against tables in the tablespace that were not affected by the running of the utility.
Filtering
IBM Guardium S-TAP for Db2 V10.1.3 greatly simplifies the filtering process that was used in past product versions. All filtering occurs at the point of
collection, regardless of the field types that are included in the rules for the active collection policy and with or without the specification of object types, which
results in efficient CPU usage.
Filtering occurs when you create a filter that uses one or more of the following filter fields:
Net Prtcl
Specifies the appliance connection type to Db2.
OS User
Specifies the original operator user ID that is used to connect to Db2.
DB User
Specifies the primary AUTHID that is used for authorization within Db2. In most situations, this value is the same as OS User.
App. User (PROG=program)
Specifies a valid DB2 program name, such as DSNTEP2.
App. User (PLAN=plan)
Specifies a valid DB2 plan name, such as DSNTEP2.
Client Info (APPL=transaction name)
Specifies a valid program (or user workstation transaction) name, such as db2.exe.
Client Info (WKSTN=workstation name)
Specifies a valid user workstation name, such as PCsys1.
Client Info (USER=user name)
Specifies a valid user name, such as PCuser1.
Object type (%/SYSIBM.SYSTABLE)
Specifies a table.
These fields can be fully qualified, or partially qualified by using the percent sign wildcard character. For more information about using wildcard characters, see
Filter wildcard support.
The most efficient CPU usage is achieved when you create a filter that eliminates the greatest number of events. To increase filtering efficiency, refine your filtering
criteria by indicating the additional filtering types with specific values that are associated with the data that you want to collect.
Example
To capture access to a table called MY.TABLE, you could create the following filter:
Filter 1
Schema.Table equal to MY.TABLE
This filter causes IBM Guardium S-TAP for Db2 to capture only those events that access MY.TABLE.
To increase efficiency in this example, specify a filter field, such as plan, even if you are sure that plan is the only plan that accesses this table. To capture access to the
table MY.TABLE for an application that runs under a specific plan, such as MYPLAN, the following is an example of a more efficient filter:
Filter 2
Plan equal to MYPLAN
Schema.Table equal to MY.TABLE
Specifying the plan results in only those events with the specified plan and object being streamed to the appliance. Streaming fewer events to the appliance results in
improved CPU usage.
If you enable collection of SELECT/UPDATE/INSERT/DELETE events, then the event collection is subjected to additional filtering. If you enable collection of event types
other than SELECT/UPDATE/INSERT/DELETE, then the events are collected without being subjected to filtering.
An event that is enabled in Rule 1 is subjected to subsequent rule filters.
Tip: A policy that uses separate rules for two AUTHIDs can be simplified by placing both AUTHIDs into a group within a single rule.
This list describes how you can enable the collection of specific event types:
SELECT/UPDATE/INSERT/DELETE (SUID)
Enable collection by including any filter type or non-blank value in the Object field of the rule.
Included operations
The event is audited if any of the objects are in any of the DBNAMEs.
Excluded operations
If any of the objects is not in any of the DBNAMEs, then it is considered a match.
Example: All of the objects must be in one or more of the DBNAMEs for them to be excluded. If an object is from a DBNAME that is not in the list, then it is
considered a match. If any database that is accessed by the query is not in the EXCLUDE DB list, then the query must be captured.
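The exclude evaluation described above can be sketched as follows. This is a simplified illustration only, not the product's implementation; the function and database names are hypothetical:

```python
# Sketch of EXCLUDE DB evaluation: a query is excluded only when every
# object it accesses lives in an excluded DBNAME; if any accessed
# DBNAME is outside the exclude list, the query is captured.
# Illustration only.

def is_captured(accessed_dbnames: list, exclude_list: set) -> bool:
    """Capture unless ALL accessed DBNAMEs appear in the exclude list."""
    return not all(db in exclude_list for db in accessed_dbnames)

exclude = {"DBPAYROLL"}
print(is_captured(["DBPAYROLL"], exclude))             # False: fully excluded
print(is_captured(["DBPAYROLL", "DBSALES"], exclude))  # True: DBSALES is not excluded
```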
Wildcarding
Filter values can include the percent sign (%) as a wildcard character.
Note: The use of wildcards in filters can potentially result in the collection of significant amounts of captured data.
Filtering fields can be fully qualified, or partially qualified, by using the percent sign wildcard character. You can insert the wildcard character (%) anywhere within the
value string. The wildcard character (%) represents a string of zero or more characters. It can be embedded within a string in the following ways to achieve
the following results:
%
Matches all strings.
%a
Matches all strings that end with the letter a, for example: a, ba, cba.
a%
Matches all strings that start with the letter a, for example: a, ab, abc.
a%a
Matches all strings that begin and end with the letter a, for example: a, aba, aca.
Note: The wildcard character (%) cannot be used explicitly as part of the filter value.
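The wildcard semantics described above can be sketched by translating each % into a "zero or more characters" match. This is an illustration only, not the product's matching code:

```python
import re

# Sketch of % wildcard matching: each % in the filter value matches a
# run of zero or more characters; all other characters match literally.
# Illustration only.

def wildcard_match(pattern: str, value: str) -> bool:
    # Escape literal segments, then join them with ".*" for each %.
    regex = ".*".join(re.escape(part) for part in pattern.split("%"))
    return re.fullmatch(regex, value) is not None

print(wildcard_match("%a", "cba"))   # True: ends with a
print(wildcard_match("a%", "abc"))   # True: starts with a
print(wildcard_match("a%a", "aba"))  # True: begins and ends with a
print(wildcard_match("a%a", "ab"))   # False
```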
Parent topic: Filtering
Policy pushdown
At startup, the IBM Guardium S-TAP for Db2 collector agent waits for a policy to be streamed (or pushed down) from the Guardium system before activating a collection.
When the collector agent receives a policy, it inactivates the active collection (if a collection is active), updates the collection profile with the new policy, and then activates
the collection policy.
The following processing occurs in the collector agent when a policy is received:
1. The new policy is compared to the currently active policy if the new policy contains one or more rules.
a. If the policies are identical, no further processing is required.
b. If the policies are not identical, the policy is written to DD:ADHPLCY (if defined) and it becomes the active collection policy.
2. If the new policy does not apply to this subsystem, processing continues without any changes. In this case, if there is an active policy, the collection continues to
use it. If no policy is active, none is started.
3. If the new policy is inactive (contains no general audit settings, table or target definitions), the active policy is inactivated.
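The pushdown processing steps above can be sketched as follows. This is a simplified illustration only, not the collector agent's implementation; the function name and policy representation are hypothetical:

```python
# Sketch of policy pushdown handling: identical policy -> no-op;
# different policy -> persist (e.g., to DD:ADHPLCY) and activate;
# empty policy (no rules or audit settings) -> inactivate.
# Illustration only.

def apply_pushdown(active_policy, new_policy, persist):
    if not new_policy:               # no general audit settings or targets
        return None                  # inactivate the active policy
    if new_policy == active_policy:  # identical: no further processing
        return active_policy
    persist(new_policy)              # write to DD:ADHPLCY if defined
    return new_policy                # becomes the active collection policy

written = []
print(apply_pushdown({"r": 1}, {"r": 1}, written.append))  # unchanged, nothing written
print(apply_pushdown({"r": 1}, {"r": 2}, written.append))  # new policy activated and persisted
print(apply_pushdown({"r": 2}, {}, written.append))        # None: policy inactivated
```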
The file that is defined by the ADHPLCY DD contains the policy from the last successful policy pushdown from the appliance.
If ADHPLCY is defined, it must point to a data set that is allocated with a record format of fixed blocked (RECFM=FB) and a record length (LRECL) greater than or equal to
80.
The ADHPLCY data set should be allocated with a minimum of 50 primary tracks and 10 secondary tracks. The ADHPLCY data set can be sequential, PDS, or PDS/E. If you
use PDS or PDS/E, the space requirements might need to be increased in relation to the number of members that are contained within the data set.
For more information about creating, activating, and inactivating policies from the Guardium system interface, see the how-to topics in the Security Guardium V10.1.3
documentation in the IBM Knowledge Center.
For more information about using data sets, see the z/OS documentation in the IBM Knowledge Center,
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.idad400/toc.htm.
Multistream mode
Multistream mode provides a mechanism for distributing a high-volume workload over multiple connected appliances. In multistream mode, a single audit event is only
sent to a single appliance. Multistream mode does not enable mirroring of the same set of audit events to multiple appliances.
IBM Guardium S-TAP for Db2 sends events to a single appliance until a ping occurs, or the number of records that is specified by MEGABUFFER_COUNT is reached.
To enable multistreaming, you must specify MULTI_STREAM when you configure the APPLIANCE_SERVER_LIST parameter. Parameters APPLIANCE_SERVER and
APPLIANCE_SERVER_[1-5] specify the appliances to which you intend to stream events. The appliance that is specified by APPLIANCE_SERVER provides the policy that is
used for event matching.
The APPLIANCE_SERVER parameter specifies the first appliance to which audit events are streamed. The collection policy that is pushed down from the first appliance
determines which events are collected and streamed to all appliances that are enabled for multistreaming.
The IBM Guardium S-TAP for Db2 agent streams events to the first appliance, then sequentially to each subsequent appliance in the multistreaming set. Each appliance in
the multistreaming set then processes (logs and discards) each event in accordance with the locally installed policies.
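The distribution behavior described above can be sketched as follows. This is a simplified illustration only, not the agent's implementation; the record threshold and appliance names are hypothetical, and the real MEGABUFFER_COUNT is a configured parameter:

```python
from itertools import cycle

# Sketch of multistream distribution: each audit event is sent to
# exactly one appliance; the agent rotates to the next appliance in
# the set after a fixed number of records (MEGABUFFER_COUNT) or a
# ping. Illustration only.

def distribute(events, appliances, megabuffer_count=3):
    assignment = {a: [] for a in appliances}
    targets = cycle(appliances)
    current, sent = next(targets), 0
    for ev in events:
        if sent == megabuffer_count:      # rotate to the next appliance
            current, sent = next(targets), 0
        assignment[current].append(ev)
        sent += 1
    return assignment

out = distribute(range(7), ["g1", "g2"], megabuffer_count=3)
print(out)  # {'g1': [0, 1, 2, 6], 'g2': [3, 4, 5]}
```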
Procedure
1. To start the collector agent, use the START command.
Example: /S ADHCSSID
2. To stop the collector agent, use the STOP command, or the MODIFY command with the STOP parameter.
Example:
/P ADHCSSID
or
/F ADHCSSID,STOP
In the Guardium appliance interface, create a list of SQL codes to include or exclude during data collection. A policy can contain either all values to be included, or all
values to be excluded. With an include list, any SQL activity that fails with an SQLCODE in the list is collected. With an exclude list, any SQL activity that fails with an
SQLCODE that is not in the list is collected.
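The include/exclude list behavior can be sketched as follows. This is a simplified illustration only, not the appliance's implementation; the function name and sample SQLCODEs are hypothetical:

```python
# Sketch of SQLCODE list evaluation: with an include list, failing
# activity whose SQLCODE is in the list is collected; with an exclude
# list, failing activity whose SQLCODE is NOT in the list is
# collected. Illustration only.

def collect(sqlcode: int, codes: set, mode: str) -> bool:
    if mode == "include":
        return sqlcode in codes
    if mode == "exclude":
        return sqlcode not in codes
    raise ValueError("mode must be 'include' or 'exclude'")

print(collect(-904, {-904, -911}, "include"))  # True: listed SQLCODE
print(collect(-803, {-904, -911}, "include"))  # False: not listed
print(collect(-803, {-904, -911}, "exclude"))  # True: not in exclude list
```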
Quarantining a user of a specific Db2 subsystem means that, for the specified period of time, the quarantined user cannot run SQL statements in the
targeted Db2 subsystem. If a quarantined user attempts access during a restricted time, access is denied. Use the Guardium appliance interface to quarantine user
activity.
Note: Quarantine does not take effect immediately. The SQL statement that produces the event to trigger the quarantine is completed before the quarantine takes effect.
It is possible for additional SQL statements to be run by the quarantined user before the quarantine takes effect.
Parent topic: Data collection
SQL Blocking
You can block the SQL activity of DB2® users (Auth IDs) that access specific tables and databases. SQL statements that are run against accelerated tables are eligible for
blocking if the blocking filtering criteria are met. If an SQL statement matches the blocking criteria, the SQL statement is prevented from running. Use the Guardium®
appliance interface to define blocking policies.
When a blocking policy is received, the collector agent completes the following steps:
1. Compares the new blocking policy to the currently active blocking policy, if the new policy contains one or more rules.
If the blocking policies are identical, the collector agent determines that no further processing is required.
If the blocking policies are different, then the new blocking policy replaces the old one.
2. Evaluates the pushed-down list and filters to determine which events to block.
3. Validates the list of supplied objects.
The object must exist at the time of the installation of the blocking policy.
If a table that is included in the blocking policy does not exist when the blocking policy is installed, message ADHP190W is generated to identify the table.
Blocking is not enabled for tables that are reported by a ADHP190W message.
The obid/dbid for the object are checked for performance reasons.
If the object is dropped and then recreated, the policy must be reinstalled.
If the field values of the SQL event match corresponding filter values (blocking rule conditions) in the blocking policy, then the SQL statements are blocked and ended with
a -807 error code.
For more information about creating, activating, and inactivating blocking policies from the IBM Guardium system interface, refer to the Security Guardium documentation
in the IBM Knowledge Center.
/F <adhstc>,BLOCKING ENABLE
/F <adhstc>,BLOCKING DISABLE
/F <adhstc>,BLOCKING STATUS
These commands override and determine the blocking status, whether or not a blocking policy is present. By default, blocking is enabled at startup. If you use the /F
<adhstc>,BLOCKING DISABLE command and push down blocking rules, the blocking rules are processed and blocking is established within the z/OS® agent, but
blocking is not enabled. If you then use the /F <adhstc>,BLOCKING ENABLE command, blocking is not activated until a blocking policy is pushed down.
The ADHPARMS z/OS collector agent parameter, STAP_BLOCKING, controls whether the blocking operator command is permitted and whether blocking is enabled or
disabled. For more information about STAP_BLOCKING, see Collector agent parameters.
In the Guardium appliance interface, specify whether host variable information should be sent to the appliance for activity that matches a rule. When host variable
collection is enabled, up to 256 bytes per variable of host variable data is sent to the Guardium appliance. For enhanced security of Personally Identifiable Information
(PII), host variables are not collected by default in IBM Guardium S-TAP for Db2 V10.0 and later.
If FORCE_LOG_LIMITED is set to Y, the policy setting for the collection of host variables is ignored, and host variables are not collected.
If FORCE_LOG_LIMITED is set to N, the collection of host variables is controlled by the host variable settings in the active policy.
Command events are not subjected to filtering. All command events are streamed directly to the Guardium appliance for optional post-collection filtering.
Collecting SET CURRENT SQLID events by using the Audit SQL Collector
IBM Guardium S-TAP for Db2 V10.1.3 enables you to collect SET CURRENT SQLID events by using the Audit SQL Collector.
In IBM Guardium S-TAP for Db2 V10.1.3, IFI TRACE CLASS 7 is no longer enabled, and SET CURRENT SQLID events are automatically collected by using the Audit SQL
Collector. SET CURRENT SQLID events are streamed to the Guardium appliance without being subjected to filtering.
Reference information
These reference topics are designed to provide you with quick access to information about IBM Guardium S-TAP for Db2 sample library members, parameters, and
variables.
Topics:
Sample library members
Collector agent parameters
Collector agent sample parameter file
ADHEMAC1 edit macro variables
Other resources
The following IBM documentation provides more information about configuring and operating this product.
Related tasks
Defining the collector agent started task JCL
MODIFY command
The MODIFY command allows you to issue requests against, and dynamically change, characteristics of an active S-TAP task.
The abbreviated version of the MODIFY command is the letter F. The general format of MODIFY is as follows:
>>-+-MODIFY-+--procname--,--parameter--------------------------><
'-F------'
wherein:
procname
The name of the member in a procedure library that was used to start the server or address space.
parameter
Any of the parameters that are valid for the server.
>>-+-MODIFY-+--procname,--+-STAP-+--------------------------+-+-><
   '-F------'             |      +-,HELP--------------------+ |
                          |      +-,ALL---------------------+ |
                          |      +-,POLICY------------------+ |
                          |      +-,COUNTS------------------+ |
                          |      +-,CONFIG------------------+ |
                          |      +-,HISTORY_QUEUE-----------+ |
                          |      +-,HISTORY_FILTER----------+ |
                          |      +-,HISTORY_IO--------------+ |
                          |      +-,BLOCKING----------------+ |
                          |      +-,QUARANTINE--------------+ |
                          |      +-,GET_STATUS--------------+ |
                          |      +-,MUSTGATHER--------------+ |
                          |      +-,TRACE_POLICY,ENABLE-----+ |
                          |      +-,TRACE_POLICY,DISABLE----+ |
                          |      +-,TRACE_COMPILE,ENABLE----+ |
                          |      +-,TRACE_COMPILE,DISABLE---+ |
                          |      +-,TRACE_PROTOBUF,ENABLE---+ |
                          |      +-,TRACE_PROTOBUF,DISABLE--+ |
                          |      +-,LOG_EVENTS,ENABLE-------+ |
                          |      +-,LOG_EVENTS,DISABLE------+ |
                          |      +-,LOG_LEVEL,F|I|W|E|S-----+ |
                          |      '-,RESET_CONFIG------------' |
                          '-BLOCKING--+- ENABLED--+-----------'
                                      +- DISABLED-+
                                      '- STATUS---'
Note the space (rather than the comma) before BLOCKING ENABLED, DISABLED, and STATUS.
Options are defined as follows:
HELP
Display all available commands
STAP
Display the current status of the started task
ALL
F ADHPROC,STAP,POLICY
<policy>
  <selectblocking-rule>
    <target>
      <schema>DBTROS</schema>
      <name>TABLE1</name>
    </target>
    <target>
      <schema>DBTROS</schema>
      <name>TABLE2</name>
    </target>
  </selectblocking-rule>
</policy>
F ADHPROC,STAP,GET_STATUS
ADHP170I - Event count reported by the appliance at time: 112
Procedure
1. Locate the policy component for your S-TAP (for example, RS22:A91A:POLICY) and select the G icon.
2. Select STAP Logging for Command.
3. Select a logging level and click Apply to request S-TAP logging.
S-TAP logging levels provide log information as follows:
Level 0
Logs program levels, event queue statistics, agent configuration, policy, and event counts.
Level 1
Logs agent configuration, policy, and event counts.
Level 2
Logs agent configuration.
Level 3
Logs policy.
Level 4 or higher
Logs event counts.
4. To view the S-TAP logging information, locate the policy component of your S-TAP and click the i icon.
APPLIANCE_CONNECT_RETRY_COUNT
Required: No
Default: 0
Description: The number of consecutive failed connection attempts before terminating. A value of 0 means that connection attempts never stop. A value of 1
means that the agent stops immediately after one connection attempt fails. Range: 0 - 99999.
Syntax:
APPLIANCE_CONNECT_RETRY_COUNT(count)
Example:
APPLIANCE_CONNECT_RETRY_COUNT(1000)
APPLIANCE_NETWORK_REQUEST_TIMEOUT
Default: 0
Description: The time, in milliseconds, to wait for a network communication send or receive request to complete. A value of 0 results in no
timeout period.
Syntax:
APPLIANCE_NETWORK_REQUEST_TIMEOUT(timeout)
Example:
APPLIANCE_NETWORK_REQUEST_TIMEOUT(0)
APPLIANCE_PING_RATE
Required: No
Default: 5
Description: Specifies the time interval between accesses to the Guardium system to prevent timeouts (disconnects) during idle periods. The value is in number of
seconds.
Syntax:
APPLIANCE_PING_RATE(ping_interval)
Example:
APPLIANCE_PING_RATE(5)
APPLIANCE_PORT
Required: No
Default: 16022
Description: The IP port number of the Guardium system to which the IBM Guardium S-TAP for Db2 audit data collector should connect. This parameter must be
properly configured to enable collection of audit data and a connection to the IBM Guardium system. If port 16023 is used, encryption support is required for the
connection to the appliance.
Note: Specifying this keyword and parameter designates the port on which the Guardium appliance is listening to the S-TAP. The port is dedicated to the IP address
of the appliance. Port 16022 or 16023 can also be in use on z/OS® by another application.
Syntax:
APPLIANCE_PORT(port_number)
Example:
APPLIANCE_PORT(16022)
APPLIANCE_RETRY_INTERVAL
Required: No
Default: 3
Description: Specifies the time interval between attempts to establish a connection to the IBM Guardium system. The value is in number of seconds.
Syntax:
APPLIANCE_RETRY_INTERVAL(retry_interval)
Example:
APPLIANCE_RETRY_INTERVAL(3)
APPLIANCE_SERVER
Required: Yes
Default: None
Description: The host name or IP address (in dotted-decimal notation, for example: 1.2.3.4) of the IBM Guardium system to which the IBM Guardium S-TAP for
Db2 audit data collector should connect.
Note: This parameter must be properly configured to enable collection of audit data, and a connection to the IBM Guardium system. The value can contain up to
128 characters.
Syntax:
APPLIANCE_SERVER(hostname|ip_address)
Example:
APPLIANCE_SERVER(192.168.2.205)
APPLIANCE_SERVER_FAILOVER_[1-5]
Required: No
Default: None
Description: The host name or IP address (in dotted-decimal notation, for example: 1.2.3.4) of the IBM Guardium system to which the IBM Guardium S-TAP for
Db2 audit data collector should fail over to if APPLIANCE_SERVER is not available.
Note:
1. This parameter must be properly configured to enable collection of audit data and a connection to the IBM Guardium system. The value can contain up to
128 characters.
2. The collector agent attempts to connect to the fail over systems beginning with APPLIANCE_SERVER_FAILOVER_1, and ending with
APPLIANCE_SERVER_FAILOVER_5.
3. Both the APPLIANCE_SERVER_FAILOVER_[1-5] and APPLIANCE_SERVER_[1-5] parameters can be used to designate servers for multistreaming or failover.
Use the APPLIANCE_SERVER_LIST(MULTI_STREAM|FAILOVER) parameter to designate how these parameters are used.
Syntax:
APPLIANCE_SERVER_FAILOVER_1(hostname|ip_address)
Example:
APPLIANCE_SERVER_LIST(MULTI_STREAM)
APPLIANCE_SERVER(guardium1.company.com)
APPLIANCE_SERVER_1(guardium2.company.com)
APPLIANCE_SERVER_2(guardium3.company.com)
APPLIANCE_SERVER_LIST(FAILOVER|MULTI_STREAM|HOT_FAILOVER)
Required: No
Default: FAILOVER
Description: If set to MULTI_STREAM, this parameter specifies that a Guardium appliance connection is to be established for each server that is identified by the
APPLIANCE_SERVER_n parameter.
If a connection is lost, S-TAP audit events continue to transmit over the remaining appliance connection.
Lost connections are retried at regular intervals that are determined by multiplying the APPLIANCE_CONNECT_RETRY_COUNT by the
APPLIANCE_PING_RATE.
If set to FAILOVER, this parameter specifies that one Guardium appliance connection is to be active at a time.
If the connection to the primary appliance is lost, a failover action occurs, which results in an attempt to connect to the next available server. The next
available server is identified by the APPLIANCE_SERVER_FAILOVER_n parameter.
After a failover action occurs, the connection to the primary server is retried at regular intervals that are determined by multiplying the
APPLIANCE_CONNECT_RETRY_COUNT by the APPLIANCE_PING_RATE.
With either setting of APPLIANCE_SERVER_LIST, if all connections fail, and a spill file is specified (parameter OUTAGE_SPILLAREA_SIZE), events are buffered to
the spill file until a connection becomes available. If no spill file is specified, and all connections are lost, data loss occurs.
If set to HOT_FAILOVER, this parameter causes all connection types (POLICY and ASC) for each connected Guardium appliance to be kept active by pings. You can
specify the primary Guardium appliance by using the APPLIANCE_SERVER parameter. If the primary Guardium appliance becomes unavailable and failover occurs,
HOT_FAILOVER maintains the activity of the primary appliance policy.
Syntax:
APPLIANCE_SERVER_LIST(FAILOVER|MULTI_STREAM|HOT_FAILOVER)
Example:
APPLIANCE_SERVER_LIST(FAILOVER)
AUDIT
Required: Yes
Default: None
Description: The Db2 subsystem ID for the Db2 subsystem on which the IBM Guardium S-TAP for Db2 Collector Agent should capture query data.
Note: This parameter must be properly configured to enable collection of capture data. The value can contain up to 4 characters.
Syntax:
AUDIT(ssid)
Example:
AUDIT(DSN1)
AUTHID
Required: No
Default: The user ID under which the started task runs.
Notes:
1. The ID specified in the startup parameter AUTHID must be a valid TSO user ID and not a RACF group name.
2. If the AUTHID parameter is defined in the RACF Started Procedures Table (ICHRIN03), it should not be used as a startup parameter. The Started Procedures
Table (ICHRIN03) associates the names of started procedures with specific RACF user IDs and group names. It can also contain a generic entry that assigns
a user ID or group name to any started task that does not have a matching entry in the table. However, it is recommended that you use the STARTED class for
most cases rather than the started procedures table.
Syntax:
AUTHID(db2authid)
Where db2authid is the Db2 AUTHID that IBM Guardium S-TAP for Db2 uses when establishing a connection to Db2 during interval processing.
Example:
AUTHID(DB2USER)
CICS_USERID
Required: No
Default: N
Description: If set to Y, the CICS_USERID parameter enables the capture of CICS Login User ID for SQL statements that are run in Db2 for CICS. For more
information see Enabling CICS Login User ID reporting.
Syntax:
CICS_USERID(YES|NO)
Example:
CICS_USERID(Y)
COLLECT_COMMIT_ROLLBACK
Required: No
Default: N
Description: If set to Y, the COLLECT_COMMIT_ROLLBACK parameter enables the collection of COMMIT and ROLLBACK events.
Syntax:
COLLECT_COMMIT_ROLLBACK(YES|NO)
Example:
COLLECT_COMMIT_ROLLBACK(Y)
DEBUG
Required: No
Default: N
Description: The DEBUG parameter turns on debug mode and produces diagnostic messages for use by IBM Software Support.
Syntax:
DEBUG(YES|NO)
Example:
DEBUG(Y)
FORCE
Required: No
Default: N
Description: The FORCE parameter forces installation of a monitoring agent. If you use this parameter, any return codes from any failure reported in message
ADHQ2002E are overridden.
Note: This parameter should not be specified without instruction by IBM Software Support.
Syntax:
FORCE(YES|NO)
Example:
FORCE(Y)
FORCE_LOG_LIMITED
Required: No
Description: This parameter enables you to restrict the collection of sensitive data by controlling whether the active policy controls the collection of host variables.
If this parameter is set to Y, the policy setting for collection of host variables is ignored, and host variables are not collected.
The APPLIANCE_PORT parameter must be set to 16023. Port 16023 is used for AT-TLS-configured encrypted communications. If APPLIANCE_PORT is not
set to 16023, the S-TAP agent generates a log message indicating the configuration inconsistency, and shuts down.
If this parameter is set to N, the collection of host variables is controlled by the host variable settings in the active policy.
Syntax:
FORCE_LOG_LIMITED(YES|NO)
Example:
FORCE_LOG_LIMITED(Y)
HOSTVAR_LIMIT
Required: No
Default: 1500
Description: This parameter designates the number of storage blocks to be allocated for host variable collection per event. The valid range is 1 -- 9999. If this
parameter is not customized, the default value of 1500 is set.
If error message ADHQ1203I is encountered with RC=0008 and RSN=003F, increase the HOSTVAR_LIMIT setting to accommodate the collection of host variables
for the monitored workload.
If IBM Guardium S-TAP for Db2 and IBM Db2 Query Monitor for z/OS are simultaneously monitoring the same Db2 subsystem, both products must have matching
HOSTVAR_LIMIT settings to avoid receiving a mismatch error.
Syntax:
HOSTVAR_LIMIT(n)
Example:
HOSTVAR_LIMIT(1500)
ISM_CONSTRAINT_AGE
Required: No
Default: 300
Description: This parameter controls how much time must have passed since the last storage constraint occurrence for a given ISM storage space before the
constraint event is considered to have been relieved.
Syntax:
ISM_CONSTRAINT_AGE(n)
where n is an integer between 1 and 60000, specified in hundredths of a second (0.01 seconds). The default value is 300.
Example:
ISM_CONSTRAINT_AGE(16)
ISM_ERROR_DETAIL
Required: No
Default: Y
Description: This parameter controls whether messages ADHQ1203I and ADHQ1204I are issued to provide detailed information for ISM Storage Constraint
situations. The product recommendation is to leave this parameter set to Y. This setting can be overridden at run time with the /f cqmstc,ISMERROR_DETAIL
command.
Syntax:
ISM_ERROR_DETAIL(Y|N)
Example:
ISM_ERROR_DETAIL(Y)
ISM_ERROR_BLOCKS
Required: No
Default: 256
Description: This parameter determines the number of ISM Error Blocks that are allocated when IBM Guardium S-TAP for Db2 initializes.
If this value is too low, message ADHQ1219W might be issued. ISM Error Blocks communicate a storage constraint event from somewhere in the product to the
task that issues storage constraint messages. If you run out of ISM Error Blocks, the storage constraint message will not be issued. However, an abend table entry
will be created to document this event. This is most likely a temporary situation and it does not impact the overall performance of IBM Guardium S-TAP for Db2.
Syntax:
ISM_ERROR_BLOCKS(n)
Example:
ISM_ERROR_BLOCKS(256)
ISM_ERROR_MSG_BLOCKS
Required: No
Default: 256
Description: This parameter determines the number of ISM Error Message Blocks that are allocated when IBM Guardium S-TAP for Db2 initializes. If this value is
too low, duplicate ISM error messages can be issued for the same space and reason instead of incrementing the occurrence count.
ISM Error Message Blocks are used by the task that issues storage constraint messages to do two things:
1. To consolidate similar storage constraint events to eliminate duplicate messaging for the same condition, and
2. To keep track of storage constraint events so that the Storage Constraint Relieved situation can be detected and messaged.
If you run out of ISM Error Message Blocks, this consolidation will not always occur. This would result in additional, duplicate messages in the log for the similar
storage constraint events.
Syntax:
ISM_ERROR_MSG_BLOCKS(n)
Example:
ISM_ERROR_MSG_BLOCKS(256)
MASTER_PROCNAME
Required: Yes
Default: None.
Description: The MASTER_PROCNAME parameter enables users to specify the PROCNAME to be used for the Master Address Space. Specifying this parameter
causes IBM Guardium S-TAP for Db2 to use the Master Address Space with the same name.
The MASTER_PROCNAME for IBM Guardium S-TAP for Db2 and Query Monitor must be the same when each is started at the same time for the same Db2
Subsystem.
If this Master Address Space is already started, it is shared with other IBM Guardium S-TAP for Db2 subsystems that are already using it.
If this Master Address Space has not already been started, it will start automatically.
Syntax:
MASTER_PROCNAME(procname)
where procname is the specified Master Address Space PROCNAME (character, 8 bytes).
Example:
MASTER_PROCNAME(CQMMASTR)
MAXIMUM_ALLOCATIONS
Required: No
Default: 2048
Description: This parameter determines the maximum amount of global shared memory to be allocated by IBM Guardium S-TAP for Db2 for internal Integrated
Storage Manager spaces.
Syntax:
MAXIMUM_ALLOCATIONS(n)
where n is an integer between 512 - 32768 specified in megabytes; must be smaller than SMEM_SIZE.
Example:
MAXIMUM_ALLOCATIONS(2048)
MESSAGE_LOG_LEVEL
Required: No
Default: I
Description: Controls the amount of output log information that is generated by the agent:
I
Includes all log messages with an informational severity or higher
W
Includes all log messages with a warning severity or higher
E
Includes all log messages with an error severity or higher
The ADHPARMS file is read when the agent is started. Modifying the log-level setting in the ADHPARMS file does not implement the new setting until you restart the
collector agent.
Note: During installation, it is recommended that you set the MESSAGE_LOG_LEVEL to I.
Syntax:
MESSAGE_LOG_LEVEL(I|W|E|S)
Example:
MESSAGE_LOG_LEVEL(I)
OUTAGE_SPILLAREA_SIZE
Required: No
Default: 0
Description: This parameter determines the maximum amount of memory to be allocated to support the retention of audit data in the event of a Guardium system
connection outage.
Note: A value of 0 disables spillfile support. When enabled, OUTAGE_SPILLAREA_SIZE supersedes SEND_FAIL_EVENT_COUNT for temporary data retention.
Syntax:
OUTAGE_SPILLAREA_SIZE(n)
Example:
OUTAGE_SPILLAREA_SIZE(2)
PREFER_IPV4_STACK
Required: No
Default: N
Description: If set to Y, this parameter causes a request to be issued to the Domain Name Server (DNS) for an IPV4 address for the hostname that is specified in
the APPLIANCE_SERVER parameter:
The DNS lookup request for an IPV4 address is attempted. If an IPV4 address is defined for the hostname, the DNS will respond with the value that will be
used to connect to the Guardium appliance.
If only an IPV6 address is defined at the DNS, then the DNS will respond with the IPV6 address that will be used to connect to the Guardium appliance.
If both IPV4 and IPV6 addresses are defined at the Guardium appliance, the DNS will respond with both addresses, and the IPV4 address will be used to
connect to the appliance.
If this parameter is set to N or omitted from configuration, a request for an IPV6 address is issued to the DNS for the hostname that is specified by the
APPLIANCE_SERVER parameter:
The DNS lookup request for an IPV6 address is attempted. If an IPV6 address is defined for the hostname, the DNS will respond with the value that will be
used to connect to the Guardium appliance.
If only an IPV4 address is defined at the DNS, then the DNS will respond with the IPV4 address that will be used to connect to the Guardium appliance.
If both IPV4 and IPV6 addresses are defined at the Guardium appliance, the DNS will respond with both addresses, and the IPV4 address will be used to
connect to the appliance.
Note: Regardless of whether this parameter is used, if the address returned from the DNS is not valid for the hostname, the connection to the appliance fails, and the IBM Guardium S-TAP for Db2 started task terminates.
Syntax:
PREFER_IPV4_STACK(Y|N)
Example:
PREFER_IPV4_STACK(Y)
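The address-selection rules above reduce to a simple preference: when both address families are defined, the preferred family wins; when only one is defined, it is used. The following sketch is not product code; addresses are passed in as a list (as a DNS lookup might return them) rather than resolved live, so the logic is testable offline.

```python
# Sketch of the PREFER_IPV4_STACK selection rules (not product code).
import socket

def choose_address(addresses, prefer_ipv4: bool):
    """addresses: list of (family, ip) tuples as returned by a DNS lookup.
    Returns the ip used to connect: the preferred family when present,
    otherwise whatever the DNS responded with."""
    preferred = socket.AF_INET if prefer_ipv4 else socket.AF_INET6
    for family, ip in addresses:
        if family == preferred:
            return ip
    # Only the non-preferred family is defined at the DNS.
    return addresses[0][1] if addresses else None

both = [(socket.AF_INET6, "::1"), (socket.AF_INET, "192.0.2.10")]
assert choose_address(both, prefer_ipv4=True) == "192.0.2.10"   # PREFER_IPV4_STACK(Y)
assert choose_address(both, prefer_ipv4=False) == "::1"          # PREFER_IPV4_STACK(N)
```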
SEND_FAIL_EVENT_COUNT
Required: No
Default: 100
Description: Specifies the maximum number of events to be buffered during a communication outage with the Guardium system. Events are buffered in internal
memory objects and streamed to the appliance at the time of reconnection.
Note: SEND_FAIL_EVENT_COUNT and OUTAGE_SPILLAREA_SIZE are mutually exclusive. When OUTAGE_SPILLAREA_SIZE is specified, spillfile support is enabled,
which supersedes SEND_FAIL_EVENT_COUNT for temporary data retention.
Syntax:
SEND_FAIL_EVENT_COUNT(event_count)
where event_count is an integer between 0 and 1024 that represents the number of events to be buffered.
Example:
SEND_FAIL_EVENT_COUNT(100)
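The outage buffering described above can be sketched as a bounded buffer. This is not product code: which events are discarded once the limit is reached is an assumption here (the sketch drops new events); the documentation states only that data loss occurs and is reported by message ADHG006E.

```python
# Sketch of SEND_FAIL_EVENT_COUNT buffering during an outage (not product code).
class OutageBuffer:
    def __init__(self, send_fail_event_count: int = 100):
        self.limit = send_fail_event_count
        self.events = []
        self.lost = 0   # a non-zero value corresponds to the ADHG006E condition

    def record(self, event):
        """Buffer an event while disconnected; count it as lost past the limit."""
        if len(self.events) < self.limit:
            self.events.append(event)
        else:
            self.lost += 1

    def drain(self):
        """On reconnection, buffered events are streamed to the appliance."""
        pending, self.events = self.events, []
        return pending

buf = OutageBuffer(send_fail_event_count=2)
for e in ("ev1", "ev2", "ev3"):
    buf.record(e)
assert buf.drain() == ["ev1", "ev2"]
assert buf.lost == 1
```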
SMEM_SIZE(5|n)
Required: No
Description: This parameter determines the maximum amount of global shared memory to be allocated by IBM Guardium S-TAP for Db2 for all purposes.
Syntax:
SMEM_SIZE(n)
where n is an integer between 3 and 32, specified in gigabytes; the value must be three times larger than MAXIMUM_ALLOCATIONS.
Example:
SMEM_SIZE(5)
STAP_BLOCKING
Required: No
Default: ENABLED
Description: The STAP_BLOCKING parameter controls whether blocking is enabled or disabled and whether the blocking operator command is permitted to enable,
disable, or report status for blocking. This parameter cannot be overwritten by the BLOCKING operator command. STAP_BLOCKING parameter options are as
follows:
STAP_BLOCKING(ENABLED) enables the blocking feature. Blocking is activated if a blocking rule is pushed.
STAP_BLOCKING(DISABLED) disables the blocking feature.
STAP_BLOCKING(OPERATOR) enables the blocking feature and enables the BLOCKING operator command. Blocking is activated if a blocking rule is pushed.
Syntax: STAP_BLOCKING(ENABLED|DISABLED|OPERATOR)
Example: STAP_BLOCKING(ENABLED)
STAP_MEGABUFFER
Required: No
Default: Y
Description: When multiple IBM Guardium S-TAP for Db2 audit events are accumulated in a buffer, it is referred to as a megabuffer. A megabuffer reduces the CPU
usage that is related to TCP/IP activity. To optimize IBM Guardium S-TAP for Db2 performance, STAP_MEGABUFFER must remain set to Y. However,
STAP_MEGABUFFER can be set to N when buffering is not desired.
Setting the STAP_MEGABUFFER parameter to N eliminates buffering, and provides near real-time event streaming to the Guardium appliance. It also increases CPU
usage, due to additional TCP/IP calls.
Syntax:
STAP_MEGABUFFER(Y|N)
Example:
STAP_MEGABUFFER(Y)
STAP_STREAM_EVENTS
Required: No
Default: Y
Description: This parameter specifies whether events will be streamed to the IBM Guardium system. The default value, Y, enables streaming. Specify N to disable
streaming and enable Simulation mode.
Syntax:
STAP_STREAM_EVENTS(Y|N)
Example:
STAP_STREAM_EVENTS(Y)
STAP_TERMINATE_OPTIMIZE
Required: No
Default: N
Description: This parameter can be used to improve the response time for processing STAP_TERMINATE requests from the Guardium appliance. Roundtrip time for
STAP_TERMINATE activity is impacted by the STAP_MEGABUFFER parameter. STAP_TERMINATE policies require near real-time event recording to the IBM
Guardium system to analyze events against the policy and issue the termination requests to IBM Guardium S-TAP for Db2. To enable near real-time event recording
to the Guardium appliance, set the STAP_MEGABUFFER parameter to N.
Syntax:
STAP_TERMINATE_OPTIMIZE(Y|N)
Example:
STAP_TERMINATE_OPTIMIZE(N)
STAP_UTILITY_MULTITABLE
Required: No
Default: N
Description: When STAP_UTILITY_MULTITABLE is set to Y, the collector reports all tables in the tablespace that are impacted by the utility. This guarantees that tablespace access by a utility execution results in an audit event against the table name.
Tables within the tablespace that were not accessed by the utility might also be reported.
When STAP_UTILITY_MULTITABLE is set to N, no attempt is made to report table information for multi-table tablespaces accessed by a utility. Only the tablespace
name is reported.
Syntax:
STAP_UTILITY_MULTITABLE(Y|N)
Example:
STAP_UTILITY_MULTITABLE(N)
STAP_UTILITY_TS_TO_TABLE
Required: No
Default: Y
Description: The STAP_UTILITY_TS_TO_TABLE parameter controls how table information is reported for Db2 Utility accesses to tablespaces. When the parameter is
set to Y, the collector queries the Db2 catalog. The collector then determines and reports on which table exists within the tablespace that has been accessed by the
utility execution. If multiple tables are contained in the tablespace, the STAP_UTILITY_MULTITABLE parameter controls whether the collector reports either:
All tables
All table names in the accessed tablespace are reported.
No tables
Only the tablespace name is reported.
Syntax:
STAP_UTILITY_TS_TO_TABLE(Y|N)
Example:
STAP_UTILITY_TS_TO_TABLE(Y)
STARTUP_DIAGNOSTICS
Required: No
Default: N
Description: The STARTUP_DIAGNOSTICS parameter causes IBM Guardium S-TAP for Db2 to produce diagnostic information output during startup of the collector
agent. This output might be useful to IBM Support when diagnosing reported problems.
Syntax:
STARTUP_DIAGNOSTICS(Y|N)
Example:
STARTUP_DIAGNOSTICS(Y)
SHUTDOWN_DIAGNOSTICS
Required: No
Default: N
Description: The SHUTDOWN_DIAGNOSTICS parameter causes IBM Guardium S-TAP for Db2 to produce diagnostic information output during shutdown (stop) of
the collector agent. This output might be useful to IBM Support when diagnosing reported problems.
Syntax:
SHUTDOWN_DIAGNOSTICS(Y|N)
Example:
SHUTDOWN_DIAGNOSTICS(Y)
SUBSYS
Required: No
Description: The SUBSYS parameter defines the SQL Collector subsystem name. The subsystem name does not need to correspond to a Db2 subsystem nor an
MVS™ operating system name. The name must be 1-4 characters in length.
Syntax:
SUBSYS(ADH1)
TS_OFFSET(E|W.HH.MM)
Required: No
Description:
This parameter enables you to adjust the event timestamps that are streamed to the appliance by specifying the amount of time to adjust (offset) based on timezone.
For example, if the system clock is set to UTC 0.0 in a timezone that is UTC + 9, GMT can be considered 9 hours west of the current time. In this situation, set the parameter as follows: TS_OFFSET(W.09.00). Event timestamps are adjusted (offset) by subtracting 9 hours from the original timestamp.
If TS_OFFSET is not supplied, the timestamps that are streamed to the appliance are not adjusted based on timezone.
Syntax:
E|W
East or west offset from GMT
HH
Number of hours
MM
Number of minutes
Example: TS_OFFSET(W.09.00)
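The offset arithmetic above is straightforward to sketch: W subtracts HH hours and MM minutes from the event timestamp, E adds them. This is an illustrative sketch, not product code.

```python
# Sketch of the TS_OFFSET(E|W.HH.MM) adjustment (not product code).
from datetime import datetime, timedelta

def apply_ts_offset(ts: datetime, spec: str) -> datetime:
    """spec is, for example, 'W.09.00': W subtracts HH hours and MM minutes
    from the original timestamp; E adds them."""
    direction, hh, mm = spec.split(".")
    delta = timedelta(hours=int(hh), minutes=int(mm))
    return ts - delta if direction == "W" else ts + delta

# TS_OFFSET(W.09.00) subtracts 9 hours from the original timestamp:
t = datetime(2016, 1, 1, 12, 0)
assert apply_ts_offset(t, "W.09.00") == datetime(2016, 1, 1, 3, 0)
```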
ZIIP_FILTER(Y|N)
Required: No
Default: ZIIP_FILTER(N)
Description:
ZIIP_FILTER(Y) indicates that the z/OS image running the collector agent started task has an IBM System z® Integrated Information Processor (zIIP). In this
case, the collector agent is allowed to offload profile filtering to a zIIP.
If ZIIP_FILTER(Y) is specified and the collector agent started task is running on a z/OS that has no zIIP, message ADHQ1060I is issued, indicating that the WLM-related service has failed. In this case, the collector agent continues to run as if ZIIP_FILTER(N) were set.
Syntax: ZIIP_FILTER(Y|N)
Example: ZIIP_FILTER(Y)
ZIIP_TCP(Y|N)
Required: No
Default: ZIIP_TCP(N)
Description:
ZIIP_TCP(Y) indicates that the z/OS image running the collector agent started task has an IBM System z Integrated Information Processor (zIIP). In this case,
the collector agent is allowed to offload TCP/IP message processing to a zIIP.
If ZIIP_TCP(Y) is specified and the collector agent started task is running on a z/OS that has no zIIP, message ADHQ1060I is issued, indicating that the WLM-related service has failed. In this case, the collector agent continues to run as if ZIIP_TCP(N) were set.
Note: ZIIP_TCP(Y) requires that zIIP filter support be enabled: ZIIP_FILTER(Y). If ZIIP_FILTER(N) and ZIIP_TCP(Y) are specified together, ZIIP_FILTER will be
automatically set to Y.
Syntax: ZIIP_TCP(Y|N)
Example: ZIIP_TCP(Y)
/f cqmstc,ISMERROR_DETAIL(Y|N)
Description: This parameter controls whether ISM constraint message detail is on or off. When the parameter is specified, messages ADHQ1203I and ADHQ1204I
are issued for ISM storage constraint situations.
If the primary appliance becomes unavailable and failover occurs, the appliance policy that was originally pushed from the primary appliance continues to be active. When
all Guardium appliances are connected, the status of each appliance connection, listed in the Guardium interface, is green.
- 5655-STP
- (C) COPYRIGHT ROCKET SOFTWARE, INC. 1999 - 2015 ALL RIGHTS RESERVED.
-
- MEMBER: ADHCFGP
-
- DESCRIPTION: THIS IS A SAMPLE MINIMUM ADHCFGP MEMBER
- USED FOR IBM SECURITY GUARDIUM S-TAP for DB2 on z/OS
- COLLECTOR AGENT STARTUP.
- VERIFY THAT THE VALUES ON EACH PARM ARE APPROPRIATE
- FOR YOUR ENVIRONMENT.
-
- NOTE: AFTER USING THE EDIT MACRO, VERIFY THAT NONE OF THE
- STATEMENTS EXCEED COLUMN 72 IN LENGTH.
-
-
SUBSYS(#SSID) -
AUDIT(#SSID) -
MASTER_PROCNAME(ADHMST31) -
APPLIANCE_SERVER(#APPSRVR)
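The sample member above shows the two conventions a reader needs to process such members: statements continue on the next line when they end with a hyphen, and comment lines begin with a hyphen. The sketch below is not product code; the comment convention is inferred from the sample, and the hostname used in the test data is a hypothetical stand-in for the #APPSRVR placeholder.

```python
# Sketch of reading a PARAM(VALUE) member like the sample ADHCFGP above
# (not product code). Trailing '-' continues a statement; lines that begin
# with '-' are treated as comments, as in the sample header block.
import re

def parse_member(text: str) -> dict:
    params = {}
    logical = ""
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("-"):
            continue                         # comment or blank line
        continued = line.endswith("-")
        logical += line.rstrip("-").strip() + " "
        if not continued:
            for name, value in re.findall(r"(\w+)\(([^)]*)\)", logical):
                params[name] = value
            logical = ""
    # flush a dangling continuation, if any
    for name, value in re.findall(r"(\w+)\(([^)]*)\)", logical):
        params[name] = value
    return params

sample = """- SAMPLE MEMBER
SUBSYS(ADH1) -
AUDIT(ADH1) -
APPLIANCE_SERVER(host.example.com)"""
assert parse_member(sample)["SUBSYS"] == "ADH1"
assert parse_member(sample)["APPLIANCE_SERVER"] == "host.example.com"
```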
Related tasks
Customizing JCL members
Messages and codes for IBM Security Guardium S-TAP for DB2 on z/OS
These topics document the messages and error codes issued by Security Guardium S-TAP for DB2. Messages are presented in ascending alphabetical and numerical
order.
Error messages
Error messages and codes: ADHAxxx
Error messages and codes: ADHGxxx
Error messages and codes: ADHIxxxx
Error messages and codes: ADHKxxxx
Error messages and codes: ADHPxxxx
Error messages and codes: ADHQxxxx
Error messages
Security Guardium S-TAP for DB2 messages adhere to the following format: ADHnnnx
Where:
ADH
Indicates that the message was issued by Security Guardium S-TAP for DB2.
nnn
Indicates the message identification number.
x
Indicates the severity of the message:
Table 1. Error message severity codes
Severity Code Description
E Indicates that an error occurred, which might or might not require operator intervention.
I Indicates that the message is informational only.
S Indicates that operator intervention is required before processing can continue.
W Indicates that the message is a warning to alert you to a possible error condition.
Parent topic: Messages and codes for IBM Security Guardium S-TAP for DB2 on z/OS
ADHA507E
Callable service invocation failed with return code = rc and reason code = rs
Parent topic: Messages and codes for IBM Security Guardium S-TAP for DB2 on z/OS
ADHA507E Callable service invocation failed with return code = rc and reason code = rs
Explanation
A callable service invocation failed with a return code and reason code that are identified in the message.
If the problem is not resolved after attempting the user responses for any accompanying messages, contact IBM Software Support.
ADHG000I
Attempting connection to server server-address port=server-port
ADHG001I
Establishing ASC connection to server [server-address]
ADHG002I
Connection established to server [server-address]
ADHG003I
Connection re-established to [server-address]
ADHG004W
Connection was lost from server [server-address]
ADHG005S
Unable to establish a connection to a server [server-address]
ADHG006E
Data loss has occurred as the result of a network send failure
ADHG007E
Unable to create a communications interface
ADHG008S
Required parameter was not supplied. Parameter=parameter-name
ADHG009I
TCP/IP streaming disabled due to user setting.
ADHG010I
Disconnecting from server server-name
ADHG011E
Unable to create an output stream
ADHG012E
Unable to set socket timeout value. rc=return-code reason=reason-code
ADHG013I
Connection attempt timed out. Reattempting connection reattempt-number of total-reattempts
ADHG014I
Spillfile support enabled. Spill area size: [size] MB
ADHG015W
Primary server is unavailable
ADHG017W
Data is being temporarily stored in a spillfile until a connection is re-established
ADHG018I
Spillfile contents have been successfully sent to server [server]
ADHG019S
Spillfile storage has been exhausted. Data loss will occur.
ADHG020I
Registering server [server] as eligible for failover.
ADHG021E
Spillfile is approaching [50% | 85% | 95% | 100%] capacity.
ADHG022I
A connection has been established to failover server [server].
ADHG026W
Invalid port specified for APPLIANCE_PORT. Port 16022 will be used instead.
ADHG027I
Registering server server as eligible for multi-stream.
ADHG030I
Security Guardium S-TAP for DB2 Collector Agent is terminating
ADHG031I
Security Guardium S-TAP for DB2 V10.1.3 [component] connection established
ADHG097E
Unexpected error: [error_description]. Return code:[return_code].
ADHG098I
This event will be logged due to an unexpected data condition.
ADHG099E
Unexpected error: error-condition
Parent topic: Messages and codes for IBM Security Guardium S-TAP for DB2 on z/OS
Explanation
The S-TAP® collector will attempt to establish a TCP/IP connection to a Guardium® system at the specified server address and port.
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
Explanation
The S-TAP® collector was successful in establishing a TCP/IP connection to the Guardium® system.
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
Explanation
The S-TAP® collector was successful in re-establishing a TCP/IP connection to the Guardium® system following a disconnect.
Explanation
The TCP/IP connection between the S-TAP® collector and the Guardium® system was lost. The S-TAP collector will automatically attempt to re-establish the connection,
however a potential for data loss does exist if the connection is not re-established. A data loss condition is indicated by message ADHG006E.
User response
Determine the cause of the network interruption and correct the problem so that the connection can be re-established.
Parent topic: Error messages and codes: ADHGxxx
Explanation
The S-TAP® collector was unable to establish a TCP/IP connection to the Guardium® system.
User response
Ensure that the Guardium system is listening for a connection at the server and port specified in message ADHG001I.
Ensure that no firewalls are blocking connections between the collector and Guardium system.
If port 16023 is used, ensure that AT-TLS has been configured properly between the z/OS® LPAR and the appliance.
ADHG006E Data loss has occurred as the result of a network send failure
Explanation
During a disconnected state, the S-TAP® collector exceeded the number of events to retain in memory while waiting for the network connection to the Guardium®
system to be reestablished.
User response
Determine the cause of the network interruption and correct the problem so that the connection can be reestablished.
If deemed necessary, increase the SEND_FAIL_EVENT_COUNT value in the ASC ADHPARMS parameter file to increase the number of events that can be retained in
memory during short outages.
Explanation
An attempt to create an internal communications interface failed.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
A required parameter was not supplied.
User response
Supply a parameter and value for the specified parameter.
Parent topic: Error messages and codes: ADHGxxx
Explanation
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
Explanation
The S-TAP® collector is disconnecting from the Guardium® system.
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
Explanation
An attempt to create an internal output stream failed.
User response
Contact IBM® Customer Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
An attempt to set the timeout threshold in the socket interface failed.
User response
Contact IBM® Customer Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
The S-TAP® collector agent was unable to establish a TCP/IP connection to the Guardium® system within the timeout period. The connection will be reattempted until the reattempt-number reaches the specified total-reattempts.
User response
Ensure that the Guardium system is listening for a connection at the server and port specified in message ADHG001I.
Ensure that no firewalls are blocking connections between the collector and the Guardium system.
Explanation
A spillfile area was successfully allocated at the specified size.
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
Explanation
User response
Determine the cause of the connection interruption to the primary Guardium system and attempt to restore the connection.
Parent topic: Error messages and codes: ADHGxxx
Explanation
A Guardium® system connection is unavailable. Collected data is written to the spillfile area until a system connection can be established.
User response
Determine the cause of the system connection outage and attempt to restore the connection.
Parent topic: Error messages and codes: ADHGxxx
Explanation
The Guardium® system connection has been restored. The spillfile data that was collected during a connection outage has been sent to the specified system.
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
ADHG019S Spillfile storage has been exhausted. Data loss will occur.
Explanation
A Guardium® system connection is unavailable and the spillfile is out of space. Data collected after this time will be lost.
User response
Determine the cause of the connection outage to the system and attempt to restore the connection. Notify others of the outage as necessary.
Parent topic: Error messages and codes: ADHGxxx
Explanation
The specified server will be added to the list of failover servers to register for the connection. Registration is attempted after all failover servers have been added. A
successful failover registration is indicated by message ADHG012I.
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
Explanation
A Guardium® system connection is unavailable and the spillfile area is at the specified capacity.
User response
Determine the cause of the connection outage to the system and attempt to restore the connection.
Parent topic: Error messages and codes: ADHGxxx
Explanation
A connection to the primary Guardium® system is not available. A connection has successfully been established to one of the specified failover servers.
User response
ADHG026W Invalid port specified for APPLIANCE_PORT. Port 16022 will be used instead.
Explanation
The APPLIANCE_PORT parameter currently supports a setting of 16022, but the parameter has been retained for future support. If APPLIANCE_PORT is specified with a
value other than 16022, message ADHG026W is issued, and port 16022 will be used instead.
User response
Change APPLIANCE_PORT parameter setting to 16022 or remove the parameter entirely.
Parent topic: Error messages and codes: ADHGxxx
Explanation
The specified server will be added to the list of servers that are eligible for multistream support.
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
Explanation
The collector is terminating.
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
Explanation
The specified component successfully established a TCP/IP connection to the Guardium system.
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
Explanation
An unexpected error was encountered.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
A collected event contained unexpected or invalid data fields. The event fields are written to DD:ADHLOG for use in diagnosing the problem.
User response
Contact IBM® Software Support with the error log.
Parent topic: Error messages and codes: ADHGxxx
Explanation
An unexpected error was encountered.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
A –CANCEL THREAD command was issued by Security Guardium® S-TAP® for DB2® as a result of a request received by the Guardium system. The command ended
successfully. Thread-token represents the cancelled thread token, as would be reported by a –DISPLAY THREAD DB2 command.
User response
No action is required.
Parent topic: Error messages and codes: ADHGxxx
Explanation
While sending a message, the socket interface encountered a bad host name condition.
User response
Verify that the host name value provided for APPLIANCE_SERVER in the ASC ADHPARMS parameter file is valid.
Contact IBM® Software Support.
Explanation
While sending a message, a problem was encountered with an internal interface.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
While sending a message, the socket interface encountered a socket I/O problem.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
An attempt to send a status (non-audit) message to the Guardium® system failed because a connection was unavailable.
User response
Determine the cause of the connection outage to the system and attempt to restore the connection.
Parent topic: Error messages and codes: ADHGxxx
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
While building a message, a problem was encountered with an internal interface.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
While building a message, a problem was encountered with an internal interface.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
While building a message, a problem was encountered with an internal interface.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
While building a message, a problem was encountered with an internal interface.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
While building a message, a problem was encountered with an internal interface.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
Explanation
While building a message, a problem was encountered with an internal interface.
Explanation
While building a message, a problem was encountered with an internal interface.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHGxxx
ADHG520W Encoding exception: Event exceeds protocol message size limit. code=error-code
Explanation
The network protocol used to communicate to the Guardium® system is limited to 64 KB in payload size. If an audited event results in a payload that exceeds this limit,
this message is issued, and a truncated message is built and sent to the system. This message is only issued once per collector instance. At termination, message
ADHG521W reports the total number of events impacted by this exception. The specified error-code value is for use by technical support.
User response
No action is required. If an excessive number of exceptions are observed, or if you are concerned that the exceptions are impacting audit data integrity, use
APPLIANCE_PORT(16022), which uses a communications protocol capable of delivering events with larger payloads.
Parent topic: Error messages and codes: ADHGxxx
ADHG521W Total encoding exceptions encountered due to exceeded message size: exception-
count
Explanation
The network protocol used to communicate to the Guardium® system is limited to 64 KB in payload size. If an audited event results in a payload that exceeds this limit,
message ADHG520W is issued. At termination, this message reports the total number of events that have been impacted by this exception, displayed as exception-count.
User response
No action is required. If an excessive number of exceptions are observed, or if you are concerned that the exceptions are impacting audit data integrity, use
APPLIANCE_PORT(16022), which uses a communications protocol capable of delivering events with larger payloads.
Parent topic: Error messages and codes: ADHGxxx
Explanation
During an attempted TCP/IP data send of the length specified, the send failed with the specified return and reason code.
User response
Refer to the IBM manual, z/OS UNIX System Services Messages and Codes, for an explanation of the reason code. The last 4 digits of the reason code correspond to the
errors of the send API. Also, review the ADHLOG of the S-TAP Collector Agent for other messages that might indicate problems with the connection between the S-TAP
Collector Agent and the Guardium appliance.
This send failure might be the result of excessive amounts of data being sent to the appliance. Refer to the appliance reporting to determine whether excessive numbers of
events were sent to the appliance prior to the send failure. If you determine the failure to be the result of excessive amounts of data, review and modify the active policy to
decrease the amount of data that is sent to the appliance.
ADHI026W
Invalid port specified for APPLIANCE_PORT. Port 16022 will be used instead.
ADHI031I
Security Guardium S-TAP for DB2 V10.1.3 [component] connection established
ADHI530E
DB2 connection failed [function] SQLCODE=[sqlcode] RSN=[reason-code]
Parent topic: Messages and codes for IBM Security Guardium S-TAP for DB2 on z/OS
ADHI026W Invalid port specified for APPLIANCE_PORT. Port 16022 will be used instead.
Explanation
The APPLIANCE_PORT parameter currently supports a setting of 16022, but the parameter has been retained for future support. If APPLIANCE_PORT is specified with a
value other than 16022, message ADHG026W is issued, and port 16022 will be used instead.
User response
Change APPLIANCE_PORT parameter setting to 16022 or remove the parameter entirely.
Parent topic: Error messages and codes: ADHIxxxx
Explanation
The specified component successfully established a TCP/IP connection to the Guardium system.
User response
No action is required.
Parent topic: Error messages and codes: ADHIxxxx
Explanation
A DB2 attachment facility error occurred.
User response
An error occurred while performing a DB2 attachment function. See the IBM® DB2 for z/OS® Messages and Codes manual for more information about the return and
reason codes.
Parent topic: Error messages and codes: ADHIxxxx
Explanation
The option STAP_UTILITY_TS_TO_TABLE was set to enable collection of expanded utility information. However, an error occurred when attempting to establish the DB2®
connection, which is required for this feature. The option is disabled.
User response
Review ADHLOG for occurrences of message ADHG503E to determine the cause of the DB2 connection failure.
Parent topic: Error messages and codes: ADHIxxxx
Explanation
An unrecoverable error condition was encountered. A shutdown request will be sent to the collector agent.
User response
Check the ADHLOG for prior errors and attempt to resolve any previous errors.
Parent topic: Error messages and codes: ADHIxxxx
Explanation
A DB2® bind error -805 was encountered for the specified plan name.
User response
Run the ADHBIND job located in the SADHSAMP library.
Parent topic: Error messages and codes: ADHIxxxx
Explanation
An unexpected error was encountered.
User response
Contact IBM® Support.
Parent topic: Error messages and codes: ADHIxxxx
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHIxxxx
ADHK001I
Scope expression received, len = length of expression text
ADHK002I
Starting Compilation...
ADHK004I
Constant Pool for routine: (at memoryLocation).
ADHK005W
Level level 'compilerMessage'.
ADHK101I
Compiling filter. Flags1 Flags; Compile Trace True/False; Runtime Trace RuntimeTraceFlag; RuntimeTrace RuntimeTraceValue; Stage 1 Requested True/False.
ADHK102I
Rule Expression.
ADHK103I
Profile contained no filter information for this agent.
ADHK104I
Filter Compile Failed.
ADHK105I
Variable text
ADHK106I
Compiled filter requires bytes bytes of dynamic save area.
ADHK110I
Rule expression:
ADHK111I
Compiling filter. flags1 flags1 trace=trace runtimeTraceFlag runtimeTraceFlag runtimeTrace runtimeTrace
ADHK203I
Stage one filtering was not enabled.
ADHK204I
Error while creating stage one filter.
ADHK205I
No valid stage one filter criteria found.
Parent topic: Messages and codes for IBM Security Guardium S-TAP for DB2 on z/OS
Explanation
User response
None required.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
The expression compiler is starting to compile the filter expression. Only issued when trace-filter is true.
User response
No action is required.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
This is a debugging message that shows the memory location of an important data structure for the compiled filter. This line is followed by a hexadecimal printout of the
contents of that memory. Only issued when trace-filter is true.
User response
No action is required.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
These are messages generated by the filter compiler if there is anything wrong with the generated filter expression. The compiled filter will not be used. The agent and/or
collector will shut down.
User response
Contact IBM® Software Support. Provide the agent and/or collector logs along with the xml file for the active profile at the time the message was generated.
Parent topic: Error messages and codes: ADHKxxxx
ADHK101I Compiling filter. Flags1 Flags; Compile Trace True/False; Runtime Trace
RuntimeTraceFlag; RuntimeTrace RuntimeTraceValue; Stage 1 Requested True/False.
Explanation
An informational message is issued whenever a new profile is about to be compiled into a compiled filter.
User response
No action is required.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
The following lines show the filter expression that was generated from the profile.
User response
No response required.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
User response
No response is required, in general. However, if you had intended data to be collected, you may wish to review the active profile. If you believe the message is issued in
error, contact IBM® Software Support.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
The expression that was generated from the currently active profile could not be compiled into a filter.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
This message has been issued from the filter compiler.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
The compiled filter needs a certain amount of filter working memory to be able to do filtering, and this message only appears if the amount of filter working memory
allocated (8192 bytes) is insufficient. This is unusual, and indicates a very large and complicated profile.
User response
You can consider reducing the size of the profile through the use of wildcards. If that is not possible, contact IBM® Software Support.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
This message will be followed by a full, multi-line display of the filter expression generated from the profile. This message is printed only if trace-filter is true.
User response
No action is required.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
An informational message issued whenever a new profile is about to be compiled into a compiled filter.
User response
No action is required.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
User response
To enable stage 1 filtering, enter STAGE1_FILTER(Y) in the ADHCPARMS DD.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
A bug in the filtering code prevented the correct creation of a filter for stage 1. If the stage 2 filter compiled correctly, filtering proceeds successfully at a higher overhead.
User response
Contact IBM® Software Support with XML export of the profile, and the JES output that contained this message.
Parent topic: Error messages and codes: ADHKxxxx
Explanation
Stage 1 filtering is based on a subset of the profile fields. If one or more rules in the profiles do not include at least one of the profile fields, then stage 1 filtering might not
apply.
User response
Review the filtering stages section of the User's Guide and adjust the profile accordingly.
Parent topic: Error messages and codes: ADHKxxxx
ADHP000I
Attempting connection to server server-address port=server-port
ADHP001I
Establishing Policy connection to server [server-address]
ADHP002I
Connection established to server [server-address]
ADHP003I
Connection was re-established to [server name]
ADHP004W
Connection was lost from server [server-address]
ADHP005S
Unable to establish a connection to server [server-address]
ADHP006E
Data loss has occurred as the result of a network send failure
ADHP007E
Unable to create a communications interface
ADHP008S
Required parameter was not supplied. Parameter=parameter-name
ADHP009I
TCP/IP streaming disabled due to user setting.
ADHP010I
Disconnecting from server server-name
ADHP012I
Failover support enabled
ADHP013I
Connection attempt timed out. Reattempting connection reattempt-number of total-reattempts.
ADHP015W
Primary server is unavailable
ADHP017W
Data is being temporarily stored in a spillfile until a connection is re-established
ADHP018I
Spillfile contents have been successfully be sent to server [server]
ADHP019S
Spillfile storage has been exhausted. Dataloss will occur
ADHP020I
Registering server [server] as eligible for failover
ADHP021E
Spillfile is approaching [50% | 85% | 95% |100%] capacity
ADHP022I
A connection has been established to failover server [server]
Parent topic: Messages and codes for IBM Security Guardium S-TAP for DB2 on z/OS
Explanation
The S-TAP® policy component will attempt to establish a TCP/IP connection to a Guardium® system at the specified server address and port.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The Security Guardium® S-TAP® for DB2® policy component is preparing to establish the TCP/IP connection to the specified Guardium system.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The S-TAP® policy component was successful in establishing a TCP/IP connection to the Guardium® system.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The S-TAP® policy component was successful in establishing a TCP/IP connection to the Guardium® system following a disconnect.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The TCP/IP connection between the S-TAP® policy component and the Guardium® system was lost. The S-TAP policy component will automatically attempt to
reestablish the connection; however, a potential for data loss exists if the connection is not reestablished. A data loss condition is indicated by message ADHP006E.
User response
Determine the cause of the network interruption and correct the problem so that the connection can be established.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The S-TAP® Policy component was unable to establish a TCP/IP connection to the Guardium® system.
User response
Ensure that the Guardium system is listening for a connection at the server and port specified in message ADHP001I.
Ensure that there are no firewalls blocking connections between the collector and the Guardium system.
ADHP006E Data loss has occurred as the result of a network send failure
Explanation
During a disconnection, the S-TAP® policy component exceeded the number of events that can be retained in memory while waiting for the network connection to the
Guardium® system to be reestablished.
User response
Determine the cause of the network interruption and correct the problem so that the connection can be established.
If necessary, raise the SEND_FAIL_EVENT_COUNT value in the ASC ADHPARMS parameter file so that more events can be retained in memory during short outages.
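As an illustrative sketch only (the SEND_FAIL_EVENT_COUNT parameter name comes from the message above; the value and comment lines are hypothetical), the entry in the ASC ADHPARMS parameter file might look like:

```
* Retain up to 50000 events in memory while the connection
* to the Guardium system is being reestablished
SEND_FAIL_EVENT_COUNT(50000)
```

Choose a value that reflects the event rate of your workload and the length of outage you want to survive; larger values consume more memory in the address space.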
Explanation
An attempt to create an internal communications interface failed.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A required parameter was not supplied.
User response
Supply a parameter and value for the specified parameter.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A debug setting was specified that has disabled TCP/IP streaming between the S-TAP® policy component and the Guardium® system.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The S-TAP® policy component is disconnecting from the Guardium® system.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
One or more failover servers were successfully registered with the communications interface, enabling failover support.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
User response
Ensure that the Guardium system is listening for a connection at the server and port specified in message ADHP001I.
Ensure that no firewalls are blocking connections between the collector and Guardium system.
Explanation
A connection to the primary Guardium® system is not available. Connections to the failover appliances will be attempted.
User response
Determine the cause of the connection interruption to the primary system and attempt to restore the connection.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A Guardium® system connection is unavailable and collected data is being written to the spillfile area until a system connection can be restored.
User response
Determine the cause of the connection outage to the system and attempt to restore the connection.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The spillfile data that was collected during a connection outage has been sent to the specified Guardium® system upon reconnection.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A Guardium® system connection is unavailable and the spillfile is out of space. Data collected after this time will be lost.
User response
Determine the cause of the connection outage to the system and attempt to restore the connection. Notify others of the outage as necessary.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The specified server will be added to the list of failover servers to register for the connection. Registration is attempted after all failover servers have been added. A
successful failover registration is indicated by message ADHP012I.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
User response
Determine the cause of the connection outage to the system and attempt to restore the connection.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A connection to the primary Guardium® system is not available. A connection has been successfully established to one of the specified failover servers.
User response
Determine the cause of the connection interruption to the primary system and attempt to restore the connection.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The S-TAP® policy component was unable to establish a connection to the Guardium® system. A persisted policy from DD:ADHPLCY is being used.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
ADHP026W Invalid port specified for APPLIANCE_PORT. Port 16022 will be used instead.
Explanation
The APPLIANCE_PORT parameter currently supports only a setting of 16022; the parameter has been retained for future support. If APPLIANCE_PORT is specified with a
value other than 16022, message ADHP026W is issued and port 16022 is used instead.
User response
Change the APPLIANCE_PORT parameter setting to 16022, or remove the parameter entirely.
Parent topic: Error messages and codes: ADHPxxxx
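For reference, a minimal sketch of the corresponding parameter-file entry (the parameter name and required value come from the message above; the surrounding member contents are assumed):

```
* Currently the only supported appliance port
APPLIANCE_PORT(16022)
```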
Explanation
At startup, the policy manager did not receive a policy from the Guardium appliance or policy DD.
User response
If APPLIANCE_SERVER_LIST is set to FAILOVER, this problem can be resolved by verifying that either:
If APPLIANCE_SERVER_LIST is set to MULTI_STREAM, verify that the primary server is active during startup.
Parent topic: Error messages and codes: ADHPxxxx
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
All of the DB2 collection profile interception policies that were pushed down from the Guardium® appliance contain errors. As a result, Security Guardium S-TAP® for
DB2 collection is deactivated.
User response
Review the ADHLOG for messages that were issued prior to this message that indicate why the DB2 rules were discarded. Examples of relevant messages include
ADHP096E and ADHP101W. Use the reason and value that is reported in the message to correct the incorrect value or error in the collection policy.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
One or more errors were detected while processing an interception policy that was pushed down from the Guardium appliance. As a result, the entire policy, as well as any
rules that are contained within the policy, are ignored.
User response
Review the ADHLOG for messages that were issued prior to this message (for example, ADHP101W) that indicate why the policy was discarded. Use the reason and value
that is reported in the message to correct the incorrect value or error in the collection policy.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
One or more errors were detected while processing an interception policy rule that was pushed down from the Guardium appliance. As a result, the rule containing these
errors is ignored.
User response
Review the ADHLOG for messages that were issued prior to this message that indicate why the rule was discarded. Examples of relevant messages include ADHP096E and
ADHP101W. Use the reason and value that is reported in the message to correct the incorrect value or error in the collection policy.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
An error was detected while processing an interception policy rule that was pushed down from the Guardium appliance.
User response
Use the error text that is provided in this message to correct the value or error in the collection policy.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
An unexpected error was encountered.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHPxxxx
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
An invalid value was detected while processing the collection policy received from the Guardium® system.
User response
Attempt to correct the invalid value or error in the collection policy by referencing the reason and value reported in the message.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A SQL code that was detected while processing the collection policy from the IBM® Guardium® system is not valid.
User response
Attempt to correct the SQL code in the collection policy by referencing the value that is reported in the message. See SQL error codes in the IBM Knowledge Center for
more information.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
This message is issued when information about the event streaming mode is requested by issuing the /F STAP command, where ***** is either STREAMING EVENTS or
POLICY SIMULATION.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
This message indicates that an S-TAP MODIFY command has been issued.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The header of the installed policy
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The header of the installed quarantine policy
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A segment of the installed quarantine policy
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The header of the installed blocking policy
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A segment of the installed blocking policy
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
This message indicates whether S-TAP blocking is enabled, disabled, or in operator mode.
User response
No action is required. See SQL Blocking for more information.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The header of the agent configuration
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A segment of the agent configuration
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The header of the event collection statistics
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The total count collected for the event
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The total count collected for the event
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The total count collected for the event
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The total count collected for the event
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The total count collected for the event
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The header of S-TAP program levels
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A segment of S-TAP program levels
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The header of S-TAP allocation queue history
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
ADHP161I TimeStamp-------Queued------------Freed
Explanation
The subheader of S-TAP allocation queue history
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A segment of the allocation queue history
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The header of S-TAP filter history
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The subheader of the S-TAP filter history
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A segment of S-TAP filter history
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The header of S-TAP IO history
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The subheader of S-TAP IO history
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A segment of S-TAP IO history
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
Number of collected events reported by the appliance.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
An invalid value was detected while processing the S-TAP command.
User response
Check the command and try again.
Parent topic: Error messages and codes: ADHPxxxx
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The FORCE_LOG_LIMITED parameter is enabled but APPLIANCE_PORT is not set correctly.
User response
Check the compatible values for FORCE_LOG_LIMITED and APPLIANCE_PORT.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The S-TAP has been configured not to collect host variables.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The appliance does not support the FORCE_LOG_LIMITED feature.
User response
Check for the compatible appliance with which to use the FORCE_LOG_LIMITED feature.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A policy supplied by DD is in use rather than one from push down.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
ADHP186I A [policy | quarantine | blocking] from DD is in use, ignoring any pushed down
policy.
Explanation
A policy supplied by DD is in use. Any pushed down policy will be discarded.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
All blocking policies have been uninstalled.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The database [database name] that was specified in the blocking policy is either empty or not defined.
User response
Rebuild the blocking policy with a valid database name so that blocking is active for the database.
Parent topic: Error messages and codes: ADHPxxxx
ADHP190W DB2 object: [object type] with name: [object name] does not exist.
Explanation
The DB2 object [object type] specified in the blocking policy does not exist.
User response
Rebuild the blocking policy with valid blocking targets for blocking to be active for the DB2 object.
Parent topic: Error messages and codes: ADHPxxxx
ADHP191W Blocking is NOT ACTIVE because there is no valid target in the policy.
Explanation
No valid blocking target has been found in the blocking policy. Blocking will not be activated.
ADHP192E SQL statement execution was unsuccessful, SQLCODE is: [sqlcode value] SQLSTATE
is: [sqlstate value]
Explanation
A SQL statement execution was unsuccessful during policy pushdown process.
User response
Determine the cause of the SQLCODE. Correct the installed policy if necessary.
Parent topic: Error messages and codes: ADHPxxxx
ADHP193I STAP Logging command pushed down from UI to request STAP logging information.
Explanation
S-TAP logging levels provide log information as follows:
Level 0
Logs program levels, event queue statistics, agent configuration, policy, and event counts.
Level 1
Logs agent configuration, policy, and event counts.
Level 2
Logs agent configuration.
Level 3
Logs policy.
Level 4 or higher
Logs event counts.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
An unexpected element has been found while parsing policy.
User response
Correct the unexpected element and update the policy.
Parent topic: Error messages and codes: ADHPxxxx
User response
Update the policy to contain at least one rule.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A duplicated schema within one target has been detected.
User response
Update the policy with only one schema per target.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A duplicate table within one target has been detected.
User response
Update the policy with only one table per target.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A duplicate First Read event has been detected.
User response
Update the policy with only one First Read event per target.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A duplicate First Change event has been detected.
User response
Update the policy with only one First Change event per target.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The <policy> tag was expected but a different tag (<***>) was found.
User response
Correct the policy.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A syntax error was found while parsing the policy.
User response
Correct the policy.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
An error occurred while opening a data set for policy parsing.
User response
Make sure the data set exists and has the appropriate permissions.
Parent topic: Error messages and codes: ADHPxxxx
ADHP210I A thread termination request was received for thread [thread ID]
Explanation
A request to terminate the specified thread was received.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
A syntax error was found while parsing the policy.
User response
Correct the policy.
Parent topic: Error messages and codes: ADHPxxxx
ADHP212W [policy | quarantine | blocking] not enabled for ddname [ddname] reason: XML
error
Explanation
The policy from DD is not enabled because a syntax error was found.
User response
Correct the policy in the DD.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
Network value is not valid in the installed blocking policy.
User response
Correct the network value and reinstall the blocking policy.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
Netmask value is not valid in the installed blocking policy.
User response
Correct the netmask value and reinstall the blocking policy.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
IP address value is not valid in the blocking policy.
User response
Correct the IP address value and reinstall the blocking policy.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
The installed blocking policy contains a syntax error. The blocking policy is discarded.
User response
Correct the syntax error and reinstall the blocking policy.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
An incomplete policy rule is detected.
System action
The rule is discarded.
User response
Use the Guardium Policy Builder in the Guardium® appliance interface to define and manage data collection and filtering. Correct the specified rule rule-name and add
the necessary filters to make it a complete rule.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
More than one SQLCODE list is detected.
System action
The first list is accepted. Additional lists are discarded.
User response
Ensure that there is only one SQLCODE list for each installed policy.
Parent topic: Error messages and codes: ADHPxxxx
ADHP220I Appliance connect retry count has been reached, appliance ping rate is now
increased to [number]
Explanation
Ping rate has been increased to a larger value after reaching the specified number of APPLIANCE_CONNECT_RETRY_COUNT attempts.
User response
No action is required.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
S-TAP was unable to send messages to the appliance.
User response
Make sure the appliance is online and reachable by the S-TAP.
Parent topic: Error messages and codes: ADHPxxxx
Explanation
An attempt to send a non-audit status message to the Guardium® system failed because no connection to the appliance is available.
User response
Determine the cause of the connection outage to the system and attempt to restore the connection.
Parent topic: Error messages and codes: ADHPxxxx
ADHQ1000E
NOT APF AUTHORIZED
ADHQ1001I
DB2 QUERY COMMON COLLECTOR INITIALIZATION IN PROGRESS FOR SUBSYSTEM
ADHQ1002I
DB2 AUDIT SQL COLLECTOR INITIALIZATION COMPLETE FOR SUBSYSTEM
ADHQ1003E
SUBSYSTEM ssid ALREADY ACTIVE
ADHQ1004I
QUERY COMMON COLLECTOR TERMINATION IN PROGRESS FOR SUBSYSTEM subsystem
ADHQ1005I
QUERY COMMON COLLECTOR TERMINATION COMPLETE FOR SUBSYSTEM ssid
ADHQ1006E
statement DD STATEMENT MISSING
ADHQ1007E
INVALID USERID SPECIFIED FOR AUTHID
ADHQ1010I
DEBUG MODE ON
ADHQ1011I
DEBUG MODE OFF
ADHQ1016E
INVALID COMMAND SYNTAX
ADHQ1017E
INVALID COMMAND
ADHQ1019I
INTERVAL EXTERNALIZATION MODE OFF
ADHQ1020E
DB2 SUBSYSTEM ssid IS NOT DEFINED
ADHQ1024E
dsn SPECIFICATION INVALID
ADHQ1026E
SHARED MEMORY FAILURE FOR OBJECT object request RC =rc RS=rs
ADHQ1027I
CPU=CPU Type-CPU Model-CPU Manufacturer. OS Name OS Version.OS Release.OS Modification.
ADHQ1028E
Component requires a 64 bit processor and z/OS® 1.5 or higher.
ADHQ1031E
Serious error in master address space address space.
ADHQ1032I
Recreating master address space.
ADHQ1033E
Unable to create master address space address space.
ADHQ1034I
Master address space has started.
ADHQ1035E
Unable to restart master (RS=rc).
ADHQ1055E
CQM1055E DB2 ssid IS EXPERIENCING STORAGE CONSTRAINTS, DATA LOSS MAY OCCUR, REASON=code
ADHQ1060I
ZIIP SUPPORT IS NOT ACTIVE. nnnnnnnn RC=yy RSN=zzzzzzzz nnnnnnnn is the name of the service that failed with a nonzero return code (RC).
ADHQ1061E
MISSING PARAMETER: parameter
ADHQ1062E
COMMUNICATION INTERFACE DISABLED BY CROSS MEMORY FAILURE
ADHQ1062I
ZIIP SUPPORT IS INSTALLED
ADHQ1065E
REQUIRED DATA ACCESS COMMON COLLECTOR MODULE NOT FOUND
ADHQ1066E
Subsystem terminating due to abend while compiling the collection profile. SVCDUMP collected.
ADHQ1070E
Terminating due to XML profile processing error RC (xxxxxxxx)
ADHQ1071E
Terminating due to missing XML profile at start up
ADHQ1080I
POLICY MANAGER STARTED.
ADHQ1081I
POLICY MANAGER STOPPED.
POLICY PUSH DETECTED.
ADHQ1083I
POLICY PUSH SENT.
ADHQ1084I
QUARANTINE ONLY POLICY DETECTED.
ADHQ1085I
CURRENT QUARANTINE POLICY IS REMOVED.
Parent topic: Messages and codes for IBM Security Guardium S-TAP for DB2 on z/OS
Explanation
The collector agent started task or job is not APF authorized.
User response
APF-authorize the load libraries that are used by the collector agent started task or job.
Explanation
This message appears during the normal initialization process of the collector agent.
User response
No action is required.
Explanation
This message appears during the normal initialization process of the collector agent and confirms the initialization process has completed.
User response
No action is required.
Explanation
The collector agent indicated in the message is already active and therefore cannot process another activate command.
User response
Verify that you are activating the correct system. If you are attempting to activate a subsystem that is already active, do not attempt activation.
Explanation
This message appears during normal shutdown of the Collector Agent and indicates the collector is undergoing shutdown.
User response
No action is required.
Explanation
The collector agent subsystem has been terminated. This message could appear as part of normal shutdown or as a failure to connect to a subsystem.
User response
Investigate other write-to-operator (WTO) messages preceding this one to determine the reason for the termination.
Explanation
The parameter DD statement (for example, ADHCFGP DD statement) is missing from the JCL for the collector agent started task.
Explanation
The user ID entered in the AUTHID parm in the ADHCFGP data set has not been defined to RACF® or an equivalent security system.
User response
Correct the user ID, or ensure the ID is defined to your security system.
Explanation
Debugging mode has been turned on.
User response
None required.
Explanation
Debugging mode has been turned off.
User response
None required.
Explanation
The command syntax is invalid.
User response
Correct the command.
Explanation
An invalid MVS™ Modify command was issued.
User response
Correct the command and execute it again.
Explanation
The collector agent subsystem was started with externalization mode set to off.
Explanation
The DB2 subsystem indicated in the message is not defined.
User response
Verify that you have specified the correct DB2 subsystem.
Explanation
The data set name listed in this message is not valid.
User response
Verify that you specified the correct data set name in ADHCFGP.
ADHQ1026E SHARED MEMORY FAILURE FOR OBJECT object request RC =rc RS=rs
Explanation
A shared memory failure has occurred for the indicated object.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
This message displays information about the CPU and the operating system.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
Your system does not meet the minimum system requirements.
User response
Upgrade to the minimum requirements.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
A serious error has occurred in the master address space specified.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
DB2® Query Monitor is not able to create the master address space specified.
User response
Many issues that cause this error relate to security setup. If you encounter this message, send your console log to IBM® Software Support.
Parent topic: Error messages and codes: ADHQxxxx
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The master address space could not be restarted.
User response
Verify that the master address space is available, and restart it.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The DB2 subsystem indicated in the message is experiencing storage constraints.
User response
Verify that your DB2 subsystem has the needed storage allocations.
Parent topic: Error messages and codes: ADHQxxxx
ADHQ1060I ZIIP SUPPORT IS NOT ACTIVE. nnnnnnnn RC=yy RSN=zzzzzzzz nnnnnnnn is the
name of the service that failed with a nonzero return code (RC).
Explanation
Table 1. Return code explanations

IWM4ECRE (WLM Enclave Create)
The return codes and reason codes are documented in z/OS V1R10.0 MVS™ Programming: Workload Management Services.

IWM4EoCT (WLM CPU Offload Time Service)
The return codes and reason codes are not documented in any existing WLM manual. However, RC=4 typically means that no zIIP is configured on the instance of z/OS®. If you have a zIIP processor and it is properly configured, report the RC to the vendor.

MAXWFLOAD (Enclave SRB load service)
An error occurred trying to LOAD ADHMAXWF (the enclave SRB routine that runs on the zIIP). Make sure you have the correct STEPLIB configured.

IEAVAPE (z/OS Allocate Pause Element)
These return codes are described in z/OS V1R10.0 MVS Programming: Assembler Services Reference V2. If message ADHQ1060I reports IEAVAPE as the failing service, contact the vendor for resolution.
Explanation
The specified parameter has not been defined in the sample library member ADHCFGP.
User response
Add the missing parameter to the ADHCFGP sample library member.
Explanation
A cross memory failure has occurred and as a result the communication interface has been disabled.
User response
Troubleshoot the memory failure and restart the ASC.
Explanation
The collector agent has detected that WLM is configured for zIIP support. This does not necessarily indicate that zIIP processors are installed or are available for zIIP
offload of collector agent processing.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The started task did not find the Data Access Common Collector (CQC) initialization module, which prevented successful startup.
User response
Verify that the Data Access Common Collector (CQC) has been installed and that the load library is included in the started task STEPLIB concatenation.
Parent topic: Error messages and codes: ADHQxxxx
ADHQ1066E Subsystem terminating due to abend while compiling the collection profile.
SVCDUMP collected.
Explanation
An abend was detected when compiling the collection profile. A memory dump was collected to gather the diagnostic information.
User response
If you are unable to take corrective measures to resolve the abend, report the SVCDUMP, the collector joblog, and the details of the collection profile in use to IBM® Software Support for resolution of this error.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
A policy is sent from the Guardium® system to the Security Guardium S-TAP® for DB2® collector agent during their initial communication. If the policy received by the
collector agent is not composed of valid XML syntax, the collector terminates.
User response
Verify the policy definition on the Guardium system and push the policy again. If the problem persists, contact IBM® Software Support.
Explanation
A policy is sent from the Guardium® system to the Security Guardium S-TAP® for DB2® collector agent during their initial communication. If the policy is not received by
the collector agent during the initial communication set up, then the collector terminates.
User response
Verify that the Guardium system is properly configured, using the APPLIANCE_SERVER parameter. The appliance should be set up to accept connections from collectors.
If the problem persists, contact IBM® Software Support.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The internal policy manager task has started.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The internal policy manager task has stopped.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
A policy was received from the appliance.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The policy was sent to Audit SQL Collector.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
A pushed policy was included on a quarantine list. The currently active audit policy is unchanged and is still active.
User response
No action is required.
Explanation
A new policy push occurred which resulted in the removal of the quarantine list.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
A new policy push occurred, which resulted in new policy and quarantine lists to be activated.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The parameter DD statement (for example, ADHPARMS DD statement) is missing from the JCL for the collector agent started task.
User response
Create the necessary DD statement and code the appropriate parameters in the data set.
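As a sketch, the parameter DD statement in the collector agent started task JCL might look like the following; the data set name is an illustrative placeholder:

```
//ADHPARMS DD DISP=SHR,DSN=ADH.V10R1M0.CONTROL(ADHPARMS)
```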
Parent topic: Error messages and codes: ADHQxxxx
Explanation
An error was encountered during the translation of the indicated CCSIDs. This may be the result of not having defined conversion paths between the CCSID of the
collected SQL text and CCSID 1208 when performing a DB2® offload.
User response
To offload SQL text, verify that all necessary CCSID paths to 1208 are installed. You must define conversion paths between the CCSID of the collected SQL text and CCSID
1208.
ADHQ1202I STORAGE CONSTRAINT RELIEVED FOR SPACE – space – OCCURRENCES: count
Explanation
An Integrated Storage Manager error had previously occurred due to a storage constraint for the space named in the message. The storage constraint has now been
relieved. The number of storage constraint occurrences for this incident is displayed in the message.
User response
No action is required.
Explanation
A Security Guardium® S-TAP® for DB2® Integrated Storage Manager error has occurred. This message provides details that can be used by IBM® Software Support to diagnose the situation.
User response
Provide the text of this message to IBM Software Support.
ADHQ1204I FUNC=func,SP=subpool,FLG2=flag,FLG3=flag
Explanation
A Security Guardium® S-TAP® for DB2® Integrated Storage Manager error has occurred. This message provides details that can be used by IBM® Software Support to
diagnose the situation.
User response
Provide the text of this message to IBM Software Support.
Explanation
A Security Guardium® S-TAP® for DB2® Integrated Storage Manager error has occurred. This message and messages ADHQ1203I and ADHQ1204I provide details that
can be used by IBM® Software Support to diagnose the situation.
User response
Provide the text of this message and messages ADHQ1203I and ADHQ1204I along with any memory dumps that have been produced to IBM Software Support.
Explanation
A Security Guardium® S-TAP® for DB2® Integrated Storage Manager error has occurred. This message and messages ADHQ1203I and ADHQ1204I provide details that
can be used by IBM® Software Support to diagnose the situation.
User response
Provide the text of this message and messages ADHQ1203I and ADHQ1204I along with any memory dumps that have been produced to IBM Software Support.
Explanation
A Security Guardium® S-TAP® for DB2® Integrated Storage Manager error has occurred. This message and messages ADHQ1203I and ADHQ1204I provide details that
can be used by IBM® Software Support to diagnose the situation.
User response
Provide the text of this message and messages ADHQ1203I and ADHQ1204I along with any memory dumps that have been produced to IBM Software Support.
ADHQ1211I AN ABEND OCCURRED DURING ISM PROCESSING FOR SPACE – space
Explanation
A Query Monitor Integrated Storage Manager error has occurred. This message and messages ADHQ1203I and ADHQ1204I provide details that can be used by IBM®
Software Support to diagnose the situation.
User response
Provide the text of this message and messages ADHQ1203I and ADHQ1204I along with any dumps that may have been produced to IBM Software Support.
Explanation
A Security Guardium® S-TAP® for DB2® Integrated Storage Manager error has occurred. This message and messages ADHQ1203I and ADHQ1204I provide details that
can be used by IBM® Software Support to diagnose the situation.
User response
Provide the text of this message and messages ADHQ1203I and ADHQ1204I along with any memory dumps that might have been produced to IBM Software Support.
ADHQ1213W SPACE IS FULL AND NO MORE EXTENTS CAN BE OBTAINED FOR SPACE –
space
Explanation
A Security Guardium® S-TAP® for DB2® Integrated Storage Manager operation has failed because no more extents can be obtained for the space named in the
message. This message and messages ADHQ1203I and ADHQ1204I provide details that can be used by IBM® Software Support to diagnose the situation.
User response
This may be a temporary situation due to the level of DB2 activity currently monitored by Security Guardium S-TAP for DB2. If message ADHQ1202I is also issued to
indicate that the Storage Constraint has ended, then processing resumes. If this situation occurs frequently, adjust the amount of data collected by Security Guardium S-
TAP for DB2, or increase the amount of available memory by using the MAXIMUM_ALLOCATIONS and SMEM_SIZE parameters.
If you need assistance with modifying these parameters, provide the text of this message and messages ADHQ1203I and ADHQ1204I to IBM Software Support.
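As an illustrative sketch only, the two parameters might appear in ADHPARMS as follows, shown in the keyword(value) form used by other parameters in this documentation, such as REPLACE(Y). The values are placeholders, not recommendations; check the parameter reference for the valid ranges and defaults for your release:

```
MAXIMUM_ALLOCATIONS(2048)
SMEM_SIZE(512)
```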
User response
Provide the text of this message and messages ADHQ1203I and ADHQ1204I along with any memory dumps that might have been produced to IBM Software Support.
ADHQ1215W SPACE IS FULL AND NO MORE LARGE EXTENTS CAN BE OBTAINED FOR SPACE
– space
Explanation
A Security Guardium® S-TAP® for DB2® Monitor Integrated Storage Manager operation has failed because no more large extents can be obtained for the space named
in the message. This message and messages ADHQ1203I and ADHQ1204I provide details that can be used by IBM® Support to diagnose the problem.
User response
This might be a temporary situation due to the level of DB2 activity currently being monitored by Security Guardium S-TAP for DB2. If message ADHQ1202I is also issued
to indicate that the Storage Constraint has ended, then processing resumes. If this situation occurs frequently, adjust the amount of data collected by Security Guardium
S-TAP for DB2, or increase the amount of available memory by using the MAXIMUM_ALLOCATIONS and SMEM_SIZE parameters.
If you need assistance with modifying these parameters, provide the text of this message and messages ADHQ1203I and ADHQ1204I to IBM Software Support.
Explanation
A Security Guardium® S-TAP® for DB2® Integrated Storage Manager error has occurred. This message and messages ADHQ1203I and ADHQ1204I provide details that
can be used by IBM® Software Support to diagnose the situation.
User response
Provide the text of this message and messages ADHQ1203I and ADHQ1204I along with any memory dumps that have been produced to IBM Software Support.
ADHQ1217W SPACE IS FULL AND NO MORE LARGE EXTENTS CAN BE OBTAINED FOR SPACE
– space
Explanation
A Security Guardium® S-TAP® for DB2® Integrated Storage Manager operation has failed because the request would have exceeded the maximum storage allocation
specified in the MAXIMUM_ALLOCATIONS parameter in ADHPARMS. At the time of the error, Security Guardium S-TAP for DB2 was attempting to allocate additional
storage for the space named in the message. This message and messages ADHQ1203I and ADHQ1204I provide details that can be used by IBM® Software Support to
diagnose the situation.
User response
This might be a temporary situation due to the level of DB2 activity currently being monitored by Security Guardium S-TAP for DB2. If message ADHQ1202I is also issued
to indicate that the Storage Constraint has ended, then processing resumes. If this situation occurs frequently, adjust the amount of data collected by Security Guardium
S-TAP for DB2, or increase the amount of available memory by using the MAXIMUM_ALLOCATIONS and SMEM_SIZE parameters.
If you need assistance with modifying these parameters, provide the text of this message and messages ADHQ1203I and ADHQ1204I to IBM Software Support.
ADHQ1218W MAXIMUM EXTENTS HAS BEEN REACHED FOR SPACE – space
Explanation
An Integrated Storage Manager operation has failed because the request would have exceeded the maximum number of extents allowed for the space named in the
message. This message and messages ADHQ1203I and ADHQ1204I provide details that can be used by IBM® Software Support to diagnose the situation.
User response
This might be a temporary situation due to the level of DB2® activity currently being monitored. If message ADHQ1202I is issued later to indicate that the Storage
Constraint has ended, then processing resumes normally. If this situation rarely occurs, it might not be a problem. If this situation occurs frequently, adjust the amount of
data collected by Security Guardium® S-TAP® for DB2, or increase the amount of available memory by using the MAXIMUM_ALLOCATIONS and SMEM_SIZE parameters.
If you need assistance with tuning these parameters, provide the text of this message and messages ADHQ1203I and ADHQ1204I to IBM Software Support.
Explanation
An Integrated Storage Manager error has occurred. However, there were no free ISMERROR message blocks available.
User response
Increase the value of the ISM_ERROR_BLOCKS parameter in the ADHPARMS file. If this parameter is already set to the maximum value and the problem persists, contact
IBM® Software Support.
Explanation
An abnormal end of task occurred for the subtask indicated in the message.
User response
Verify the conditions surrounding the abnormal end of task and restart the subtask.
Explanation
The indicated DB2 subsystem is already being monitored by the collector agent shown in the message.
User response
Explanation
A monitoring agent was unable to start. Another SQL-type monitoring product might be active within the specified DB2® subsystem.
User response
Check to see if another SQL-type monitoring product is active. If so, shut down the other product and restart the S-TAP® collector. If this does not resolve the problem,
contact IBM® Software Support.
If you encounter message ADHQ2002E and receive a memory dump, contact IBM Software Support and provide the memory dump for diagnostic purposes.
Explanation
The collector agent has detected that a monitoring agent is already active, but is forcing installation because FORCE(Y) was included.
User response
No action is required.
Explanation
The collector agent has installed multiple monitoring agents for the subsystem shown in the message.
User response
No action is required.
ADHQ2008E DB2 SYSTEM ssid IS BEING MONITORED BY A 2.2 OR BELOW VERSION CQM
SUBSYSTEM AND CANNOT BE AUDITED
Explanation
This message indicates an incompatibility between DB2 Query Monitor and S-TAP. InfoSphere® Guardium S-TAP for DB2 Version 9.1 will not start auditing a DB2
subsystem that is running Query Monitor at Version 3.1 or earlier.
User response
Ensure that you are running compatible versions of S-TAP and Query Monitor, or run only one product at a time.
ADHQ2009E DB2® SYSTEM ssid WAS PREVIOUSLY MONITORED BY A 2.2 OR EARLIER CQM
SUBSYSTEM qmid WHICH HAS NOT APPLIED APAR PK55535.
Explanation
You must apply Query Monitor V2R2 APAR PK55535.
User response
Apply the required maintenance.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The currently installed collection policy, as received from the Guardium® system, results in no ASC collection. This can be the result of:
User response
If ASC collection is expected when this message is issued, review installed policy definitions in the Guardium system administration interface for the previously listed
conditions. If no ASC collection is expected when this message is issued, no action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The activated policy enables the collection of GRANT and REVOKE SQL statements. GRANT and REVOKE SQL statements are collected if they match the policy filter
criteria.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
Host variables, which are also known as BIND variables, are not collected.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The active policy contains a negative SQL code list that results in the collection of events ending with a negative SQL code.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
Collection of COMMAND events is enabled.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The currently active policy contains rules with DBNAME filters, which enables optimized filtering of audit events.
User response
No action is required.
Explanation
The active policy contains a quarantine list that might cause DB2 activity to be quarantined.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The active policy enables the collection of DB2 utilities.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The active policy enables the collection of Failed Login events.
User response
No action is required.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The collector agent has encountered an unrecognized parameter.
User response
Check the startup parameters to ensure that the parameters specified are all valid.
Explanation
The collector agent has encountered an error in one of the startup parameters.
Note: Message ADHQ2101E can be issued when the collector agent is started if the ADHCFGP file specifies primary space allocations for back store data sets that are less
than the default.
User response
Check the startup parameters to ensure that all are specified properly. Check that primary space allocations for back store data sets are not set for less than their default
values.
Explanation
Duplicate parameters were specified in the Query Common Collector startup parameters.
User response
Remove the duplicate parameters from the startup parameter specifications.
Explanation
An error in the collector agent parameter file caused the termination of processing.
User response
Verify that the input you specified for your collector agent parameters in ADHCFGP is valid and correct for your objectives.
Explanation
The collector agent encountered an error while attempting to read the ADHCFGP data set. The ADHPARMS DD statement specified a PDS data set and the member name
specified did not exist.
User response
Correct the JCL specification for the ADHPARMS DD statement and specify a valid member name.
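A sketch of a corrected ADHPARMS DD statement that names a member of a PDS follows; the data set name is an illustrative placeholder, and the member named in parentheses must actually exist in that PDS:

```
//ADHPARMS DD DISP=SHR,DSN=ADH.V10R1M0.PARMLIB(ADHCFGP)
```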
Explanation
Indicates dataspace management is in progress for the subsystem shown in the message.
User response
No action is required.
Explanation
Displays the number of dataspace pages that have been released for the subsystem shown in the message.
User response
No action is required.
Explanation
The reply you entered is not valid.
User response
Enter U to accept or R to reject.
Explanation
This message is issued by the started task if there is a problem during the dynamic allocation of a data set. When this message occurs, the collector agent stops the startup process and terminates.
Explanation
This message reports errors encountered during the execution of a CLOSE macro instruction.
User response
To further diagnose and resolve the problem using the return code and reason code listed in the message, refer to the z/OS® V1R1.0 DFSMS/DFP Diagnosis Reference
(GY27-7618-01) or the following Web page:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dgt2r101/20.8.1.2
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The collector agent determined that a DB2 subsystem in its monitor list has started.
User response
No action is required.
Explanation
Security Guardium® S-TAP® for DB2® has initiated monitoring for the named subsystem.
User response
No action is required.
Explanation
The collector agent determined that a DB2 subsystem in its monitor list has shut down.
User response
No action is required.
Explanation
The monitoring agent has been deactivated for the indicated Collector Agent.
User response
None required.
User response
No action is required.
Explanation
This message displays if a mismatch in code level exists between Security Guardium® S-TAP® for DB2® and Query Monitor. One message per mismatched code level
will occur.
User response
Ensure that all the programs listed have the Query Monitor and corresponding Security Guardium S-TAP for DB2 maintenance applied.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
This message displays if a mismatch in code level exists between Security Guardium® S-TAP® for DB2® and DB2 Query Monitor. This message occurs once per
mismatched code level.
User response
Verify that all the programs listed have the Query Monitor and corresponding S-TAP for DB2 maintenance applied.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
This message is used in conjunction with other messages to display agent information.
User response
No action is required.
Explanation
Indicates the DB2 subsystem and agent address.
User response
None required.
Explanation
Indicates the monitoring agent address.
User response
No action is required.
Explanation
Indicates ASC diagnostic display is in effect.
User response
No action is required.
Explanation
Indicates the SDA address.
User response
No action is required.
Explanation
This message is used in conjunction with other messages to indicate the address.
User response
None required.
Explanation
The message displays diagnostic data for the abend.
User response
No action is required.
Explanation
The message indicates the system completion code.
User response
No action is required.
Explanation
Indicates the number of occurrences and the date and time at which they took place.
User response
None required.
Explanation
This message displays diagnostic information about the current contents of the register.
User response
Contact IBM® Software Support.
Explanation
This message displays diagnostic information about the current contents of the register.
User response
Contact IBM® Software Support.
Explanation
This message displays diagnostic information about the current contents of the register.
User response
Contact IBM® Software Support.
Explanation
This message displays diagnostic information about the current contents of the register.
User response
Contact IBM® Software Support.
Explanation
This message displays diagnostic information about the current contents of the register.
User response
Contact IBM® Software Support.
Explanation
This message displays diagnostic information about the current contents of the register.
User response
Contact IBM® Software Support.
Explanation
This message displays diagnostic information about the current contents of the register.
User response
Contact IBM® Software Support.
Explanation
This message displays diagnostic information about the current contents of the register.
User response
Contact IBM® Software Support.
Explanation
This message appears in conjunction with other messages as a result of the MVS™ Modify command DISPLAY DATASPACES.
User response
No action is required.
Explanation
This message appears in conjunction with ADHQ3240I as a result of the MVS™ Modify command DISPLAY DATASPACES.
User response
No action is required.
Explanation
This message appears in conjunction with ADHQ3240I as a result of the MVS™ Modify command DISPLAY DATASPACES. This message lists the node size for the named data space.
User response
No action is required.
Explanation
This message appears in conjunction with ADHQ3240I as a result of the MVS™ Modify command DISPLAY DATASPACES. This message lists the total number of nodes allowed for the named data space.
User response
No action is required.
Explanation
This message appears in conjunction with ADHQ3240I as a result of the MVS™ Modify command DISPLAY DATASPACES. This message lists the total number of nodes available for use by the named data space.
User response
No action is required.
Explanation
This message appears in conjunction with ADHQ3240I as a result of the MVS™ Modify command DISPLAY DATASPACES. This message lists the percentage of nodes used for the named data space.
User response
No action is required.
Explanation
This message appears to inform you that the interval processor has been started through an MVS™ Modify INTERVAL command.
User response
No action is required.
Explanation
The interval processor was not started because a DB2 subsystem is not available.
User response
Verify the status of all monitored DB2 subsystems.
Explanation
This message appears to inform you that the interval processor was already started through an MVS™ Modify INTERVAL command.
User response
No action is required.
ADHQ3308E DB2® SYSTEM ssid IS MONITORED BY DB2 QUERY MONITOR ssid WHICH HAS
MISMATCHED OBJ AGENT
Explanation
The maintenance level of the OBJ agent does not match between the Security Guardium® S-TAP® for DB2® and DB2 Query Monitor installations.
User response
Ensure that the maintenance levels match between the Security Guardium S-TAP for DB2 and Query Monitor installations. Apply maintenance as required to one or both
environments to ensure that the maintenance levels match.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
For monitoring and auditing to be active on the DB2® subsystem, a DB2 subsystem that is monitored by DB2 Query Monitor or Workload Replay for DB2 for z/OS®, or audited by Security Guardium® S-TAP® for DB2, must use the same MASTER_PROCNAME parameter value in the Query Monitor subsystem, the Workload Replay DB2 subsystem, and the Security Guardium S-TAP for DB2 ASC started task.
User response
Update the MASTER_PROCNAME parameter for DB2 Query Monitor, Security Guardium S-TAP for DB2, or Workload Replay so that the same MASTER_PROCNAME is in use by all products for the monitored DB2 subsystem. After updating the MASTER_PROCNAME, restart the started task that is affected by the parameter change.
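For example, each product's parameter input would name the same master started task. The procedure name below is an illustrative placeholder, and the exact parameter syntax for each product is documented in that product's configuration reference:

```
MASTER_PROCNAME(ADHCQC1)
```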
Parent topic: Error messages and codes: ADHQxxxx
Explanation
Indicates command execution.
User response
No action is required.
ADHQ3551E VSAM LOGIC ERROR ENCOUNTERED WHILE ACCESSING CONTROL FILE FOR
DB2® ssid. VSAMRC='rc' VSAMRS=X'rs'
Explanation
A VSAM logic error was encountered when accessing the control file for the DB2 subsystem indicated in the message.
User response
Verify that the DB2 control file for the DB2 subsystem listed in the message has been properly allocated and that the appropriate DB2 subsystem and plan name information has been specified correctly.
ADHQ3552E SETUP INFORMATION MISSING FROM CONTROL FILE FOR DB2® ssid
Explanation
There is insufficient information in the control file for the DB2 subsystem indicated in the message.
User response
Modify the control file to include the necessary information.
Explanation
An error has occurred. This message is customized to display various messages such as initialization errors.
User response
Contact IBM® Software Support.
Explanation
Security Guardium® S-TAP® for DB2 was not able to connect to the DB2 subsystem using the plan shown in the message.
User response
Refer to DB2 Universal Database for z/OS® V8 Messages (GC18-9602-01) and DB2 Universal Database for z/OS V8 Codes (GC18-9603-01) to further diagnose and
resolve the problem.
Explanation
The collector agent was not able to connect to the DB2 subsystem because DB2 is not currently operational.
User response
Verify that DB2 is functioning correctly.
Parent topic: Error messages and codes: ADHQxxxx
Explanation
The monitoring agent deinstallation is in progress for the DB2® subsystem indicated in the message.
User response
No action is required.
Explanation
The monitoring agent deinstallation completed for the DB2® subsystem indicated in the message.
User response
No action is required.
Explanation
The monitoring agent for the indicated DB2 subsystem is being requested for activation.
User response
No action is required.
Explanation
The monitoring agent for the indicated DB2 subsystem is being requested for deactivation.
User response
No action is required.
Explanation
A catalog LOCATE failed during interval data set expiration processing. r0 contains the contents of register zero and rc is the LOCATE return code.
User response
See z/OS® DFSMSdfp Advanced Services (SC26-7400-02) for a description of the return codes issued by LOCATE.
Explanation
The table indicated in the message cannot be found in the DB2 catalog.
User response
Verify that the table you specified exists.
ADHQ7008E QUERY COMMON COLLECTOR ssid NOT VALID OR HAS NOT BEEN STARTED SINCE
IPL
Explanation
The collector agent shown in the message is not a valid collector agent.
User response
Verify that you specified the correct Query Common Collector subsystem ID, and that the collector agent is available.
ADHQ7009E OUT OF SPACE CONDITION DETECTED WHILE WRITING TO THE dsn DATASET
Explanation
An out-of-space condition was encountered when attempting to write to the data set indicated in the message.
User response
Verify that adequate space has been allocated to the data set.
ADHQ7010E MISSING "ADD" PARAMETER FOR parameter AT LINE line COLUMN column
Explanation
The ADD parameter is missing for the indicated line and column.
User response
Correct the syntax and resubmit the job.
Explanation
There has been an internal error.
User response
Contact IBM® Software Support.
Explanation
An invalid number of BSDS parameters has been sent as input to the ADH#CTLF utility.
User response
Verify that the two boot strap data sets used for your DB2® subsystem are properly specified.
Explanation
An attempt was made to load records that already exist into the control file without specifying REPLACE(Y) for the DB2 subsystem indicated in the message.
User response
Edit your ADH#CTLF job to include REPLACE(Y). Refer to the instructions in SADHSAMP library member ADH#CTLF for details.
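As a sketch, and assuming the job reads its control statements from an in-stream data set, the corrected input would include REPLACE(Y) alongside the statements already present in the SADHSAMP member (which are omitted here):

```
//SYSIN    DD *
REPLACE(Y)
/*
```

The DD name and statement placement are illustrative; follow the instructions in SADHSAMP member ADH#CTLF for the exact input format.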
Explanation
Errors have been detected in ADHCFGP.
User response
Verify that the parameters you specified in ADHCFGP are correct and modify any syntax errors before proceeding.
Explanation
An unknown keyword has been found.
User response
Verify the correct syntax and modify the keyword as needed.
ADHQ8003E INVALID SYNTAX SPECIFIED FOR parameter NEAR LINE line COLUMN column
Explanation
The syntax specified for the parameter indicated in the message is not valid.
User response
Correct the syntax and resubmit the job.
ADHQ8004E PARAMETER LENGTH EXCEEDED FOR parameter NEAR LINE line COLUMN column
Explanation
The length of the value specified for the parameter indicated in the message exceeded the valid length for that parameter.
User response
Correct the syntax and resubmit the job.
ADHQ8005E PARAMETER MISSING FOR parameter NEAR LINE line COLUMN column
Explanation
A required parameter is missing from ADHLOADP.
User response
Correct the syntax and resubmit the job.
ADHQ8006E NON NUMERIC DATA SPECIFIED FOR parameter NEAR LINE line COLUMN column
Explanation
Non-numeric data was specified in ADHLOADP for the parameter listed in the message.
User response
Specify numeric data for the parameter.
ADHQ8007E INVALID VALUE SPECIFIED FOR parameter NEAR LINE line COLUMN column
Explanation
An invalid value was specified in ADHLOADP.
User response
Correct the value and resubmit the job.
Explanation
The value of the parameter shown in the message must be within the specified range.
User response
Correct the value of the parameter so it falls within the range indicated in the message text.
Explanation
A parameter you specified is a duplicate.
User response
Correct the syntax to eliminate the duplicate parameter.
Explanation
A sub-parameter you specified is a duplicate.
User response
Correct the syntax to eliminate the duplicate sub-parameter.
Explanation
The version of DB2 that you are attempting to use is not supported by the unload functionality of the collector agent.
User response
The collector agent unloads data to DB2 Version 8, DB2 Version 9, or DB2 Version 10.
Explanation
The collector agent encountered an error attempting to open the TEXTDATA data set.
User response
Verify that the TEXTDATA data set is configured properly and has adequate space available.
Explanation
The value you specified for the TBCREATOR parameter is too long and is therefore invalid.
User response
Specify a valid value for TBCREATOR. Valid values are up to eight characters in length.
Explanation
The collector agent has encountered a logic error.
User response
Contact IBM® Software Support.
Explanation
This message is used to display the contents of the ADHPARMS file that was processed when Security Guardium® S-TAP® for DB2® was started.
User response
No action is required.
Explanation
This message is used to display the text of a modify command that was issued to Security Guardium® S-TAP® for DB2®.
User response
No action is required.
What does IBM Security Guardium S-TAP for IMS on z/OS V10.1.3 do?
IBM Guardium S-TAP for IMS assists auditors in determining who read or updated a particular IMS database and its associated data sets, what mechanism was used to
perform that action, and when the access took place.
IBM Guardium S-TAP for IMS can collect and correlate many different types of information, including:
Restriction: IBM Guardium S-TAP for IMS supports auditing of Data Entry Databases (DEDBs) and IMS Full Function databases. Auditing of Main Storage Databases
(MSDBs) is not supported.
What's new in IBM Security Guardium S-TAP for IMS on z/OS V10.1.3?
Here's what's new in version 10.1.3 of IBM Guardium S-TAP for IMS.
IBM Guardium S-TAP for IMS components
IBM Guardium S-TAP for IMS consists of an agent, a Common Storage Management Utility, and the IBM Guardium system.
Parent topic: What does IBM Security Guardium S-TAP for IMS on z/OS V10.1.3 do?
Note: In environments where multiple agents connect to a common IBM Guardium system or appliance, the z/OS agent started task names (AUIASTC, AUILSTC, AUIFSTC)
must be unique. Unique started task names enable the IBM Guardium S-TAP for IMS policies that are pushed from the IBM Guardium system to be attributed to, and
monitored by, the correct z/OS agent.
Provides the user interface, which processes requests and displays the resulting information.
Enables you to create collection policies, which specify the types of data to be collected by the agent.
Stores the collected data.
The IBM Guardium S-TAP for IMS agent can collect data from one or more of the following sources within a SYSPLEX:
The agent maintains the communication links that are needed to exchange information with:
The agent also provides data collection schemas, called policies, to the activity monitors. These policies detail which IMS artifacts are to be audited, and to what level.
The agent runs as a started task on the z/OS host. An example of the JCL to be used is in member AUIASTC of the SAUISAMP installation data set.
For more information about how data is collected from these sources, see Data collection monitors.
Parent topic: IBM Guardium S-TAP for IMS components
Review the IBM Guardium S-TAP for IMS V10.1.3 Program Directory for a list of product materials and SMP/E installation instructions.
64-bit memory
TCP/IP connectivity
z/OS System logger log streams
UNIX System Services
OMVS segment
Parent topic: Installing IBM Security Guardium S-TAP for IMS on z/OS
If you are installing this product, your z/OS user ID must have the authority to:
Parent topic: Installing IBM Security Guardium S-TAP for IMS on z/OS
APF authorization
IBM Guardium S-TAP for IMS requires certain data sets to be accessible and APF-authorized on all LPARS of the SYSPLEX where IMS batch jobs or monitored IMS
online regions might run.
OMVS segment
TCP/IP connectivity and other UNIX System Services on z/OS require that the address space that uses these services runs under a z/OS user ID or group name that is defined with an OMVS segment.
TCP/IP connections
IBM Guardium S-TAP for IMS uses Transmission Control Protocol/Internet Protocol (TCP/IP) to connect to the Guardium appliance. To enable this communication,
make sure you have the correct permissions assigned.
z/OS log streams
IBM Guardium S-TAP for IMS monitors the IMS batch jobs and online regions and writes audit data to z/OS log streams.
IMS RESLIB data sets
READ access to the IMS RESLIB/SDFSRESL data sets is required for each IMS system that requires the IMS SLDS to be processed by IBM Guardium S-TAP for IMS.
READ access is required to allow a LOAD/READ of module DFSVC000 to determine the version release level of the audited IMS.
SMF and IMS archive log data sets
READ access to the SMF data sets and the IMS archived logs data sets (SLDS) is required for the user under whose authority the agent runs. If these data sets are
protected by RACF® or another security product, a policy must be defined to grant this access. The z/OS catalogs containing the names of these data sets, as well
as the physical data sets themselves, must be accessible from the LPAR on which the IBM Guardium S-TAP for IMS agent runs.
DBRC RECON data sets
IBM Guardium S-TAP for IMS uses the native VSAM services to read data from the RECON data sets. These RECON data sets must be accessible from all the LPARS
where the IBM Guardium S-TAP for IMS agents might run.
Operator commands
You can use z/OS Operator commands to start IBM Guardium S-TAP for IMS tasks.
Quarantining Database DLI calls
IBM Guardium S-TAP for IMS enables you to quarantine the DB DLI calls of specific users for specific periods of time.
APF authorization
IBM Guardium S-TAP for IMS requires certain data sets to be accessible and APF-authorized on all LPARS of the SYSPLEX where IMS batch jobs or monitored IMS online
regions might run.
Procedure
1. APF-authorize product data set SAUILOAD on all LPARS of the SYSPLEX.
SAUILOAD contains the IMS Online and Batch Activity Monitor executable code.
2. APF-authorize product data set SAUIIMOD on all LPARS of the SYSPLEX where IMS batch jobs or IMS online regions to be monitored might run.
SAUIIMOD contains IMS specific executable load modules.
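As a sketch, the APF authorization can be added dynamically from the z/OS console or made persistent in a PROGxx PARMLIB member. The data set prefix AUI.V10R1M3 and the volume serial below are placeholders, not product defaults:

```
/* Dynamic authorization from the z/OS console: */
SETPROG APF,ADD,DSNAME=AUI.V10R1M3.SAUILOAD,VOLUME=PRD001
SETPROG APF,ADD,DSNAME=AUI.V10R1M3.SAUIIMOD,SMS

/* Equivalent persistent entries in a PROGxx PARMLIB member: */
APF ADD DSNAME(AUI.V10R1M3.SAUILOAD) VOLUME(PRD001)
APF ADD DSNAME(AUI.V10R1M3.SAUIIMOD) SMS
```

The SMS form is used when the data set is SMS-managed; otherwise, specify the volume serial.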
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS security
OMVS segment
TCP/IP connectivity and other UNIX System Services on z/OS require that the address space that uses these services runs under a z/OS user ID or group name that is defined with an OMVS segment.
Defining your z/OS user ID or group name with an OMVS segment might require the use of the IBM RACF command ADDUSER/ALTUSER xxxxxx OMVS(UID(zzz)) or a
security product equivalent command. Review your z/OS Security Server documentation for more information.
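For example, using RACF, the commands take the following shape; the user ID, group name, and UID/GID values are placeholders, and a security product equivalent command can be used instead:

```
ADDUSER AUIAGNT OMVS(UID(4200))    /* new user ID with an OMVS segment    */
ALTUSER AUIAGNT OMVS(UID(4200))    /* or add the segment to an existing ID */
ALTGROUP AUIGRP OMVS(GID(4200))    /* optionally define the group as well  */
```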
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS security
TCP/IP connections
IBM Guardium S-TAP for IMS uses Transmission Control Protocol/Internet Protocol (TCP/IP) to connect to the Guardium appliance. To enable this communication, make
sure you have the correct permissions assigned.
If you are working from a secure communications port, enable the user ID that is associated with the agent started task to have READ/WRITE permissions on the ports
that are assigned to the agent.
See Using agent configuration keywords to customize auditing for more information about the ADS_LISTENER_PORT, APPLIANCE_PORT, and LOG_PRT_SCAN_START
configuration keywords.
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS security
The IBM Guardium S-TAP for IMS Online and DLI/DBB batch data collectors audit DLI events that occur in the IMS Online and DLI/DBB Batch regions. Audited DLI events
are written to z/OS System Logger log streams, which are then read by the IBM Guardium S-TAP for IMS agent. The IMS agent sends the audit data to the IBM Guardium
appliance by using TCP/IP connections.
You can now use an additional SAF resource to further secure the online and batch log streams. For example, you can now prevent the log streams from being read by a
user program or utility that is initiated by a user who is authorized to update to the log stream. Apply z/OS V2R3 and V2R4 APAR OA56050 to optionally add an additional
authority check for a SAF profile that covers resource (WRITE_ONLY_log-stream-name) in class LOGSTRM. This new profile option enables you to limit users to only
connecting to (IXGCONN REQUEST=CONNECT), writing to (IXGWRITE), and disconnecting from (IXGCONN REQUEST=DISCONNECT) the log stream. Other IXG calls, such
as IXGBRWSE (read), are rejected with return code 8 and reason code '081C'x. For more information, refer to the documentation provided in the HOLD data for APAR
OA56050.
Note: User IDs that are associated with the IBM Guardium S-TAP for IMS agent must have authority to read and delete data from the log stream and should not be limited
by using resource (WRITE_ONLY_log-stream-name). Log stream UPDATE authority is recommended for the IBM Guardium S-TAP for IMS agents.
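A sketch of the RACF definitions described above; the log stream name and user ID are illustrative only:

```
/* Give the agent's user ID UPDATE authority on the log stream itself: */
RDEFINE LOGSTRM AUI.DLIO.LOGSTRM UACC(NONE)
PERMIT AUI.DLIO.LOGSTRM CLASS(LOGSTRM) ID(AUIAGNT) ACCESS(UPDATE)

/* With APAR OA56050 applied, an additional WRITE_ONLY_ profile can limit
   other users to connect, write, and disconnect calls (see the APAR HOLD
   data for the exact access rules): */
RDEFINE LOGSTRM WRITE_ONLY_AUI.DLIO.LOGSTRM UACC(NONE)

SETROPTS RACLIST(LOGSTRM) REFRESH
```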
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS security
Consult your security administrator to determine what is currently protected and how to grant the required access.
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS security
VSAM access to the RECON data sets is READ-ONLY, allowing the IBM Guardium S-TAP for IMS jobs and started tasks with a security access of READ to process the
RECON data sets.
Consult your security administrator to determine how your RECON data sets are protected, and how to grant the required access.
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS security
Operator commands
You can use z/OS Operator commands to start IBM Guardium S-TAP for IMS tasks.
The user ID that is assigned to the IBM Guardium S-TAP for IMS agent started task must be permitted to issue START commands to initiate the AUIFstc, AUILstc, and
AUIUstc tasks. During installation, administrators can configure the z/OS security product to restrict users and programs from issuing z/OS Operator commands.
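As an illustration, RACF OPERCMDS profiles of the following form are commonly used to control who can issue START commands; the profile and user ID names are placeholders:

```
/* Permit the agent's user ID to issue START for its secondary tasks: */
RDEFINE OPERCMDS MVS.START.STC.AUI* UACC(NONE)
PERMIT MVS.START.STC.AUI* CLASS(OPERCMDS) ID(AUIAGNT) ACCESS(UPDATE)
SETROPTS RACLIST(OPERCMDS) REFRESH

/* Starting the agent primary address space from the z/OS console: */
S AUIASTC
```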
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS security
Quarantining a user of a specific IMS subsystem means that for the specified time period, the quarantined user is not able to run DB DLI calls either by using the targeted
IMS subsystem, or while running DLI/DBB batch jobs.
If a quarantined user attempts access during a restricted time, the DLI call is not performed, and a status code of AI is returned in the DBPCB status code field.
To create quarantine rules, access the Policy Builder from the Tools and Views section of the Guardium appliance interface Setup menu.
Note:
DLI calls that are made to IMS Fast Path databases by using IMS Fast Path exclusive transactions or BMPs cannot be quarantined.
Quarantine does not take effect immediately. The audited DLI call that produces the event to trigger the quarantine is completed before the quarantine takes effect.
It is possible for DLI calls to be run by the quarantined user before the quarantine takes effect.
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS security
Configuration overview
These actions are required to configure IBM Guardium S-TAP for IMS.
Review the following steps, which are described in greater detail in the following sections:
Verify that you have the resource authorizations that are required to configure the product.
Note: No WLM (Workload Manager) considerations are necessary. All agent started tasks use the STC WLM class.
Procedure
1. Deactivate or uninstall all policies that apply to the agent that you are upgrading.
2. Shut down the agent that you are upgrading.
3. Customize the AUIMIG10 SAMPLIB member to convert the configuration file and repository to V10.1.3 format, and submit.
The comments that are contained in the AUIMIG10 SAMPLIB member describe how to customize the JCL. A V10.1.3 format configuration file, and an IMS definition
report will be produced.
4. Use the IMS definition report, which is produced by the AUIMIG10 utility, to add the IMS definitions to your IBM Guardium system.
5. Update the new configuration file, which is produced by the AUIMIG10 utility, with any changes.
6. Update the AGENT (AUIASTC) and Memory Management Utility (AUIUSTC) JCLs as follows:
a. Remove the //AUICFG DD JCL statement.
b. Add a //AUICONFG DD JCL statement, and set it to reference the new configuration member produced by the AUIMIG10 utility.
c. Change the //STEPLIB DD JCL statement to reference the V10.1.3 product load library (SAUILOAD).
d. Remove the //AUIREPOS DD JCL statement from the AUIUSTC JCL.
7. Update the SMF (AUIFSTC) and IMS Archive Log (AUILSTC) JCLs as follows:
a. Remove the //AUICFG DD JCL statement, and any procedure parameters that reference it.
b. Change the //STEPLIB DD JCL statement to reference the V10.1.3 product load library (SAUILOAD).
8. Update the IMS Control region JCLs that are audited by the agent to use the V10.1.3 product IMS load library (SAUIIMOD).
9. Update the IMS DBBBATCH and DLIBATCH cataloged procedures, and any equivalent JCL members, to use the V10.1.3 product IMS load library (SAUIIMOD).
10. Start the agent.
11. Install or activate the policies that you want to apply.
12. Stop and restart your IMS systems.
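The DD statement changes in steps 6 and 7 amount to JCL edits of the following shape; the data set names are placeholders for your site's values:

```
//* After the upgrade: reference the V10.1.3 load library and the new
//* configuration member. The old //AUICFG DD statement is removed
//* (and //AUIREPOS is removed from the AUIUSTC JCL).
//STEPLIB  DD DISP=SHR,DSN=AUI.V10R1M3.SAUILOAD
//AUICONFG DD DISP=SHR,DSN=AUI.V10R1M3.CNTL(AUICONFG)
```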
What to do next
Now, you can:
Install additional policies on the z/OS host by using the IBM Guardium system user interface.
Manage agent and IMS definitions by using the IBM Guardium system user interface.
Note: The format of the data that is written to the z/OS log streams has changed from V9.0 to V10.1.3. IBM Guardium S-TAP for IMS V10.1.3 converts any existing V9.0 data from existing log streams to a usable format. If you migrate from a V10.1.3 system back to a V9.0 system, you must reinitialize the z/OS log streams before restarting InfoSphere Guardium S-TAP for IMS V9.0.
Parent topic: Configuration overview
Tip: To upgrade to IBM Guardium S-TAP for IMS from a previous version, refer to the appropriate topic:
If you are upgrading from a previous version to V10.1.3, no further configuration steps are required. Upgrading to V10.1.3 requires the use of, and modifications to, the
same agent name and JCLs that were used with previous versions. For your reference, see the Sample library members table.
Before you configure a new installation of IBM Guardium S-TAP for IMS V10.1.3, determine the following:
The user IDs that will be used to run the agent started tasks
Where the agent started tasks will run
Then, customize the ISPF edit macro, review the job card requirement, and set up the z/OS log streams, as described in the following sections.
Procedure
1. To set up the edit macro, copy AUIEMAC1 from the #HLQ.SAUISAMP to a CLIST library.
2. Edit the macro by providing the appropriate values for each of the variables.
3. To run the macro, type the name of the edit macro in the command line in ISPF.
Results
After you modify the edit macro, you can use it as a command to customize other SAMPLIB members in the following steps, unless otherwise specified.
Example
The contents of the edit macro AUIEMAC1 included in the SAMPLIB are as follows:
This table describes each variable in the edit macro AUIEMAC1 included in the SAMPLIB:
A valid job card conforming to your site's JCL standards must be provided before submitting any of the JCL.
Parent topic: Planning your configuration and customizing your environment
one log stream for DLI events generated by IMS Control regions
one log stream for DLI events generated by DLI/DBB batch jobs
Log streams cannot be shared between agents. Each log stream name must be unique.
It is recommended that XCF-based log streams be used whenever possible, because this type of log stream is accessible from any LPAR within a sysplex and has performance benefits. For more information about log streams, refer to the IBM publication: System Programmer's Guide to: z/OS System Logger.
Important:
The USERID your IMS online control region runs under must have WRITE access to the log stream.
If DLI/DBB batch jobs run under a common USERID, that USERID must have WRITE permission to the log stream.
The USERID under which the DLI Event Collector (AUIASTC task) executes must have READ/WRITE access to the log streams.
If individual users are permitted to run DLI/DBB batch jobs under their own USERID, a universal access of WRITE is recommended for the log stream.
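With RACF, the access rules above translate to LOGSTRM-class permits of the following shape; the log stream and user ID names are illustrative:

```
RDEFINE LOGSTRM AUI.DLIO.LOGSTRM UACC(NONE)
PERMIT AUI.DLIO.LOGSTRM CLASS(LOGSTRM) ID(IMSCTLID) ACCESS(UPDATE) /* online region: WRITE      */
PERMIT AUI.DLIO.LOGSTRM CLASS(LOGSTRM) ID(AUIAGNT)  ACCESS(UPDATE) /* agent: READ/WRITE         */

/* Batch log stream; UACC(UPDATE) if users run DLI/DBB jobs under their own IDs: */
RDEFINE LOGSTRM AUI.DLIB.LOGSTRM UACC(UPDATE)
PERMIT AUI.DLIB.LOGSTRM CLASS(LOGSTRM) ID(AUIAGNT)  ACCESS(UPDATE)

SETROPTS RACLIST(LOGSTRM) REFRESH
```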
AUILSTR1
Two JCL members in the SAUISAMP product data set are included to assist in the definition of XCF-based log streams.
This JCL is used to define the XCF structures to a CFRM policy needed by the log streams used by the DLI/DBB batch and IMS online control regions. Detailed instructions
are in the comments of the JCL.
Note: Additions of structures to a CFRM policy are cumulative, and running this JCL without considering previously defined structures within the CFRM policy can result in the loss of existing CFRM structure definitions. It is highly recommended that a systems programmer customize and submit this JCL.
There are two DEFINE STRUCTURE sections for this JCL: one for the batch structure, and one for the online structure. The following values must be customized for the
batch structure:
Do not change any other values, such as SIZE, INITSIZE, and ALLOWAUTOALT, without carefully considering the impact that your changes will have on performance and data integrity.
AUILSTR2
This JCL is used to add the XCF-based log streams to a LOGR policy used by the IMS Control region and DLI/DBB batch jobs. Detailed instructions are in the comments of the JCL.
Note: Additions of structures to a CFRM policy are cumulative, and running this JCL without considering previously defined structures within the CFRM policy can result in the loss of existing CFRM structure definitions. It is highly recommended that a systems programmer customize and submit this JCL.
There are two DEFINE STRUCTURE sections for this JCL: one for the batch structure and log stream, and one for the online structure.
The name of this log stream is used as input to the Batch DLI Log Stream Name field when defining log streams to the agent. Use the LOG_STREAM_DLIB keyword of the
configuration member that is specified by the AUICONFG DD statement of the agent (AUIASTC) JCL. The LOGSNUM, MAXBUFSIZE and AVGBUFSIZE should not be
changed from the default values.
These parameters indicate the SMS classes to be used when the System logger allocates a staging data set for the log stream. The IBM publication, System Programmer's
Guide to: z/OS System Logger contains recommendations and considerations for the choice of these parameters.
These parameters indicate the SMS classes to be used when the System logger allocates an offload data set for the log stream. The IBM publication, System Programmer's
Guide to: z/OS System Logger contains recommendations and considerations for the choice of these parameters.
The default value is 13500 (the number of 4K blocks). The IBM publication, System Programmer's Guide to: z/OS System Logger contains recommendations and
considerations for the choice of this size. When auditing in a large test or production environment, a value of 40500 might improve throughput.
The High level qualifier of the offload and staging data sets
(HLQ or EHLQ)
The HLQ and EHLQ are mutually exclusive and only one can be used. Other parameters found in the batch structure and online log stream definitions might have a "do not change" comment. These parameters contain the recommended values and should not be altered without careful consideration of the impact of changes to log stream performance and data integrity. The IBM publication, System Programmer's Guide to: z/OS System Logger, contains recommendations and considerations for each potential parameter.
You must customize the following values for online structure and log stream processing:
The LOGSNUM, MAXBUFSIZE and AVGBUFSIZE should not be changed from the default values.
DEFINE LOGSTREAM values:
The name of this log stream is used as input to the Online DLI Log Stream Name field when defining log streams to the agent. Use the LOG_STREAM_DLIO keyword of the
configuration member specified by AUICONFG DD statement of the agent (AUIASTC) JCL.
These parameters indicate the SMS classes to be used when the System logger allocates a staging data set for the log stream. The IBM publication, System Programmer's
Guide to: z/OS System Logger contains recommendations and considerations of each potential parameter.
These parameters indicate the SMS classes to be used when the System logger allocates an offload data set for the log stream. The IBM publication, System Programmer's
Guide to: z/OS System Logger contains recommendations and considerations of each potential parameter.
The default value is 13500 (the number of 4K blocks). The IBM publication, System Programmer's Guide to: z/OS System Logger contains recommendations and
considerations for the choice of this size. When auditing in a large test or production environment, a value of 40500 might improve throughput.
The High level qualifier of the offload and staging data sets
(HLQ or EHLQ)
The HLQ and EHLQ are mutually exclusive and only one can be used. Other parameters found in the batch structure and online log stream definitions might have a "do not change" comment. These parameters contain the recommended values and should not be altered without careful consideration of the impact of changes to log stream performance and data integrity. The IBM publication, System Programmer's Guide to: z/OS System Logger, contains recommendations and considerations for each potential parameter.
Parent topic: Setting up z/OS log streams
DASD-based log streams can only be accessed from one LPAR at a time. Any IMS Online Control regions and DLI/DBB batch jobs to be audited must run on the same LPAR as the agent.
One JCL member in the SAUISAMP product data set is included to assist in the definition of DASD-based log streams.
AUILSTR3
This JCL is used to add the DASD-based log streams to a LOGR policy used by the IMS Control region and DLI/DBB batch jobs. Detailed instructions can be found within the comments of the JCL.
Note: It is highly recommended that a systems programmer customize and submit this JCL.
There are two DEFINE STRUCTURE sections in this JCL: one for the batch structure, and one for the online structure. Values that must be customized for IMS batch log stream processing are as follows:
DEFINE LOGSTREAM values:
The name of this log stream is used as input to the Batch DLI Log Stream Name field when defining log streams to the agent. Use the LOG_STREAM_DLIB keyword of the configuration member that is specified by the AUICONFG DD statement of the agent (AUIASTC) JCL.
These parameters indicate the SMS classes to be used when the System logger allocates a staging data set for the log stream. Other parameters found in the batch structure and online log stream definitions might have a "do not change" comment. These parameters contain the recommended values and should not be altered without careful consideration of the impact of changes to log stream performance and data integrity. For more information, the IBM publication, System Programmer's Guide to: z/OS System Logger, contains recommendations and considerations for the choice of these parameters, and can be found on the IBM Information Center.
These parameters indicate the SMS classes to be used when the System logger allocates an offload data set for the log stream. For more information, the IBM publication,
System Programmer's Guide to: z/OS System Logger contains recommendations and considerations for the choice of these parameters, and can be found on the IBM
Information Center.
A value of 13500 (the number of 4K blocks) is the default value. For more information, the IBM publication, System Programmer's Guide to: z/OS System Logger, contains recommendations and considerations for the choice of this size, and can be found on the IBM Information Center.
The High level qualifier of the offload and staging data sets
(HLQ or EHLQ)
The HLQ and EHLQ are mutually exclusive and only one can be used. For more information, the IBM publication, System Programmer's Guide to: z/OS System Logger
contains recommendations and considerations of each potential parameter, and can be found on the IBM Information Center.
Values that must be customized for IMS online processing include the following:
The name of this log stream is used as input to the Online DLI Log Stream Name field when defining log streams to the agent using the Guardium user interface.
These parameters indicate the SMS classes to be used when the System logger allocates a staging data set for the log stream. For more information, the IBM publication,
System Programmer's Guide to: z/OS System Logger contains recommendations and considerations for the choice of these parameters, and can be found on the IBM
Information Center.
These parameters indicate the SMS classes to be used when the System logger allocates an offload data set for the log stream. For more information, the publication,
System Programmer's Guide to: z/OS System Logger contains recommendations and considerations for the choice of these parameters, and can be found on the IBM
Information Center.
A value of 13500 (the number of 4K blocks) is the default value. For more information, the IBM publication, System Programmer's Guide to: z/OS System Logger, contains recommendations and considerations for the choice of this size, and can be found on the IBM Information Center.
The High level qualifier of the offload and staging data sets
(HLQ or EHLQ)
The HLQ and EHLQ are mutually exclusive and only one can be used. Other parameters found in the batch structure and online log stream definitions might have a "do not change" comment. These parameters contain the recommended values and should not be altered without careful consideration of the impact of changes to log stream performance and data integrity. For more information, the IBM publication, System Programmer's Guide to: z/OS System Logger, contains recommendations and considerations for each potential parameter, and can be found on the IBM Knowledge Center.
Parent topic: Setting up z/OS log streams
Configuring the IBM Security Guardium S-TAP for IMS on z/OS agent
This section describes the information necessary for configuring the agent.
The agent has a primary agent address space that runs as a started task (AUIASTC) and multiple secondary address spaces (AUIFSTC, the SMF collector; AUILSTC, the IMS log collector; AUIUSTC, the Common Storage Management Utility) that are automatically started and stopped by the primary address space.
The agent primary address space reads the configuration file specified by the AUICONFG DD statement in the AUIASTC JCL, and passes the appropriate configuration
information to the associated AUIFSTC and AUILSTC tasks. The AUIUSTC JCL requires the same configuration file to be specified as was specified for the AUIASTC task.
Use the AUICONFG DD statement to specify the configuration file.
The SAUISAMP member AUICONFG provides a sample configuration that can be used by the agent primary address space started task.
Refer to the following instructions about the AUICONFG data set or the instructions in the data sets to complete the next steps.
Note:
The data set must be edited using the EBCDIC encoding (1047 CCSID).
It is recommended that you make a copy of the AUICONFG from SAUISAMP and customize it for use by a given agent.
Required parameters
The following parameters must be manually configured:
APPLIANCE_SERVER
LOG_STREAM_DLIB
LOG_STREAM_DLIO
SMF_DSN_MASK
SMF_SPILL_FILE
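As a minimal sketch, a configuration member that supplies only the required parameters might look like the following; every value shown is a site-specific placeholder, not a product default:

```
APPLIANCE_SERVER(guardium.example.com)
LOG_STREAM_DLIB(AUI.DLIB.LOGSTRM)
LOG_STREAM_DLIO(AUI.DLIO.LOGSTRM)
SMF_DSN_MASK(SYS1.SMF.DUMP.*)
SMF_SPILL_FILE(AUI.SMF.SPILL)
```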
Syntax: ADS_SHM_ID(Shared_Memory_label)
Example: ADS_SHM_ID(100010)
ADS_LISTENER_PORT
Required: No
Default: 39987
Description: This keyword is optional when only one agent exists in a sysplex environment. If more than one agent exists, the configuration file for each agent
should have this keyword specified with a unique port number specified. This keyword identifies an agent-specific communications port between the agent
(AUIASTC) and the agent secondary address spaces (AUIFSTC, AUILSTC). Valid port numbers are 1 - 65535. Check with your network administrator for a list of
ports available for this use.
Note:
Syntax: ADS_LISTENER_PORT(port_number)
Example: ADS_LISTENER_PORT(16055)
APPLIANCE_SERVER
Required: Yes
Default: None
Description: The host name or IP address (in dotted decimal notation, for example: 1.2.3.4) of the IBM Guardium system to which the agent (AUIASTC) should
connect.
Note: This parameter must be correctly configured to enable a connection to the IBM Guardium system. This value can contain up to 128 characters.
Syntax: APPLIANCE_SERVER(hostname|IP_address)
Example:
APPLIANCE_SERVER(wal-vm-guardium20)
APPLIANCE_SERVER(192.168.2.205)
APPLIANCE_SERVER_[1-5]
Required: No
Default: None
Description: Enables alternative host names or TCP/IP addresses to be used for multistream Guardium appliance destinations or failover recovery processing. Up
to five alternative host names or TCP/IP addresses are supported.
To specify one or more entries, include this parameter with a numeric suffix from 1 - 5. Provide a unique host name or TCP/IP address for each entry.
Valid values are any valid host name or TCP/IP address.
Note:
The use of this keyword does not eliminate the need for the APPLIANCE_SERVER keyword.
The APPLIANCE_SERVER_LIST parameter designates how this parameter is used.
If used in combination, this parameter overrides the APPLIANCE_SERVER_[MULTI_STREAM|FAILOVER|HOT_FAILOVER]_[1-5] parameter.
Syntax:
APPLIANCE_SERVER_n(hostname|IP_addr)
where n can be 1, 2, 3, 4, or 5.
Example:
APPLIANCE_SERVER_1(nwt-vm-guardium3)
APPLIANCE_SERVER_1(192.168.2.205)
The use of this keyword does not eliminate the need for the APPLIANCE_SERVER keyword.
If this parameter, or the APPLIANCE_SERVER_[1-5] parameter, is not detected at startup, then neither failover nor hot failover processing is activated.
The APPLIANCE_SERVER_LIST parameter designates how this parameter is used.
If used in combination, this parameter is overridden by the APPLIANCE_SERVER_[1-5] parameter.
Syntax:
APPLIANCE_SERVER_[MULTI_STREAM|FAILOVER|HOT_FAILOVER]_n(hostname|IP_address)
where n can be 1, 2, 3, 4, or 5.
Example:
APPLIANCE_SERVER_MULTI_STREAM_1(wal-vm-guardium20)
APPLIANCE_SERVER_FAILOVER_1(nwt-vm-guardium8)
APPLIANCE_SERVER_HOT_FAILOVER_1(wal-vm-guardium16)
APPLIANCE_SERVER_MULTI_STREAM_1(192.168.2.201)
APPLIANCE_SERVER_FAILOVER_1(192.168.2.202)
APPLIANCE_SERVER_HOT_FAILOVER_1(192.168.2.203)
APPLIANCE_SERVER_LIST(MULTI_STREAM|FAILOVER|HOT_FAILOVER)
Required: No
Default: FAILOVER
Description: Set APPLIANCE_SERVER_LIST to MULTI_STREAM for a Guardium appliance connection to be established for each server that is identified by the
APPLIANCE_SERVER_MULTI_STREAM_n parameter.
If a connection is lost, S-TAP audit events continue to transmit over the remaining appliance connection.
Lost connections are retried at regular intervals that are determined by multiplying the APPLIANCE_CONNECT_RETRY_COUNT by the
APPLIANCE_PING_RATE.
Set APPLIANCE_SERVER_LIST to FAILOVER for one Guardium appliance connection to be active at a time.
If the connection to the primary appliance is lost, a failover action occurs, which results in an attempt to connect to the next available server. The next
available server is identified by the APPLIANCE_SERVER_FAILOVER_n parameter. The agent attempts to connect to subsequent Guardium systems,
beginning with APPLIANCE_SERVER_FAILOVER_1 and ending with APPLIANCE_SERVER_FAILOVER_5.
After a failover action occurs, the connection to the primary server is retried at regular intervals that are determined by multiplying the
APPLIANCE_CONNECT_RETRY_COUNT by the APPLIANCE_PING_RATE.
Set APPLIANCE_SERVER_LIST to HOT_FAILOVER to cause connection types for each connected Guardium appliance identified by the
APPLIANCE_SERVER_HOT_FAILOVER_n parameter to be kept active by pings.
You must specify the primary Guardium appliance by using the APPLIANCE_SERVER parameter.
If the primary Guardium appliance becomes unavailable and failover occurs, HOT_FAILOVER maintains the activity of the primary appliance policy.
With any setting of APPLIANCE_SERVER_LIST, if all connections fail, and a spill file is specified (parameter OUTAGE_SPILLAREA_SIZE), events are buffered to the
spill file until a connection becomes available. If no spill file is specified, and all connections are lost, data loss occurs.
The default is FAILOVER.
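For example, a failover configuration with one primary appliance and two alternates could be sketched as follows; the IP addresses are illustrative:

```
APPLIANCE_SERVER(192.168.2.201)
APPLIANCE_SERVER_LIST(FAILOVER)
APPLIANCE_SERVER_FAILOVER_1(192.168.2.202)
APPLIANCE_SERVER_FAILOVER_2(192.168.2.203)
```

If the connection to 192.168.2.201 is lost, the agent tries 192.168.2.202, then 192.168.2.203, and retries the primary at intervals determined by APPLIANCE_CONNECT_RETRY_COUNT multiplied by APPLIANCE_PING_RATE.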
APPLIANCE_PORT
Required: No
Default: 16022
Valid ports: 16022 or 16023
Description: The IP port number of the IBM Guardium system to which the IBM Guardium S-TAP for IMS agent should connect. This parameter must be correctly
configured to enable a connection to the IBM Guardium system. If port 16023 is used, encryption support is required for the connection to the appliance.
Note: Specifying this keyword and parameter designates the port on which the IBM Guardium system is listening for the S-TAP. The port is dedicated to the IP
address of the appliance. Port 16022 or 16023 can also be in use on z/OS by another application.
Syntax: APPLIANCE_PORT(port_number)
Example: APPLIANCE_PORT(16022)
APPLIANCE_PING_RATE
Required: No
Default: 5
Description: Specifies the interval time between accesses to the IBM Guardium system to prevent timeout disconnections during idle periods. The value is in
number of seconds.
Syntax: APPLIANCE_PING_RATE(ping_interval)
Example: APPLIANCE_PING_RATE(5)
APPLIANCE_NETWORK_REQUEST_TIMEOUT
Required: No
Default: 500
Description: Specifies a value in milliseconds of time to wait for the completion of a network communication request to send or receive. A value of 0 results in no
timeout period. Range: 0 or 500 - 12000.
Syntax: APPLIANCE_NETWORK_REQUEST_TIMEOUT(milliseconds)
Example: APPLIANCE_NETWORK_REQUEST_TIMEOUT(500)
AUIU_EXCLUDE_LPAR
Required: No
Default: None
Description: Specifies a list of LPAR names (one to eight characters) in a SYSPLEX environment where the Common Storage Management Utility (AUIUSTC) should
not be scheduled. Multiple AUIU_EXCLUDE_LPAR statements can be specified to allow for LPAR name strings that are longer than 53 bytes.
Note: Use this keyword with caution. DLI calls run on the excluded LPARS are not audited.
DISPLAY_IMSMSG_DLIB(Y|N)
Required: No
Default: N
Description: Controls the output of informational messages in the AUILOG output DD of the AUIASTC agent address space that are generated from batch DLI data passed to the agent from the DLIB z/OS log stream.
The default setting, N, prevents these messages from being written to the AUILOG DD.
Syntax: DISPLAY_IMSMSG_DLIB(Y|N)
Example: DISPLAY_IMSMSG_DLIB(Y)
DISPLAY_IMSMSG_DLIO(Y|N)
Required: No
Default: N
Description: Controls the output of informational messages AUIJ255I, AUIJ256I, AUIJ257I, and AUIJ258I in the AUILOG output DD of the AUIASTC agent address
space. These messages are generated from data that is produced by the IMS Control Region and passed to the agent from the DLIO z/OS log stream.
The default setting, N, prevents these messages from being written to the AUILOG DD.
Syntax: DISPLAY_IMSMSG_DLIO(Y|N)
Example: DISPLAY_IMSMSG_DLIO(Y)
DLIFREQ
Required: No
Default: 100K
Description: Enables you to customize the number of DLI calls that are sent to the Guardium appliance before message AUIJ012I (providing a count of the number
of events sent to appliance) is issued.
The count can be represented in thousands (K) or millions (M). Valid values are 10K – 999K and 1 – 10M.
Syntax: DLIFREQ(100K)
Example: DLIFREQ(100K)
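The stated value ranges can be checked with a small sketch (not product code); parse_dlifreq is a hypothetical helper that converts a DLIFREQ value into an event count:

```python
def parse_dlifreq(value):
    """Convert a DLIFREQ value such as '100K' or '2M' into an event count.

    Per the keyword description, valid values are 10K-999K and 1M-10M.
    Illustrative sketch only; the "1 - 10M" range is read here as 1M-10M.
    """
    value = value.strip().upper()
    unit = value[-1]
    if unit == "K":
        n = int(value[:-1])
        if not 10 <= n <= 999:
            raise ValueError("DLIFREQ K values must be 10K-999K")
        return n * 1_000
    if unit == "M":
        n = int(value[:-1])
        if not 1 <= n <= 10:
            raise ValueError("DLIFREQ M values must be 1M-10M")
        return n * 1_000_000
    raise ValueError("DLIFREQ requires a K or M suffix")
```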
FORCE_LOG_LIMITED
Required: No
Default: N
Description: Enables you to force limited audit logging by removing sensitive information (such as IMS segment data and concatenated key values) from data that
is sent to the Guardium appliance by the S-TAP.
Specify Y to restrict sensitive data from being sent to the Guardium appliance.
Syntax: FORCE_LOG_LIMITED(Y|N)
Example: FORCE_LOG_LIMITED(N)
IMSL_AUDIT_LEVELS
Required: No
Default: ALL
Description: Specifies the events to be audited from those that are found using the IMS Archive Log task (AUILSTC) for each IMS instance under control of this
agent. A specification other than ALL limits auditing to the events you specify.
For example, if you specify USERS, then all audited IMS instances under the agent report user signons and signoffs. If you specify ALL, you can use the Guardium
interface to specify further limitations on what is audited for each audited IMS subsystem.
IMSL_SLDS_SRCH
Description: Specifies the number of days prior to the current day that the IMS Archive Log task (AUILxxxx) searches for IMS SLDS.
If an IMS checkpoint does not exist for the SLDS reader, AUILxxxx will search for IMS SLDS that were created on the current day and for x days prior to the
current day (where x is the value that you set for this parameter).
If an IMS checkpoint that is set for the SLDS reader exceeds the number of days between the current day and the value that you set for this parameter, then
the IMS checkpoint will be used as the starting point for IMS SLDS to be read and processed.
If you set a value of 0 (zero) for this parameter, then only the current day's IMS SLDS will be processed. Also, IMS SLDS that were migrated from a
hierarchical storage manager product will not be recalled for processing.
Note: If you set a value of 0 (zero) for this parameter, AUILxxxx processing will omit any IMS SLDS that were created on the previous day. This can cause data
to be missed if, for example, the AUILxxxx task is run at 12:05 AM. IMS SLDS that were created prior to midnight will not be recognized as being within the
current day, and thus will not be processed.
Syntax: IMSL_SLDS_SRCH(number_of_days)
Example: IMSL_SLDS_SRCH(15)
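One reading of the checkpoint rules above can be sketched as follows; slds_search_start is a hypothetical helper that returns the earliest SLDS creation date to consider:

```python
from datetime import date, timedelta

def slds_search_start(today, search_days, checkpoint=None):
    """Earliest IMS SLDS creation date to process (sketch, one reading).

    No checkpoint: scan SLDS created today and `search_days` prior days.
    A checkpoint older than that window is used as the starting point,
    so SLDS between the checkpoint and the window start are not skipped.
    """
    window_start = today - timedelta(days=search_days)
    if checkpoint is not None and (today - checkpoint).days > search_days:
        return checkpoint
    return window_start
```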
LOG_FILTER(I/E)
Required: No
Default: I (include)
Description: Specifies whether to include or exclude messages that have been specified by the LOG_FILTER_MSG_ID parameter.
The default value, I, allows only the specified message IDs to be included in the AUILOG output stream. Message IDs that are not specified by the
LOG_FILTER_MSG_ID(messages) parameter will be suppressed. The default value should be used unless there is a specific business need to suppress
messages.
The optional value, E, suppresses the specified message IDs from the AUILOG output stream.
Tip: The E value should only be used if the LOG_FILTER_MSG_ID keyword has been customized to suppress specific messages. Do not use the optional value
(E) in conjunction with LOG_FILTER_MSG_ID(*) unless you want to prevent all messages from being written to the AUILOG output stream. Suppressing all
messages is not recommended.
Syntax: LOG_FILTER(include/exclude)
Example: LOG_FILTER(E)
LOG_FILTER_MSG_ID(messages)
Required: No
Default: * (all messages)
Description: Can be used in conjunction with the LOG_FILTER(I/E) parameter to suppress specific messages from being written to the AUILOG output stream.
Tip: The LOG_FILTER_MSG_ID(*) default value should only be used with the LOG_FILTER(I) default value. Do not specify LOG_FILTER(E) in conjunction with
LOG_FILTER_MSG_ID(*) unless you want to prevent all messages from being written to the AUILOG output stream. Suppressing all messages is not recommended.
Syntax: LOG_FILTER_MSG_ID(id1,id2,id3...)
Example: LOG_FILTER_MSG_ID(AUIZ014W)
LOG_PORT_SCAN_START
Required: No
Default: 41500
Description: Specifies the first communications port number to be checked for availability to be used for internal message logging communications. Use this
keyword if environmental conditions dictate that a sequential scan and test of ports from port numbers 41500 - 65535 should not be performed. You can override
the starting port with a port of your choice. This keyword and parameter can be used with the LOG_PORT_SCAN_COUNT keyword to limit the ports that are scanned
to a specific range.
Syntax: LOG_PORT_SCAN_START(port_number)
Example: LOG_PORT_SCAN_START(41500)
LOG_PORT_SCAN_COUNT
Required: No
Default: 10
Description: This keyword can be used in conjunction with the LOG_PORT_SCAN_START keyword to limit number of the ports that are scanned and tested for
availability. The integer specified (1 - 65535) represents the number of ports that should be scanned. If the port number specified by the LOG_PORT_SCAN_START
value plus the LOG_PORT_SCAN_COUNT value exceeds 65535, the scan terminates at port 65535.
Syntax: LOG_PORT_SCAN_COUNT(number_of_ports)
Example: LOG_PORT_SCAN_COUNT(1000)
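The interaction of the two keywords, including the clamp at port 65535, can be sketched as:

```python
def ports_to_scan(start=41500, count=10):
    """Ports checked for availability, per LOG_PORT_SCAN_START/COUNT.

    If start + count would run past 65535, the scan stops at 65535.
    Illustrative sketch only.
    """
    end = min(start + count - 1, 65535)
    return range(start, end + 1)
```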
LOG_STREAM_DLIB
Required: Yes
Default: None
Description: Specifies the name of the z/OS log stream to be used for batch DLI calls.
PREFER_IPV4_STACK
Required: No
Default: N
Description: If set to Y, this parameter causes a request to be issued to the Domain Name Server (DNS) for an IPV4 address for the hostname that is specified in
the APPLIANCE_SERVER parameter:
The DNS lookup request for an IPV4 address is attempted. If an IPV4 address is defined for the hostname, the DNS will respond with the value that will be
used to connect to the Guardium appliance.
If only an IPV6 address is defined at the DNS, then the DNS will respond with the IPV6 address that will be used to connect to the Guardium appliance.
If both IPV4 and IPV6 addresses are defined at the Guardium appliance, the DNS will respond with both addresses, and the IPV4 address will be used to
connect to the appliance.
If this parameter is set to N or omitted from the configuration, a request for an IPV6 address is issued to the DNS for the hostname that is specified by the
APPLIANCE_SERVER parameter.
Note: Whether or not this parameter is used, if the address returned from the DNS is not valid for the hostname, it will result in failure to connect to the appliance,
and the IBM Guardium S-TAP for IMS started task will terminate.
Syntax:
PREFER_IPV4_STACK(Y|N)
Example:
PREFER_IPV4_STACK(Y)
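The address-selection rules can be sketched as follows, assuming a symmetric fallback when only the non-preferred address family is defined (the text describes the IPv4-preferred case in detail; the N case is treated here as the mirror image):

```python
def choose_address(dns_answers, prefer_ipv4):
    """Pick the connection address per PREFER_IPV4_STACK (sketch).

    dns_answers: dict that may carry 'ipv4' and/or 'ipv6' entries.
    Returns None when the DNS gives no usable address, in which case
    the started task would fail to connect and terminate.
    """
    first, second = ("ipv4", "ipv6") if prefer_ipv4 else ("ipv6", "ipv4")
    return dns_answers.get(first) or dns_answers.get(second)
```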
SMF_AUDIT_LEVELS
Required: No
Default: ALL
Description: Specifies which events to audit of those found using the SMF task (AUIFSTC). A specification other than ALL limits the events to be audited to the
events you specify. For example, if DELETE is specified, then all audited IMS instances under the agent would only be capable of reporting data set DELETE events.
If ALL is specified, you can further limit what is audited for each audited IMS subsystem by using the user interface.
Table 3. SMF_AUDIT_LEVELS audit parameters and events
Parameter Audited event
ALL All events are audited (default)
UPDATE Data sets opened with UPDATE access
DELETE Data sets deleted
READ Data sets opened with READ access
CREATE Data sets created
ALTER Data sets opened with ALTER access
RACF® RACF violations on data sets
Syntax: SMF_AUDIT_LEVELS(ALL|UPDATE|DELETE|READ|CREATE|ALTER|RACF)
Example: SMF_AUDIT_LEVELS(ALL)
SMF_CYCLE_INTERVAL
Required: No
Default: 300
Description: Specifies the frequency (in minutes) that the SMF task (AUIFSTC) checks the z/OS catalog for new data sets, which meet the specified data set masks,
using the SMF_DSN_MASK keyword. This value should correspond to the frequency at which your z/OS system swaps SMF logging VSAM files (sometimes known as
SMF MANX|MANY) during a normal workday. For example, if the SMF logging files are swapped every 8 hours, the SMF_CYCLE_INTERVAL should be set to 480 (8
hours * 60 minutes). A value of zero can be specified to indicate that the agent should not start the AUIFSTC task and SMF auditing should not be performed. Valid
parameters are 0 – 1440.
Syntax: SMF_CYCLE_INTERVAL(time_in_minutes)
Example: SMF_CYCLE_INTERVAL(45)
SMF_DSN_MASK_[1-10]
Required: Yes
Default: None
Description: At least one instance of this keyword is required (SMF_DSN_MASK_1). This keyword provides a data set mask used to query the z/OS catalog for
sequential format data sets containing SMF data offloaded from the SMF log-files (MANX|MANY) using the IFASMFDP program. These sequential files can be the
original files created when offloading the MANX|MANY files, or a copy of these sequential files created by customizing and running AUISMFDF and AUISMFDP jobs
located in the product sample data set. In most environments, only one SMF_DSN_MASK would be specified, but up to 10 are allowed.
Table 4. Masking character rules
Character Rule
% Indicates that only one alphanumeric or national character can occupy that position
%%% Indicates that more than one character can be substituted, with the number of substitution characters being equal to the number of percent signs specified.
Example 1: specifying a GDG data set in the mask: If the AUISMFDP job has been customized to produce a GDG data set as the SORTOUT DD output data sets,
you can choose to specify the fully qualified GDG base name in the mask for system name field. For example, A.B.C. IBM Guardium S-TAP for IMS uses catalog
services to determine the names of all cataloged GDG entries under this name, for example:
A.B.C.G0001V00
A.B.C.G0002V00
A.B.C.G0003V00
Example 2: specifying a data set name explicitly: Provide the generation and version values as a mask. For example, A.B.C.G%%%%V%%. IBM Guardium S-TAP
for IMS uses catalog services to determine the names of all cataloged data sets that match this mask, for example:
A.B.C.G0021V00
A.B.C.G0022V00
A.B.C.G0023V00
Example 3: specifying a DSN using a DATE/TIME naming convention: If you have customized the AUISMFDP job to produce a data set name that contains date
and time values as qualifiers within the data set name as the SORTOUT DD output data sets, you can specify the data set name using a string of percent signs within
the date and time qualifier names. For example: HLQ.D%%%%%%.T%%%%%%.SMFDATA. IBM Guardium S-TAP for IMS uses catalog services to determine the
names of all cataloged data sets matching the mask, for example:
HLQ.D091122.T131000.SMFDATA
HLQ.D091123.T131100.SMFDATA
HLQ.D091124.T131200.SMFDATA
Examples:
SMF_DSN_MASK_1(AUI.SMF.DUMP.COPY)
SMF_DSN_MASK_2(AUI.SMF.DUMP.GDG.G%%%%V%%)
SMF_DSN_MASK_3(AUI.SMF.D%%%%%%.T%%%%%%.COPY)
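The '%' masking rules in Table 4 can be sketched as a regex translation; mask_to_regex and matches are hypothetical helpers, and the GDG base-name case in Example 1 is resolved through catalog services rather than through this character matching:

```python
import re

def mask_to_regex(mask):
    """Translate an SMF_DSN_MASK '%' mask into an anchored regex.

    Per Table 4, each '%' stands for exactly one alphanumeric or
    national (@, #, $) character; a run of '%' matches that many
    characters. Illustrative sketch only.
    """
    pattern = "".join(
        "[A-Z0-9@#$]" if ch == "%" else re.escape(ch) for ch in mask.upper()
    )
    return re.compile("^" + pattern + "$")

def matches(mask, dsn):
    return bool(mask_to_regex(mask).match(dsn.upper()))
```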
SMF_EVENT_EXPIRY
Required: No
Default: 5
Description: Specifies the number of days that incomplete SMF events should be retained in the SMF spill file. Incomplete SMF events are audited events that have
not yet received the associated SMF Type 30 record, which indicates that the step/job is complete, and contains information that is needed to complete the
reporting of the event. When an event exceeds the expiration date, it is flagged as incomplete, sent to the IBM Guardium system, and removed from the SMF spill
file. The valid range is 1 to 180 days.
Syntax: SMF_EVENT_EXPIRY(days)
Example: SMF_EVENT_EXPIRY(5)
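The retention rule can be sketched as:

```python
from datetime import date, timedelta

def is_expired(event_date, today, expiry_days=5):
    """True when an incomplete SMF event has outlived SMF_EVENT_EXPIRY.

    An expired event is flagged incomplete, sent to the IBM Guardium
    system, and removed from the SMF spill file. Sketch only.
    """
    return (today - event_date) > timedelta(days=expiry_days)
```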
SMF_PROC_NAME
Required: No
Default: AUIFSTC
Description: Specifies the PROCLIB member name that contains the SMF secondary address space JCL. This JCL is supplied as member name AUIFSTC in the
sample library (AUISAMP). If multiple agents are used within a sysplex, each agent requires a separate JCL for each AUIFSTC address space.
Syntax: SMF_PROC_NAME(auif_mbr_name)
Example: SMF_PROC_NAME(AUIFV91)
SMF_SELF_AUDIT
Required: No
Default: N
Description: Indicates whether to audit the accesses of IMS data sets that are used by the product to determine the names of IMS artifacts to be audited.
Examples of IMS data sets that can be accessed include RECON data sets and IMS archived logs (SLDS). A value of N indicates that these accesses should not be
audited. A value of Y indicates that these data sets should be considered for auditing.
Syntax: SMF_SELF_AUDIT(N|Y)
Example: SMF_SELF_AUDIT(N)
SMF_SPILL_FILE
Required: Yes
Default: None
Description: Specifies the DSN of a sequential format fixed block data set with a LRECL of 300. This data set is used to store incomplete audited SMF events.
Incomplete audited SMF events are events triggered by SMF records that have yet to encounter an SMF Type 30 record, indicating the step or job has completed.
The AUIFUSPL member of the SAUISAMP data set provides an example of the allocation specifications for this data set.
Syntax: SMF_SPILL_FILE(dsn)
Example: SMF_SPILL_FILE(AUI.V1013.SPILL)
TCPIP_BUFFER_SIZE
Required: No
Default: 32768
Description: Specifies the size of an internal buffer that is used to hold audited events in preparation for the TCP/IP send to the IBM Guardium system, and specifies
the size of the TCP/IP buffer. In most environments, the size of this buffer should not be changed.
Syntax: TCPIP_BUFFER_SIZE(buffer_size)
Example: TCPIP_BUFFER_SIZE(32768)
TRACE_CONFIG
Required: No
Default: ON
Description: TRACE_CONFIG(ON) enables IBM Guardium S-TAP for IMS configuration values to display by default at agent startup. You can optionally use this
keyword to disable the IBM Guardium S-TAP for IMS configuration value display. To prevent the displayed report of agent configuration parameters during agent
startup, specify TRACE_CONFIG(OFF).
Syntax: TRACE_CONFIG(ON|OFF)
Example: TRACE_CONFIG(OFF)
WTO_MSG
Required: No
Default: None
Description: Allows a user to request that specific informational, warning, or error messages written to the AUILOG DD statement of the agent (AUIASTC) or agent
secondary address spaces (AUIFSTC, AUILSTC or AUIUSTC) also be written to the Operator Console (WTO). This enables these messages to be recognized by an
automated operations tool, or provides higher operator visibility for these messages and allows appropriate action to be taken. Each message requires a separate
keyword, and each keyword must be specified on a separate line.
Syntax: WTO_MSG(msgnumber)
Example:
WTO_MSG(AUIJ011I)
WTO_MSG(AUIL607W)
WTO_MSG(AUIY006E)
XML_ECHO_AUILOG(Y|N)
Required: No
Default: N
Description: Indicates that when an audit policy is installed on an IBM Guardium system appliance, its corresponding XML is to be echoed to the AUILOG DD. If there
is more than one policy installed on the agent, the XML of each policy is echoed. If all installed policies are subsequently uninstalled, then the echoed XML reflects
that there are no installed policies. For more information about echoed XML statements, see XML statement definitions.
Syntax: XML_ECHO_AUILOG(Y|N)
Example: XML_ECHO_AUILOG(Y)
XML_ECHO_DATASET(Data_Set_Name[,Cylinders])
Required: No
Default: None
Description: Specifies a data set to which the audit policy XML is echoed, and optionally a number of cylinders to allocate for that data set.
If Data_Set_Name is intended to be a Generation Data Group (GDG), then it must be set as the GDG base name. The agent checks the system catalog to determine
whether Data_Set_Name exists and whether or not it is a GDG base name.
Data_Set_Name can contain z/OS system symbols such as &SYSNAME. To determine the names of the system symbols that are currently defined to the system,
issue the DISPLAY SYMBOLS command to the system console.
If Data_Set_Name does not exist, and there is no GDG base defined in this name, the agent allocates the data set as non-GDG. If Data_Set_Name is a regular
physical sequential data set (non-GDG based) and does exist, the agent allocates space for the Cylinders keyword when the agent is restarted.
Parent topic: Configuring the IBM Security Guardium S-TAP for IMS on z/OS agent
Related reference
Customizing IMS to use a System z Integrated Information Processor (zIIP)
Agent configuration
The IP addresses of the IBM Guardium system appliances are specified in the AUICONFG member of the SAUISAMP data set by using the APPLIANCE_SERVER and
APPLIANCE_SERVER_FAILOVER_[1-5] keywords.
Procedure
1. Edit SAUISAMP members AUIASTC, AUIFSTC, AUILSTC and AUIUSTC by running the ISPF edit macro.
See Planning your configuration and customizing your environment for more details.
2. Modify the CFG=AUI.V100.AGTCFG(AUICONFG) in AUIASTC to specify the location of the customized configuration data set for the agent created in the previous
section.
3. Optional: You can rename the AUIASTC member to any character name that is valid for started tasks in your environment.
4. Optional: You can rename the AUIFSTC, AUILSTC, and AUIUSTC members. The names should match the values of the SMF_PROC_NAME, IMSL_PROC_NAME, and
AUIU_PROC_NAME keywords, respectively, that you supply in the configuration file.
5. Copy the AUIASTC, AUIFSTC, AUILSTC and AUIUSTC members to the PROCLIB for the site.
Contact the z/OS systems programmer to determine the location of the PROCLIB.
Note: APF authorization of the AUILOAD file is required for each of these members before they are started.
Parent topic: Configuring the IBM Security Guardium S-TAP for IMS on z/OS agent
Stop the agent by issuing the command /STOP AUIASTC, or /MODIFY AUIASTC,STOP, from the SDSF command line. The primary agent address space then stops all the
secondary address spaces that are online, and shuts down. Depending on the load, and the activity in the other secondary address spaces, the shut down process can take
time. Monitor the AUILOG DD of the primary address space AUIASTC for informational messages on the status of the secondary address spaces.
Parent topic: Configuring the IBM Security Guardium S-TAP for IMS on z/OS agent
Important: Contact your system administrator to ensure that localhost is resolving to 127.0.0.1 (loopback address). The TCP/IP communication between the agent and
the secondary address spaces relies on this resolution. If this is not possible at your site, use the loop-back-address element in the AUICONFG sample library member to
avoid localhost resolution by specifying the loopback IP address directly, or by specifying an appropriate host name that resolves to the loopback address.
Parent topic: Configuring the IBM Security Guardium S-TAP for IMS on z/OS agent
Use the agent parameter keyword DLIFREQ to modify the frequency of AUIJ012I messages, or issue the command /MODIFY AGENT,SET CONFIG DLIFREQ aaaK | bbM,
from the SDSF command line.
Parent topic: Configuring the IBM Security Guardium S-TAP for IMS on z/OS agent
Note: The IBM Guardium S-TAP for IMS programs that are used to communicate with your IMS environments are found in the SAUIIMOD data set, and are created during
product installation.
Parent topic: Setting up an IMS environment for auditing
The IBM Guardium S-TAP for IMS programs that must be accessed reside in the SAUIIMOD installation data set. The preferred method of installing IBM Guardium S-TAP
for IMS into your IMS environment is to copy the entire contents of the SAUIIMOD data set into your IMS RESLIB (IMS.SDFSRESL) data set.
If copying IBM Guardium S-TAP for IMS programs into your IMS RESLIB is not possible, then the SAUIIMOD data set must be included in your IMS control region JCL as
the first data set of the STEPLIB DD concatenation. The SAUIIMOD data set must also be included as the first data set of the STEPLIB DD concatenation of the DLI batch
cataloged procedure (DLIBATCH member of the IMS PROCLIB data set) and the DBB batch cataloged procedure (DBBBATCH member of the IMS PROCLIB data set).
Note:
If the SAUIIMOD data set is included in any JCL, you must ensure that it is APF-authorized.
IBM Guardium S-TAP for IMS provides and uses the DFSFLGX0 and DFSISIV0 IMS exits to establish communication with IMS services, however no customization of
these exits is required.
IBM Guardium S-TAP for IMS supports the protocols used by the IMS Tools Generic Exit product. You can define the IBM Guardium S-TAP for IMS copy of the DFSFLGX0
exit by either supplying IMS with a PROCLIB member using a BPE-style control statement, or by building a load module that contains the required information.
See the IBM IMS Tools Generic Exit Reference Manual for Generic Logger Exit setup and usage.
Important: The IBM IMS Tools Generic Exit product does not support exit DFSISVI0.
When loaded and run, the IBM Guardium S-TAP for IMS supplied programs AUIFLGX0 (DFSFLGX0) and AUIISVI0 (DFSISIV0) determine the DSN within the
JOBLIB/STEPLIB concatenation from which they were loaded. They then search all subsequent DSNs within the JOBLIB/STEPLIB DD concatenation, looking for the next
occurrence of an exit with the same name.
If none are found, or it is determined that the IMS Tools Generic Exit product is involved in executing the exit, no cascading is done.
If an exit is found, and it is determined that the exit found is in fact another instance of the IBM Guardium S-TAP for IMS exit (as could happen if the SAUIIMOD data
set was specified multiple times in the JOBLIB/STEPLIB concatenation), the search will continue with the remainder of the DSNs in the concatenation.
If a non-IBM Guardium S-TAP for IMS Exit is found, this new exit is loaded, and called with R13 pointing to the save area supplied by IMS. A new 512 byte user
work area, obtained specifically for this exit instance, is then pointed to by the SXPLAWRK field of the IMS Standard User Exit Parameter List (DFSSXPL). This 512
byte work area is obtained when the first (or INIT) call is done; the work area address (in the SXPLAWRK field) and work area content are maintained for all
subsequent calls.
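The cascading search described above can be sketched as follows; the helper name and tuple layout are hypothetical:

```python
def find_cascade_target(concat, own_dsn, generic_exit_active=False):
    """Sketch of the DFSFLGX0/DFSISVI0 cascading search.

    concat: ordered (dsn, has_exit, is_guardium_copy) tuples for the
    JOBLIB/STEPLIB concatenation. The search starts after the DSN the
    Guardium exit was loaded from, skips further Guardium copies, and
    stops at the first foreign exit. With the IMS Tools Generic Exit in
    play, or when nothing is found, no cascading occurs.
    """
    if generic_exit_active:
        return None
    start = next(i for i, (dsn, _, _) in enumerate(concat) if dsn == own_dsn)
    for dsn, has_exit, is_guardium_copy in concat[start + 1:]:
        if has_exit and not is_guardium_copy:
            return dsn  # this exit is loaded and called
    return None
```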
In a non-APF-authorized environment, such as when executing program DFSULTR0 or an IMS DLI/DBB batch program, the exit load module to be cascaded to must have
an ALIAS, and the ALIAS must be DFSFLGX0 or DFSISVI0, as appropriate, if the target exit module has the RENT or REUS attribute on.
You must specify the exit name AUIFLGX0 in the list of LOGWRT exits to be used. This disables the cascading feature, which prevents other LOGWRT exits in the STEPLIB
from being unintentionally invoked. You must include the SAUIIMOD load library in the IMS Control Region STEPLIB concatenation.
Example:
<SECTION=USER_EXITS>
EXITDEF=(TYPE=LOGWRT,EXITS=(AUIFLGX0))
To use this feature, the LPAR on which the IMS Control region executes must have a zIIP installed. The IMS Control Region should also make use of the z/OS Workload
Manager product. For more information on using z/OS Workload Manager with the IMS Control Region, see the Workload Manager and IMS section of the IBM IMS System
Administration manual.
The following processes can be scheduled on the zIIP:
Calling of the compiled filter to determine if the DLI event is to be audited, and if the segment concatenated key or segment data should be sent to the Guardium
appliance.
Movement of the audited DLI calls to a storage buffer that is used to hold audited data until a write to the z/OS System Logger log-stream can be executed.
Calling of the z/OS System Logger IXGWRITE service, which moves the audited data from the buffer to the log-stream when the buffer fills or a flush of the buffer is
scheduled.
To indicate that the IMS Control region should attempt to schedule these processes on the zIIP, a //AUIZIIP DD DUMMY DD statement should be added to the IMS Control
Region JCL. When detected, the audit code produces the informational message AUII055I, indicating that zIIP processing will be attempted.
Warning messages AUII042W and AUII043W are issued if zIIP processing is requested when a zIIP is not available, and when IMS is not using Workload Manager. Error
message AUII044E indicates that the request was rejected. In all instances where the attempt to use the zIIP has failed, audit processing continues without attempting to
execute the audit code on the zIIP.
Related reference
Customizing the agent by using agent parameter keywords
AUI$NAP
Module used to trace data
Provided in the SAUILOAD data set
Also needed in the SAUIIMOD data set
AUICPMOD
An SAUISAMP member
Performs a copy of the AUI$NAP module from the SAUILOAD to the SAUIIMOD data set
Should be customized and submitted after the initial SMP/E installation
Procedure
1. Perform a Database Descriptor Generator (DBD gen) for the AUIAPPEV database.
An example of the DBD source to use is in member AUIAPPEV of the SAUISAMP data set.
2. Create a database data set for the AUIAPPEV database.
3. If appropriate for your site, register the DB and DDN to DBRC, specifying NONRECOV if possible.
4. If appropriate for your site, create a dynamic allocation (MDA) member for the database data set.
5. Modify application program PSBs to include a PCB for the AUIAPPEV database.
Use a PROCOPT of G and a KEYLENGTH of 0.
6. If the APP_EVENT feature is to be used by an IMS Online system, perform an ACBGEN for DBD member AUIAPPEV and the modified PSBs.
7. Modify application programs to send APP_EVENT information using the AUIAPPEV PCB:
a. In the 2000 byte I/O area, modify the application programs to include the information that you want to be sent to the appliance.
b. Perform a DLI GET call by using the AUIAPPEV PCB.
A DLI status code of blanks will be returned.
APP_EVENT examples
Examples of the AUIAPPEV database, a PSB with DBPCB for the AUIAPPEV database included, the Assembler language of an IMS DLI call, and a C program are
provided here. These code samples are for example purposes only. There is no guarantee of the reliability, serviceability, or function of these programming
examples.
APP_EVENT examples
AUIAPPEV database
The AUIAPPEV database is used to support the transmittal of environmental information from an application program to the Guardium appliance. The following is an
example:
DBD NAME=AUIAPPEV,ACCESS=(HDAM,OSAM),RMNAME=(DFSHDC40,10,20)
DATASET DD1=AUIAPPEV,SIZE=2048
SEGM NAME=ROOT,PARENT=0,BYTES=2000
DBDGEN
FINISH
END
C program
The following is an example of a C program:
int rc = 0;
const static char GU[] = "GU ";
struct {
    char output[2000];
} iodata;
....
....
/* create an APP_EVENT */
sprintf(iodata.output, "THIS IS AN APP_EVENT");
rc = ctdli(GU, aepcb, &iodata);
Required keywords
The following keywords must be set for the product to function:
APPLIANCE_SERVER
This is the host name, or IP address, of the IBM Guardium system to which the agent should connect.
LOG_STREAM_DLIO
This is the log stream name for online DLI calls.
LOG_STREAM_DLIB
This is the log stream name for batch DLI calls.
You can also audit accesses to database-related data sets using SMF records. To audit accesses to IMS data sets that occur outside of IMS services, use the following
keywords:
SMF_SPILL_FILE
Optional keywords
To set the following optional specifications, use the keyword that is listed. More information about each specification is provided, following this list.
Simulation mode
Simulation mode enables you to simulate agent processing. IBM Guardium S-TAP for IMS uses various z/OS MVS system services to gather audit data and move it
to the agent address space. The agent address space evaluates this data according to the specified policy, and transmits the audit record to the Guardium appliance
by using TCP/IP. To assess the impact on MVS processing, use the STAP_STREAM_EVENTS parameter to simulate data collection.
Specifying multiple SMF data set masks
You can use the SMF_DSN_MASK keyword to specify up to nine additional SMF data set masks.
Disabling SMF auditing at the agent level
You can use the SMF_CYCLE_INTERVAL keyword to disable SMF auditing at the agent level.
Simulation mode
Simulation mode enables you to simulate agent processing. IBM® Guardium® S-TAP® for IMS uses various z/OS MVS system services to gather audit data and move it
to the agent address space. The agent address space evaluates this data according to the specified policy, and transmits the audit record to the Guardium appliance by
using TCP/IP. To assess the impact on MVS processing, use the STAP_STREAM_EVENTS parameter to simulate data collection.
When STAP_STREAM_EVENTS is set to N, the parameter stops the agent TCP/IP data transmission process. The agent performs all data collection processes but does not
send the audit record to the Guardium appliance.
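The effect of simulation mode can be sketched as follows (illustrative only; collection and policy evaluation always run, and only the send is suppressed):

```python
def transmit(events, stap_stream_events="Y"):
    """Sketch of STAP_STREAM_EVENTS: with N, data collection still
    happens, but the TCP/IP send to the Guardium appliance is skipped."""
    collected = list(events)  # collection and policy evaluation
    sent = list(collected) if stap_stream_events == "Y" else []
    return collected, sent
```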
Specifying SMF_CYCLE_INTERVAL(0) turns off the auditing process that uses SMF records. The agent address space (AUIASTC) will not start the SMF auditing address space
(AUIFSTC).
To determine if any new, unread data sets match the specified SMF_DSN_MASK_x values, the SMF processing address space (AUIFSTC) periodically performs a query
against the z/OS catalog, looking for data sets to process. By default, this query is performed when the AUIFSTC task is started, and repeated every 300 minutes (5 hours).
To change the default time value, use the keyword SMF_CYCLE_INTERVAL(time in minutes). If you specify a time value of zero, the SMF auditing feature will be disabled.
Parent topic: Using agent configuration keywords to customize auditing
In some situations, such as a canceled job or end-of-memory events, a type 30 record is not produced for a step or job. To keep these types of records from filling your
SMF spill data set, you can set a time limit in days to determine how long incomplete SMF records are retained. The default value is 5 days and can be changed by
specifying the SMF_EVENT_EXPIRY keyword to indicate the number of days of your choice: SMF_EVENT_EXPIRY(number of days).
Parent topic: Using agent configuration keywords to customize auditing
AUIFSTC is the name of the JCL that provides auditing of data set accesses using SMF records. AUIFSTC is provided in the product installation sample data set
(SAUISAMP). If the name AUIFSTC conflicts with your site's naming convention standards, or if more than one agent is being used, you can change the name of this JCL.
Use the SMF_PROC_NAME keyword to change the member name from AUIFSTC to a name of your choice: SMF_PROC_NAME(new name).
Ensure that this JCL resides in a procedure data set (PROCLIB) that allows the z/OS START command S taskname to be used.
IBM Guardium S-TAP for IMS reads the IMS RECON data sets and system log data sets produced by IMS (SLDS) to obtain IMS environment information, such as IMS
artifact names. IMS artifact names determine the databases and data sets that are used to create audit information.
By default, IBM Guardium S-TAP for IMS does not report accesses of IMS artifacts. To obtain a report of these accesses, specify a value of Y using the SMF_SELF_AUDIT
keyword: SMF_SELF_AUDIT(Y).
Changing the types of events that are audited using SMF records
Use the SMF_AUDIT_LEVELS keyword to indicate a list of events to be audited, instead of collecting all event types.
When auditing using SMF records is enabled, the default action is to provide auditing for all of the following accesses to data sets: UPDATE, DELETE, READ, CREATE,
and ALTER accesses, and RACF violations (see Table 3).
To specify some, rather than all, of these events for auditing, specify each type of event to be audited by using the SMF_AUDIT_LEVELS keyword: SMF_AUDIT_LEVELS
(ALL|READ|UPDATE|DELETE|CREATE|ALTER|RACF).
Remember: This keyword affects the SMF auditing level for all IMS subsystems controlled by this agent. If you do not include READ accesses in the SMF_AUDIT_LEVELS
parameter, then no READ accesses will be reported for any of the IMS environments that are audited by using the agent.
Note: You can separate parameters for the collection of different event types. For example, to audit UPDATE and READ events, include the UPDATE and READ records as
follows:
SMF_AUDIT_LEVELS(UPDATE)
SMF_AUDIT_LEVELS(READ)
instead of:
SMF_AUDIT_LEVELS(UPDATE|READ)
To use alternate RECON data sets for SMF and SLDS processing:
1. Add a //AUIARCN DD statement to the AUIFSTC and AUILSTC JCLs, pointing to a data set that contains the name of the IMS system (as defined in the IMS Definition panel of the Guardium interface).
2. Add the alternate RECON data set names to be used when processing these two types of data sources.
Note: Specifying alternate RECON data set names only affects AUIFSTC and AUILSTC task processing. It has no effect on processing of any other tasks.
Use IDCAMS, or another VSAM-compatible method, to create cataloged, VSAM copies of your live RECON data sets.
The data set that is specified by the AUIARCN DD statement must be defined as Fixed Block (FB) with a record length of 80 bytes (LRECL=80), and it can be a PDS, PDS/E, or sequential file.
Example:
IMSNAME=IMSV14
RECON1=IMSEA1.ALT.RECON1
RECON2=IMSEA1.ALT.RECON2
RECON3=IMSEA1.ALT.RECON3
*
IMSNAME=IMSV13
RECON1=IMSDA1.ALT.RECON1
RECON2=IMSDA1.ALT.RECON2
Overriding the range of ports used for communication between address spaces
You can set the available port scan starting point and limit the number of ports to check for availability.
IBM Guardium S-TAP for IMS uses a communications port to pass messages between threads within each address space. The default port is 41500. If the address space
determines that port 41500 is not available for use, all subsequent ports up to 65535 are examined, and the first available port is used.
Some installations have restrictions on which ports should be examined and used. Use the LOG_PORT_SCAN_START and LOG_PORT_SCAN_COUNT keywords to set the available port scan starting point and limit the number of ports to be checked for availability:
LOG_PORT_SCAN_START(41501)
LOG_PORT_SCAN_COUNT(24003)
The sum of the LOG_PORT_SCAN_START port number and the LOG_PORT_SCAN_COUNT value must not exceed 65535.
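As a quick check of the sample values shown above (illustrative values, not requirements):
41501 + 24003 = 65504, which does not exceed 65535, so this combination is valid.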
Parent topic: Using agent configuration keywords to customize auditing
To determine its physical IP address, the IBM Guardium S-TAP for IMS agent uses the z/OS getaddrinfo function, passing it the LPAR name specified in the
CVTSNAME field of the z/OS CVT control block. The getaddrinfo function uses the DNS resolver table to map the agent's LPAR name to its physical IP address. The DNS
resolver table should contain entries that associate each LPAR within the sysplex to its physical IP address. If there is no association found, the agent (AUIASTC) uses the
z/OS gethostname and getaddrinfo services to obtain the physical IP address of its own LPAR; but the IP addresses of other LPARs in the sysplex cannot be determined. In
that case, inter-address space communication is not possible and events that occur on other LPARs are not reported to the Guardium appliance. Similarly, inter-address
space communications can fail if users of Dynamic Virtual IP Addressing (VIPA) attempt to associate multiple IP addresses to a single VIPA token.
To determine if the LPAR name, in the CVTSNAME field, is included in the DNS table:
1. Run the REXX exec AUIPING, which is located in the SAUISAMP data set.
2. If the ping is successful, the LPAR name is defined in the DNS table and no further action is required.
3. If the ping fails due to an unknown host error, the LPAR name was not found in the DNS table. Contact your network administrator to request the addition of the
LPAR name and the associated IP address to the DNS table.
cvts_lpar_name(dns_name)
Required if AUIHOST DD is specified.
Default: None.
Description: Translates the CVTSNAME to the name in the DNS table.
lpar_name
Found in the z/OS CVTSNAME field.
Use the AUIPING REXX exec found in the SAUISAMP data set to obtain that name.
The lpar_name value can be from 1 - 8 bytes in length.
dns_name
Found in the DNS table that associates the LPAR with an IP address.
The DNS_NAME value must conform to the following z/OS TCP/IP HOSTNAME rules:
Example: PRODA(SYSTEM_1)
wherein:
PRODA is the LPAR name found in the CVTSNAME field of your z/OS system
SYSTEM_1 is the mnemonic used in your DNS table to relate this LPAR to a TCP/IP address.
It must be a sequential file, or a member of a Partitioned Data Set (PDS) or Extended Partitioned Data Set (PDSE).
It must be defined with a Fixed Blocked (FB) Record Format (RECFM).
It must have a Logical Record Length (LRECL) of 80 bytes.
Commented lines can be indicated by an asterisk (*) in column one or by a slash-asterisk (/*) in columns one and two.
Each host definition must be contained entirely on one line.
Up to 16 DNS names can be specified.
MYLPAR20(MYLPAR20.mycompany.com)
MYLPAR21(MYLPAR21.mycompany.com)
MYLPAR22(MYLPAR22.mycompany.com)
MYLPAR23(MYLPAR23.mycompany.com)
MYLPAR24(MYLPAR24.mycompany.com)
MYLPAR25(MYLPAR25.mycompany.com)
MYLPAR26(MYLPAR26.mycompany.com)
IBM Guardium S-TAP for IMS allows you to specify informational, warning, or error messages to be written to the operator console. This allows an automated operations product to take some predefined action, or provides a higher level of operator visibility for these messages. You can use the WTO_MSG keyword to specify which messages should be written to the operator console.
WTO_MSG(AUIF507E)
WTO_MSG(AUIT013I)
You can specify one message ID per WTO_MSG instance. Messages originating from the AUIASTC, AUIFSTC, AUILSTC, and AUIUSTC address spaces are supported.
Parent topic: Using agent configuration keywords to customize auditing
Short-term communication outages between the agent address spaces and the IBM Guardium system can be handled by using a z/OS data space spill area. Use of the
spill area can prevent the loss of audited data by allowing the z/OS agent to save audited data until the connection to the IBM Guardium system is restored. The
restoration of the communications link results in the flushing of the data space contents to the IBM Guardium system.
Use the OUTAGE_SPILL_AREA_SIZE keyword and parameter to indicate the size in megabytes to allocate for the spill area: OUTAGE_SPILL_AREA_SIZE(megabytes). If
you specify zero or omit this keyword, the spill area will not be allocated or used. The maximum value you can specify is 1024 MB.
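For example, to allocate a 256 MB spill area (256 is an illustrative value; choose a size that suits your expected outage duration and audit volume):
OUTAGE_SPILL_AREA_SIZE(256)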
For any IMS systems to be audited by this agent, you can disable audit events that are determined by reading IMS System Log Data Sets (SLDS). To disable the auditing
process that uses IMS SLDS records, specify the following keyword with the value of zero: IMSL_CYCLE_INTERVAL(0). The agent address space (AUIASTC) will not start
the IMS SLDS auditing address space (AUILSTC).
Controlling the frequency with which IMS System Log Data Sets are allocated and read
You can specify the frequency of IMS RECON data set queries by specifying the IMSL_CYCLE_INTERVAL keyword.
For the product to determine whether any new, unread IMS System Log Data Sets (SLDS) have been created by the IMS Online system, the IMSL processing address space (AUILSTC) periodically performs a query against the IMS RECON data sets, looking for new SLDS. This query is performed when the AUILSTC task is started, and then, by default, every 15 minutes. The frequency can be changed by providing a value in minutes by using the IMSL_CYCLE_INTERVAL keyword: IMSL_CYCLE_INTERVAL(time in minutes).
A value of zero will cause the IMS SLDS auditing feature to be disabled.
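For example, to query the RECON data sets every 30 minutes instead of the default 15 (30 is an illustrative value):
IMSL_CYCLE_INTERVAL(30)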
AUILSTC is the name of the JCL that is used to audit data sets using IMS SLDS records. AUILSTC is provided in the product installation sample data set (SAUISAMP). If this
name conflicts with your site's naming convention standards, or if more than one agent is being used, you can change the name of this JCL.
Use the IMSL_PROC_NAME keyword to change the member name from AUILSTC to a name of your choice: IMSL_PROC_NAME(new name)
Ensure that this new JCL is in a procedure data set (PROCLIB) that allows the z/OS START command S taskname to be used.
When you enable auditing by using IMS SLDS records, the default is to provide auditing for all event types.
To audit only some of these events, specify each event type to be audited by using the IMSL_AUDIT_LEVELS keyword: IMSL_AUDIT_LEVELS(ALL|CTL_STRT|USERS|DBOPN|DB_PSB).
This keyword governs the IMS SLDS auditing level for all IMS subsystems that are controlled by this agent. For example, if user signon/signoff is not included in the
IMSL_AUDIT_LEVELS parameter, then no signon or signoff events will be reported from any of the IMS environments that are audited using the agent.
Note: You can separate parameters for the collection of different event types. For example, to audit CTL_STRT and DBOPN events, include the CTL_STRT and DBOPN
records as follows:
IMSL_AUDIT_LEVELS(CTL_STRT)
IMSL_AUDIT_LEVELS(DBOPN)
instead of:
IMSL_AUDIT_LEVELS(CTL_STRT|DBOPN)
Changing the name of the Common Memory Management address space JCL
Use the AUIU_PROC_NAME keyword to change the member name from AUIUSTC to a name of your choice.
AUIUSTC is the name of the JCL that is used to build filtering criteria in E/CSA on all LPARS of the SYSPLEX. AUIUSTC is provided in the product installation sample data set
(SAUISAMP). If this name conflicts with your site's naming convention standards, or if more than one agent is being used, you can change the name of this JCL.
Use the AUIU_PROC_NAME keyword to change the member name from AUIUSTC to a name of your choice: AUIU_PROC_NAME(new name).
Ensure that this JCL resides in a procedure data set (PROCLIB) that allows the z/OS START command S taskname to be used.
By default, the IBM Guardium S-TAP for IMS agent creates Common Memory Management address spaces (AUIUSTC) on all LPAR members of a SYSPLEX. This allocates
E/CSA memory, and inserts DLI call filtering criteria across all LPARS. A single agent monitors IMS control regions and DLI/DBB batch jobs running on any LPAR of the
SYSPLEX.
The LPAR where the agent is running cannot be excluded. All other LPARS can be excluded by using the *ALL option in place of the LPAR name.
The agent address space (AUIASTC) and subordinate address spaces (AUIFSTC and AUILSTC) communicate by using a shared memory segment and communications
port. Multiple agents require multiple unique shared memory segments and port values to ensure correct inter-address space communications. If you need to have two or
more IBM Guardium S-TAP for IMS agents available on one SYSPLEX, the following keywords provide a method of uniquely identifying the shared memory segment and
port for each agent environment:
ADS_SHM_ID(100010)
ADS_LISTENER_PORT(16055)
Specification of the ADS_SHM_ID and ADS_LISTENER_PORT requires the addition of a //AUICONFG DD statement to the AUIFSTC and AUILSTC address space JCLs. This
DD statement should point to the same data set and member as the AUIASTC and AUIUSTC JCLs for the agent, to ensure that communications between all participant
address spaces use the correct memory object and ports.
See Customizing the agent by using agent parameter keywords for complete descriptions of all valid parameters, including the ADS_SHM_ID and ADS_LISTENER_PORT
keywords.
Restricting auditing to specific IMS systems when multiple IMS systems share RECON data
sets
If multiple unrelated IMS systems share RECON data sets, and you want to audit only on one or more specific IMS systems, use the keyword IMSNAME_EQ_IMSSSID(Y) to
isolate auditing to the desired IMS system.
The default option, IMSNAME_EQ_IMSSSID(N), causes only the IMS RECON data sets to be used when IBM Guardium S-TAP for IMS attempts to find and match IMS
systems to active audit policies.
Specifying IMSNAME_EQ_IMSSSID(Y) causes both the IMS RECON data sets, and the 8-byte IMS subsystem/DBCTL RSENAME to be used when IBM Guardium S-TAP for
IMS attempts to find and match IMS systems to active audit policies.
RECON data sets A.B.C1/C2/C3 contain information for IMSA and IMSB. Auditing is only desired for IMSB. Policy AUDIT_ALL is installed by using IMS appliance definition
MY_IMS, which references RECON data sets A.B.C1/C2/C3.
If subsystems IMSA and IMSB both use RECON data sets that are referenced by the policy, AUDIT_ALL, and associated with the IMS definition, MY_IMS, then both IMSA
and IMSB are audited when the default, IMSNAME_EQ_IMSSSID(N), is specified.
If IMSNAME_EQ_IMSSSID(Y) is specified and the IMS definition name matches the IMS subsystem ID of IMSB, only IMSB matches the policy. As a result, IMSB is audited with the criteria that are set in policy AUDIT_ALL, and IMSA is not audited.
Note: DLI batch jobs (DLI/DBB) might not be tightly associated with an IMSID; therefore, IBM Guardium S-TAP for IMS reports on all DLI batch jobs that use the audited RECON data sets. The IMSNAME_EQ_IMSSSID parameter does not affect DLI/DBB batch job auditing.
Parent topic: Using agent configuration keywords to customize auditing
To use a zIIP in the IMS Online Control region, add a //AUIZIIP DD DUMMY to the IMS control region JCL.
To use a zIIP in the agent address space, use the ZIIP_AGENT_DLI keyword with the Y parameter to the configuration file that is pointed to by the AUICONFG DD
statement in the agent JCL (AUIASTC).
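For example, the configuration keyword and a //AUIZIIP DD statement might look like the following (the placement of the DD statement within your IMS control region JCL may vary at your site):
ZIIP_AGENT_DLI(Y)
//AUIZIIP DD DUMMY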
Parent topic: Using agent configuration keywords to customize auditing
When a primary IBM Guardium system goes offline, the IBM Guardium S-TAP for IMS agent automatically establishes a connection to a secondary IBM Guardium
system, and the audited data is sent to the secondary system.
When a primary IBM Guardium system comes back online, the IBM Guardium S-TAP for IMS agent detects it, reestablishes the connection to the primary IBM Guardium system, and restarts sending data to the primary system.
This allows the use of any IBM Guardium system as a short-term backup, while always attempting to use the primary system as the main data storage medium.
In the following example failover scenario, where none of the systems are online, the IBM Guardium S-TAP for IMS agent attempts to connect to the primary IBM
Guardium system at a regular interval and follows the usual failover logic if the primary IBM Guardium system is offline. A connection is reestablished to any of the
configured appliances as soon as one becomes available.
APPLIANCE_SERVER_FAILOVER_1(IP address 1)
APPLIANCE_SERVER_FAILOVER_2(host name 2)
APPLIANCE_SERVER_FAILOVER_3(IP address 3)
APPLIANCE_SERVER_FAILOVER_4(IP address 4)
APPLIANCE_SERVER_FAILOVER_5(host name 5)
The TCP/IP connection from the IBM Guardium S-TAP for IMS agent to the primary IBM Guardium system fails.
The TCP/IP connection from the IBM Guardium S-TAP for IMS agent to the primary IBM Guardium system is reestablished.
The IBM Guardium S-TAP for IMS agent and IBM Guardium system B disconnect.
IBM Guardium S-TAP for IMS sends events to a single appliance until a ping occurs, or the number of records that is specified by MEGABUFFER_COUNT is reached.
Audited DLI events are distributed amongst additional appliances in a round-robin sequence.
To enable multistreaming, you must specify MULTI_STREAM when you configure the APPLIANCE_SERVER_LIST parameter. The APPLIANCE_SERVER and
APPLIANCE_SERVER_[MULTI_STREAM]_[1-5] parameters specify the appliances to which you intend to stream events. The appliance that is specified by
APPLIANCE_SERVER provides the policy that is used for event matching.
Specify up to 5 additional IBM Guardium system IP addresses or host names. For example:
APPLIANCE_SERVER_MULTI_STREAM_1(IP address 1)
APPLIANCE_SERVER_MULTI_STREAM_2(host name 2)
APPLIANCE_SERVER_MULTI_STREAM_3(IP address 3)
APPLIANCE_SERVER_MULTI_STREAM_4(IP address 4)
APPLIANCE_SERVER_MULTI_STREAM_5(host name 5)
If the primary appliance becomes unavailable and failover occurs, the appliance policy that was originally pushed from the primary appliance continues to be active. When
all Guardium appliances are connected, the status of each appliance connection, listed in the Guardium interface, is green.
IBM Security Guardium S-TAP for IMS on z/OS agent reference information
The IBM Guardium S-TAP for IMS agent provides access to database and appliance services, in support of the product's remote clients. The agent also reads audited DLI
events placed in the z/OS System Logger log streams by the IMS Online and DLI/DBB batch Data collectors and sends the DLI events to the IBM Guardium system using
TCP/IP connections.
Agent environment
The agent must be running before you can use product functions related to the IMS subsystems monitored by that agent.
Important: Before the agent is started, system services should be started, and completely available for use. Examples of system services include JES, TCP/IP and the
associated DNS RESOLVER, XCF, and the z/OS System Logger.
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS agent reference information
APF authorization
For security, the agent must be APF-authorized before it can be run.
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS agent reference information
In the event of exceptional conditions, additional messages might be written to the SYSOUT DD. If an abend occurs, dump information can be written to the CEEDUMP and
SYSUDUMP DDs, if they are supplied. That information can be used in diagnosis by product support.
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS agent reference information
Important: System services, such as but not limited to the following, should remain available for use until the agent has completed termination: JES, TCP/IP and
associated DNS RESOLVER, XCF and the z/OS System Logger.
From SDSF (or anywhere else that you can issue commands), you can issue one of these commands to the agent:
/STOP agent-job-name
This is the recommended command to use to stop the agent. It initiates a graceful agent shutdown, which causes the agent to:
/MODIFY agent-job-name,STOP
Performs the same function as the /STOP agent-job-name command.
/MODIFY agent-job-name,FORCE
This initiates an agent hard stop which causes the agent to:
Commands to start and stop the SMF data collector address space
Note: The following commands should be used against the agent's primary address space.
Optionally, the STOP command may be used to stop the SMF address space:
/STOP <jobname>
Commands to start and stop the IMS Archive Log Data collector
There is no z/OS command to start the address space, because the IMS Archive Log data collector address space is specific to an IMS definition with an active collection. The AUILSTC address space is started by the agent address space, or by the activation of a collection.
Stopping a specific AUILSTC address space requires the use of the /STOP <jobname>.<token> command. The <token> value to be used can be found during AUILSTC
startup in the AGENT JOBLOG.
/STOP AUILRS22.AAAAAAAC
Or, when viewing the AUILSTC task in TSO SDSF, the token is displayed as the STEPNAME.
Parent topic: IBM Security Guardium S-TAP for IMS on z/OS agent reference information
Data collection
The collection process involves the gathering of audit event data at run time. Specify various filtering criteria to capture all relevant events and limit the amount of data
that is collected and stored.
IBM Guardium S-TAP for IMS gathers audited events from the following sources:
IMS database DLI calls performed from within IMS Online Control regions and DLI/DBB batch jobs
SMF records
IMS Log records from IMS System Log Data Sets (SLDS).
A single policy, containing selection criteria that indicate the events to be audited, is applied to each source.
Note: Database DLI calls that do not result in a DBPCB status code of blanks, GA, or GK, are not audited unless the IMS policy indicates that one or more non-blank DLI
codes should be reported. DLI calls performed using an IOPCB or TPPCB are not audited.
Database DLI calls issued from specific PSBs and user IDs can be included or excluded from auditing. PSB names and user IDs can be specified for auditing using fully
qualified names, or by using wildcard characters.
Further filtering can be performed by including or excluding specific database and segment names. Wildcard support is available for both the database and segment name.
When auditing IMS DLI calls, you can obtain the concatenated key value of segments that are audited for all or specific database DLI calls, as well as the segment data for UPDATE and INSERT calls. The segment data can also be obtained for READ and UPDATE calls, where these calls are logically linked in the Guardium appliance to provide a before and after image of updated segments.
SMF records
IBM Guardium S-TAP for IMS allows the filtering of audit events generated by access methods outside of IMS DLI services, including z/OS access methods such as VSAM
or QSAM requests generated from z/OS batch jobs or TSO.
Some IMS Database Batch Utilities access IMS databases by using access methods other than IMS Database DLI calls. As a result, the source of auditing records for these batch jobs is the SMF records that are produced.
Database names are relevant because SMF data is based on data set names: the part of the process that converts a policy to a filter examines the IMS RECON data sets for artifacts that relate to the INCLUDED databases. These artifacts include database data set names (DSG/AREA/ADS) and the database image copy data sets for each database data set. The AUIFSTC tasks also audit other IMS-related data sets.
By default, these data sets have been included because changes to these data sets can have an effect on data integrity:
It is possible to ignore the auditing of these data set types, as well as the database image copy data sets, by adding a DUMMY DD statement to the AUIFSTC JCL.
This table lists the data sets and corresponding DD DUMMY statement to include in the AUIFSTC JCL if you want to exclude the auditing of each of these types.
Table 1. Data sets and DD DUMMY statements

Data set type      DD NAME
IMS RECONS         AUINRCN
IMS LOGS           AUINLOG
IMS OLDS           AUINOLD
DB Image Copies    AUINICS
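For example, to exclude the auditing of database image copy data sets, you might add the following statement to the AUIFSTC JCL (the exact placement among the existing DD statements may vary at your site):
//AUINICS DD DUMMY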
Specify filtering of SMF events at the agent level, using access type or security violation, with the use of the SMF_AUDIT_LEVELS keyword in the configuration file. The
keyword is pointed to by the AUICONFG DD statement of the agent (AUIASTC) JCL. Data set accesses to be audited are:
The auditing of these accesses can be specified at the agent level (for example, for all IMS systems defined to the agent), or at the IMS level. See Changing the types of events that are audited using SMF records for more details.
Policy criteria input for IMS Log data auditing is the same as for IMS DLI calls, but is used differently because of the nature of IMS log data:
DLI calls types are not relevant and therefore not used.
Segment names are not relevant and therefore not used.
PSB names are checked only when relevant to the event being examined.
User IDs are checked only when relevant to the event being examined.
DBD names are checked only when relevant to the event being examined.
In addition to filtering performed using the policy criteria, you can further filter IMS log data by event types, using the Guardium user interface. Using the
IMSL_AUDIT_LEVELS keyword, you can set specific events to be audited, including:
Occurrences of the DB DBDUMP command can also be audited. Auditing of these events can be specified at the agent level (for example, all IMS systems defined to the
agent), or at the IMS level (for example, only for a specific IMS system). For more information, see Changing the types of events audited using IMS SLDS records.
Filtering stages
Stage 0, Stage 1, and Stage 2 filtering is available for Collector Agent audit event collection when processing DLI calls.
Filtering occurs at one or more of the stages, 0, 1, and 2, depending on what fields are included in your filter. As many audit events as possible are filtered at the earliest
possible stage (0, 1, or 2). You can control filtering performance by the fields you include in the rules for the active collection profile.
Stage 0 filtering
Stage 0 filtering occurs immediately after IMS executes the DLI call and it is determined that the call is a candidate for auditing: that is, the call is one of the supported DLI call types and returns a status code of blanks or another acceptable DLI status code.
Stage 1 filtering
Stage 1 filtering occurs through the use of USERID and PSB name values.
Stage 2 filtering
Stage 2 filtering occurs through the use of a filtering program that is compiled at the time of policy installation, using the criteria specified in the policy.
IBM Guardium S-TAP for IMS checks for an active policy for the IMS subsystem and determines if any rules governed by the active policy require the auditing of the DLI
call type. If no policy is active, or no rules require the auditing of the DLI call type, processing control is returned to the application program. This is the most efficient form
of filtering and should be used when possible.
In this example, the READ DLI call is performed and returns a status code of blanks. Because IBM Guardium S-TAP for IMS determines that no rules in the policy reference a READ, processing control returns to the application program.
In the event that the DLI call performed in the example was an INSERT request, Stage 1 filtering would be invoked.
Stage 1 filtering
Stage 1 filtering occurs through the use of USERID and PSB name values.
For Stage 1 filtering to occur, all rules of the active policy must contain identical USERID and PSB name values. Any inconsistencies in these values between rules prevents
Stage 1 filtering from occurring.
Stage 1 filtering allows DLI calls that should be rejected, due to USERID or PSB name, to be excluded from auditing. The rejection can occur because those values are not included in the policy, or because they are intentionally excluded.
The determination that the USERID or PSB name is causing the DLI call to be rejected is made by a call to the Stage 2 compiled filters. The call to the Stage 2 compiled filters is made when the USERID or PSB name of the current DLI call is not the same as the USERID or PSB name of the previous DLI call made in the same processing region.
The first DLI call is made and passes through Stage 0 processing.
Stage 2 filtering is invoked, and it is determined that DLI calls from this USERID should not be audited. The DLI call is not audited, and control is returned to the
application program.
The next DLI call is made, and the USERID is the same as the previous DLI call in the region. The previous DLI call was not audited due to the USERID value,
therefore this DLI call will not be audited.
This process continues until the BMP step terminates, with only one DLI call going through to Stage 2 filtering; the remaining DLI calls are rejected during Stage 1 processing.
The same benefit can be seen with DLI and DBB batch jobs, because the USERID and PSB do not change during the execution step.
This process benefits online transactions and other processing threads where multiple DLI calls are performed from within a single unit-of-work, as well as when DLI calls
are performed using C and D IMS command codes where multiple segments are affected by a single DLI call and auditing might be required on more than one segment
within the hierarchical path.
Stage 2 filtering
Stage 2 filtering occurs through the use of a filtering program that is compiled at the time of policy installation, using the criteria specified in the policy.
All DLI calls that are not rejected by Stage 0 and Stage 1 filtering are processed by the compiled filter. The compiled filter determines if the DLI call is to be audited based
on all the policy criteria including DBD and segment name.
If the DLI call is to be audited, additional information is returned by the compiled filter, such as if the segment data and concatenated key should be included in the
audited data block.
Policy pushdown
This topic describes the policy pushdown process of mapping policies to an IBM Guardium S-TAP for IMS collection profile.
When the IBM Guardium S-TAP for IMS agent starts, it establishes a dedicated connection to the Guardium appliance for the reading of installed policies. Immediately
after the connection is established, any installed policies are pushed down to the IBM Guardium S-TAP for IMS agent by the Guardium appliance. The Guardium appliance
pushes down a full policy to all connected IBM Guardium S-TAP for IMS agents each time a policy is installed or uninstalled from the Guardium appliance.
Upon receipt of a policy, the IBM Guardium S-TAP for IMS agent compares the applicable rules with the existing collections, and performs a differential install.
Differential install
A differential install of the policy indicates that only policies that have been modified since the last install are acted upon.
The following processing occurs in the IBM Guardium S-TAP for IMS agent upon receipt of a policy:
The new policy is compared to the currently active policy if the new policy contains one or more rules.
Procedure
1. From the Administration Console tab, select the Local Taps menu.
2. Select the IMS Definitions option.
IMS Name
*IMS Entry Name
A unique 1 - 8 character name to identify this IMS entry.
Description
An optional description of the IMS Entry.
*Agent Name
The name of the agent that audits this IMS entry.
RECONs
The RECON data set names are used to logically link the IMS definition, the active policy, the IMS Online Control region, and the DLI/DBB batch jobs that are running on
z/OS, to audit the correct IMS instances.
or, by using both the formula and the time interval since the last AUII050I message was issued.
Auditing Levels
Auditing levels can be set for both IMS Log and SMF events. For an explanation of the levels of auditing that are available for IMS Log and SMF events, see Configuration
overview for a description of the IMSL_AUDIT_LEVELS and SMF_AUDIT_LEVELS configuration keywords.
In an IMS data sharing environment where only a subset of databases is shared, an IMS definition must be created for each IMS subsystem with nonshared databases to
be audited.
XRF Considerations
Only one IMS definition is required in an IMS XRF environment. IBM Security Guardium S-TAP for IMS on z/OS is not sensitive to which XRF partner is currently active. The
product continues to produce audit data in the event of an XRF ACTIVE/BACKUP switch.
Procedure
1. From the IMS Definitions List, select the Add symbol (indicated by a plus sign) to add a new entry to the list of defined IMS systems. Enter the information in the IMS Definitions panel to define the new IMS environment to be audited.
2. Select Apply to save the new IMS definition.
Procedure
1. Select the entry that you want to modify.
2. Modify the IMS definition fields.
3. Select Apply to save your changes.
Procedure
1. From the IMS Definitions List, select the IMS Definition that you want to delete.
2. Click the Delete icon.
Click OK in the confirmation message to confirm the IMS entry deletion.
Reference information
This chapter provides IBM Guardium S-TAP for IMS reference information.
The list of IMS artifacts to be monitored during IMS Archived Log collection is derived from the data collection policy you create, by using the Guardium system.
Because the processing of the IMS Archived Log data sets is deferred, the data collection policy in force at the time that the IMS Archived Log data sets are read is the collection policy that is used (as opposed to the data collection policy in effect when the IMS Archived Log event was written to the IMS log data set).
The IMS Archived Log Collector periodically queries the DBRC RECON data sets that are associated with an IMS that is defined to IBM Guardium S-TAP for IMS to
determine if new SLDS data sets were created since the last RECON data set query. New data sets that are found are dynamically allocated and read. Audited
events are sent to the IBM Guardium system by using a TCP/IP connection.
The IMS Archive Log Data Collector can be configured to audit only a subset of events, by using the options available when configuring the agent and defining the
IMS appliance through the Guardium system interface. The IMS Archived Log Data Collector is run as a started task under the control of the agent. An example of
the JCL for this started task can be found in the SAUISAMP data set in the AUILSTC member.
IBM Guardium S-TAP for IMS starts one AUILSTC task for each set of RECON data sets that is actively monitored with a data collection policy.
If an IMS data sharing environment with five IMS subsystems that share a single set of RECON data sets exists, only one AUILSTC task is started.
If two separate IMS subsystems by using two separate sets of RECON data sets are being monitored, two separate AUILSTC tasks are started.
Note: To collect events from the IMS archived logs, the DFSSLOGP (Primary Output SLDS) data set must be created and cataloged by your IMS Log Archive
Utility process (program DFSUARC0).
IBM Guardium S-TAP for IMS dynamically starts and stops the appropriate number of AUILSTC tasks as required.
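The one-AUILSTC-task-per-RECON-set rule described above can be sketched as follows; the subsystem and data set names are invented for illustration and are not real artifacts:

```python
# Hypothetical inventory: which RECON data sets each monitored IMS
# subsystem uses. IMSA and IMSB share one set; IMSC uses its own.
recon_sets = {
    "IMSA": ("IMS1.RECON1", "IMS1.RECON2", "IMS1.RECON3"),
    "IMSB": ("IMS1.RECON1", "IMS1.RECON2", "IMS1.RECON3"),
    "IMSC": ("IMS2.RECON1", "IMS2.RECON2", "IMS2.RECON3"),
}

# One AUILSTC started task is started per distinct RECON set:
auilstc_tasks = len(set(recon_sets.values()))
print(auilstc_tasks)  # 2
```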
IMS Missing Log Utility
The IMS Missing Log Utility analyzes IMS RECON data sets to confirm the existence of SLDS/RLDS data sets. This function can be included or excluded, and can be
scheduled independently of the execution cycle setting for the AUILSTC task. This utility is run by a job or started task (see SAUISAMP member AUIMLOG for an
example). It processes the RECON data sets of IMS systems with active policies audited by the agent and pointed to by the configuration member that is defined in
the AUICONFG DD statement in the AUIMLOG JCL. The IMS RECON data sets are analyzed in search of IMS SLDS and RLDS data sets. If these are found, the z/OS
catalog is queried by using the SLDS/RLDS data set name. If the SLDS/RLDS data set is not found, a missing log event is sent to the IBM Guardium
system.
Note: The AUIMLOG utility must be run under the same user ID, and on the same LPAR, as the AUIASTC task.
Common Storage Management Utility
IBM Guardium S-TAP for IMS uses memory in E/CSA to provide information regarding active data collection policies to the IMS Batch and Online Activity Monitors.
An IBM Guardium S-TAP for IMS agent can be called to monitor IMS Online regions or DL/I batch jobs on many LPARS within a SYSPLEX. A started task is generated
for execution on all LPARS of a SYSPLEX to read all active data collection policies and build the appropriate E/CSA control blocks. This started task is run when the
IBM Guardium S-TAP for IMS agent starts and stops, as well as when a change is made to the state of any collection policy. An example of the JCL for this started
task can be found in the SAUISAMP data set in the AUIUSTC member.
The LPARs where the AUIUSTC task is run might be limited by adding the AUIU_EXCLUDE_LPAR keyword and LPAR names to the configuration file, which is
specified by the AUICONFG DD statement in the AUIASTC JCL.
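For example, an exclusion entry might look like the following. The keyword(value) form is assumed from other AUICONFG keywords such as XML_ECHO_AUILOG(Y), and the LPAR names are placeholders; check your SAMPLIB member for the exact syntax:

```
AUIU_EXCLUDE_LPAR(LPARA,LPARB)
```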
IMS Log types and SMF record types that are collected by IBM Guardium S-TAP for IMS
The following tables show the IMS log types and SMF record types that are collected by IBM Guardium S-TAP for IMS.
SMF is used to obtain additional data set activity that is related to the monitored IMS databases and image copies.
IMS log types: FD, FW, GA, GB, GD, GE, GK, L2, LB, LS, NI, UC, US, UX
Region types: AER, BMP, CICS, DBCTL, IFP, MPP, ODB
In the Guardium interface, click the pencil icon alongside the Region Types to Exclude field to open a set of check boxes that enable you to remove regions from
auditing.
Sizing the z/OS System Logger Log Stream for IBM Guardium S-TAP for IMS
This section details the process of sizing the z/OS System Logger Log Streams. The z/OS System Logger Log Stream is used to transport audited DLI call data from the IMS
control region or DLI/DBB batch jobs to the IBM® Guardium® S-TAP® for IMS agent (AUIAxxxx address space) where it is reformatted to a PROTOBUF protocol and sent
to the target Guardium appliance.
For most users, the size of the log stream that is provided with the LS_SIZE parameter of the AUILSTR2/3 log stream definition member (LS_SIZE(100)) is appropriate to
use when auditing accesses to sensitive data or when auditing DLI calls performed by a group, or groups, of users who have access to all databases for diagnostic
purposes.
There might be instances where a larger LS_SIZE parameter value, such as 13500 (LS_SIZE(13500)), is appropriate.
Note: Log stream sizing can be an iterative process. When attempting to audit many DLI calls, the CPU, memory, and disk storage capacity of the Guardium appliance
should be considered.
Parent topic: Sizing the z/OS System Logger Log Stream for IBM Guardium S-TAP for IMS
Considerations
There are several variables that must be considered when sizing the log stream(s), including:
The average number of IXGWRITEs that are performed and the average number of bytes per write (average buffer size) are determined by the volume of audited DLI calls
and the size of the DLI call event data that is being captured.
IBM® Guardium® S-TAP® for IMS uses a set of 35K buffers to hold the audited DLI call data. Each buffer is written to the log stream when it fills to capacity, or every
five seconds. The time interval is used to ensure that audited DLI call data is sent to the Guardium appliance in a timely manner. Therefore, the frequency of IXGWRITEs
can vary greatly depending on the IMS Policy and databases that are being accessed.
The log stream data is deleted by using the IXGDELETE call after every three blocks of data are successfully read and sent to the Guardium appliance. This ensures that
audited data is not lost in the event of a communication loss between the IBM Guardium S-TAP for IMS agent and the Guardium appliance.
Parent topic: Sizing the z/OS System Logger Log Stream for IBM Guardium S-TAP for IMS
You can perform an analysis of the performance and efficiency of the initial log stream size by running IBM program IXGRPT1 and JCL IXGRPT2, found in
'SYS1.SAMPLIB'. This program uses the SMF88 records to help with log stream capacity planning.
SMF88 records can be collected by z/OS by providing the 88 value in the SMFPRMxx parmlib member prior to a system IPL, or by using the z/OS command "SET
SMF=xx" (where xx is the suffix of the parmlib member).
Example:
SYS(TYPE(30,70:79,88,89,100,101,110)),
The IXGRPT1 utility assembles subroutine IXGRA1 and compiles and links program IXGRPT1, which can be used to extract SMF88 records in preparation for analysis.
The IXGRPT2 JCL can be used to produce other SMF88/log stream reports.
Parent topic: Sizing the z/OS System Logger Log Stream for IBM Guardium S-TAP for IMS
The BYT WRITTN TO INTERIM STORAGE value (bytes written to interim storage) indicates the amount of data being written to the log stream during the SMF
interval. This value can provide insight into the volume of data being written to the log stream.
The BYT WRITTN TO DASD value (bytes written to DASD offload data sets) indicates the number of bytes that were written to the DASD offload/overflow VSAM data
sets.
This number indicates that the interim storage filled up and that, to retain the data, a set of VSAM files is being used as overflow buffers. This number can rise
and fall during the day as the volume of audited DLI calls increases and decreases.
Some use of the overflow VSAM files can be acceptable because spikes in audited DLI call data can be expected due to the nature of IMS POLICY filtering.
However, constant or extensive use of the VSAM overflow files indicates that the log stream should be sized larger.
STRC FULL
The STRC FULL (Structure Full) value indicates the number of times that the capacity of the CF structure filled up without an offload occurring. This number
should be zero in a properly sized log stream. A nonzero value can indicate that the volume of data written exceeds the ability of the IMS S-TAP
agent to read, process, and delete audited data, and a larger structure size should be considered.
An abundance of Structure Full conditions results in degraded performance when collecting audited DLI call data and, if not rectified, might result in data
loss. This condition might cause IXGWRITE 0866 errors to be issued in the IMS Control region address space.
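As an illustration of how these SMF88 counters might be interpreted, the following sketch computes the fraction of interim-storage writes that spilled to DASD. The function name, the figures, and the threshold are invented for the example and are not part of any IBM tooling:

```python
def offload_ratio(bytes_interim: int, bytes_dasd: int) -> float:
    """Fraction of data written to interim storage that also spilled to
    the DASD offload data sets during one SMF interval."""
    return bytes_dasd / bytes_interim if bytes_interim else 0.0

# Hypothetical SMF88 interval figures, in bytes:
ratio = offload_ratio(bytes_interim=500_000_000, bytes_dasd=200_000_000)
if ratio > 0.25:  # threshold is a site-specific judgment call
    print("Frequent offload use: consider a larger LS_SIZE")
```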
Parent topic: Sizing the z/OS System Logger Log Stream for IBM Guardium S-TAP for IMS
Additional Resources
IBM provides a spreadsheet utility to assist in the analysis of the log stream SMF88 data and provide suggestions on how to define the log stream for more efficient use in
your environment.
You can access the spreadsheet utility with the following link: ftp://www.redbooks.ibm.com/redbooks/SG246898. Read the disclaimer.txt file before using the tool.
Parent topic: Sizing the z/OS System Logger Log Stream for IBM Guardium S-TAP for IMS
XML convention
Start of tag data
See Sample XML file for an example of the XML representation of a valid policy.
IMS-specific statements
Table 1. IMS-specific XML statements
XML statement Definition
<install-info> Beginning of relevant policy information.
<artifacts> Start of IMS definitions.
<ims> Start of individual IMS-specific information.
<name> Name of IMS as specified in the Guardium appliance policy.
<agent> Name of the agent to which IMS is connected.
<description> Appliance IMS description text.
<version> Currently a value of zero (0).
<plexname> Not populated.
<recons> Start of the IMS-specific RECON data set list.
<recon seq="1"> RECON1 data set name. DSN terminated by </recon>.
<recon seq="2"> RECON2 data set name. DSN terminated by </recon>.
<recon seq="3"> RECON3 data set name. DSN terminated by </recon>.
<reslibs> Start of IMS-specific RESLIB data sets.
<reslib seq="1"> RESLIB 1 in IMS STEPLIB concatenation. DSN terminated by </reslib>.
Log-specific statements
Table 2. Log-specific XML statements
XML statement Definition
<dbdlibs/> Not populated.
<psblibs/> Not populated.
<thresholds-050i> Start of AUII050I message frequency parameters.
<max-count> Number of DLI calls needed to prompt message AUII050I.
<max-time> Max time interval (HHMM) between AUII050I messages.
<audit-levels> Start of IMS Logger and SMF auditing criteria.
<collector name="ims"> Start of IMS Logger auditing criteria. Terminated by </collector>.
<audit-level> Start of audit level criteria.
<signon-signoff value="true"/> Audit IMS user sign-ons and sign-offs.
<signon-signoff value="false"/> Do not audit IMS user sign-ons and sign-offs.
<start-stop value="true"/> Audit IMS Control Region starts and stops.
<start-stop value="false"/> Do not audit IMS Control Region starts and stops.
<db-open-close value="true"/> Audit DBD Opens and Closes.
<db-open-close value="false"/> Do not audit DBD Opens and Closes.
<dbd-psb value="true"/> Audit DBD/PSB/Dump/Start/Stop/Lock/Unlock events.
<dbd-psb value="false"/> Do not audit DBD/PSB/Dump/Start/Stop/Lock/Unlock events.
SMF-specific statements
Table 3. SMF-specific XML statements
XML statement Definition
<collector name="smf"> Start of SMF auditing criteria. Terminated by </collector>.
<audit-level> Start of audit-level criteria.
<read value="true"/> Audit data sets when they are opened with READ intent.
<read value="false"/> Do not audit data sets when they are opened with READ intent.
<update value="true"/> Audit data sets when opened with UPDATE intent.
<update value="false"/> Do not audit data sets when opened with UPDATE intent.
<delete value="true"/> Audit data set DELETEs.
<delete value="false"/> Do not audit data set DELETEs.
<create value="true"/> Audit data set CREATEs.
<create value="false"/> Do not audit data set CREATEs.
<alter value="true"/> Audit VSAM data set ALTERs.
<alter value="false"/> Do not audit VSAM data set ALTERs.
<racf-violations value="true"/> Audit RACF security violations against data sets.
<racf-violations value="false"/> Do not audit RACF security violations against data sets.
Policy-specific statements
Table 4. Policy-specific XML statements
XML statement Definition
Database/segment-specific statements
Table 5. Database/segment-specific XML statements
XML statement Definition
<targets> Start of DBD/SEGMENT instances within the rule.
<segment-target> Start of list of databases/segments to be INCLUDED/EXCLUDED.
<type> Value will be INCLUDE (audit) or EXCLUDE (ignore).
<database-name> Database to be audited or ignored.
<segment-name> Segment to be audited or ignored.
<audit-get> INCLUDE DLI GET calls.
<audit-insert> INCLUDE DLI INSERT calls.
<audit-update> INCLUDE DLI UPDATE (REPL) calls.
<audit-delete> INCLUDE DLI DELETE (DLET) calls.
<capture-before-image> INCLUDE link between DLI GH and DLI REPL calls.
<capture-segment-data> INCLUDE segment data when segment is audited.
<hlvl-filter enabled="false"> Do not report hierarchical parent segment during DLI command calls.
<excluded-regions> Do not audit DLI calls from these region types.
Collection-specific statements
Table 6. Collection-specific XML statements
XML statement Definition
<collections> A grouping of the individual <collection> XML tags.
<ims> IMS name connection to the collection.
<agent-name> Agent name connection to the collection.
<name> IMS name connection to the collection.
<collection-profile> For each agent name and IMS name, IBM Guardium S-TAP for IMS establishes a connection to the collection profile.
<name> Constructed name of the policy (policy_IMS_NAME).
<dli-status-codes> Two-character DLI status codes to be audited. Terminated by FF value.
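The packed two-characters-per-code format can be unpacked with a few lines of code. This sketch and its helper name are illustrative, not part of the product:

```python
def parse_dli_status_codes(packed: str) -> list[str]:
    """Split a packed <dli-status-codes> string into two-character DLI
    status codes, stopping at the FF terminator."""
    codes = []
    for i in range(0, len(packed), 2):
        pair = packed[i:i + 2]
        if pair == "FF":
            break
        codes.append(pair)
    return codes

# Value taken from the sample policy in this chapter:
print(parse_dli_status_codes("FDFWGAGBGDGEGKL2LBLSNIUCUSUXFFFF"))
```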
Quarantine information
Quarantine XML is only sent from the appliance when the quarantine is triggered by audited events that are sent to the appliance by the agent, and the quarantine is
deemed to be in effect. This causes AI status codes (error opening database) to be returned to the application program in the DLI Status code PCB field (DBPCBSTC), and
message AUIJ252W to appear in the IMS region or batch job.
Quarantine only works with full-function DLI calls because the AUI hook for Fast-Path occurs after the DLI call has completed. (The DLI call cannot be preempted.)
<install-info>
<artifacts>
<ims>
<name>IMSV14AH</name>
<agent>AUI15A</agent>
<description>IMS V14 Test IEACRX AUI10A27</description>
<version>0</version>
<plexname></plexname>
<recons>
<recon seq="1">IMSEA1.RECON1</recon>
<recon seq="2">IMSEA1.RECON2</recon>
<recon seq="3">IMSEA1.RECON3</recon>
</recons>
<reslibs>
<reslib seq="1">IMSEA1.SDFSRESL</reslib>
</reslibs>
<dbdlibs/>
<psblibs/>
<thresholds-050i>
<max-count>1K</max-count>
<max-time>0015</max-time>
</thresholds-050i>
<audit-levels>
<collector name="ims">
<audit-level>
<signon-signoff value="true"/>
<start-stop value="true"/>
<db-open-close value="true"/>
<dbd-psb value="true"/>
</audit-level>
</collector>
<collector name="smf">
<audit-level>
<read value="true"/>
<update value="true"/>
<delete value="true"/>
<create value="true"/>
<alter value="true"/>
<racf-violations value="true"/>
</audit-level>
</collector>
</audit-levels>
</ims>
</artifacts>
<policies>
<collection-profile>
<name>policy_IMSV14AH</name>
<description>---: Log Full Details With Values,Auv - Event All,IEA1_ALL_ST_AH</description>
<rules>
<rule>
<active>true</active>
<filters/>
<targets>
<segment-target>
<type>include</type>
<database-name>%</database-name>
<segment-name>%</segment-name>
<audit-get>true</audit-get>
<audit-insert>true</audit-insert>
<audit-update>true</audit-update>
<audit-delete>true</audit-delete>
<capture-before-image>false</capture-before-image>
<capture-segment-data>true</capture-segment-data>
</segment-target>
</targets>
<audit/>
<excluded-regions></excluded-regions>
</rule>
</rules>
</collection-profile>
</policies>
<collections>
<collection>
<ims>
<agent-name>AUI15A</agent-name>
<name>IMSV14AH</name>
</ims>
<collection-profile>
<name>policy_IMSV14AH</name>
</collection-profile>
<dli-status-codes>FDFWGAGBGDGEGKL2LBLSNIUCUSUXFFFF</dli-status-codes>
</collection>
</collections>
<quarantine-lists/>
</install-info>
Response
If this message occurs several times without successful policy XML echoes, check to see that any running user report program is using the data set correctly or
whether a TSO user might be editing the data set.
A dynamic allocation error occurred. Data set <LOCATION>, info code: <info-code>, error code: <error-code>.
Explanation
An attempt to dynamically allocate the data set failed. The specified information and error codes reflect the return and reason codes from the z/OS dynamic
allocation services.
Response
Use the info code and error code to determine the cause of the dynamic allocation failure by referring to the z/OS MVS Programming: Authorized Assembler Services
Guide in the IBM Knowledge Center. Correct the error and restart the agent.
Data set "<LOCATION>" could not be deleted, info code: <code>, error code <code>.
Explanation
An attempt was made to delete and reallocate a non-VSAM non-GDG data set in the catalog.
Response
Ensure that the agent task has RACF (or other security product) authority to delete a data set that contains a high-level qualifier. Attempt to correct the security
problem and restart the agent. If the error persists, contact IBM Support and provide the info and error codes.
Troubleshooting
Use the following topics to diagnose and correct problems that you experience with IBM Guardium S-TAP for IMS.
Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
This information documents the messages and error codes issued by Security Guardium S-TAP for IMS. Messages are presented in ascending alphabetical and numerical
order.
Note: To set a z/OS message alert for messages that begin with AUII, or messages AUIJ250I and AUIJ252W, use single-dash formatting between the message number
and message text. For all other messages, use a double-dash. For example:
AUIA003E
Address Space <name> failed to start successfully on <LPAR name>.
AUIA004E
Address Space <name> (<job number>) failed to stop successfully on <LPAR name> within the timeout period and was abandoned.
AUIA005I
Starting address space <name> on <LPAR name>.
AUIA006I
Address Space <name> (<job number>) is online on <LPAR name>.
AUIA007I
Stopping address space <name> (<job number>) on <LPAR name>.
AUIA008I
Address Space <name> (<job number>) on <LPAR name> is offline.
AUIA009E
Address space <name> is not active.
AUIA010E
Address Space <name> is already active.
AUIA021I
MODIFY command <command text> sent to Address Space <name>.
AUIA022I
<Collector name> collector is disabled: interval is set to <value>.
AUIA023I
<Collector name> collector is disabled: proc name for the collector address space has not been specified in the configuration.
AUIA024I
<Collector name> collector is disabled: not configured.
AUIA027E
Abend occurred while validating <log stream>. Abend code = <code>, RSN = <reason>.
AUIA028S
Agent agent-name on PLEX name for S-TAP version S-TAP version is already online. (ADS_SHM_ID=<Memory Segment ID>)
AUIA029I
collector collector is disabled: no Audit IMS Log Events are selected for IMS source IMS.
AUIA030I
collector collector started successfully.
AUIA031I
collector collector stopped successfully.
AUIA033I
(GDM) Attempting to establish link with the appliance.
AUIA034S
(GDM) An attempt to establish the link to the appliance failed.
AUIA035W
(GDM) Link failed over to a secondary appliance. [host=host, port=port]
AUIA036I
(GDM) Link to primary appliance established. [host=host, port=port]
AUIA037I
(GDM) Link to primary appliance restored. [host=host, port=port]
AUIA038S
(GDM) Link to the appliance lost.
AUIA041I
Guardium policy processing failed due to prior errors.
AUIA042W
The Guardium policy is not applicable.
AUIA043I
The Guardium policy reader thread started.
AUIA044I
The Guardium policy reader thread is terminating.
AUIA045I
The guardium policy reader thread is terminating due to prior errors.
AUIA048I
auiu_taskname is configured to start only on lpar-name.
AUIA049W
auiu_task_name is configured to not start on lpar_name but will be started on lpar_name because aui_agent_name runs on lpar_name.
AUIA050W
auiu_task_name is configured to not start on lpar_name but no such system exists.
AUIA051I
auiu_task_name is configured to not start on lpar_name and will not be started on lpar_name.
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
Explanation
An attempt by the agent to start the named support address space has failed.
User response
Check the named address space logs to identify why it was not able to start. In most cases, this occurs if an address space with that name is already online, there was a
JCL error, or there was an issue resolving the loopback address host name. If further assistance is required, contact IBM Software Support.
AUIA004E Address Space <name> (<job number>) failed to stop successfully on <LPAR name>
within the timeout period and was abandoned.
Explanation
The specified address space did not stop within the time out period and was consequently abandoned by the master address space.
User response
Check the named address space logs to identify why it did not stop. If further assistance is needed, contact IBM® Software Support.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The agent has automatically started the support address named.
User response
This is an informational message only.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The agent has successfully started the support address space named.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The named address space has successfully stopped.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The specified address space that the master address space was attempting to control is not online.
User response
Correct and retry.
Explanation
This message indicates that the address space with the specified name is already active when it was expected not to be. This message occurs when starting the BATCH (or SMF)
collector if it is already running.
User response
Verify that the address space is already running. If the address space is not online and the message occurs, contact IBM® Software Support.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The MODIFY command <command text> was sent to the named address space.
User response
No action is required.
Explanation
Named collector is disabled because the interval value is less than or equal to zero.
User response
If this was not intentional, fix the interval value and restart the agent address space.
AUIA023I <Collector name> collector is disabled: proc name for the collector address space
has not been specified in the configuration.
User response
To enable this collector, specify the procedure name for collector address space. If the procedure name is specified and this message still occurs, contact IBM® Software
Support.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The specified collector is disabled because it has not been configured.
User response
To enable this collector, configure it using the Guardium user interface. If the specified collector is configured and the message still occurs, contact IBM® Software
Support.
Parent topic: Error messages and codes: AUIAxxxx
AUIA027E Abend occurred while validating <log stream>. Abend code = <code>, RSN =
<reason>.
Explanation
The Log Stream log stream validation failed with abend code code and reason code reason.
User response
Contact IBM Software Support.
Parent topic: Error messages and codes: AUIAxxxx
AUIA028S Agent agent-name on PLEX name for S-TAP version S-TAP version is already online.
(ADS_SHM_ID=<Memory Segment ID>)
Explanation
The specified agent is already online. Agent names must be unique per sysplex.
User response
Change the agent-name and restart the agent, or shut down the other agent.
Parent topic: Error messages and codes: AUIAxxxx
AUIA029I collector collector is disabled: no Audit IMS Log Events are selected for IMS source
IMS.
Explanation
An Audit IMS Log Event must be selected for the IMS source IMS for the collector to be enabled.
User response
To enable the collector, select an Audit IMS Log Event for the IMS source.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The specified collector started.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The specified collector stopped.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The agent is attempting to establish a connection to one of the appliances specified in the agent configuration.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
User response
Contact your network administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The agent lost connection to the primary appliance and switched to the specified secondary appliance.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The agent has connected to the specified primary appliance.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The agent has reconnected to the specified primary appliance.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
System action
Any new policies defined in the appliance will not be pushed down to the IBM® Guardium® S-TAP® for IMS agent.
User response
Verify network connectivity to the appliance. Contact your network administrator or IBM Software Support.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The Guardium policies could not be processed.
User response
Check the log for previous errors.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
One or more of the policy rules cannot be used by the current agent.
User response
Check the log for previous errors to determine why the policy is not applicable and fix the policy definition.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The Guardium policy reader thread started.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The Guardium policy reader thread is stopping.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
AUIA045I The guardium policy reader thread is terminating due to prior errors.
Explanation
The policy reader thread is stopping due to previously reported errors.
User response
Check the previously issued messages to determine why the policy reader is terminating.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
System action
The AUIUSTC task is scheduled only on the home LPAR where the agent is running.
User response
To schedule the AUIUSTC task for another LPAR, remove or correct the AUIU_EXCLUDE_LPAR statement.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The AUIU_EXCLUDE_LPAR configuration parameter, found in the AUICONFG SAMPLIB member, was used in an attempt to prevent the AUIU task from executing on the
LPAR named.
System action
The request to exclude this LPAR from AUIU processing is ignored because the specified LPAR is also where the agent is executing.
User response
Remove the LPAR name from the AUICONFG samplib member’s AUIU_EXCLUDE_LPAR parameter. The change will be implemented at the next restart of the agent.
Parent topic: Error messages and codes: AUIAxxxx
AUIA050W auiu_task_name is configured to not start on lpar_name but no such system exists.
Explanation
The specified lpar_name has been included as part of the LPARS that are specified in the AUIU_EXCLUDE_LPAR configuration keyword. The specified lpar_name was not
found in the list of members of either the SYSJES or lpar_name XCF groups.
System action
Processing continues.
User response
This message might indicate that the lpar_name is not available or that there is an error in the specified lpar_name.
Parent topic: Error messages and codes: AUIAxxxx
AUIA051I auiu_task_name is configured to not start on lpar_name and will not be started on
lpar_name.
Explanation
The AUIU_EXCLUDE_LPAR configuration parameter, found in the AUICONFG SAMPLIB member, was used in an attempt to prevent the AUIU task from executing on the
specified LPAR.
System action
An instance of the AUIU task is not routed to the excluded LPAR.
User response
None.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
This LPAR name was found as a member of the XCF group when performing a z/OS IXCQUERY on the PLEXNAME of SYSJES XCF GROUPS.
System action
Processing continues.
Explanation
This message indicates that a command such as /f AUIASTC,SET CONFIG <option> ON/OFF was processed successfully.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
This message indicates that a command such as /f AUIASTC,GET CONFIG <option> was processed successfully.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
AUIA055I The agent is waiting for start-up information from the appliance.
Explanation
The agent has determined that there is no checkpoint information available for this agent in E/CSA, and is awaiting this data to be sent from the appliance.
System action
The agent waits up to 30 seconds for the checkpoint information, and if none is received, processing continues by using default checkpoint values, such as current blocks
from the z/OS log-streams, and SMF and SLDS data sets that were created no earlier than the previous day.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The agent is starting the auditing threads.
System action
The agent starts the DLIO/DLIB/AUIL/AUIF auditing threads.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
A command, such as /f AUIASTC,STATUS, has been issued for processing.
User response
No action is required.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
If the XML_ECHO_AUILOG(Y) keyword exists in the AUICONFG, this message will be followed by the echo of all active XML policies on the AUILOG.
System action
As an example, the first three lines of the echo appear as follows:
User response
For more information, see Echoed XML statement definitions.
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The XML of the policy that was installed from the Security Guardium® system was not echoed to the specified location due to the specified message. If the
&Data_Set_Name parameter contains z/OS system variables, <LOCATION> reflects the data set name after symbol substitution has been done.
The installed policy has not been changed. The echo is skipped if the newly installed policy has not changed since it was last installed.
The data set location is not valid. Incorrect use of a system symbol in the &Data_Set_Name parameter can invalidate the location. Additional requirements:
The data set name must not exceed 44 characters.
The segment length must be greater than zero and less than or equal to 8.
The first character in each segment must be a letter (A - Z), #, @, or $; a hyphen can appear only after the first character.
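As a minimal sketch of the naming rules listed above, the following checker assumes each dot-separated segment starts with a letter or a national character (#, @, $) and that later positions may also use digits and hyphens. It is illustrative only, not an IBM utility:

```python
import re

# One segment: letter or national character first, then up to seven
# more characters drawn from letters, digits, #, @, $, and hyphen.
SEGMENT = re.compile(r"[A-Z#@$][A-Z0-9#@$-]{0,7}$")

def valid_dataset_name(name: str) -> bool:
    """True if name is at most 44 characters and every dot-separated
    segment matches the segment rule above."""
    if not name or len(name) > 44:
        return False
    return all(SEGMENT.match(seg) for seg in name.upper().split("."))

print(valid_dataset_name("IMSEA1.RECON1"))   # True
print(valid_dataset_name("1BAD.QUALIFIER"))  # False
```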
System action
Processing continues.
User response
Correct the &Data_Set_Name parameter and restart the agent. If the error persists, see Additional causes of AUIA060W .
Parent topic: Error messages and codes: AUIAxxxx
Explanation
The agent has completed the XML echo of all active policies that were installed from the Security Guardium® system. <LOCATION> is the data set name specified by the
&Data_Set_name parameter of the XML_ECHO_DATASET keyword.
System action
The data set name reflects the z/OS system variable substitution and the Generation Data Group extension if either exists in the &Data_Set_name parameter.
User response
No action is required.
Parent topic: Error messages and codes: AUIAxxxx
AUIB300I
CONNECTION TO z/OS® SYSTEM type LOG STREAM WAS SUCCESSFUL - LOG STREAM NAME: log_stream_name, LOG STREAM TYPE: XCF-BASED|DASD_ONLY,
CHECKPOINT VALUE: check_point_value, CHECKPOINT PTR: address_of_checkpoint
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
AUIB300I CONNECTION TO z/OS® SYSTEM type LOG STREAM WAS SUCCESSFUL - LOG
STREAM NAME: log_stream_name, LOG STREAM TYPE: XCF-BASED|DASD_ONLY, CHECKPOINT
VALUE: check_point_value, CHECKPOINT PTR: address_of_checkpoint
Explanation
The connection to the log-stream name (log_stream_name) configured to process log_stream_type events completed successfully.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIBxxxx
AUIB302I DRAIN REQUEST FOR type LOG STREAM HAS COMPLETED. LOG STREAM: name.
Explanation
A DRAIN request, which reads all data from the z/OS® log stream, has completed.
System action
The AUIASTC tasks prepare to terminate.
User response
No action is required.
Parent topic: Error messages and codes: AUIBxxxx
Explanation
A DRAIN request, used to read and flush all existing events from the indicated log stream, has completed successfully.
System action
The log-stream reader thread will start the termination phase.
User response
No action is required.
Parent topic: Error messages and codes: AUIBxxxx
AUIB306E INVALID RECORD FOUND IN log-stream LOG STREAM -RECORD IMAGE SNAPPED
TO AUI$NAP DD
Explanation
When reading DLI call audit records from the z/OS System log stream, a malformed audit record was encountered or the version of the audit record was not recognized.
System action
Processing continues after writing a SNAP/DUMP of the offending record to the AUI$NAP DD.
User response
Provide the AUI$NAP output to IBM® Software Support.
Parent topic: Error messages and codes: AUIBxxxx
Explanation
This message provides the highest block ID for the log stream. This is used as the starting checkpoint for processing data from this log stream.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIBxxxx
AUIF002I
SMF log reader interval set to <n> minutes.
AUIF003E
Command <command> failed; interval value must be between <lower-bound> and <upper-bound>.
AUIF501I
NO NEW CATALOGED SMF DATA SETS FOUND FOR SMF MASK: smf_mask_value
AUIF502I
PROCESSING SMF DATA SET: smf_data_set_name
AUIF503I
PROCESSING COMPLETE FOR SMF DATA SET: smf_data_set_name
AUIF505I
SMF AUDITING IS DISABLED AT THE AGENT LEVEL
AUIF506I
SMF AUDITING IS DISABLED AT THE IMS LEVEL. IMS NAME: ims_name
AUIF507E
PROCESSING FAILED FOR SMF DATA SET: data set name
AUIF508I
SCANNING RECON DATA SETS FOR IMS ARTIFACT DATA SETS. RECON1: recon1_dsn RECON2: recon2_dsn RECON3: recon3_dsn
AUIF702I
SMF MASK CHECKPOINT INFORMATION - MASK VALUE : SMF_mask - LAST DSN READ: SMF_dsn - LAST UPDATED (UTC): date_time
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
Explanation
The subtask that reads event data from SMF log data sets is scheduled to run every <n> minutes.
User response
No action is required.
Parent topic: Error messages and codes: AUIFxxxx
AUIF003E Command <command> failed; interval value must be between <lower-bound> and
<upper-bound>.
Explanation
This message indicates that the specified <command> failed because of an incorrect interval value. The correct value must be between <lower-bound> and <upper-bound>.
User response
Use an interval value between <lower-bound> and <upper-bound>. If that does not resolve the issue, contact IBM® Software Support.
Parent topic: Error messages and codes: AUIFxxxx
Explanation
While scanning the z/OS® catalog for new data sets that match the indicated SMF mask value (smf_mask_value) and have not yet been processed by the product, no matching z/OS data sets were found.
System action
The process continues by examining other SMF mask values.
User response
No action is required.
Parent topic: Error messages and codes: AUIFxxxx
Explanation
Processing has started for an SMF data set.
System action
Events will be obtained from the SMF data set based on collection profile criteria.
User response
No action is required.
Parent topic: Error messages and codes: AUIFxxxx
Explanation
Processing of the SMF data set has completed.
System action
Processing continues with other candidate SMF data sets.
User response
No action is required.
Parent topic: Error messages and codes: AUIFxxxx
Explanation
Auditing of SMF events has been disabled at the agent level, as instructed by the settings chosen in the Guardium user interface.
System action
The auditing of events sourced from SMF data sets is not performed.
User response
No action is required.
Parent topic: Error messages and codes: AUIFxxxx
AUIF506I SMF AUDITING IS DISABLED AT THE IMS LEVEL. IMS NAME: ims_name
Explanation
Auditing of SMF events has been disabled at the IMS level for the IMS named (ims_name) by use of the Guardium interface and the IMS Auditing Levels editor.
System action
The auditing of events sourced from SMF for the named IMS is not performed.
User response
No action is required.
Parent topic: Error messages and codes: AUIFxxxx
AUIF507E PROCESSING FAILED FOR SMF DATA SET: data set name
Explanation
Processing failed during the reading of the data set, specified by name in the message text.
System action
The collection process terminates.
User response
Determine the cause of the failure by reviewing previously issued S-TAP and z/OS messages, and correct it.
Parent topic: Error messages and codes: AUIFxxxx
AUIF508I SCANNING RECON DATA SETS FOR IMS ARTIFACT DATA SETS. RECON1: recon1_dsn
RECON2: recon2_dsn RECON3: recon3_dsn
Explanation
The AUIFSTC task has started to scan the RECON data sets, looking for database data sets, image copy data sets, and optionally IMS SLDS to be audited using SMF records.
System action
The RECON data sets are read using the specified DSN.
User response
No action is required.
Parent topic: Error messages and codes: AUIFxxxx
AUIF702I SMF MASK CHECKPOINT INFORMATION - MASK VALUE : SMF_mask - LAST DSN
READ: SMF_dsn - LAST UPDATED (UTC): date_time
Explanation
This message provides the SMF data set mask (SMF_mask) and the last SMF data set read (SMF_dsn) that matched that mask. This information is used as a checkpoint to
indicate which SMF data sets have already been processed and should not be re-read by the AUIFSTC tasks.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIFxxxx
AUIG001S
An unexpected error occurred (/path/to/file.c, linenum).
AUIG002S
An unexpected error occurred with token "token1" (/path/to/file.c,linenum).
AUIG003S
An unexpected error occurred with tokens "token1" and "token2" (/path/to/file.c,linenum).
AUIG004S
An unexpected error occurred with tokens "token1", "token2", "token3", and "token4" (/path/to/file.c,linenum).
AUIG005S
An unexpected error occurred with tokens "token1", "token2", and "token3" (/path/to/file.c,linenum).
AUIG006S
An unexpected error occurred with tokens "token1" and "token2" (/path/to/file.c,linenum).
AUIG014E
dataspace create return code = return-code-hex, reason = reason-code-hex
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
Explanation
An unknown and unexpected internal error occurred in the product at the specified source location.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
An unknown and unexpected internal error occurred in the product due to the specified tokens.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
An unknown and unexpected internal error occurred in the product due to the specified tokens.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIGxxxx
AUIG004S An unexpected error occurred with tokens "token1", "token2", "token3", and
"token4" (/path/to/file.c,linenum).
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIGxxxx
AUIG005S An unexpected error occurred with tokens "token1", "token2", and "token3"
(/path/to/file.c,linenum).
Explanation
An unknown and unexpected internal error occurred in the product due to the specified tokens.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
An unknown and unexpected internal error occurred in the product due to the specified tokens.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
An attempt to create a data space for spill usage has failed. Spill capability might not be available.
User response
Examine the return code and reason code, and take appropriate action to ensure that data spaces can be created.
Parent topic: Error messages and codes: AUIGxxxx
AUIG015W MALLOC: big alloc coming memory_size from GDM Read Buffer
Explanation
More than 10,485,760 bytes of memory was required to process collection policies pushed from the Security Guardium® system.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
A memory allocation of zero bytes was requested while processing collection policies pushed from the Security Guardium® system.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
A negative number of bytes was requested while processing collection policies pushed from the Security Guardium® system.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIGxxxx
AUIG018S MALLOC failed, got NULL for size <memory_size> at site <site>.
Explanation
An attempt to allocate memory failed.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIGxxxx
AUIG045E Write failed, sd=bbbb desired write len length buffer at address, ret code xxxx
reason 0xyyyyzzzz
Explanation
An attempt to read or write to a socket has failed. This error might occur if Security Guardium® S-TAP® for IMS is connected to a peer that is offline.
System action
The system attempts to reestablish the connection to the peer in order to read or write the data.
User response
Identify the cause of the failure by using the z/OS® UNIX System Services Messages and Codes manual (SA23-2284-xx) to look up the return and reason codes that are provided in the message text, where bbbb is an internal code, xxxx is the return code, and yyyyzzzz is the reason code. Use the zzzz value to determine the error code, as described in the Reason codes (errnojrs) section of the z/OS UNIX System Services Messages and Codes manual.
Parent topic: Error messages and codes: AUIGxxxx
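The reason-code decoding described above can be sketched as follows. This is an illustrative helper, not a product utility: the yyyyzzzz reason code splits into a high half (yyyy, a qualifier) and a low half (zzzz), and the zzzz half is what you look up in the Reason codes (errnojrs) section.

```python
def split_uss_reason(reason: int) -> tuple:
    """Split a z/OS UNIX System Services reason code (yyyyzzzz) into its
    high half (yyyy) and low half (zzzz); look up zzzz as the error code."""
    return (reason >> 16) & 0xFFFF, reason & 0xFFFF
```

For example, `split_uss_reason(0x12345678)` yields `(0x1234, 0x5678)`, so `0x5678` is the value to look up.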
AUIG046E Failure to resolve address for host 'HOST', ret code return-code, reason hex-value.
Explanation
An attempt to resolve the given hostname failed.
User response
Verify that the hostname is specified correctly and is resolvable. If the hostname is correct and resolvable, contact IBM® Software Support.
Parent topic: Error messages and codes: AUIGxxxx
AUIG047E Set sockopt failed, level = hex-value, option = hex-value, ret code return-code,
reason hex-value.
Explanation
An attempt to set a socket option failed.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
The system BPXFCT call failed while attempting to set socket blocking mode.
User response
See the MVS Programming: Authorized Assembler Services Guide for more information about the specified information and error codes.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
An attempt to read or write to a socket has failed. This error might occur if Security Guardium® S-TAP® for IMS is connected to a peer that is offline.
System action
The system attempts to reestablish the connection to the peer in order to read or write the data.
User response
Identify the cause of the failure by using the z/OS USS Return Codes and Reason Codes to look up the return and reason codes that are provided in the message text,
where xxxx is the return code and zzzzzzzz is the reason code.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
TCP/IP processing has been disabled.
System action
The Guardium appliance will not receive data.
User response
No action is required.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
The TCP/IP buffer has been disabled.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
An unexpected string of data was received by the Security Guardium® S-TAP® for IMS agent from the Guardium appliance or associated firewall. The string does not
conform to the format that is normally associated with a pushed-down policy or other expected data.
System action
User response
If this message appears occasionally, no action is required. If this message appears frequently, contact IBM Support to diagnose whether a problem exists with the
Guardium appliance or firewall.
Parent topic: Error messages and codes: AUIGxxxx
AUIGF120I Trace Settings: Compilation 0, Requested Runtime 0, ECSA Flag 32, Actual
Runtime 0...
Explanation
This message is produced during the compilation of a filter, using the policy information that was specified.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
The collection profile compilation process found that the collection profile criteria will allow for Stage 0 filtering of IMS DLI events based on USERIDs or PSB names.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIGxxxx
Explanation
The collection profile compilation process found that the collection profile criteria are not conducive to providing Stage 0 filtering for IMS DLI events; for example, the USERID and PSB specifications may differ between rules.
System action
Processing continues without Stage Zero filtering capability.
User response
If Stage 0 filtering is desired, adjust the USERID and PSB specifications in each rule to be the same.
Parent topic: Error messages and codes: AUIGxxxx
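Based on the message descriptions above, Stage 0 eligibility can be sketched as follows. The rule representation here is hypothetical (the product's internal form is not documented in these messages); the point is only that every rule must carry the same USERID and PSB specifications.

```python
def stage0_eligible(rules: list) -> bool:
    """Stage 0 filtering is possible only when every rule carries the same
    USERID and PSB specifications (per the message descriptions above)."""
    specs = {(rule.get("userid"), rule.get("psb")) for rule in rules}
    return len(specs) == 1
```

For example, two rules with identical USERID and PSB specifications are Stage 0 eligible, while rules that differ in either field are not.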
Note: To set a z/OS message alert for messages that begin with AUII, use single-dash formatting between the message number and message text. For example:
AUII056I
- ZIIP PROCESSING ENABLED FOR IMS STAP
AUII017I
S-TAP® for V10.1.3 initialization complete using RECON1 DSN: recon1_dsn
AUII018E
IBM® Security Guardium® S-TAP for IMS on z/OS® initialization failed
AUII019E
IBM Security Guardium S-TAP for IMS on z/OS termination failed
AUII020E
UNABLE TO FIND RECON1 DATA SET NAME
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
AUII017I S-TAP® for V10.1.3 initialization complete using RECON1 DSN: recon1_dsn
Explanation
IBM® Guardium® S-TAP for IMS has initialized in the DLI/DBB batch job or IMS control region environment. For successful auditing to occur, the RECON1 DSN indicated
in this message should match the RECON1 DSN associated with the IMS definition you have created.
AUII018E IBM® Security Guardium® S-TAP® for IMS on z/OS® initialization failed
Explanation
IBM Guardium S-TAP for IMS was unable to initialize in this IMS Control region. The monitoring of IMS databases will not occur.
System action
IMS processing continues without auditing capabilities.
User response
Examine the JES log for other messages to determine the reason for the initialization failure.
Parent topic: Error messages and codes: AUIIxxxx
AUII019E IBM® Security Guardium® S-TAP® for IMS on z/OS® termination failed
Explanation
IBM Guardium S-TAP for IMS was unable to terminate cleanly.
System action
The termination of the IMS online region of DLI/DBB batch job step continues.
User response
This error indicates that an environmental error has occurred. Examine the JES log for other AUI messages to determine the reason for the termination failure.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
An attempt to find the RECON1 data set name used by the IMS Online control region or DLI/DBB batch job step has failed. The RECON1 data set name is critical to the
determination of the collection profile used to audit IMS events.
System action
IMS processing continues without the IMS auditing feature.
User response
Determine why the RECON1 data set name is not available for this IMS control region or DLI/DBB batch job step. An in-stream RECON1 DD statement must be present in
the JCL, or a RECON1 MDALIB member being present in the JOB/STEPLIB DD concatenation is required.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
An attempt to find a required processing module (module_name) has failed.
System action
IMS processing continues without auditing.
User response
Examine the STEPLIB/JOBLIB DD concatenation to ensure the SAUIIMOD product data set is included.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
System action
IMS processing continues without IMS auditing available.
User response
Increase the region size used by the job step (REGION=).
Parent topic: Error messages and codes: AUIIxxxx
Explanation
The DIRLOAD IMS service has failed.
System action
IMS processing continues without auditing.
User response
Determine the cause of the error from the IMS Messages and Codes manual and correct the error. If necessary, contact IBM® Software Support.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
An attempt to locate the IMS SCD during product initialization has failed.
System action
IMS processing continues without auditing.
User response
Verify that you are attempting to run the product using a supported IMS release. Contact IBM® Software Support for further assistance.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
An attempt to locate the IMS SSCD Extension address has failed.
System action
IMS processing continues without auditing.
User response
Verify that you are attempting to run the product using a supported IMS release. Contact IBM Software Support for further assistance.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
The IMS SCCT address cannot be located by the IMS S-TAP initialization process.
System action
IMS processing continues without auditing capabilities.
User response
Contact IBM Software Support.
Parent topic: Error messages and codes: AUIIxxxx
System action
IMS processing continues without auditing.
User response
Investigate E/CSA usage on the LPAR.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
An attempt to LOAD module module_name using IMS services has failed.
System action
IMS processing continues without auditing.
User response
Verify that the SAUIIMOD product data set is available in the STEPLIB/JOBLIB data set concatenation. Contact IBM® Software Support for further assistance.
Parent topic: Error messages and codes: AUIIxxxx
System action
IMS processing continues without auditing.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
Security Guardium® S-TAP® for IMS initialization found a logic error.
System action
IMS processing continues without auditing.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
A call to the DFSCIR IMS service to create an ITASK has failed.
System action
IMS processing continues without auditing.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
An attempt to LOAD IMS module DFSISSI0 has failed.
System action
IMS processing continues with auditing. The product will be unable to determine the correct USERID for events driven from ODBA threads.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
An attempt to locate a hook point in the indicated module (module_name) has failed.
System action
IMS processing continues with auditing. The product will be unable to determine the correct USERID for events driven from ODBA threads. An output DD: AUI$NAP is
dynamically allocated to SYSOUT, and the area where the hook point was to be located is snapped out to this AUI$NAP DD.
User response
Provide the AUI$NAP output to IBM Software Support.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
The AUIZIIP DD statement has been found in the IMS Control Region JCL, which indicates that the zIIP processor should be considered for use when filtering DLI calls
and writing to the z/OS® System Logger. IMS STAP has determined that zIIP processing is not available on this LPAR.
System action
Processing continues exclusively using general processors.
User response
Remove the AUIZIIP DD statement and restart the IMS sub-system.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
A request to process DLI call filtering and z/OS® System Logger writes on a zIIP processor has been rejected as the IMS sub-system is not connected to the z/OS
Workload Manager.
System action
Processing continues exclusively using general processors.
User response
No action is required.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
A request to process DLI call filtering and z/OS® System Logger writes on a zIIP processor has been rejected.
System action
Processing continues exclusively using general processors.
Explanation
An attempt to drive the z/OS® name/token service has failed.
System action
IMS processing continues without auditing.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
An attempt to insert product code in the DEDB call analysis area has failed.
System action
IMS processing continues with DEDB event auditing disabled. An output DD: AUI$NAP is dynamically allocated to SYSOUT, and the area where the code insertion was to be located
is snapped out to this AUI$NAP DD.
User response
Provide the AUI$NAP output to IBM® Software Support.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
This message provides statistics regarding the number of DLI events that have been processed. This message is issued when:
The number of DLI calls specified in the message frequency section of the Guardium client's IMS Data Set definition screen has been reached.
The time specified in the AUII050I message frequency section of the Guardium client's IMS Data Set definition screen has elapsed.
The collection profile for the IMS is made inactive.
The DLI/DBB batch job or IMS Online Control Region terminates.
DLI PATH calls which affect multiple segments within a hierarchical path are treated and counted as individual DLI calls. DLI call types which are not included in any RULE of the active collection profile are not counted, as they are immediately rejected. Events are also excluded from the count if they:
Could not be placed into a log-stream data buffer (indicated by the issuance of message AUIJ307A).
Were already in the data buffer but could not be written to the z/OS System Logger log stream using the IXGWRITE call, and the collection profile for the IMS has been deactivated or the DLI/DBB batch job or IMS Online Control region has been terminated (indicated by the issuance of message AUIJ304E).
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
These messages are issued by the IMS S-TAP® code in the IMS Control region during startup to broadcast the maintenance level of the programs that are in use by
Security Guardium® S-TAP for IMS.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
The AUIZIIP DD statement has been found in the IMS Control Region JCL, which indicates that the zIIP processor should be considered for use when filtering DLI calls
and writing to the z/OS® System Logger.
System action
IMS STAP attempts to create an environment to support zIIP processing.
User response
If this was not intended, remove the AUIZIIP DD statement and restart the IMS sub-system.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
The request for zIIP support for IMS STAP and this IMS Control Region has been acted on and all initialization processes have completed successfully.
System action
IMS STAP will schedule DLI call filtering and writes to the z/OS® System Logger as a zIIP eligible enclave SRB.
User response
If this was not intended, remove the AUIZIIP DD statement and restart the IMS sub-system.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
The AUIZIIP DD statement has been found in the IMS Control Region JCL, which indicates that the zIIP processor should be considered for use when filtering DLI calls
and writing to the z/OS® System Logger. A process (process_type) used to enable zIIP processing has failed.
System action
The request to enable zIIP processing is rejected and general processor will be used.
User response
Review IBM® supplied documentation for the process which failed using the return and reason codes (return_code/reason_code) to determine the cause of the failure.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
This warning message indicates that IBM® Guardium® S-TAP® for IMS has detected a dependent region that has been waiting for an event to be audited for at least 15
seconds. The dependent region is identified by the PST address xxxxxxxx. The PST# value specified as yyyy is the region number in hexadecimal format.
System action
IBM Guardium S-TAP for IMS attempts to process the dependent region.
User response
If the dependent region continues processing, then no action is required. If the dependent region remains in a wait state, then it must be stopped or cancelled. Before you
stop or cancel the dependent region, take an SVC dump of the IMS Control region and provide it to IBM Software Support for analysis.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
This message is a response to message AUII060W (Potential Waited PST xxxxxxxx (PST#= zzzz)). This message indicates that the corresponding IPOST was performed,
and the PST is no longer in a WAIT state.
System action
IMS Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
Initialization has completed successfully for Security Guardium® S-TAP® for IMS, but no collections were found that pertain to this batch job or IMS control region.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIIxxxx
AUII172I AUIprogram LOADED EXIT imsexit FROM DATA SET: data set name
Explanation
The AUIprogram named found an occurrence of the imsexit later in the JOBLIB/STEPLIB concatenation and has loaded it.
System action
The imsexit will be invoked with R13 pointing to the save area originally provided by IMS, as well as its own 512 byte work area, provided in the SXPLAWRK field of the IMS
Standard User Exit Parameter list, immediately following each execution of AUIprogram.
User response
For the imsexit to run, no action is required. If the imsexit should not be run in this environment, remove the data set from the JOBLIB/STEPLIB concatenation and restart
the IMS control region or batch job.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
The IMS release being used is not supported by this version of the product.
System action
IMS processing continues without auditing.
User response
Review supported IMS releases for the release of this product.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
The LOAD of the service module module_name failed with return code return_code.
User response
Ensure that the SAUIIMOD product data set is included in the STEPLIB/JOBLIB DD concatenation.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
The exit_name indicated returned a non-zero return code value of return_code as specified.
System action
The return code value is returned to IMS.
User response
Correct the exit_name program if the non-zero value was returned in error. Review the IMS Customization Guide or IMS Exit Routine Reference for more information.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
The service_type invoked by the specified module_name has failed.
System action
IMS processing continues without auditing.
User response
Review all subsequent AUI error messages to diagnose the problem.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
Program module_name has the RENT/REUS attribute set in a non-APF-authorized environment. Security Guardium® S-TAP® for IMS is unable to load the program.
System action
Processing continues with the exit cascading feature disabled.
User response
Re-link the exit with the NOREUSE attribute.
Parent topic: Error messages and codes: AUIIxxxx
Explanation
This message is issued in conjunction with a previous message (for example, AUII176E) to indicate an associated data set.
User response
Check the log for the previously issued, associated message and take the action that is advised in that message.
Parent topic: Error messages and codes: AUIIxxxx
AUIJ005W
UNABLE TO LOAD MESSAGE TABLE table_name RSN: reason_code WILL USE AUIMGENU
AUIJ006E
LOAD FAILED FOR MESSAGE TABLE table_name RSN: reason_code
AUIJ007E
PROGRAM program_name IS NOT EXECUTING APF-AUTHORIZED
AUIJ008I
ATTEMPTING TO CONNECT TO THE GUARDIUM S-TAP® APPLIANCE. TCP/IP Address: ip_address, PORT: port_number, PING RATE: ping_rate
AUIJ009E
LOAD FAILED FOR MODULE module_name. R1: abend_code R15: reason_code
AUIJ010I
IMS STAP ver HAS STARTED.
AUIJ011I
function_type CALL TO GUARDUIM S-TAP APPLIANCE SUCCESSFUL
AUIJ012I
NUMBER OF event_type EVENTS SENT TO APPLIANCE: counter
AUIJ013E
stap_call TO GUARDUIM S-TAP APPLIANCE FAILED (call source) IP ADDRESS: ip_address STAP_RC = rc1 STAP_RS = rs1 GDM_RC = rc2 PB_RC = rc3 GDML_RC = rc4
GDML_RS = rs2
AUIJ014E
OPEN FAILED FOR DD dd_name
AUIJ015E
THIS IMS RELEASE IS NOT SUPPORTED. IMS NAME: ims-name, VRL: ims_version
AUIJ016E
UNABLE TO INITIALIZE APPLIANCE INTERFACE (connection_type)
AUIJ017I
PRIMARY STAP CONNECTION RESTORED (connection_type) - SUCCESSFULLY CONNECTED TO IP ADDRESS: ip_address - PORT : port
AUIJ018W
PREVIOUS STAP CONNECTION FAILED (connection_type) - SUCCESSFULLY CONNECTED TO IP ADDRESS: ip_address - PORT : port
AUIJ019E
STAP CONNECTION FAILED: NO CONNECTIONS AVAILABLE (connection_type) - IP ADDRESS: ip-address - PORT : port
AUIJ020I
All EVENTS HAVE BEEN WRITTEN FROM SPILL AREA TO APPLIANCE (connection_type)
AUIJ021W
EVENTS ARE BEING WRITTEN TO THE SPILL AREA (connection_type)
AUIJ022W
SPILL AREA IS FULL: EVENT DATA IS BEING LOST (connection_type)
AUIJ023E
SPILL AREA IS NOT AVAILABLE (connection_type)
AUIJ024W
NUMBER OF type EVENTS LOST count
AUIJ042W
ZIIP PROCESSING NOT AVAILABLE ON THIS LPAR (type)
AUIJ044W
ZIIP PROCESSING REQUEST HAS BEEN REJECTED (connection_type)
AUIJ055I
ZIIP PROCESSING REQUESTED FOR type PROCESSING
AUIJ056I
ZIIP PROCESSING ENABLED FOR type PROCESSING, ENCLAVE TOKEN: value
AUIJ057W
ZIIP PROCESSING FOR type EVENTS HAS BEEN DISABLED DUE TO ERRORS - PROCESSING WILL CONTINUE USING GCPU
AUIJ058W
ZIIP PROCESSING FOR type EVENTS HAS BEEN DISABLED - TRACING IS ENABLED BY THE USE OF THE AUI$NAP JCL STATEMENT
AUIJ201E
VSAM ERROR ENCOUNTERED
AUIJ202E
VSAM ERROR ENCOUNTERED
AUIJ203E
VSAM ERROR ENCOUNTERED
AUIJ250I
AUDITING IMS EVENTS. COLLECTION PROFILE NAME: collection_profile_name IMS NAME: ims_name AGENT NAME: agent name EXCLUDED REGIONS:
region_types
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
Explanation
An attempt to perform a z/OS® LOAD of the message table named (table_name) failed. The reason for the failure is described in the reason code field (reason_code). The
default U.S. English message table will be used. This message follows the AUIJ006E message.
System action
Processing continues while using the U.S. English message table.
User response
Determine and correct the cause of the message table load failure.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
A z/OS® LOAD attempt failed for the message table (table_name) indicated.
System action
If the table name is the U.S. English message table (AUIMGENU), processing terminates. For other table names, the product issues message AUIJ005W, attempts to use the U.S. English message table, and continues processing.
User response
Determine and correct the cause of the message table load failure.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
The program specified requires APF-Authorization to perform its function.
System action
The program terminates.
User response
Ensure that all data sets included within the STEPLIB DD concatenation of the JCL where this message appeared are APF authorized.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
An attempt is being made to establish a connection with the Guardium® S-TAP appliance using the named TCP/IP address (ip_address) and PORT number (port_number).
PING RATE (ping_rate) indicates how often a message is sent to the appliance to provide the appliance with confirmation that the connection is active. The PINGS are sent
at the rate indicated (ping_rate) which is shown in hour, minutes, and second (hh:mm:ss) format.
System action
The connection to the Guardium S-TAP appliance is attempted.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ009E LOAD FAILED FOR MODULE module_name. R1: abend_code R15: reason_code
Explanation
System action
The function terminates.
User response
Ensure that all required product data sets are included in the STEPLIB DD concatenation of the JCL where this message appeared. The value in R1 (abend-code) indicates
the ABEND code that would have occurred if the failure had not been trapped by the product. The value in R15 (reason_code) indicates the reason code associated with
the abend. Documentation regarding the abend codes and possible resolutions can be found in the IBM® z/OS MVS™ System Codes manual or equivalent.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
The Security Guardium® S-TAP® for IMS agent component, using the specified base code level, has started.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
The function request (function_type) to the Guardium® S-TAP® appliance completed successfully. This message usually follows the AUIJ008I message indicating that
the connection request has been initiated.
INIT-DLIB
Connection request from the task that transmits DLI/DBB batch events.
INIT-DLIO
Connection request from the task which transmits IMS Online DLI events.
INIT_LOG
Connection request from the task which transmits IMS Archive log events.
INIT-SMF
Connection request from the task which transmits SMF events.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
By default, this message is issued every 100,000 events sent to the appliance or approximately every 18 minutes. You can modify this frequency by using the agent
parameter keyword DLIFREQ. This message provides a status of data being collected and sent to the Guardium® S-TAP® appliance. The count provided (counter) is the
number of events since the last message was issued. The type of events (event_type) can include DLIB (events captured from IMS DLI/DBB batch jobs), DLIO (events
captured from IMS Online regions), SMF (events captured from SMF auditing), IMSL (events captured from IMS archive log processing), and MLOG (missing IMS logs found
during IMS Archive log processing).
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
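The reporting frequency described above is controlled by the DLIFREQ agent parameter named in this message's description. The fragment below is a hypothetical sketch; confirm the exact value syntax in the SAUISAMP AUICONFG member.

```
* Issue the event-count status message every 50,000 events
* (value illustrative; the default behavior is every 100,000 events)
DLIFREQ(50000)
```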
Explanation
The requested call (call_type) to the Guardium® S-TAP appliance has failed. A non-zero value in the GDM_RC field indicates an error.
System action
The process terminates.
User response
Determine the cause of the failure by checking the return and reason code.
If GDM_RC is not zero, one or more of the PB_RC, GDML_RC and GDML_RS will be set.
If STAP_RC and STAP_RS are zero but GCM_RC or PB_RC is not zero, an internal error is indicated. Contact IBM® Software Support.
If STAP_RC and STAP_RS are not zero, contact IBM Software Support.
Explanation
A z/OS® OPEN of the data set(s) referenced by the DD named (dd_name) failed.
System action
Processing terminates.
User response
Examine the JES log for z/OS IEA messages issued regarding this DD statement, and take appropriate action.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ015E THIS IMS RELEASE IS NOT SUPPORTED. IMS NAME: ims-name, VRL: ims_version
Explanation
The IMS named (ims-name) was found to be of a release which is not supported by this version of the product.
System action
Processing terminates.
User response
Review the software requirements documented in this user's guide for a list of IMS releases that are supported by this version of the product.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
An attempt to establish a connection with the appliance has failed.
System action
Processing terminates.
User response
This error usually occurs because the TCP/IP address that is specified in the <appliance-server> parameter of the AUICONFG member (or other member that is referenced by
the AUICONFG DD statement to provide the agent with configuration information) is incorrect. This error can also occur if the target of the TCP/IP address is unresponsive.
Parent topic: Error messages and codes: AUIJxxxx
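This failure usually traces back to the appliance address in the configuration member that is read through the AUICONFG DD. The fragment below is a hypothetical sketch: the keyword spelling is assumed from the <appliance-server> parameter named above, and the value is illustrative; confirm the exact syntax in the SAUISAMP AUICONFG member.

```
* Guardium appliance address (illustrative); must be reachable from this LPAR
APPLIANCE_SERVER(10.1.2.3)
```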
System action
Processing continues sending data to the primary appliance.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
Multiple appliances are defined to the IMS S-TAP, and the connection to the active appliance has failed. This message indicates that another (secondary) appliance (ip_address +
port) is now active.
System action
Processing continues sending data to the secondary appliance.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
The connection to the active appliance (ip_address + port) has failed and there are no secondary appliances available for use.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ020I ALL EVENTS HAVE BEEN WRITTEN FROM SPILL AREA TO APPLIANCE (connection_type)
Explanation
All audited events that were buffered to the spill area have been sent to the appliance.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
A connection to the appliance has been interrupted, and the spill area is being used to buffer audited events until the appliance connection can be reestablished.
System action
Processing continues. Audited events are buffered in the spill area.
User response
Investigate the cause of the appliance connection interruption and correct.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
System action
Processing continues. Audited events are discarded.
User response
Investigate the cause of the appliance connection interruption and correct. Look for message AUIJ024W, which is issued at task termination or when a connection is
reestablished, for the number of lost events.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
An attempt to use the spill area to buffer audited events is unsuccessful.
System action
Processing continues. Audited events are discarded.
User response
Specify a value of 1 through 1024 in the SAUISAMP AUICONFG member <SPILL-SIZE> parameter. Review any z/OS error or warning messages that might indicate why the
spill area allocation failed.
Parent topic: Error messages and codes: AUIJxxxx
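The spill-area size named in this response is set by the <SPILL-SIZE> parameter. The fragment below is a hypothetical sketch: the keyword spelling is assumed from the parameter named above, and the value is illustrative within the documented range; confirm the exact syntax in the SAUISAMP AUICONFG member.

```
* Spill area size; valid values are 1 through 1024
SPILL_SIZE(256)
```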
Explanation
Attempts to buffer audited events in the spill area have failed. This message indicates the type of audited events (DLIO, DLIB, SMF, and so on) that were lost (type), and the
number that were lost (count).
System action
Processing continues. Audited events are discarded.
User response
Investigate the cause of the appliance connection interruption and correct.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
A request to process data, using a zIIP enabled enclave, has failed because the Workload Manager feature is not available.
System action
Processing continues, using GCPU (General Central Processor Unit) services.
User response
Remove the ZIIP_AGENT_DLI(Y) keyword from the configuration file that is in use, or change the parameter from Y to N.
Parent topic: Error messages and codes: AUIJxxxx
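The keyword change suggested in the response can be sketched as the following AUICONFG fragment; the ZIIP_AGENT_DLI keyword is named in this message's description, and the comment style is illustrative.

```
* Disable zIIP enclave processing for DLI events, as suggested above
ZIIP_AGENT_DLI(N)
```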
System action
Processing continues using GCPU services.
User response
Determine the cause of the failure by reviewing previously issued AUIJ0331E messages and take corrective action.
Explanation
The use of a zIIP enabled enclave has been requested by the use of the ZIIP_AGENT_DLI(Y) configuration file keyword.
System action
An attempt is made to create the enclave.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ056I ZIIP PROCESSING ENABLED FOR type PROCESSING, ENCLAVE TOKEN: value
Explanation
A zIIP enabled enclave has been requested and successfully created.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ057W ZIIP PROCESSING FOR type EVENTS HAS BEEN DISABLED DUE TO ERRORS -
PROCESSING WILL CONTINUE USING GCPU
Explanation
zIIP processing was requested; however, due to previously reported errors, this mode of processing could not be enabled.
System action
Processing continues using General Central Processing Unit (GCPU) resources only.
User response
Review the processing log looking for error and warning messages that were issued prior to this message to help determine why zIIP processing could not be initiated.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ058W ZIIP PROCESSING FOR type EVENTS HAS BEEN DISABLED - TRACING IS ENABLED
BY THE USE OF THE AUI$NAP JCL STATEMENT
Explanation
Event tracing has been enabled through the addition of the AUI$NAP DD SYSOUT=* JCL statement in the agent JCL. The use of zIIP processing has been disabled because
event tracing cannot coexist with the zIIP environment.
System action
All processing continues with event tracing on. Processing occurs on the General Central Processing Unit (GCPU).
User response
If the addition of the AUI$NAP DD statement was not intentional, remove it from the agent JCL.
Parent topic: Error messages and codes: AUIJxxxx
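The tracing DD described in the explanation is a plain JCL statement in the agent JCL; removing it restores zIIP processing. As cited above:

```
//* Presence of this DD enables event tracing and disables zIIP processing
//AUI$NAP  DD SYSOUT=*
```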
Explanation
While accessing the VSAM repository, an internal logic error was encountered.
FUNCTION: vsam_function
System action
Processing terminates.
User response
There are no user actions available for this failure. Contact IBM® Software Support with the content of this message.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
While accessing the VSAM repository, an internal logic error was encountered.
FUNCTION: vsam_function
R15: return_code
ACBOFLGS: acboflag_value
CSI-CALL: function_call
SUBRTN: pgm_routine
System action
Processing terminates.
User response
There are no user actions available for this failure. Contact IBM® Software Support with the content of this message.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
While accessing the VSAM repository, an internal logic error was encountered.
FUNCTION: vsam_function
RPL/RECORD TYPE: rpl/record_value
FDBWD: rpl_fdbwd
OPTCD: rpl_optcd
CSI-CALL: function_call
SUBRTN: pgm_routine
System action
Processing terminates.
User response
There are no user actions available for this failure. Contact IBM Software Support with the content of this message.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
The auditing of IMS events proceeds by using the collection profile (collection_profile_name) that is associated with the IMS definition (ims_name). The agent name
indicates which agent is processing the audited data. Various region types might have been excluded from auditing, such as AER, BMP, CICS, DBCTL, IFP, MPP, ODBA, or
NONE.
System action
Auditing continues.
User response
No action is required.
Note: To set a z/OS message alert for this message, use single-dash formatting between the message number and message text; for example, AUIJ250I - AUDITING IMS
EVENTS.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
An attempt at building a compiled filter using the collection profile named (collection_profile_name) failed.
System action
Processing terminates; auditing is not performed.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
The Guardium appliance has detected a list of users for whom access is to be restricted for a period of time. This list is based on policy rules and criteria that are set by the
Guardium administrator who maintains the auditing rules in your environment.
System action
Processing continues. If a user in the list of quarantined user IDs attempts to issue DB/DLI calls, the DLI call fails. A DB PCB status code of AI, or an AIB return/reason
code of 110/C, is returned to the application program.
User response
If access to an IMS database terminates with a DB PCB status code of AI, or an AIB return/reason code of 110/C, contact the Guardium administrator who maintains the
auditing rules in your environment to obtain the reason for the quarantine.
Note: To set a z/OS message alert for this message, use single-dash formatting between the message number and message text; for example, AUIJ252W - GUARDIUM
QUARANTINE IS IN EFFECT
Parent topic: Error messages and codes: AUIJxxxx
Explanation
This message echoes message AUII050I, which is generated by the S-TAP code, and can appear in the IMS control region and the DLI/DBB batch job output. This
message only appears in the agent if the DISPLAY_IMSMSG_DLIx(Y) configuration option is coded in the AUICONFG file.
System action
Processing continues.
User response
No action is required. See the explanation for message AUII050I for details regarding the available output fields.
Explanation
This message echoes message AUIJ250I, which is generated by the S-TAP code, and can appear in the IMS control region and the DLI/DBB batch job output. This
message only appears in the agent if the DISPLAY_IMSMSG_DLIx(Y) configuration option is coded in the AUICONFG file.
System action
Processing continues.
User response
No action is required. See the explanation for message AUIJ250I for details regarding the available output fields.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
This message echoes message AUII120I, which is generated by the S-TAP code, and can appear in the IMS control region and the DLI/DBB batch job output. This
message only appears in the agent if the DISPLAY_IMSMSG_DLIx(Y) configuration option is coded in the AUICONFG file.
System action
Processing continues.
User response
No action is required. See the explanation for message AUII120I for details regarding the available output fields.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
This message echoes message AUII052I, which is generated by the S-TAP code, and can appear in the IMS control region and the DLI/DBB batch job output. This
message only appears in the agent if the DISPLAY_IMSMSG_DLIx(Y) configuration option is coded in the AUICONFG file.
System action
Processing continues.
User response
No action is required. See the explanation for message AUII052I for details regarding the available output fields.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ259I JOBNAME job_name USING IMS STAP V10.1.3 MODULE: pgm_name APAR:
fix_number DATE: fix_date
Explanation
This message echoes message AUII052I, which is generated by the S-TAP code, and can appear in the IMS control region. This message appears in the agent if the
DISPLAY_IMSMSG_DLIx(Y) configuration option is coded in the AUICONFG file.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
System action
Processing continues, and the request is retried.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
An attempt to connect to the z/OS System Logger log-stream, by using the IXGCONN function, has failed.
System action
Auditing is disabled, but IMS continues processing.
User response
Correct the issue that caused the IXGCONN failure; then either uninstall and reinstall the policy, or stop and restart the Security Guardium® S-TAP® for IMS agent, to cause
IMS to reattempt the IXGCONN call.
Parent topic: Error messages and codes: AUIJxxxx
System action
This message is issued once per error type (RC + RSN) within each issuance of message AUII050I. IXGWRITE calls continue until the collection
policy for the IMS system is uninstalled, or until the DLI/DBB batch job or IMS control region terminates.
User response
Examine the description of the IXGWRITE error by using the RC and RSN codes that are provided under the IXGWRITE macro description in the IBM® z/OS MVS™
Programming: Assembler Services Reference, Vol. 2 (IAR-XCT) or equivalent, and take corrective action. The most common reason for this message is that the volume and
rate (number of events per second) of DLI events exceed the capacity of the current z/OS System Logger log stream definition.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ307A AUDITED EVENTS ARE BEING LOST DUE TO IXGWRITE ERRORS AND/OR BUFFER
SHORTAGES
Explanation
A number of attempts to write audited events to the z/OS® System Logger log stream have failed, which has exhausted the available space in the data buffers. As a result,
DLI events that are to be audited are being discarded.
System action
DLI events continue to be audited, and attempts to write the existing data buffers to the z/OS System Logger log stream continue. The number of DLI events that were
rejected is noted in subsequent AUII050I messages.
User response
Review any AUIJ304E messages which have been issued to determine the cause of the z/OS System Logger Log-stream Write failures.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
System action
Processing that is associated with this thread will not occur.
User response
Examine previously issued error or abend messages to determine the corrective action to be taken. Then, restart the agent.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ330E REQUIRED DATA SET IS NOT CATALOGED. - TYPE: dsn_type, DSN: data_set_name
Explanation
The data set name indicated (data_set_name) was not found in the z/OS® catalog.
System action
Processing terminates.
User response
Specify the name of a cataloged data set.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
A z/OS® service (service_name) failed when executed.
System action
Processing terminates.
User response
Determine the cause of the failure by using the return and reason codes provided. Contact IBM® Software Support for additional assistance.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ332E DATA SET IS NOT VALID WITHIN CONTEXT USED - TYPE: data_set_type, DSN:
data_set_name, REASON: reason
Explanation
The data set indicated (data_set_name) is not of a type valid for use where it is defined. The reason for the rejection of this data set is found in the REASON field (reason).
System action
Processing terminates.
User response
Specify a data set of the correct type.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ333E Service SERVICE FAILED for DATA SET: dsn - R15: return_code
Explanation
A z/OS LOCATE or OBTAIN service failed when it was run against the specified data set dsn.
System action
Processing terminates.
User response
Ensure that the data set name exists and has not been migrated. Determine the cause of the failure by examining the LOCATE/OBTAIN macro return codes found in the
IBM DFSMSdfp Advanced Services manual. Contact IBM Software Support for additional assistance.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
The AUIFstc task has encountered a DD in the JCL that prevents a specific type of data set from being audited by SMF.
System action
Accesses to the data set types that are specified in the text of this message are not audited.
User response
If you want to audit accesses to these types of data sets, remove the DD statement. See the Data sets and DD DUMMY statements table in the SMF records section of this
user's guide for information on which DDs affect which data set types.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
An attempt at obtaining memory in program (module_name) has failed due to insufficient memory being available.
System action
Processing terminates.
User response
Increase the region size of the started task where this message appeared. Restart the started task and retry the request.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
An attempt to perform a z/OS® ATTACH of the program_name by module module_name has failed.
System action
Processing terminates.
User response
Determine the cause of the failure by using the return code (return_code) provided. Correct and restart the task that issued the message. Contact IBM® Software Support
for further assistance if needed.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
An attempt to use the catalog interface has failed.
System action
Processing terminates.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
An attempt to issue a dynamic allocation function (function_code) using the data set name indicated (data_set_name) has failed.
System action
Processing terminates.
User response
Using the return_code and reason_code, determine the cause of the failure. Correct and retry the request.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
An attempt to issue a dynamic allocation function (function_code) using the DD name indicated (dd_name) has failed.
System action
Processing terminates.
User response
Using the return_code and reason_code, determine the cause of the failure. Correct and retry the request.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ406W TOO MANY RULES SPECIFIED IN POLICY, REQUEST HAS BEEN TRUNCATED.
POLICY: policy_name. RULE LIMIT: max_number_of_rules_allowed
Explanation
Preprocessing of the rules associated with the indicated policy (policy_name) determined that the number of rules that were specified in the policy exceeded the rule limit
of max_number_of_rules_allowed. Allowing an excessive number of rules causes memory constraint and performance issues.
System action
The contents of subsequent rules are discarded. Processing continues using all previous rule content.
User response
Review the rules that are included in the policy, and edit the policy to combine the rule content where permissible. If the resulting policy still requires a greater number of
rules than the rule limit permits, contact IBM Software Support.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
This message provides the number of data set names that are used as input when building the compiled filter for SMF processing.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ408E POLICY name RESULTED IN OVER 102400 DATA SETS TO BE AUDITED; DATA SET
RESULT SET HAS BEEN TRUNCATED
Explanation
The specified policy has found over 102,400 data sets to audit based on the databases that are specified in the policy rules and the IMS system log data set (SLDS) and
recovery log data set (RLDS) RECON entries. Due to memory constraints, the data set occurrence limit per policy is 102,400 per IMS definition.
System action
The data set result set is truncated at 102,400 entries.
User response
Change the policy rules to audit fewer databases, or modify the rules to reduce or avoid multiple rules from auditing the same databases.
Review the IMS RECON data sets, looking for IMS SLDS and RLDS, database image copy data sets, or database data set group (DSG)/area data sets that no
longer physically exist but remain listed in the RECON. Delete the RECON references that are no longer needed.
Explanation
The task is starting the processing cycle specified.
System action
Processing starts for the cycle specified.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ501I NO NEW CATALOGED SMF DATA SETS FOUND FOR SMF MASK: - smf_mask_value
Explanation
The SMF processing cycle has determined that no new, unprocessed data sets which meet the SMF mask value have been found.
System action
The task waits for the start of the next cycle.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
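The mask comparison that this message describes can be illustrated with a small sketch. This is not the product's matcher (the product's mask syntax is defined by the agent configuration, and the data set names here are hypothetical); it only shows the general idea of filtering cataloged data set names against a wildcard mask:

```python
from fnmatch import fnmatch

def match_smf_mask(mask, data_set_names):
    """Return the data set names that match a wildcard mask.

    Illustrative only: the product's actual mask syntax is defined by the
    agent configuration, not by fnmatch.
    """
    return [dsn for dsn in data_set_names if fnmatch(dsn, mask)]

# Hypothetical cataloged data set names; SYS1.SMF.* selects only the SMF ones.
candidates = [
    "SYS1.SMF.DAILY.G0001V00",
    "SYS1.SMF.DAILY.G0002V00",
    "PROD.DB2.ARCHLOG1",
]
print(match_smf_mask("SYS1.SMF.*", candidates))
# prints ['SYS1.SMF.DAILY.G0001V00', 'SYS1.SMF.DAILY.G0002V00']
```

When the returned list is empty, the situation corresponds to AUIJ501I: no new cataloged data sets meet the mask.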
Explanation
The cycle has completed.
System action
The task waits for the start of the next cycle.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
A critical E/CSA control block was not found.
System action
Processing terminates.
User response
Contact Software Support.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
The AUIARCN DD was found in the JCL. The imsname that was used when installing the active IMS policy was found in the AUIARCN file, along with alternate RECON data
set names (alt_dsn_1/2/3).
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
When attempting to validate the alt_dsn value, the data set was not found in the catalog.
System action
Processing continues to validate other specified data set names.
User response
Correct the data set name or catalog the data set.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
When attempting to validate the alt_dsn value, the data set was found to be in a format that is not valid for processing. The data set name must be in VSAM format.
System action
Processing continues to validate other specified data set names.
User response
Correct the data set name or catalog the data set.
Parent topic: Error messages and codes: AUIJxxxx
AUIJ513E NO VALID ALTERNATE RECON DATA SETS FOUND FOR IMS imsname;
PROCESSING TERMINATED
Explanation
The data set validation completed, and no valid alternate RECON data set names were found for the IMSNAME.
System action
Processing terminates.
User response
Add or correct valid RECON data set names.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
System action
Processing terminates.
User response
Determine the cause of the E/CSA shortage.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
If the excluded_by value is AGENT, then the reporting of event_types is excluded due to the specification of certain configuration keywords. If the excluded_by value is
IMS, these events are excluded as directed by the IMS definition.
System action
Occurrences of these event types are not reported.
User response
If you want to view reports of this event type, review and modify the agent configuration file (SMF_AUDIT_LEVELS or IMSL_AUDIT_LEVELS keywords) or the Guardium
system IMS definition, using the Auditing Levels tab.
Parent topic: Error messages and codes: AUIJxxxx
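The configuration keywords named in the response can be sketched as the following AUICONFG fragment. The keyword names are taken from this message's description; the level_list values are placeholders, not documented values, so consult the agent configuration reference for the permitted levels.

```
* Event types to report from SMF and IMS archive log processing
SMF_AUDIT_LEVELS(level_list)
IMSL_AUDIT_LEVELS(level_list)
```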
Explanation
A critical error has occurred due to a missing DD statement.
System action
Processing terminates.
User response
This message occurs if a product JCL has been edited and a DD statement has been deleted or omitted. If this is not the case, check for any dynamic allocation error
messages. If none are present, or are not user resolvable, contact IBM® Software Support.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
When validating the VSAM repository, an allocation definition error was found.
System action
Processing terminates.
User response
The VSAM repository requires specific values for the LRECL, key length, and key position attributes. Review the SAUISAMP product distribution data set member AUISJ001
for the correct file definition specifications.
Parent topic: Error messages and codes: AUIJxxxx
Explanation
An internal logic error has occurred.
System action
Processing terminates.
User response
There are no user actions available for this failure. Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIJxxxx
AUIL002I
Archive log reader interval set to <number> <time interval in hours/minutes>.
AUIL003E
Command <command-text> failed; interval value must be between <lower-bound> and <upper-bound>.
AUIL600I
NO NEW CATALOGED IMS LOG DATA SETS FOUND
AUIL601I
PROCESSING IMS LOG DATA SET: ims_log_data_set_name
AUIL602I
PROCESSING COMPLETE FOR IMS LOG DATA SET: ims_log_data_set_name
AUIL603I
SCANNING RECON DATA SETS FOR IMS LOGS TO PROCESS. RECON1: recon1_dsn - RECON2: recon2_dsn - RECON3: recon3_dsn
AUIL605I
RECON DATA SET SCAN COMPLETE
AUIL606W
RECON HAS NOCATDS SPECIFIED, RESULTS MAY NOT BE ACCURATE
AUIL607W
THERE ARE NO ACTIVE IMS POLICIES FOR AGENT agent_name
AUIL701I
IMS LOG CHECKPOINT INFORMATION - IMSID: IMS_name_from_policy - RECON1 DSN: dsn_of_RECON1 - CREATING SSID: SSID_from_PRILOG - LAST DSN READ:
dsn_of_SLDS - LAST UPDATED (UTC): date_time
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
AUIL002I Archive log reader interval set to <number> <time interval in hours/minutes>.
Explanation
The Archive log reader is scheduled to process archive logs as specified.
User response
No action is required.
Parent topic: Error messages and codes: AUILxxxx
Explanation
This message indicates that the <command>, such as /f AUILSTC,SET INTERVAL number, failed because of an incorrect number value. Correct values must be between
<lower-bound> and <upper-bound>.
User response
Use an interval value between <lower-bound> and <upper-bound>. If that does not resolve the issue, contact IBM® Software Support.
Parent topic: Error messages and codes: AUILxxxx
Explanation
After examining the RECON data sets, it has been determined that no new IMS SLDS data sets were found that have yet to be processed by the product.
User response
No action is required.
Parent topic: Error messages and codes: AUILxxxx
Explanation
Processing has started for the IMS SLDS data set indicated (ims_log_data_set_name).
User response
No action is required.
Parent topic: Error messages and codes: AUILxxxx
Explanation
Processing of the IMS SLDS data set has completed.
System action
Processing continues with other candidate IMS SLDS data sets.
User response
No action is required.
Parent topic: Error messages and codes: AUILxxxx
AUIL603I SCANNING RECON DATA SETS FOR IMS LOGS TO PROCESS. RECON1: recon1_dsn -
RECON2: recon2_dsn - RECON3: recon3_dsn
Explanation
To determine the candidate IMS SLDS data sets to be read, the IMS RECON data sets must be queried. This message indicates that this query process has started.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUILxxxx
Explanation
This message follows the AUIL603I message and indicates that the scan of the RECON data sets is complete.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUILxxxx
Explanation
When examining the RECON data sets, the NOCATDS option was found to be on, meaning that any log data sets found might not be cataloged.
System action
Processing continues.
User response
The function that produces this message relies on the log data sets existing in the z/OS® catalog or having been in the z/OS catalog at one time. Having the NOCATDS
option on in the RECON data sets might negate the validity of further processing, if the SLDS data sets are not cataloged.
Parent topic: Error messages and codes: AUILxxxx
Explanation
A request to query the RECON data sets of the IMS systems that are defined under the named agent found no IMS systems audited by the agent with an active profile.
The function that produces this message relies on having at least one IMS system with an active collection policy.
System action
Processing terminates.
User response
Install a collection policy for an IMS system under the control of the agent.
Parent topic: Error messages and codes: AUILxxxx
Explanation
This message provides the name of the IMS SLDS that was last read when processing data for the SSID (SSID_from_PRILOG) found in the set of the DBRC RECON data
sets (dsn_of_RECON1). This information is used as a checkpoint to indicate which SLDS data sets have already been processed, and should not be re-read by the AUILstc
tasks.
System action
Processing continues.
User response
No action is required.
Parent topic: Error messages and codes: AUILxxxx
AUIP001E
A protobuf message schema violation was detected; value value is not a valid boolean value.
AUIP002E
A protobuf message schema violation was detected; value value is not a valid double value.
AUIP003E
A protobuf message schema violation was detected; value value is not a valid integer value.
AUIP004E
A protobuf message schema violation was detected; required message message property property is not present.
AUIP005E
A protobuf message schema violation was detected; required message message sub-message submessage is not present.
AUIP006S
A severe error occurred during protobuf message parsing; an unknown exception occurred.
AUIP007E
A protobuf message schema violation was detected; property name property is invalid.
AUIP008E
A protobuf message schema violation was detected; property property value value is invalid.
AUIP009E
A protobuf message schema violation was detected; message name 'name' is invalid.
AUIP010E
A protobuf message schema violation was detected; message name name is invalid (expected expected name).
AUIP011E
A protobuf message schema violation was detected; value value is not a valid bytes value.
AUIP012E
A protobuf message schema violation was detected; value value is not a valid unsigned integer value.
AUIP013E
An error occurred while parsing item text: String is empty.
AUIP014E
An error occurred while parsing item text: text.
AUIP015E
Failed to send error message to appliance: host/port.
AUIP016E
Policy rule <rule> was ignored: IMS name is empty.
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
Explanation
The specified value is not valid.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
AUIP002E A protobuf message schema violation was detected; value value is not a valid
double value.
Explanation
The specified value is not valid.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
AUIP003E A protobuf message schema violation was detected; value value is not a valid
integer value.
Explanation
The specified value is not valid.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
AUIP004E A protobuf message schema violation was detected; required message message
property property is not present.
Explanation
The specified message property is not present.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
AUIP005E A protobuf message schema violation was detected; required message message
sub-message submessage is not present.
Explanation
The specified message submessage is not present.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
AUIP006S A severe error occurred during protobuf message parsing; an unknown exception
occurred.
Explanation
An error occurred while parsing a protobuf message.
AUIP007E A protobuf message schema violation was detected; property name property is
invalid.
Explanation
The specified property name is not valid.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
AUIP008E A protobuf message schema violation was detected; property property value value
is invalid.
Explanation
The specified property value is not valid.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
AUIP009E A protobuf message schema violation was detected; message name 'name' is
invalid.
Explanation
The specified message name is not valid.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
AUIP010E A protobuf message schema violation was detected; message name name is invalid
(expected expected name).
Explanation
The specified message name is not valid.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
AUIP011E A protobuf message schema violation was detected; value value is not a valid bytes
value.
Explanation
The specified value is not valid.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
Explanation
The specified value is not valid.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
Explanation
A policy message contained an item field with an empty value.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
Explanation
The IBM® Guardium® S-TAP® for IMS agent was unable to send the error message to the specified appliance.
User response
Contact your administrator or IBM Software Support.
Parent topic: Error messages and codes: AUIPxxxx
Explanation
The specified policy rule was ignored because it does not apply to any IMS subsystem, or the IMS name is empty.
User response
Contact your administrator or IBM® Software Support.
Parent topic: Error messages and codes: AUIPxxxx
AUIR002E
The provided parameter 'value' is too long; should be less than or equal to maximum length characters.
AUIR004E
A maximum of maximum data sets are allowed for the names libs and a total of libs-count were specified.
AUIR006E
The parameter parameter can't be empty.
AUIR007W
Policy_rule_item <item-name> for Policy_rule <rule-name> has conflicting <value-name> values.
AUIR008W
IMS 050i Max Time threshold was changed from "2460" to "2359".
AUIR002E The provided parameter 'value' is too long; should be less than or equal to maximum
length characters.
Explanation
The value of the specified parameter exceeds the specified maximum length.
User response
Specify a shorter value that does not exceed the specified limit for the parameter.
Parent topic: Error messages and codes: AUIRxxxx
AUIR004E A maximum of maximum data sets are allowed for the names libs and a total of libs-
count were specified.
Explanation
The maximum number of data sets was exceeded for the libs specified.
User response
Limit the number of data sets for the specified libs to maximum.
Parent topic: Error messages and codes: AUIRxxxx
Explanation
The parameter value must be specified in the agent configuration.
User response
Update the agent configuration, or contact your administrator.
Parent topic: Error messages and codes: AUIRxxxx
Explanation
The Guardium policy was processed, but there are conflicting fields in the definition. Only one of the conflicting values has been applied.
User response
Check the policy definition, and change the specified values to eliminate the conflict.
Parent topic: Error messages and codes: AUIRxxxx
AUIR008W IMS 050i Max Time threshold was changed from "2460" to "2359".
Explanation
An invalid time value was supplied through the Message AUII050I Frequency field on the IMS definition screen of the Guardium® appliance. The invalid value
was automatically corrected by the agent.
System action
Processing continues.
User response
When convenient, update the invalid time value in the IMS definition to a value within the range of 00:10 -- 23:59.
Parent topic: Error messages and codes: AUIRxxxx
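The auto-correction that AUIR008W reports can be sketched as follows. This is an illustration of the documented behavior only, not the agent's actual code; the function name `clamp_frequency` and the HHMM integer encoding are assumptions.

```python
# Sketch of the auto-correction reported by AUIR008W (illustrative only;
# the function name and HHMM integer encoding are assumptions, not the
# agent's implementation). The valid window is 00:10 through 23:59.
MIN_HHMM, MAX_HHMM = 10, 2359

def clamp_frequency(hhmm: int) -> int:
    # Clamp into the valid window, then repair any overflow in the
    # minutes digits (for example, 2460 becomes 2359).
    clamped = max(MIN_HHMM, min(hhmm, MAX_HHMM))
    hh, mm = divmod(clamped, 100)
    return hh * 100 + min(mm, 59)
```

Under this sketch, the value "2460" mentioned in the message text falls above the maximum and is corrected to "2359".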
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
AUIT001E The specified user ID userid is not defined or does not have an OMVS segment
defined.
Explanation
You specified a user ID that is not defined or does not have an OMVS segment defined.
User response
Security Guardium® S-TAP® for IMS was unable to authenticate the specified user. Either specify a valid user ID, or if the user ID is valid, see your security administrator
to have an OMVS segment defined for the user ID.
Explanation
Security Guardium® S-TAP® for IMS is not properly configured to authenticate users.
User response
An error occurred while authenticating a remote user request. The error code indicates that the installation configuration required to allow this authentication has not
been completed. See IBM Guardium S-TAP for IMS agent for more information about how to complete the required configuration.
Parent topic: Error messages and codes: AUITxxxx
AUIT008E The configuration file filename is invalid; the root element element is not <agent-
config>.
Explanation
The configuration file identified in the message is invalid.
User response
The contents of the specified configuration file are invalid. Correct the file contents to specify <agent-config> as the root XML element.
AUIT010E An error occurred while opening the configuration file filename message text
Explanation
An error occurred while opening the configuration file identified in the message. Additional error information is also contained within the message.
User response
Use the specified message text to diagnose the error that occurred. Specify a valid configuration file that is not in use by any other process.
Explanation
The Security Guardium® S-TAP® for IMS agent is looking for available locations.
User response
No action is required.
Explanation
The Security Guardium S-TAP for IMS agent is terminating.
User response
No action is required.
Explanation
The Security Guardium® S-TAP® for IMS agent task has connected to the S-TAP at the specified host and port.
User response
No action is required.
Explanation
The Security Guardium® S-TAP® for IMS agent is attempting to connect to the specified host and port number.
User response
No action is required.
User response
No action is required.
AUIT019I Security Guardium® S-TAP® for IMS agent started on <lpar_name> (<lpar_ip>).
Explanation
The IBM® Guardium S-TAP for IMS agent has started.
User response
No action is required.
Explanation
The Security Guardium® S-TAP® for IMS agent is starting the identified socket selector thread.
User response
No action is required.
Explanation
The Security Guardium® S-TAP® for IMS agent has received a shutdown request.
User response
No action is required.
Explanation
The Security Guardium® S-TAP® for IMS agent socket selector thread is terminating.
User response
No action is required.
Explanation
An unexpected return code was returned by the pthread_security_np() callable service.
User response
Ensure that the configuration required to use this service has been completed. See IBM Guardium S-TAP for IMS agent for more information about the required
configuration. Check the agent job log for additional messages which might be generated.
User response
No action is required.
Explanation
The Security Guardium® S-TAP® for IMS agent received a STOP command.
User response
No action is required.
Explanation
The Security Guardium® S-TAP® for IMS agent received a MODIFY command.
User response
No action is required.
AUIT034S Security Guardium® S-TAP® for IMS agent is terminating due to hard stop
request.
Explanation
Security Guardium S-TAP for IMS agent is terminating due to a user /MODIFY FORCE command.
User response
No action is required.
Explanation
The Security Guardium® S-TAP® for IMS agent task is unable to communicate with the Security Guardium S-TAP for IMS agent.
User response
Resolve any network connectivity issues, then try logging in again.
Parent topic: Error messages and codes: AUITxxxx
AUIT047E IBM® Security Guardium® S-TAP® for IMS on z/OS® agent ended with RC = [rc].
Explanation
Due to a prior error, the agent has ended with the specified return code.
User response
Contact IBM Software Support.
Parent topic: Error messages and codes: AUITxxxx
User response
No action is required.
Parent topic: Error messages and codes: AUITxxxx
Explanation
A command, such as /f AUIASTC,DUMP/DDX, has been processed successfully.
User response
No action is required.
Parent topic: Error messages and codes: AUITxxxx
AUIUR002I
Migrate Utility for IBM® Security Guardium® S-TAP® for IMS on z/OS® started.
AUIUR003I
Agent record <agent name> was not found in the repository.
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
AUIUR002I Migrate Utility for IBM® Security Guardium® S-TAP® for IMS on z/OS® started.
Explanation
The utility to migrate the configuration of an older version of the product to the current product version has started.
User response
No action is required.
Parent topic: Error messages and codes: AUIUxxxx
AUIUR003I Agent record <agent name> was not found in the repository.
Explanation
An attempt to read an agent record from the repository during migration failed because the record was not found.
System action
The agent record migration fails; processing continues.
User response
Check the configuration file for the agent and repository names, and use the Guardium user interface to verify that the specified agent definition is present in the specified repository.
Parent topic: Error messages and codes: AUIUxxxx
AUIX013E
A shared memory error occurred on "service name": error message.
AUIX014E
An XML schema violation was detected; value value is not a valid boolean value.
AUIX015E
An XML schema violation was detected; value value is not a valid double value.
AUIX016E
An XML schema violation was detected; value value is not a valid integer value.
AUIX017E
An XML syntax error was detected at offset offset; expected expected-value, found found-value.
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
Explanation
This error can occur in the primary agent address space. When the error occurs, the primary agent address space will shut down with a CC of 12. This startup error
indicates that attempts to create a shared memory segment failed because of an already existing shared memory segment that never belonged to, or currently does not
belong to, the primary agent address space.
This message can occur in the secondary address space if the <id> elements in the ADS_SHM_ID and ADS_LISTENER_PORT parameters do not match in the AUICONFG
configuration member that is used by the agent primary address space and the secondary address spaces.
User response
Edit SAUISAMP member AUICONFG (or the customized AUICONFG) and specify the correct <id> elements in the ADS_SHM_ID and ADS_LISTENER_PORT parameters.
Parent topic: Error messages and codes: AUIXxxxx
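Under the assumption that ADS_SHM_ID and ADS_LISTENER_PORT are XML elements in the AUICONFG member (the exact nesting shown here is hypothetical; consult SAUISAMP member AUICONFG for the real syntax), the matching requirement looks like this:

```xml
<!-- Hypothetical fragment: the element nesting is illustrative only.
     The <id> values inside ADS_SHM_ID and ADS_LISTENER_PORT must be
     identical in the AUICONFG member used by the primary address space
     and in the members used by all secondary address spaces. -->
<ADS_SHM_ID><id>1</id></ADS_SHM_ID>
<ADS_LISTENER_PORT><id>1</id></ADS_LISTENER_PORT>
```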
AUIX014E An XML schema violation was detected; value value is not a valid boolean value.
Explanation
An XML schema violation was detected; value value is not a valid boolean value.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
AUIX015E An XML schema violation was detected; value value is not a valid double value.
Explanation
An XML schema violation was detected; value value is not a valid double value.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
AUIX016E An XML schema violation was detected; value value is not a valid integer value.
Explanation
An XML schema violation was detected; value value is not a valid integer value.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
AUIX017E An XML syntax error was detected at offset offset; expected expected-value, found
found-value.
Explanation
An XML syntax error was detected at offset offset; expected expected-value, found found-value.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
Explanation
An XML schema violation was detected; required element element attribute attribute is not present.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
AUIX019E An XML schema violation was detected; required element <element> child <child-
element> is not present.
Explanation
The XML schema must contain the specified elements.
User response
Correct the XML schema and retry.
Parent topic: Error messages and codes: AUIXxxxx
Explanation
Memory allocation failed (number bytes).
User response
Contact IBM® Software Support.
AUIX021E An XML schema violation was detected; element element child child-number has
wrong type.
Explanation
An XML schema violation was detected; element element child child-number has wrong type.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
Explanation
An XML syntax error was detected; character reference character-reference is invalid.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
AUIX023E An XML syntax error was detected; entity reference entity-reference is invalid.
Explanation
An XML syntax error was detected; entity reference entity-reference is invalid.
AUIX024E An XML syntax error was detected; more than one element was found at the root of
the document.
Explanation
An XML syntax error was detected; more than one element was found at the root of the document.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
AUIX025E An XML syntax error was detected; no element was found at the root of the
document.
Explanation
An XML syntax error was detected; no element was found at the root of the document.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
AUIX026E An XML syntax error was detected; text was found at the root of the document.
Explanation
An XML syntax error was detected; text was found at the root of the document.
User response
Contact IBM® Software Support.
AUIX027S A severe error occurred during XML parsing; an unknown exception occurred.
Explanation
A severe error occurred during XML parsing; an unknown exception occurred.
User response
Contact IBM® Software Support.
Explanation
The command line option, which is specified in the message text, is invalid.
User response
Correct the command line option and retry the operation. Review the IBM® Guardium® S-TAP® for IMS client/server environment information for valid options.
AUIX034S A severe error occurred during command line processing; an unknown exception
occurred.
Explanation
A severe error occurred during command line processing; an unknown exception occurred.
User response
Contact IBM® Software Support.
Explanation
The operation completed successfully.
User response
No action is required.
AUIX036E The address family is not supported by the protocol family (socket-return-code).
Explanation
The address family is not supported by the protocol family (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The operation is still in progress (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
Permission is denied (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The network is down (socket-return-code).
User response
Contact IBM® Software Support.
User response
Contact IBM® Software Support.
Explanation
Too many sockets have been opened (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The protocol is not supported (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The WSAStartup routine was not called (socket-return-code).
User response
Contact IBM® Software Support.
AUIX044E The protocol is the wrong type for the socket (socket-return-code).
Explanation
The protocol is the wrong type for the socket (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The socket type is not supported (socket-return-code).
User response
Contact IBM® Software Support.
User response
Specify the correct host name or IP address.
Explanation
The socket handle is invalid (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The address is already in use (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The function call was interrupted (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The requested address is not available (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The connection was aborted (socket-return-code).
User response
Contact IBM® Software Support.
User response
Verify that the correct port number was specified, and that the partner application has been started and is available.
Explanation
The connection was reset by the partner (socket-return-code).
User response
The partner application ended the network connection. If this is unexpected, diagnose the partner application failure. Otherwise, no action is required.
Explanation
The network message is too long (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The network dropped the connection when reset (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
An invalid parameter was specified (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The socket is not connected (socket-return-code).
User response
Contact IBM® Software Support.
User response
Contact IBM® Software Support.
Explanation
The socket has been closed (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
The socket is already connected (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
An unknown error occurred (socket-return-code).
User response
Contact IBM® Software Support.
Explanation
A socket error occurred.
User response
Use the specified message text to diagnose the error.
Explanation
A socket select error occurred.
User response
Use the specified message text to diagnose the error.
Explanation
An XML schema violation was detected; expected root element element-expected , but found element-found instead.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
AUIX066E An XML schema violation was detected; element element value value is invalid.
Explanation
An XML schema violation was detected; element element value value is invalid.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
AUIX067E An XML schema violation was detected; element name element is invalid.
Explanation
An XML schema violation was detected; element name element is invalid.
User response
If the error occurred while reading the agent configuration file, correct the file contents. Otherwise, contact IBM® Software Support.
AUIX068E An XML schema violation was detected; element name element-found is invalid
(expected element-expected).
Explanation
An XML schema violation was detected; element name element-found is invalid (expected element-expected).
User response
Contact IBM® Software Support.
Explanation
This message indicates a callable service abend has occurred. Additional diagnostic information might be present in the message when applicable.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIXxxxx
AUIX076E An XML schema violation was detected; element element attribute attribute value
value is invalid.
Explanation
An XML schema violation was detected; element element attribute attribute value value is invalid.
User response
AUIX085E A dynamic allocation error occurred: info code = info-code, error code = error-code.
Explanation
A dynamic allocation error occurred: info code = info-code, error code = error-code.
User response
See the MVS™ Programming: Authorized Assembler Services Guide for more information about the specified information and error codes.
AUIX086E A dynamic concatenation error occurred: info code = info-code, error code = error-
code.
Explanation
A dynamic concatenation error occurred: info code = info-code, error code = error-code.
User response
See the MVS™ Programming: Authorized Assembler Services Guide for more information about the specified information and error codes.
AUIX087E A dynamic free error occurred: info code = info-code, error code = error-code.
Explanation
A dynamic free error occurred: info code = info-code, error code = error-code.
User response
See the MVS™ Programming: Authorized Assembler Services Guide for more information about the specified information and error codes.
Explanation
An invalid dynamic allocation parameter was specified: code = parm-code.
User response
Contact IBM® Software Support.
Explanation
An unexpected error occurred (file-name, line-number).
User response
Contact IBM® Software Support.
Explanation
An unexpected error occurred with token token, (file-name, line-number).
AUIX095S An unexpected error occurred with tokens token and token (file-name, line-number).
Explanation
An unexpected error occurred with tokens token and token (file-name, line-number).
User response
Contact IBM® Software Support.
AUIX096S An unexpected error occurred with tokens token, token and token (file-name, line-number).
Explanation
An unexpected error occurred with tokens token, token and token (file-name, line-number).
User response
Contact IBM® Software Support.
AUIX097S An unexpected error occurred with tokens token, token, token, and token (file-name, line-number).
Explanation
An unexpected error occurred with tokens token, token, token, and token (file-name, line-number).
User response
Contact IBM® Software Support.
Explanation
A thread error occurred on thread-operation: message-text.
User response
Use the specified message text to diagnose the error.
Explanation
An event error occurred on event-operation: message-text.
User response
Use the specified message text to diagnose the error.
User response
Use the specified message text to diagnose the error.
Explanation
A semaphore error occurred on semaphore-operation: message-text.
User response
Use the specified message text to diagnose the error.
Explanation
The network connection has been disconnected.
User response
No action is required.
AUIX114E A dynamic allocation query error occurred: info code = info-code, error code = error-
code.
Explanation
A dynamic allocation query error occurred: info code = info-code, error code = error-code.
User response
See the MVS™ Programming: Authorized Assembler Services Guide for more information about the specified information and error codes.
Explanation
An input command error occurred on "command-operation": message-text.
User response
Contact IBM® Customer Support.
Explanation
Received input command: command-text.
User response
No action is required.
Explanation
Build date component = date.
User response
No action is required.
Explanation
The action was cancelled.
User response
No action is required. The operation was cancelled due to user or administrator request.
Explanation
The task is not running APF-authorized.
User response
The Security Guardium® S-TAP® for IMS load library, and the load libraries for all of the IMS subsystems accessed, must be APF-authorized. See IBM Guardium S-TAP
for IMS agent for more information about the required configuration steps.
Explanation
A DLL error occurred on dll-operation: message-text.
User response
Contact IBM® Customer Support.
Explanation
An error occurred while opening log file file-name.
User response
Contact IBM® Customer Support.
AUIX142E An XML schema violation was detected; element element value value is invalid:
expected min <min-value> and max <max-value>.
Explanation
The element-value given for element-name is out of range and must be between min-value and max-value.
User response
Correct the value for the element-name in the configuration.
AUIX143E An XML schema violation was detected; element element attribute value value value
is invalid: expected min <minimum> and max <maximum>.
Explanation
The element attribute value is not valid.
User response
If the error occurred while reading the agent configuration file, update the configuration. Otherwise, contact IBM® Software Support.
Parent topic: Error messages and codes: AUIXxxxx
Explanation
The data set specified in the message text has not been cataloged.
User response
Allocate the data set.
Parent topic: Error messages and codes: AUIXxxxx
AUIX150E Invalid data set 'data set': Data set name must not exceed 44 characters.
Explanation
MVS™ data set names cannot exceed 44 characters.
User response
Correct the data set entry, then retry.
Parent topic: Error messages and codes: AUIXxxxx
AUIX151E Invalid data set {'data set'}: The segment length must be greater than 0 and less
than or equal to 8.
Explanation
The specified data set name has one or more segments that are not between 1 and 8 characters.
User response
Specify a data set name in which each segment contains at least 1 and at most 8 characters.
Parent topic: Error messages and codes: AUIXxxxx
AUIX152E Invalid data set 'name': The first character in each segment must be alphabetic (A-
Z) or national (#, @, $).
Explanation
The data set name provided is not valid and does not satisfy the MVS™ data set naming requirements.
User response
Correct the data set name and try again.
Parent topic: Error messages and codes: AUIXxxxx
AUIX153E Invalid data set '<data set>': The non-first characters in the segments must be
alphabetic (A-Z), numeric, national (#, @, $), or hyphen.
Explanation
The non-first characters in the segments must be alphabetic (A-Z), numeric, national (#, @, $), or hyphen.
AUIX154E Invalid data set '<data set>': The non-first characters in the SMF segments must be
alphabetic (A -- Z), numeric, national (#, @, $), hyphen, asterisk (*) or percent (%).
Explanation
The non-first characters in the SMF segments must be alphabetic (A -- Z), numeric, national (#, @, $), hyphen, asterisk (*) or percent (%).
User response
Specify a data set name in which the non-first characters in the SMF segments are alphabetic (A -- Z), numeric, national (#, @, $), hyphen, asterisk (*), or percent (%).
Parent topic: Error messages and codes: AUIXxxxx
Explanation
The specified data set requires APF authorization.
User response
The specified data set must be APF-authorized. See Configuration overview for more information about the required configuration steps.
Parent topic: Error messages and codes: AUIXxxxx
AUIX156E Invalid data set '<data set>': The first character in SMF segment must be alphabetic
(A -- Z) or national (#, @, $), asterisk (*) or percent (%).
Explanation
The first character in SMF segment must be alphabetic (A -- Z) or national (#, @, $), asterisk (*) or percent (%).
User response
Specify a data set name in which the first character of each SMF segment is alphabetic (A -- Z), national (#, @, $), asterisk (*), or percent (%).
Parent topic: Error messages and codes: AUIXxxxx
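The naming rules described by messages AUIX150E through AUIX153E can be sketched as a small validator. This is an illustration only, not the product's validation code; the function name is an assumption, and the SMF wildcard rules from AUIX154E and AUIX156E are deliberately omitted.

```python
import re

# Sketch of the MVS data set name rules described by messages
# AUIX150E-AUIX153E (illustrative; not the product's actual code):
#  - total name length is at most 44 characters (AUIX150E)
#  - each dot-separated segment is 1 to 8 characters (AUIX151E)
#  - a segment's first character is alphabetic (A-Z) or national
#    (#, @, $) (AUIX152E)
#  - remaining characters are alphabetic, numeric, national, or
#    hyphen (AUIX153E)
SEGMENT = re.compile(r"^[A-Z#@$][A-Z0-9#@$-]{0,7}$")

def is_valid_dsn(name: str) -> bool:
    if not name or len(name) > 44:
        return False
    return all(SEGMENT.match(seg) for seg in name.upper().split("."))
```

For example, `SYS1.PROCLIB` passes all four rules, while a name with an empty segment or a segment that starts with a digit fails.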
AUIX160E A dynamic allocation query error occurred: info code = <info-code>, error code =
<error-code>, DD name = <dd-name>.
Explanation
A dynamic allocation query error occurred with the specified information code, error code, and DD name.
User response
See the MVS Programming: Authorized Assembler Services Guide for more information about the specified info and error codes.
Parent topic: Error messages and codes: AUIXxxxx
AUIX183E The number of file descriptors (sockets) has exceeded maximum = <number>.
Explanation
The active program holds too many file or socket descriptors and has exceeded the system maximum of <number>.
User response
Contact your system administrator or IBM Software Support.
Parent topic: Error messages and codes: AUIXxxxx
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
Explanation
This message indicates that a callable service abend has occurred. Additional diagnostic information is present in the message when applicable.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIYxxxx
Explanation
This message indicates that a CSI abend has occurred. Additional diagnostic information is present in the message when applicable.
User response
No action is required.
Parent topic: Error messages and codes: AUIYxxxx
Explanation
This message indicates a CSI abend has occurred. Additional diagnostic information is present in the message when applicable.
User response
No action is required.
Parent topic: Error messages and codes: AUIYxxxx
Explanation
This message indicates a CSI abend has occurred. Additional diagnostic information is present in the message when applicable.
User response
No action is required.
Parent topic: Error messages and codes: AUIYxxxx
Explanation
This message indicates a CSI abend has occurred. Additional diagnostic information is present in the message when applicable.
User response
AUIY006E Callable service invocation failed with return code = return-code and reason code =
reason-code
Explanation
A service requested by the agent task has failed.
User response
View the JES log of the agent task to determine the data set name and reason for the error. Contact IBM® Software Support if you are unable to resolve the error.
Parent topic: Error messages and codes: AUIYxxxx
Explanation
The specified callable service has been invoked successfully.
User response
No action is required.
Parent topic: Error messages and codes: AUIYxxxx
Explanation
Returned from a callable service that is identified in the message.
User response
No action is required.
Explanation
The specified data set mask is not valid.
User response
Enter a valid data set mask and retry.
Parent topic: Error messages and codes: AUIYxxxx
AUIZ002E
dd-name DD has already been allocated.
AUIZ003W
Attached to existing shared memory segment.
AUIZ004S
Shared memory segment key verification failed ('key-value').
AUIZ005S
Shared memory segment eyecatcher 'value' invalid.
AUIZ007S
The master address space failed to respond to a connect request.
AUIZ008W
IBM® Security Guardium® S-TAP® for IMS on z/OS® agent failed to shut down properly last time.
AUIZ009S
Attempts to attach to shared memory segment segment key failed.
AUIZ010W
Configuration value for <parameter> is set below the allowed minimum of <limit>.
AUIZ011W
Configuration value for <parameter> is set above the allowed maximum of <limit>.
Parent topic: Messages and codes for IBM Security Guardium S-TAP for IMS on z/OS
Explanation
The dd-name DD that is needed for the task has already been allocated.
System action
The task terminates with a return code of 12.
User response
dd-name DD is dynamically allocated. Ensure that the dd-name DD is not present in the task JCL. If the dd-name is not present in the JCL, contact IBM® Software
Support.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
This message corresponds to message AUIZ008W. This message indicates that the memory segment has been cleaned, and is being reused.
User response
No action is required.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
Shared memory segment validation failed. This usually implies that the shared memory segment is owned by another product or system.
User response
Change the shared memory segment ID (ADS_SHR_MEM_ID) in the agent configuration and restart the agent.
Explanation
Shared memory segment validation failed. This implies that the shared memory segment is owned by another product or system.
User response
Change the shared memory segment ID (ADS_SHR_MEM_ID) in the agent configuration and restart the agent.
Explanation
A secondary address space failed to connect to the master address space.
User response
Check the listener-port in the address-space-manager-config section of the configuration, and verify that it matches in the AUICONFG members that are used by the primary address space and the secondary address spaces.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ008W IBM® Security Guardium® S-TAP® for IMS on z/OS® agent failed to shut down
properly last time.
Explanation
When the agent is restarting, the persistent memory object indicates that the agent was abnormally cancelled or terminated without going through the proper clean-up
routines, for example, Estae processing. This message might also indicate that another instance of the agent is currently executing.
User response
Verify that there is only one instance of this agent running.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ009S Attempts to attach to shared memory segment segment key failed.
Explanation
This startup error indicates that attempts to create a shared memory segment failed because of an already existing shared memory segment that never belonged to, or currently does not belong to, the primary agent address space.
This message can occur in the secondary address space if the <id> elements in the <address-space-manager-config> parameters of the AUICONFG config member that is used by the agent primary address space and the secondary address space(s) do not match.
User response
Edit SAUISAMP member AUICONFG (or the customized AUICONFG) and specify a different <id> element in the <address-space-manager-config> section.
Parent topic: Error messages and codes: AUIZxxxx
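For illustration, the <id> element that both address spaces must agree on sits in the address-space-manager-config section of AUICONFG. This sketch shows only the relationship between the elements; the element names come from the messages above, but the surrounding structure and the values shown are assumptions, not the product's exact syntax:

```
<address-space-manager-config>
  <id>01</id>                           (must match in the AUICONFG members used by
  <listener-port>5555</listener-port>    the primary and all secondary address spaces)
</address-space-manager-config>
```

If any secondary address space reads an AUICONFG member with a different <id>, the attach to the shared memory segment fails with AUIZ009S.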
AUIZ010W Configuration value for <parameter> is set below the allowed minimum of <limit>.
Explanation
The configuration parameter is not valid: <parameter> must not be less than <limit>.
User response
Change the parameter to comply with the requirements.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ011W Configuration value for <parameter> is set above the allowed maximum of <limit>.
Explanation
The configuration parameter is not valid: <parameter> must not exceed <limit>.
User response
Change the parameter value to comply with the requirements.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
Identifies the port number on which the log-server is listening.
User response
No action is required.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
No available port was found in the specified range. This usually implies that the range of ports is in use by other installations or products.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
This message indicates that an unexpected connection occurred from <client-ip> to log-server port.
System action
The connection is refused, and processing continues.
User response
This warning message can be produced during a system-level port security scan. If you do not want to receive this message, suppress it by using the configuration
parameters LOG_FILTER(E) and LOG_FILTER_MSGS_ID(AUIZ014W).
If a port scan was not active when this message was received, it indicates that an unknown message was received by the log-server port. Contact IBM Software Support.
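If you choose to suppress AUIZ014W as described, the two parameters named in the message text would be specified together in the agent configuration. A minimal sketch, using exactly the parameter forms given above (surrounding member content omitted):

```
LOG_FILTER(E)
LOG_FILTER_MSGS_ID(AUIZ014W)
```

LOG_FILTER(E) enables filtering by explicit message ID, and LOG_FILTER_MSGS_ID names the message to suppress.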
Explanation
The specified configuration parameter parameter-name cannot contain a value that has already been specified for a related parameter.
User response
Fix the duplicate value specified-value and restart the agent.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The configuration parameter option contains an invalid value.
User response
Check the valid values for the option and correct the configuration file.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
No appliances were specified in the agent configuration, or all specified appliances were disabled.
Explanation
The specified appliance (host/port) is a duplicate of another appliance that is specified in the configuration.
User response
Update or remove duplicate appliances in the agent configuration.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
Two or more appliances with duplicate priority (priority) were specified.
User response
Update or remove appliances with duplicate priorities in the agent configuration.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ025E Spill size can't be zero if more than one appliance is enabled.
Explanation
Spill size must be greater than zero if two or more active appliances are specified.
User response
Specify a valid spill size.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ026E Configuration parameter <option> value <value> is invalid; expected list <value-
list>.
Explanation
The configuration parameter <option> contains an invalid value.
User response
Check the valid values for the <option> and correct the configuration file.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
An attempt to determine the IP address of the indicated host name by using the z/OS getaddrinfo service failed.
System action
If the host name is not the local LPAR, processing continues. The TCP/IP address for any events that occur on this LPAR will not be sent to the appliance for reporting. If
the host name is the local LPAR where the agent (AUIAstc task) is running, the local host name and IP address will be used for INTER and INTRA task communications.
User response
The z/OS network administrator must verify that the LPAR name exists in the DNS table.
Parent topic: Error messages and codes: AUIZxxxx
User response
Correct the value for the element-name in the configuration.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
A required property property-name could not be loaded from the configuration file because it has been incorrectly specified, specified multiple times, or not specified at
all.
User response
Update the configuration file and add property-name with an appropriate value.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The configuration parameter identified by parameter-name contains an invalid value. The expected value should be of type long.
User response
Correct the configuration value.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The configuration parameter identified by parameter-name contains an invalid value. The expected value should be of type unsigned long.
User response
Correct the configuration value.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The configuration parameter identified by parameter-name contains an invalid value. The expected value should be of type short.
User response
Correct the configuration value.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The configuration parameter identified by parameter-name contains an invalid value. The expected value should be of type unsigned short.
User response
Correct the configuration value.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The configuration parameter identified by parameter-name contains an invalid value. The expected value should be of type boolean.
User response
Correct the configuration value.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The configuration parameter identified by parameter-name contains an invalid value. The expected value should be of type double.
User response
Correct the configuration value.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The element-value that was given for element-name is too long; its length must be between length-min and length-max.
User response
Correct the value for the element-name in the configuration file.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The specified collection profile was uninstalled.
User response
No action is required.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The specified collection profile was installed.
User response
No action is required.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The agent has received a policy message from the appliance and has started to process it.
User response
No action is required.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The Guardium policy has been processed. The active, installed, and uninstalled values indicate the number of processed collection profiles.
User response
No action is required.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ041E Profile for IMS source ims_name was ignored: unknown IMS.
Explanation
The agent received an IMS policy from the Security Guardium® system that does not relate to this agent instance.
System action
The policy is ignored by this agent.
User response
No action is required.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ041W Profile for IMS source ims_name was ignored: unknown IMS.
Explanation
The agent received an IMS policy from the Security Guardium® system that does not relate to this agent instance.
System action
The policy is ignored by this agent.
User response
No action is required.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
During policy pushdown, an ims-name was specified for one of the rules that does not exist in the Guardium® appliance.
User response
Contact IBM® Software Support.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ043E XCF callable service invocation failed: function function-name, RC = nn, reason code
= hhhhhhhh, AUIU proc name = proc-name, ADS_SHR_MEM ID = nn.
Explanation
An error occurred attempting to retrieve AUIU tokens from the coupling facility (CF).
User response
If the LPAR is not a sysplex member, no action is necessary. If the LPAR is a sysplex member, contact IBM® Software Support.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ044S Shared memory segment version S-TAP version found is not compatible with
expected expected version.
User response
Verify and change the ADS_SHR_MEM_ID that is specified in the agent configuration.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The address space requires an AUICONFG DD to be specified in the JCL.
User response
Update the JCL for the address space to include an AUICONFG DD.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
Invocation of the specified module failed due to the specified return-code and reason-code.
User response
Contact IBM Software Support.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
During agent startup, the SMF spill file that is named in the configuration parameter SMF_SPILL_FILE(dsn) was not found.
System action
The agent terminates.
User response
Determine why the file cannot be located. Correct any errors, and restart the agent.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ048E Problem encountered for <spill>, <problem area>: required <req>, received <res>.
Explanation
The spill data set <spill> could not be validated. The <problem area> with the parameters <req> and <res> gives additional details.
User response
Fix the issue in the <problem area> using the required <req> value. If necessary, contact IBM® Software Support for additional help.
AUIZ049E z/OS call failure for <spill>, <problemarea>: RC= <rc>, RSN= <rsn>.
Explanation
An attempt to validate the spill data set has caused an error with the z/OS services. A <problemarea> value with return code <rc> and reason code <rsn> are returned. If the <problemarea> value is OBTAIN, and the <rc> value is 4, the spill data set in question might have been migrated. In that case, recall the spill data set before processing continues.
User response
If a migrated data set is not the problem, contact IBM Software Support.
Explanation
The z/OS log stream name that was specified in the LOG_STREAM_DLIO or LOG_STREAM_DLIB AUICONFG DD input stream does not exist.
System action
The agent address space terminates.
User response
Correct the log stream name that you provided, or customize and run the AUILSTRx Log Stream definition jobs that are located in the SAUISAMP product data set.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
An attempt to validate the z/OS® System Logger log stream by using an IXGCONN call failed.
System action
Processing terminates.
User response
Determine the cause of the failure by examining the return and reason codes for the IXGCONN macro. These codes are described in the manual IBM® MVS™ Programming: Authorized Assembler Services Reference.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ052E Abend occurred while validating <log stream>. Abend code = <code>, RSN=
<reason>.
Explanation
The Log Stream <log stream> validation failed with abend code <code> and reason code <reason>.
User response
Contact IBM® Software Support.
Explanation
This error can occur for several reasons. It is preceded by messages that identify the specific error that caused the logging subsystem to fail during initialization.
User response
Review previously issued error messages to determine the cause of the logging failure.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ054E The Batch DLI log Stream and Online DLI log stream names must be different.
Explanation
The log stream names that are specified for LOG_STREAM_DLIO and LOG_STREAM_DLIB must be different.
User response
Specify different log streams for batch and online in the agent configuration.
Parent topic: Error messages and codes: AUIZxxxx
User response
Check the available <shm-id> and update the configuration files. Contact IBM® Software Support if <shm-id> is set correctly.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ056E Shared memory segment ID segment_id is owned by agent agent_name and cannot
be attached.
Explanation
The shared memory segment that was identified by the <id> parameter within the address-space-manager-config section of the agent configuration file is already used by
the specified agent, agent_name.
System action
The agent terminates because it is unable to use the shared memory segment.
User response
To avoid a collision with other agents running on the LPAR, change or include the <id> value in the address-space-manager-config section of the agent configuration file.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ057E A configuration syntax error was detected at line <number>; expected "<token1>",
found "<token2>".
Explanation
An invalid value was found in the AUICONFG file at the indicated line.
System action
Processing terminates.
User response
Review Configuring the IBM Security Guardium S-TAP for IMS on z/OS agent for information about permissible configuration values. Correct the syntax error and restart
the agent.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The active collection profile <profile-name> has been updated during policy installation.
User response
No action is required.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ059E Configuration parameter <option> value <value> is invalid: the first character must
be alphabetic.
Explanation
The configuration parameter <option> contains an invalid value.
User response
Review the valid values for the <option> and correct the configuration file.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ060E The master address space did not respond within 60 seconds.
System action
The AUIUSTC task terminates with RC=12.
User response
Contact IBM Software Support.
Parent topic: Error messages and codes: AUIZxxxx
Explanation
The AUIHOST DD statement has been detected in the JCL.
System action
The IP addresses for participating LPARs are resolved by using the information that is contained in this file and are described by message AUIxxxI.
User response
If this was not intended, remove the DD statement.
Parent topic: Error messages and codes: AUIZxxxx
System action
The DNS_NAME is the value that is used to perform the gethostbyname call in order to obtain the relevant IP address.
User response
Verify that the supplied LPAR_NAME and DNS_NAME values are correct.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ063E AUIHOST file format is invalid. RECFM must be FB; LRECL must be 80.
Explanation
The file format that was provided by using the AUIHOST DD is incorrect.
System action
The address space terminates.
User response
Verify that the supplied file is a fixed block (FB) file with a logical record length (LRECL) of 80 bytes, and that it is either a sequential file or a member of a partitioned data set (PDS or PDS/E). Correct the error and restart the address space.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ064E AUIHOST file contains invalid syntax <line number and string>
Explanation
The AUIHOST file supplied contains a record with invalid syntax.
System action
The address space terminates.
User response
AUIZ065W IMS STAP <name> TCP/IP streaming disabled due to user settings.
Explanation
Simulation mode is on because the STAP_STREAM_EVENTS parameter has been set to N.
System action
Events will not be streamed to the Guardium® system.
User response
To stream events to the Guardium system, set the STAP_STREAM_EVENTS parameter to Y.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ066E Configuration parameter "DLIFREQ" value value is invalid: expected 10K-999K, 1M-
10M.
Explanation
In the AUICONFG file, the DLIFREQ parameter value is outside of the permitted range. Valid values for the DLIFREQ parameter are 10K-999K or 1M-10M.
System action
The AUIAxxx task terminates.
User response
Correct the DLIFREQ parameter value.
Parent topic: Error messages and codes: AUIZxxxx
AUIZ067W Configuration parameter <parameter> value <wrong value> is not valid. <Value>
will be used instead.
Explanation
The configuration parameter is not valid: <parameter> must match <value>.
User response
Change the parameter value to comply with the requirements.
Parent topic: Error messages and codes: AUIZxxxx
This information is designed to help database administrators, system programmers, and application programmers perform these tasks:
Plan for the installation of IBM Guardium S-TAP for Data Sets
Install and operate IBM Guardium S-TAP for Data Sets
Configure the IBM Guardium S-TAP for Data Sets environment
Diagnose and recover from IBM Guardium S-TAP for Data Sets problems
IBM Guardium S-TAP for Data Sets enables you to collect many different types of information, including:
Access to VSAM and non-VSAM data sets and security violations that are recorded by SMF.
Data set operations that are performed against VSAM data sets, such as delete or rename events, recorded by SMF.
Access to specific records within VSAM data sets, including key-sequenced data sets (KSDS) or relative record data sets (RRDS), captured as they occur.
Transaction information that is associated with a VSAM KSDS or RRDS logical record operation, performed within a transaction that runs on the Customer
Information Control System (CICS) Transaction Server.
Access to read and update events for a particular VSAM cluster (consisting of one or more physical data sets) for actions performed on the data set as a whole, or
actions performed at the individual level for records within the data set.
Parent topic: IBM Security Guardium S-TAP for Data Sets on z/OS
Enhanced reporting of partitioned data sets (PDS) and extended partitioned data sets (PDSE) member activity
IBM Guardium S-TAP for Data Sets can now report on the following types of activity:
Member Adds
Member Replaces
Member Renames
Member Deletes
STOW Initialization (PDSE directory clearing)
CICS Transaction Server 5.3 to capture Record Level Monitoring (RLM) data
8-character CICS local unit of work (with CICS Transaction Server 4.2 and later, until end of service)
Dynamic starting and stopping of RLM data collection with new IBM Guardium S-TAP for Data Sets SAMPLIB members
Parent topic: IBM Security Guardium S-TAP for Data Sets on z/OS overview
Guardium system
Provides the user interface, which processes your requests and displays the resulting information.
Enables you to create collection policies, which specify the types of data that are to be collected by the agent.
Stores the collected data.
Agent
The agent collects data from a single z/OS system. Monitoring can be performed at both the data set and record level:
For data set level monitoring, data is collected directly from SMF records, as presented to various SMF exits with which the agent interfaces.
For record level monitoring, data is collected when VSAM records are read or written.
Parent topic: IBM Security Guardium S-TAP for Data Sets on z/OS overview
Installation requirements for IBM Guardium S-TAP for Data Sets V10.1.3
Review the software and authorization prerequisites for installing IBM Guardium S-TAP for Data Sets V10.1.3.
Software prerequisites
IBM Guardium S-TAP for Data Sets requires z/OS Version 2 Release 2 or later, until end of service.
User ID authority requirements
To install the product, you must have the necessary z/OS user ID authorities.
Parent topic: IBM Security Guardium S-TAP for Data Sets on z/OS
Software prerequisites
IBM Guardium S-TAP for Data Sets requires z/OS® Version 2 Release 2 or later, until end of service.
Customer Information Control System (CICS®) Transaction Server support requires IBM CICS Transaction Server for z/OS V4 Release 2 or later, until end of service.
Parent topic: Installation requirements for IBM Guardium S-TAP for Data Sets V10.1.3
Define the appropriate SMF record collection parameters in the SMFPRMxx PARMLIB member and APF authorize the load library for the product.
Update the appropriate procedure library to include the agent started task.
If you choose to enable CICS support, you must also have the authority to:
Configuration overview
To configure the product, complete the required steps.
Security: Review and establish the security requirements. You must set up access controls in your security product in order to create, authorize, or update the
various data sets that are necessary for product configuration.
Review the required resource authorizations information, including:
APF authorizing the load library
Authorizing the z/OS agent started task for the control data set
Defining an OMVS segment
Planning your configuration: Review the steps that are required to plan your configuration.
Job cards for the sample JCL in the sample library: Provide valid job cards.
Allocating auxiliary storage: Ensure that data will not be lost in the event of an overflow.
Configuring the SMFPRMxx parameter library member: Ensure a complete audit by configuring the SMFPRMxx parameter library to collect the required SMF record
types.
IAM and ACF2 collection considerations: Review information about capturing IAM data set activity and ACF2 access failures.
Creating the control data set: Generate the initial partitioned data set members.
Specifying subsystem options: Review the subsystem changes that you can make to the options member in the control data set.
Configuring the started task JCL: Determine the location of the started task control job language (JCL), and follow configuration steps and tips.
CICS Transaction Server support: Review the requirements for enabling the CICS Transaction Server, and follow the instructions for Configuring CICS Transaction
Server support.
Parent topic: IBM Security Guardium S-TAP for Data Sets on z/OS
Security
IBM Guardium S-TAP for Data Sets requires access to various z/OS® data sets and system components. You must set up access controls in your security product in order
to create, authorize, or update the various data sets that are necessary for product configuration.
To provide IBM Guardium S-TAP for Data Sets with access to the necessary z/OS data sets and system components, you must APF authorize the load library, authorize the
z/OS started task for the control data set, and define an OMVS segment to your security product, as described in the following sections.
Security products can include various software tools that are currently available, such as IBM Resource Access Control Facility (RACF®), Computer Associates
International Top Secret, and Computer Associates International Access Control Facility (ACF2).
Parent topic: Configuring the IBM Guardium S-TAP for Data Sets agent
The product data set SAUVLOAD, which contains the product load modules that are required for operation, must be APF authorized on the system on which IBM Guardium
S-TAP for Data Sets will be run.
Refer to the z/OS MVS Programming: Authorized Assembler Services Guide for guidelines and instructions for using APF.
Authorizing the z/OS agent started task for the control data set
Refer to your security product documentation for more information on authorizing the agent started task.
If you are using IBM RACF, refer to z/OS UNIX System Services Planning for guidelines and instructions about OMVS segment definitions. If you are using a security
product other than RACF, refer to your product’s instructions on how to define an OMVS segment.
Parent topic: Configuring the IBM Guardium S-TAP for Data Sets agent
The OUTAGE_SPILLAREA_SIZE parameter option instructs the address space to allocate a data space equal in size to the value that you set for
OUTAGE_SPILLAREA_SIZE.
Verify that the current local page space can accommodate a new data space.
Example
Specifying OUTAGE_SPILLAREA_SIZE=64 instructs the address space to allocate 64 MB of data space.
Refer to the z/OS® MVS™ Initialization and Tuning guide for more information about sizing local page data sets.
Parent topic: Planning your configuration
The record types can be collected at the subsystem or system level. Maximum auditing of VSAM and non-VSAM data set activity can be achieved by ensuring that all
defined subsystems record all of the SMF record types that are required by the product.
The defaults used at the system level for those subsystems that are not explicitly defined should also specify collection of the required SMF record types. The required
SMF record types are 14, 15, 17, 18, 30, 42, 60, 61, 62, 64, 65, 66, and 80. If any required SMF record types are not defined for collection, message AUV1450W alerts you
to define them.
If the appropriate exit is not defined for the operating system level, SMF records will not be collected. Specify the SMF exits as follows:
For z/OS Version 2 Release 2 and earlier, specify the IEFU83, IEFU84, and IEFU85 SMF exits.
For z/OS Version 2 Release 3 and later, specify the IEFU86 SMF exit.
For more information about setting up and managing SMF, refer to the z/OS MVS™ System Management Facilities (SMF) manual.
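The record types and exits above can be sketched as an SMFPRMxx SYS statement. This is illustrative only: the TYPE list is the set of required record types named above, the EXITS list shows the z/OS V2R2-and-earlier case, and any other keywords already in your installation's SMFPRMxx member are assumed to remain unchanged:

```
SYS(TYPE(14,15,17,18,30,42,60,61,62,64,65,66,80),
    EXITS(IEFU83,IEFU84,IEFU85))
```

Any subsystem-level SUBSYS statements that override SYS must also collect these record types, or auditing will be incomplete for work running under those subsystems.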
Parent topic: Configuring the IBM Guardium S-TAP for Data Sets agent
Related reference
SMF record types and contexts
Innovation Access Method (IAM) from Innovation Data Processing provides capabilities beyond standard VSAM. IAM replaces VSAM access with a proprietary non-VSAM
access that simulates VSAM. Because the underlying data sets are non-VSAM, accesses to the IAM-simulated VSAM data sets do not generate VSAM SMF records, such as
the SMF type 62 (VSAM OPEN) and SMF type 64 (VSAM CLOSE).
For IAM data sets, IBM Guardium S-TAP for Data Sets does not report the following items:
Context records for OPEN and UPDATE for IAM data sets (because of the lack of the SMF type 62 records).
IAM simulation of alternate index and path processing (because of the lack of an IAM SMF CLOSE record).
The CLOSE record counters will report IAM data sets differently from native VSAM processing. Although the IAM CLOSE SMF record offers an extensive array of counters,
those corresponding to the VSAM SMF Type 64 record are included in the accumulated counts within the CLOSE context record.
Parent topic: Configuring the IBM Guardium S-TAP for Data Sets agent
Procedure
1. Determine the user-specified SMF record ID that was selected for IAM.
2. Specify that value in the IBM Guardium S-TAP for Data Sets control data set IAM_SMF_RECORD_ID option.
Procedure
1. Determine the user-specified SMF record ID that was selected for ACF2.
2. Specify that value in the IBM Guardium S-TAP for Data Sets control data set ACF_SMF_RECORD_ID option.
Parent topic: Configuring the IBM Guardium S-TAP for Data Sets agent
To specify IBM Guardium S-TAP for Data Sets subsystem options, modify the contents of the OPTIONS member as described.
ACF_SMF_RECORD_ID
If you are using Access Control Facility (ACF2) from Computer Associates International, you must provide product-specific information for your SMF data to be
processed. ACF2 records access failures to a unique record ID. Determine the user-specified SMF record ID that is selected for ACF2 and specify that ID in the IBM
Guardium S-TAP for Data Sets CONTROL data set ACF_SMF_RECORD_ID option if you want the product to report these failures.
ACF2 writes SMF access failure data to a user-defined SMF record ID. Specify a numeric value that identifies the SMF record identification number used by ACF2.
For ACF2 installations, contact your ACF2 administrator to determine the appropriate numeric value to include with this parameter.
Note:
For z/OS Version 2 Release 3 and later, valid values are 128 – 1151.
For z/OS Version 2 Release 2 and earlier, valid values are 128 – 255.
There is no product default value; however, the SAMPLIB member AUVSOPTS includes a default specification of 230.
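For example, if your ACF2 administrator confirms that ACF2 writes its access-failure records as SMF record type 230 (the value supplied in SAMPLIB member AUVSOPTS), the OPTIONS member entry might look like this sketch:

```
ACF_SMF_RECORD_ID(230)
```

The equivalent ACF_SMF_RECORD_ID=230 form can also be used, matching the keyword=value syntax shown for other options in this topic.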
APPLIANCE_CONNECT_RETRY_COUNT
Specify a numeric value that defines the number of times to retry communicating with the Guardium system when an error is encountered during initialization. If communication is still unsuccessful after the specified number of retries, the connection attempt is abandoned, no data is sent, and the process terminates.
Valid values are 0 -- 65535. The default value is 20.
APPLIANCE_NETWORK_REQUEST_TIMEOUT
Specify a numeric value that defines the number of seconds that must elapse before a timeout is recognized.
Valid values are 0 -- 65535. The default value, in seconds, is 0.
APPLIANCE_PING_RATE
Specify a numeric value that defines the number of seconds between pings to the Guardium system. The ping signals the Guardium system that the S-TAP is active
and available for communications.
Valid values are 1 -- 65535. The default value, in seconds, is 5.
APPLIANCE_PORT
Specify a numeric value that defines the TCP/IP port number for communication with the Guardium system by IBM Guardium S-TAP for Data Sets. Use port 16022
for the V10.1.3 system protocol.
The default value is 16022.
If port 16023 is used, encryption support is required for the connection to the appliance.
Note: Specifying this keyword and parameter designates the port on which the Guardium appliance is listening to the S-TAP. The port is dedicated to the IP address
of the appliance. Port 16022 or 16023 can also be in use on z/OS® by another application.
Valid values are 16022 and 16023.
APPLIANCE_RETRY_INTERVAL
Specify a numeric value that defines the number of seconds between retries when an error is encountered during an initial attempt to connect to the Guardium
system.
Valid values are 0 -- 65535. The default value, in seconds, is 10.
APPLIANCE_SERVER
Specify the TCP/IP address for the Guardium system with which IBM Guardium S-TAP for Data Sets is to communicate. In multistream processing scenarios, this
address specifies the first Guardium appliance that is to be used.
The address can be specified as a host name (security.guardiumvsam.net) or as four numbers separated by periods (for example, 188.128.6.42).
Maximum length is 53 characters. There is no default.
APPLIANCE_SERVER_[1-5]
Specify alternative TCP/IP addresses to use for failover recovery processing and multistream Guardium appliance destinations. Up to five alternative TCP/IP
addresses are supported.
APPLIANCE_SERVER_1=addr
or
APPLIANCE_SERVER_1(addr)
where the suffix can be 1, 2, 3, 4, or 5.
Valid values are any valid TCP/IP address. There are no default values. If initialization does not detect this parameter, it does not activate the failover process.
Both the APPLIANCE_SERVER_[1-5] and APPLIANCE_SERVER_FAILOVER_[1-5] parameters can be used to designate servers for multistreaming or failover. Use
the APPLIANCE_SERVER_LIST parameter to designate how these parameters are used.
Maximum length is 51 characters.
APPLIANCE_SERVER_FAILOVER_[1-5]
Specify alternative TCP/IP addresses to use for failover and recovery processing. The product supports up to five alternative TCP/IP addresses. To specify one or
more entries, include this parameter with a numeric suffix from 1 - 5, each time providing a unique TCP/IP address.
The option syntax is as follows:
APPLIANCE_SERVER_FAILOVER_1=addr
or
APPLIANCE_SERVER_FAILOVER_1(addr)
where the suffix can be 1, 2, 3, 4, or 5.
Valid values are any valid TCP/IP address. There are no default values. If initialization does not detect this parameter, it does not activate the failover process.
Both the APPLIANCE_SERVER_FAILOVER_[1-5] and APPLIANCE_SERVER_[1-5] parameters can be used to designate servers for multistreaming or failover. Use
the APPLIANCE_SERVER_LIST parameter to designate how these parameters are used.
Maximum length is 42 characters.
APPLIANCE_SERVER_LIST(MULTI_STREAM|FAILOVER|HOT_FAILOVER)
Set APPLIANCE_SERVER_LIST to MULTI_STREAM for a Guardium appliance connection to be established for each server that is identified by the
APPLIANCE_SERVER_n or APPLIANCE_SERVER_FAILOVER_n parameters.
If a connection is lost, S-TAP audit events continue to transmit over the remaining appliance connection.
Lost connections are retried at regular intervals that are determined by multiplying the APPLIANCE_CONNECT_RETRY_COUNT by the
APPLIANCE_PING_RATE.
Set APPLIANCE_SERVER_LIST to FAILOVER for one Guardium appliance connection to be active at a time.
If the connection to the primary appliance is lost, a failover action occurs, which results in an attempt to connect to the next available server. The next
available server is identified by the APPLIANCE_SERVER_n or APPLIANCE_SERVER_FAILOVER_n parameter.
After a failover action occurs, the connection to the primary server is retried at regular intervals that are determined by multiplying the
APPLIANCE_CONNECT_RETRY_COUNT by the APPLIANCE_PING_RATE.
Set APPLIANCE_SERVER_LIST to HOT_FAILOVER to keep each connected Guardium appliance active via pings. If the primary Guardium appliance (which is set by
the APPLIANCE_SERVER parameter) becomes unavailable and failover occurs, HOT_FAILOVER maintains the activity of the primary appliance policy.
With all settings of APPLIANCE_SERVER_LIST, if all connections fail, and a spill file is specified (parameter OUTAGE_SPILLAREA_SIZE), events are buffered to the
spill file until a connection becomes available. If no spill file is specified, and all connections are lost, data loss occurs.
The default is FAILOVER.
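As a sketch, a two-appliance multistream configuration might combine these parameters as follows (the host names are hypothetical):

```
APPLIANCE_SERVER(guardium1.example.com)
APPLIANCE_SERVER_1(guardium2.example.com)
APPLIANCE_PORT(16022)
APPLIANCE_SERVER_LIST(MULTI_STREAM)
```

With the default APPLIANCE_CONNECT_RETRY_COUNT of 20 and APPLIANCE_PING_RATE of 5, a lost connection in this configuration would be retried every 20 x 5 = 100 seconds.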
AUDIT
Specify a character string from one through 26 characters that defines the name of this IBM Guardium S-TAP for Data Sets agent.
There is no default.
CICS_SUPPORT
Enabling CICS® Transaction Server support activates additional reporting of CICS-specific information on record level events, including:
CICS File ID
CICS Function Code
CICS Program ID
CICS Region ID
CICS Terminal ID
CICS Transaction ID
CICS User ID
CICS Logical Unit of Work
There is no product default value; however, the SAMPLIB member AUVSOPTS includes a default specification of 201.
INTERNAL_BUFFER_SIZE
Specify the size of the internal buffer used.
To improve performance, data is stored in an internal buffer that is sent when the buffer is full or during a ping request. If the buffer reaches the
INTERNAL_BUFFER_SIZE, data is sent without waiting for the next ping request.
Specifying an INTERNAL_BUFFER_SIZE value that is too large for your environment can cause connection problems that are due to timing out while trying to send a
large amount of data. Specifying too small a value might cause unnecessary I/O requests.
Tip: Performance varies based on system load, network load, and the load on the Guardium system, so the correct value for your environment cannot be
predetermined. Begin with the default value, and make minor, incremental adjustments to improve performance, if necessary.
Valid values are 0 -- 2047 megabytes. The default is 8.
INITIAL_RULEDEF
You must not change this subsystem option unless IBM Software Support instructs you to do so. If instructed to modify this subsystem option, specify the name of
the rule definitions member to use at startup. The default rule definitions member name is RULEDEFS.
MEGABUFFER_COUNT
Specify the number of IBM Guardium S-TAP for Data Sets audit events that are buffered, prior to the product attempting a TCP/IP send operation.
The megabuffer is flushed when either of two conditions is met: a ping occurs, or the number of buffered audit events reaches the MEGABUFFER_COUNT value.
When MULTI_STREAM mode is enabled by parameter APPLIANCE_SERVER_LIST, and a megabuffer flush occurs, the audit event data stream is switched to the
next available Guardium appliance. The event data stream will switch from appliance to appliance in a round-robin sequence as each megabuffer is sent.
Valid values are 1 -- 8192. The default is 200.
OUTAGE_SPILLAREA_SIZE
Specify the size of the spill file to be used when a connection cannot be made.
If a spill file is specified, and no secondary APPLIANCE_SERVER_FAILOVER address is specified, or none of the secondary
APPLIANCE_SERVER_FAILOVER addresses respond, the product writes to the spill file. The spill file is meant for short-term outages only: when a connection is
restored to any Guardium system, the spill file content is cleared before data transmission continues.
Valid values are 0 -- 1024 megabytes. If a valid value is not specified, a spill file is not created.
PREFER_IPV4_STACK
Specify the request for an IPV4 address to be issued from the Domain Name Server (DNS). The default value is N.
Y causes a request to be issued to the DNS for an IPV4 address for the hostname that is specified in the APPLIANCE_SERVER parameter:
The DNS lookup request for an IPV4 address is attempted. If an IPV4 address is defined for the hostname, the DNS will respond with the value that
will be used to connect to the Guardium appliance.
If only an IPV6 address is defined at the DNS, then the DNS will respond with the IPV6 address that will be used to connect to the Guardium
appliance.
If both IPV4 and IPV6 addresses are defined at the DNS, the DNS will respond with both addresses, and the IPV4 address will be used
to connect to the appliance.
N or omitting this option from configuration causes a request for an IPV6 address to be issued to the DNS for the hostname that is specified by the
APPLIANCE_SERVER parameter.
The DNS lookup request for an IPV6 address is attempted. If an IPV6 address is defined for the hostname, the DNS will respond with the value that
will be used to connect to the Guardium appliance.
If only an IPV4 address is defined at the DNS, then the DNS will respond with the IPV4 address that will be used to connect to the Guardium
appliance.
If both IPV4 and IPV6 addresses are defined at the DNS, the DNS will respond with both addresses, and the IPV6 address will be used
to connect to the appliance.
Note: Whether or not this option is specified, if the address that is returned from the DNS is not valid for the hostname, the connection to the appliance fails,
and the IBM Guardium S-TAP for Data Sets started task terminates.
RLM
Specify the initial status of RLM processing by setting the RLM parameter to either ENABLE or DISABLE. ENABLE enables record level monitoring. DISABLE disables
record level monitoring.
The default value is ENABLE.
SOCKET_CONNECT_TIMEOUT
Specify the length of time for socket connection attempts before failure or timeout.
Setting this value too low results in connection failures when the Guardium system is slow to respond. Setting this value too high causes problems in failover
scenarios.
Tip: Performance varies based on system load, network load, and the load on the Guardium system, so the correct value for your environment cannot be
predetermined. Begin with the default value, and make minor, incremental adjustments to improve performance, if necessary.
Valid values are 1 -- 65535. The default value, in seconds, is 3.
STAP_STREAM_EVENTS
Specify the initial streaming status by setting the STAP_STREAM_EVENTS parameter to either Y or N.
Y indicates that the IBM Guardium S-TAP for Data Sets agent address space will send data to the server in a manner that is consistent with the active policy.
N indicates that the agent address space will not send data to the server. It will perform all data collection processing in a manner that is consistent with the
active policy. The agent address space will issue message AUV1070I at startup: TCP/IP STREAMING DISABLED DUE TO USER SETTING. See Simulation
mode for more information.
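Putting several of these options together, a minimal OPTIONS member for a single-appliance configuration might look like the following sketch (the agent name and appliance address are hypothetical; options that are not listed take their defaults):

```
AUDIT(STAPDS01)
APPLIANCE_SERVER(guardium.example.com)
APPLIANCE_PORT(16022)
APPLIANCE_SERVER_LIST(FAILOVER)
INTERNAL_BUFFER_SIZE(8)
OUTAGE_SPILLAREA_SIZE(64)
RLM(ENABLE)
STAP_STREAM_EVENTS(Y)
```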
Parent topic: Configuring the IBM Guardium S-TAP for Data Sets agent
Procedure
1. Copy the IBM Guardium S-TAP for Data Sets started task JCL to your system PROCLIB from sample data set member AUVJSTC.
Tip: Name the IBM Guardium S-TAP for Data Sets started task member AUVSTAPV. This name is easily identifiable with the IBM Guardium S-TAP for Data Sets
product.
2. Verify that the statement: //AUVSTAPV PROC OPTSMBR=OPTIONS points to the default member name OPTIONS.
The default member name OPTIONS was created during creation of the control data set.
3. Configure the started task JCL that you copied to your system PROCLIB by replacing AUV.V10R1M3 with the high-level qualifier of the installed IBM Guardium S-
TAP for Data Sets load library.
Note: For operation of the product, policy activation, and correct processing of data, the following conditions must be met:
A DD statement with the DDNAME OPTIONS must be in the IBM Guardium S-TAP for Data Sets started task. This DD statement points to the subsystem
OPTIONS member of the IBM Guardium S-TAP for Data Sets control data set, which contains the global settings for the product. When the started task is
initiated, it references the data in the subsystem options member to establish global settings, including the subsystem identifier for this specific instance of
IBM Guardium S-TAP for Data Sets.
By default, the OPTIONS DD statement uses the same data set as the RULEDEFS and RULEDEFB DD statements. If necessary, you can specify a
different data set for the OPTIONS DD statement other than that which is used for the DD statements RULEDEFS and RULEDEFB. The OPTIONS
member must be present in the data set that is specified for the OPTIONS DD statement.
A DD statement with a DDNAME of CONTROL must be in the IBM Guardium S-TAP for Data Sets started task. For example: //CONTROL DD
DSN=AUV.V10R1M3.CONTROL,DISP=SHR. This DD statement points to the IBM Guardium S-TAP for Data Sets control data set that contains the collection
policy in the RULEDEFS member.
The two DD statements with the DDNAMES RULEDEFS and RULEDEFB must be present and must point to the same control data set name that was specified
in the CONTROL DD statement. The member names RULEDEFS and RULEDEFB must not be changed. If DDNAMES RULEDEFS and RULEDEFB are not
present, are changed, or do not point to the correct data set name, then the agent does not initiate correctly and is unable to collect data.
The high-level qualifier you specify for the control data set JCL when allocating the control data set must match the high-level qualifier you specify in the
started task JCL.
The started task must have the authority to read and update the control data set and load library.
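The DD statement requirements above might translate into started task JCL fragments like this sketch, using the AUV.V10R1M3 high-level qualifier from the example in the text (adjust to your installation):

```
//OPTIONS  DD DSN=AUV.V10R1M3.CONTROL(OPTIONS),DISP=SHR
//CONTROL  DD DSN=AUV.V10R1M3.CONTROL,DISP=SHR
//RULEDEFS DD DSN=AUV.V10R1M3.CONTROL(RULEDEFS),DISP=SHR
//RULEDEFB DD DSN=AUV.V10R1M3.CONTROL(RULEDEFB),DISP=SHR
```

All four DD statements point at the same control data set, which is the default arrangement described above; only the OPTIONS DD statement may reference a different data set if necessary.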
4. After you configure the started task JCL, add it to the z/OS® PROCLIB data set for started task initiation.
Note:
IBM Guardium S-TAP for Data Sets accommodates the use of multistream and improves support for large policies by providing a default started task JCL region size
of 96 megabytes. When multistream is enabled, a buffer is created for each appliance, based on the INTERNAL_BUFFER_SIZE value. (Valid values are 0 - 2047
megabytes. The default value is 8.) The default started task JCL region size of 96 megabytes can accommodate large policies by providing space for up to six
connected appliances with a default INTERNAL_BUFFER_SIZE of 8 megabytes and approximately 150,000 values in a policy.
You might need to increase the started task JCL region size if:
the value specified for INTERNAL_BUFFER_SIZE is greater than 8 megabytes
an installed policy contains more than 150,000 values
Parent topic: Configuring the IBM Guardium S-TAP for Data Sets agent
IBM Guardium S-TAP for Data Sets must be running before CICS is started. If changes to a policy are made while a CICS file is open, the file must be closed and reopened
for RLM-related policy changes to take effect.
Verify that the agent is running and correctly configured, and the appropriate work area storage is available.
To capture data on files that are referenced within a transaction, the IBM Guardium S-TAP for Data Sets agent must be running and correctly configured to monitor
each system image on which data sets reside.
CICS support uses the XFCFROUT Global User Exit (GLUE).
The GLUE acquires an above-the-line work area from the extended CICS dynamic storage area (ECDSA) of approximately 1412 bytes for each active or suspended
transaction that performs at least one VSAM file operation. The work area is released at the end of the transaction.
Parent topic: Configuring the IBM Guardium S-TAP for Data Sets agent
Procedure
1. Configure the CICS system options.
a. Specify the CICS_SUPPORT=ENABLE option, by using the subsystem options that are located in the OPTIONS member of the control data set.
2. Configure the CICS system initialization and system termination program list tables (PLTs), as shown in the example at the end of this topic.
a. Enter the program AUVPLTPI after the DFHDELIM PLT entry.
b. Enter the program AUVPLTPS before the DFHDELIM PLT entry.
c. After creating or modifying the CICS system initialization and system termination PLTs, you must assemble and link them. For more information about
creating a PLT, see the CICS Transaction Server for z/OS® Resource Definition Guide.
3. Specify autoinstall in the CICS system initialization parameters to automatically install the AUVPLTPI, AUVPLTPS, and AUVFROUT programs.
If you do not specify autoinstall in the CICS system initialization parameters, you must define AUVPLTPI, AUVPLTPS, and AUVFROUT in the CICS system definition
file (CSD). To install the program definitions in batch, sample JCL has been provided in member AUVCSDUP of the IBM Guardium S-TAP for Data Sets SAUVSAMP
library that can be modified and used for the CICS program DFHCSDUP. Alternatively, the CICS CEDA Resource Definition Online transaction can also be used to
perform the install of the program definitions. See the CICS Transaction Server for z/OS Resource Definition Guide for more information about installing resource
definitions.
a. Define the following attributes:
LANGUAGE (ASSEMBLER)
STATUS (ENABLED)
CEDF (NO)
DATALOCATION (BELOW)
EXECKEY (CICS)
EXECUTIONSET (FULLAPI)
RELOAD (NO)
For the load modules to be located, the AUVPLTPI, AUVPLTPS, and AUVFROUT programs must be located in a load library located in the CICS DFHRPL
concatenation within the CICS startup JCL.
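Applied to one of the three programs, the DFHCSDUP input for step 3a might look like this sketch (the group name AUVSTAP is hypothetical; repeat the definition for AUVPLTPI and AUVPLTPS):

```
DEFINE PROGRAM(AUVFROUT) GROUP(AUVSTAP)
       LANGUAGE(ASSEMBLER) STATUS(ENABLED) CEDF(NO)
       DATALOCATION(BELOW) EXECKEY(CICS)
       EXECUTIONSET(FULLAPI) RELOAD(NO)
```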
4. Optional: The CICS facilities that implement RLM support, outside of normal CICS PLT initialization, can be enabled and disabled. To do so, define CICS transactions
accordingly by using the batch CICS program DFHCSDUP or the CICS CEDA Resource Definition Online transaction.
To enable the CICS facilities that are used to implement CICS RLM support, the following attributes must be assigned to the transaction:
Results
If you have configured CICS support, message AUV3004I is displayed during CICS initialization to indicate that the Global User Exit AUVPLTPI XFCFROUT was installed
and enabled.
Example
Enter the program AUVPLTPI after the DFHDELIM PLT entry in the CICS system initialization PLT:
*
* CICS PROGRAM LIST TABLE FOR CICS SYSTEM INITIALIZATION
*
DFHPLT TYPE=INITIAL,SUFFIX=I1
*
* ENTRIES AHEAD OF DFHDELIM ARE EXECUTED IN FIRST PASS OF PLTPI
* DURING THE SECOND PHASE OF CICS SYSTEM INITIALIZATION
*
DFHPLT TYPE=ENTRY,PROGRAM=DFHDELIM
*
* ENTRIES AFTER DFHDELIM ARE EXECUTED IN SECOND PASS OF PLTPI
* DURING THE THIRD PHASE OF CICS SYSTEM INITIALIZATION
*
DFHPLT TYPE=ENTRY,PROGRAM=AUVPLTPI
*
DFHPLT TYPE=FINAL
*
END
Enter the program AUVPLTPS before the DFHDELIM PLT entry in the CICS system termination PLT:
*
* CICS PROGRAM LIST TABLE FOR CICS SYSTEM TERMINATION
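By symmetry with the initialization PLT, and with AUVPLTPS placed before the DFHDELIM entry as described in step 2b, the termination PLT might continue along these lines (a sketch; the suffix T1 matches the PLTSD=T1 parameter shown in the sample system initialization parameters):

```
*
DFHPLT TYPE=INITIAL,SUFFIX=T1
*
DFHPLT TYPE=ENTRY,PROGRAM=AUVPLTPS
*
DFHPLT TYPE=ENTRY,PROGRAM=DFHDELIM
*
DFHPLT TYPE=FINAL
*
END
```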
The suffix of the table that was created as the program initialization PLT must be referenced in the PLTPI parameter.
The suffix of the table that was created as the program termination PLT must be referenced in the PLTSD parameter.
Here is a sample set of system initialization parameters that specifies the PLTPI and PLTSD suffixes:
AICONS=YES,
XRF=NO,
AUXTR=OFF,
AUXTRSW=NO,
APPLID=CICSSYSA,
FCT=NO,
...
PLTPI=I1,
PLTSD=T1,
...
SYSIDNT=SYSA
Note:
Implementation of this facility requires changes to both CICS and RACF. After implementation, the change to SMF type 80 processing causes the
SMF80USR field to contain the CICS signon for specific file accesses. Consult your CICS and RACF security administrator when considering the implementation of
this facility.
This facility does not report the data set activity, only the security level for the requested access event.
The following steps are also documented in the RACF Security Guide. For more information, see the CICS Transaction Server for z/OS® RACF Security Guide.
Procedure
1. Specify RESSEC(YES) in the CSD resource definition of the transactions that access the files.
2. Using the CICS file names for identification, define the profiles to RACF in the FCICSFCT or HCICSFCT resource classes, or their equivalents if you have user-
defined resource class names.
a. For example, use the following commands to define files in the FCICSFCT class, and authorize users to read from or write to the files:
3. To define files as members of a profile in the CICS file resource group class with an appropriate access list, use the following commands:
4. Specify SEC=YES as a CICS system initialization parameter, or SECPRFX if you define profiles with a prefix.
5. Specify XFCT=YES for the default resource class names of FCICSFCT and HCICSFCT, or XFCT=class_name for user-defined resource class names.
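As a sketch of steps 2 and 3, the RACF commands might look like the following (the file, group, and user names are hypothetical):

```
RDEFINE FCICSFCT PAYFILE UACC(NONE)
PERMIT PAYFILE CLASS(FCICSFCT) ID(CICSUSR1) ACCESS(UPDATE)

RDEFINE HCICSFCT PAYGROUP ADDMEM(PAYFILE2,PAYFILE3) UACC(NONE)
PERMIT PAYGROUP CLASS(HCICSFCT) ID(CICSUSR1) ACCESS(READ)

SETROPTS CLASSACT(FCICSFCT) RACLIST(FCICSFCT) REFRESH
```

The first pair defines a discrete profile in the FCICSFCT class and authorizes a user to update the file; the second pair defines a member list in the HCICSFCT grouping class with read access, and the SETROPTS command activates and refreshes the class.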
Results
RACF SMF type 80 records contain the CICS user signon in the SMF80USR field. The data is reported in the User ID field of the Guardium system records.
Parent topic: Configuring the IBM Guardium S-TAP for Data Sets agent
Product initialization errors might occur if other products, which are known to intercept processing at the point of open, close, or record management functions for VSAM
data sets, are started before IBM Guardium S-TAP for Data Sets. Message AUV1196E will warn you of a product initialization order conflict.
1. Shut down IBM Guardium S-TAP for Data Sets and any similar products, including the previous version of this product.
2. Close any data sets that are open under IBM Guardium S-TAP for Data Sets.
3. Start IBM Guardium S-TAP for Data Sets before starting similar products. IBM Guardium S-TAP for Data Sets must be running before CICS is started.
Parent topic: Configuring the IBM Guardium S-TAP for Data Sets agent
1. Start the agent started task by issuing the START command from the operator console, for example: START AUVSTAPV
2. Stop the agent started task by issuing the STOP command from the operator console, for example: STOP AUVSTAPV
You can configure the agent started task to start automatically during the z/OS® initial program load (IPL). To set automatic startup, add the appropriate command to the
COMMNDxx member in SYS1.PARMLIB, or contact your system administrator.
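The COMMNDxx entry for automatic startup might look like this sketch (assuming the started task is named AUVSTAPV, as suggested earlier):

```
COM='START AUVSTAPV'
```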
Parent topic: Starting the product
Procedure
1. You must install a policy on the IBM Guardium system with the characteristics listed below. Remember to replace <HLQ> with a valid high-level qualifier.
Note: To see specific records on the IBM Guardium system, you might need to install a policy on the appliance in the first position that specifies Actions: LOG FULL
DETAILS WITH VALUES.
2. Create a query on the IBM Guardium system that will report the events received from IBM Guardium S-TAP for Data Sets. Query characteristics are as follows:
Domain........: Access
Main Entity...: FULL SQL
Recommended Fields: IMS/DATA SET Event time
IMS/DATA SET Job Name
IMS/DATA SET Step Name
IMS/DATA SET Program Name
IMS/DATA SET Previous DSN
IMS/DATA SET Set Type
IMS/DATA SET Context
3. Start the IBM Guardium S-TAP for Data Sets started task.
4. Verify that the required SMF record types are enabled. Message AUV1450W in the Data Sets agent JESMSGLG log will alert you if any SMF record types are not
defined.
5. Verify that the IBM Guardium S-TAP for Data Sets agent is connected to the intended appliance. Message AUV2182I in the Data Sets agent JESMSGLG log indicates
a successful connection between the agent and the appliance.
6. Make the following modifications to the installation verification JCL in SAUVSAMP member AUVJIVP:
a. Add a valid job card.
b. Replace all occurrences of <HLQ> with the same high-level qualifier that was used in the policy as described in Step 1.
7. Submit the modified JCL in SAUVSAMP member AUVJIVP.
Results
Verify that the following data set contexts appear on the appliance:
Table 1. Data set contexts for installation verification

Step     | Description                                                                                      | Data set contexts
GENDATA  | Generate input data for subsequent job steps                                                     | None
VSAM     | Define, load, rename, and delete ESDS, KSDS, and RRDS data sets                                  | DATA SET ALTER, Member Add
PDSCOPY  | Copy a PDS member to another PDS member                                                          | DATA SET CLOSE, Member Add
PDSREPL  | Copy over an existing PDS member                                                                 | DATA SET CLOSE, Member Replace
PDSTEST  | Rename a PDS member, create an alias, delete all PDS members, rename the PDS, and delete the PDS | DATA SET CLOSE, Member Add, Member Delete, Member Rename, STOW Initialize
Parent topic: Configuring the IBM Guardium S-TAP for Data Sets agent
Parent topic: IBM Security Guardium S-TAP for Data Sets on z/OS
The IBM Guardium S-TAP for Data Sets TCP/IP connection must be configured.
At least one agent per z/OS® image must be specified. When you are configuring an agent instance:
Specify the host name or IP address on which the Guardium system is running. This value is specified by the APPLIANCE_SERVER parameter in the agent
configuration file, which is the OPTIONS member of the CONTROL data set.
When the agent is started, it uses the specified configuration information to connect to the Guardium system.
IBM Guardium S-TAP for Data Sets sends events to a single appliance until a ping occurs, or the number of records that is specified by MEGABUFFER_COUNT is reached.
To enable multistreaming, you must specify MULTI_STREAM when you configure the APPLIANCE_SERVER_LIST parameter in the OPTIONS member of the CONTROL data
set. Parameters APPLIANCE_SERVER and APPLIANCE_SERVER_[1-5] specify the appliances to which you intend to stream events. The appliance that is specified by
APPLIANCE_SERVER provides the policy that is used for event matching.
For more information about OPTIONS member parameters, see Specifying subsystem options.
If the primary appliance becomes unavailable and failover occurs, the appliance policy that was originally pushed from the primary appliance continues to be active. When
all Guardium appliances are connected, the status of each appliance connection, listed in the Guardium interface, is green.
Communicating with the IBM Guardium S-TAP for Data Sets started task
IBM Guardium S-TAP for Data Sets operator commands enable authorized users to perform selected operations. Several types of operator commands can be used to
display the status of IBM Guardium S-TAP for Data Sets, to enable and disable certain functions, and to dynamically alter processing without stopping or quiescing the
product.
Commands
Enter operator commands from an MVS™ operator console, or by using a facility that issues MVS commands, such as SDSF.
The command format is MODIFY stcname,command, where stcname is the name of the started task and command is the operation to perform, such as DISPLAY.
For example, for record level monitoring, you can enter: MODIFY stcname,DISPLAY RLM. You can also use F, the shorthand for MODIFY, to enter
F stcname,DISPLAY RLM.
The following table summarizes the commands for displaying monitoring status and for enabling or disabling monitoring:
Data collection
IBM Guardium S-TAP for Data Sets collects data from multiple sources. This section describes the data collection process, as well as filtering stages and their
performance impacts.
With few exceptions, you can use the same filtering criteria for both record level and SMF event monitoring.
Specify the minimal filtering criteria necessary for your policy. Filtering only on the data you require minimizes:
Data collection overhead
Event processing
Event reporting
CPU time
Memory usage
Record level monitoring creates the potential for the collection and reporting of large amounts of data. When constructing a policy and specifying filtering criteria,
carefully consider the potential amount of data to be collected and processed.
In the user interface, you can specify lists of elements for some filters, and use generic characters (wildcards) to create more flexibility in your filtering criteria.
Generic characters act as placeholders in the specification of a character-based operand, representative of one or more valid characters for the entity on which an
operation is performed.
The use of generic characters can reduce the total number of policy rules required, but an overly inclusive set of selected entities can increase the scope of
selectivity during the qualification of records for processing, dramatically reducing efficiency and increasing overhead.
SMF event monitoring can be controlled at a higher level through the specifications in the SMFPRMxx z/OS® system PARMLIB member.
Note:
Record level monitoring support for a data set is detected, filtered, and activated at OPEN time. Files that are open at the time of an initial or updated policy
activation will not be intercepted for RLM processing unless the application permits closing and reopening the file. This is of particular importance for CICS, which
typically opens files at initialization or at first-use of a file. If a policy is updated after a CICS file has already been opened, it must be closed and reopened to be
eligible for RLM processing.
Record level monitoring enables you to monitor VSAM file access based on key values. The VSAM key can contain Personally Identifying Information, such as
account number, last name, or Social Security number. When the FORCE_LOG_LIMITED option is enabled, IBM Guardium S-TAP for Data Sets does not monitor any
record level data. If the file is being monitored by a policy, then only file access is reported; monitoring and reporting of access to specific keys is suppressed.
Filtering stages
Both record level and SMF event monitoring are performed in stages. If a collected event does not pass the lowest filtering stage (0), further processing of that event is not
performed. Otherwise, the event is reevaluated during the next stage of filter processing, and IBM Guardium S-TAP for Data Sets determines whether the event should be
audited and reported.
*VSAM record organization is only available as a filtering criterion for record level monitoring. Only key-sequenced data set (KSDS) and relative record data set
(RRDS) organizations are supported.
Some of the possible filtering criteria for Stage 1 filtering include a wider scope of data than others. For example, a user ID can require a much larger subset of data
for processing than a data set name requires. You can define the minimum amount of data to be monitored, collected, and reported on by including or excluding
selection criteria, creating lists of elements, and specifying relational operators for most criteria.
Stage 1 filtering for record level monitoring: For record level monitoring, Stage 1 filtering occurs at OPEN time for KSDS and RRDS VSAM data sets.
Stage 1 filtering for SMF event monitoring: For SMF event monitoring, Stage 1 filtering occurs in the IBM Guardium S-TAP for Data Sets address space
immediately after a monitored SMF record type is obtained by the collector, located at the SMF User Exit collection point.
Stage 2 filtering
Stage 2 filtering applies to both record level and SMF event monitoring. Default or specified event types are collected and passed on to the Guardium system.
Stage 2 filtering for record level monitoring can be based on the type of logical record access as well as one or more values for the key of the VSAM data set. The
types of record level access that can be filtered on in Stage 2 are:
Record insert
Record delete
Record update
Record read
You can use a key value or list of key values, as well as a key range or list of key ranges, to further limit the amount and scope of data collected. The key data can be
specified in normal printable characters or in hexadecimal by using the EBCDIC character set.
For key values, you can use generic characters in the specification of the keys. Only those records that pass Stage 2 filtering are collected and passed on to the
Guardium system.
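As an illustration of generic-character matching on key values, the following sketch translates a wildcard pattern into a regular expression; it assumes '%' matches any run of characters and '?' matches a single character, which is an interpretation, not product code:

```python
import re

# Hypothetical translation of Guardium-style generic characters into a
# regular expression: '%' matches any run of characters, '?' matches one.
def wildcard_to_regex(pattern):
    out = []
    for ch in pattern:
        if ch == "%":
            out.append(".*")
        elif ch == "?":
            out.append(".")
        else:
            out.append(re.escape(ch))   # everything else matches literally
    return re.compile("^" + "".join(out) + "$")

rx = wildcard_to_regex("KEY0%")
print(bool(rx.match("KEY01")))   # True
print(bool(rx.match("KEX01")))   # False
```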
If CICS® support is enabled, you can filter the record level monitoring event data that is captured within a CICS transaction. CICS transaction data can be filtered
by:
CICS user ID
CICS transaction ID
CICS program ID
CICS file ID
CICS region ID
CICS terminal ID
CICS function code
Stage 3 filtering
Stage 3 filtering is performed by IBM Guardium S-TAP for Data Sets based on Stage 2 filtering criteria that you define. During policy pushdown and activation, an
analysis of the policy filtering criteria is performed. This analysis enables prefiltering processing determinations that can be performed across the product. Stage 3
prefiltering can be very efficient in eliminating certain types of data collection, and ultimately reducing the path length through the product to provide optimal
processing performance.
Record level monitoring: If no record level monitoring event types are specified in the policy, Stage 2 filtering is eliminated, which reduces overhead
significantly.
SMF event monitoring: The exclusion of certain SMF event monitoring types from your filtering criteria allows IBM Guardium S-TAP for Data Sets to bypass
collection very early in the SMF User Exit data collection, and eliminates all downstream processing for that SMF record type.
Exclusions
IBM Guardium S-TAP for Data Sets does not collect information on activities that are performed by the following programs:
DFSMVRC0
CQSINIT0
HWSHWS00
IRTRRC00
DFSRRC00
DFSUARC0
DSPCINT0
DSPURI00
An incomplete event is an event that is missing one or more of the following fields:
Job name
Job number
Program name
DD name
User ID
Group ID
Job type
Step name
Step number
To optionally suppress incomplete events from being sent to the appliance, use the SUPPRESS_INCOMPLETE_EVENTS parameter as described in Specifying subsystem
options.
Parent topic: IBM Guardium S-TAP for Data Sets administration
To provide flexibility in controlling the impact of record level monitoring, policy options can be used to limit the scope of monitoring. Carefully consider these options with
the goal of limiting record level monitoring to the logical record requests in specific data sets that must be monitored in your environment.
Limit the monitoring of record level requests by the type of logical request:
Record insert
Record delete
Record update
Record read
You can also limit the monitoring of records to particular keys or key ranges.
If CICS support is enabled, you can filter the captured record level event data by:
CICS user ID
CICS transaction ID
CICS program ID
CICS file ID
CICS region ID
CICS terminal ID
CICS function code
Remember: Each monitored record that matches the various policy filters results in the processing, creation, and transmission of a record monitoring data element to the
Guardium system. Use the Guardium system interface to establish as restrictive a set of policy filters as possible. IBM Guardium S-TAP for Data Sets dynamically tunes
and minimizes processing based on the filtering criteria chosen. Effectively chosen filters allow for maximum efficiency of record level monitoring processing.
If a policy does not contain any of these filters, no additional overhead occurs at the logical record request level.
If a particular policy rule contains one or more of these filters, only the specific data set defined in the rule (or data sets associated with other policy filters defined
in the rule) incurs any additional monitoring overhead.
Record level monitoring is only valid for use with VSAM data sets (KSDS and RRDS only).
Filter down to each specific VSAM or non-VSAM data set event by using the filter fields described under Evaluating a match.
You can achieve optimal record level monitoring and SMF data set monitoring performance when you create and use a policy that defines only those events that are
required by your organization.
Policy pushdown
Policy pushdown is a method of controlling the data that is collected by the IBM Guardium S-TAP for Data Sets agent. Policy pushdown enables the agent to evaluate the
filtering criteria that you specified.
Evaluating a match
When the product is searching for a match for the filtering criteria that you have specified, an evaluation is performed through each data set level. Access rules are used
for processing a data set, when the filtering criteria of the following access types match the data:
Job name
Program
Data set name
Data set type
DD name
User ID
Group ID
SYSPLEX
SSID
SYS ID
RECORG*
Job type
*RECORG is valid only for the processing of VSAM record level monitoring.
The following values are not used to evaluate for a match on an access rule. They are used as subfiltering criteria after a match on a data set is found:
Key
Key range
Data set event
RLM event
CICS® user ID
CICS transaction ID
CICS program ID
CICS file ID
CICS region ID
CICS terminal ID
CICS function code
Multiple values are allowed in an access rule, as shown in the following example with two access rules:
Access Rule 1
Rule Type = INCLUDE
Job Name = JOBA
Key = "111111"
RLM Event = ALL
Access Rule 2
Rule Type = INCLUDE
Job Name = JOBA
Key = "222222"
RLM Event = ALL
When a match is found on Access Rule 1 for job JOBA, no further scanning of the Access Rules occurs. The keyword Key is not used as part of the Access Rule match. To
filter on keys "111111" and "222222" for a job that is named JOBA, code the Access Rules as follows:
Access Rule 1
Rule Type = INCLUDE
Job Name = JOBA
Key = "111111","222222"
RLM Event = ALL
This rule searches for a match on the job name JOBA. If a match on JOBA is found, the RLM Event and Key values are matched.
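The first-match behavior in the example above can be sketched as follows; the rule and event shapes are hypothetical, but the logic mirrors the description: scanning stops at the first access rule whose match fields fit, and Key is applied only as a subfilter afterward:

```python
# Hypothetical sketch of access-rule scanning: matching stops at the first
# rule whose match fields (here just Job Name) fit the event; Key is only
# applied as a subfilter after a rule is chosen.
def pick_rule(rules, job):
    for rule in rules:
        if rule["job"] == job:
            return rule          # no further rules are scanned
    return None

def key_passes(rule, key):
    return key in rule["keys"]   # Key acts as a subfilter on the chosen rule

# Two separate rules with the same job name: only the first is ever
# consulted for JOBA, so key "222222" is filtered out.
two_rules = [
    {"job": "JOBA", "keys": ["111111"]},
    {"job": "JOBA", "keys": ["222222"]},
]
print(key_passes(pick_rule(two_rules, "JOBA"), "222222"))   # False

# One rule listing both keys behaves as intended.
one_rule = [{"job": "JOBA", "keys": ["111111", "222222"]}]
print(key_passes(pick_rule(one_rule, "JOBA"), "222222"))    # True
```

This is why both keys must be coded in a single access rule rather than split across two rules with the same job name.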
Parent topic: IBM Guardium S-TAP for Data Sets administration
All the fields are optional and most have a default behavior as described. All fields apply to both VSAM and non-VSAM monitoring, unless otherwise specified.
Rule Type
Indicates whether this rule specifies inclusion or exclusion for events that match the criteria.
Allowed values are: INCLUDE|EXCLUDE: Include collects events that satisfy the specified criteria; exclude does not collect those events. If nothing is specified, then
INCLUDE is used.
Job Type
Indicates the job types to use when searching for a match. Can contain zero or more of the following values:
JOB
Jobs
STC
Started Task
TSU
Time Sharing User
APPC
Advanced Program-To-Program Communication
OMVS
Open MVS access to non-VSAM data sets, particularly that performed by FTP
SYS ID
Indicates the SMF System IDs to use when searching for a match.
1 - 4 character SMF System ID to match.
Can be optionally followed by a comma (,) and a relational operator. If no relational operator is provided, then EQ is assumed.
Valid wildcards are supported at any position. They are: % (matches any number of characters) and ? (matches a single character).
If left blank, then all SMF System IDs are considered a match.
Examples:
SS01
Matches events that occur on SS01
SS01,EQ
Matches events that occur on SS01
SS%,EQ
Matches events that occur on systems with SS as the first 2 characters in the SMF system ID
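The "value, optional relational operator, default EQ" convention repeats across most fields in this topic. A minimal parser for that convention might look like this; the function name and error handling are illustrative only:

```python
# Hypothetical parser for a filter field of the form "VALUE[,OP]":
# if no relational operator follows the value, EQ is assumed.
VALID_OPS = {"EQ", "NE", "GE", "LE"}

def parse_field(spec):
    value, _, op = spec.partition(",")
    op = op.strip() or "EQ"           # EQ is the default operator
    if op not in VALID_OPS:
        raise ValueError(f"unknown relational operator: {op}")
    return value, op

print(parse_field("SS01"))        # ('SS01', 'EQ')
print(parse_field("SS%,EQ"))      # ('SS%', 'EQ')
print(parse_field("PROD,NE"))     # ('PROD', 'NE')
```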
RECORG
Indicates the record organization type to match.
Applies only to VSAM record level monitoring collection.
Can contain zero or more of the following values, separated by a comma (,): KSDS|RRDS, where:
KSDS
Key-sequenced data set
RRDS
Relative record data set
If left blank, all record organization types for record level monitoring are considered a match.
Examples:
KSDS
Matches key-sequenced data set events
KSDS,RRDS
Matches key-sequenced data set, and relative record data set events
User ID
Indicates the user ID to use when searching for a match.
1 - 8 character user ID to match.
Can be optionally followed by a comma (,) and a relational operator. If no relational operator is provided, then EQ is assumed.
Wildcards are supported.
If left blank, then activities for all user IDs are considered a match.
Examples:
PDUSER01
Matches events that are caused by user PDUSER01
PDUSER01,EQ
Matches events that are caused by user PDUSER01
PDUSER%,EQ
Matches events that are caused by users with the prefix PDUSER
SSID
Indicates the AUV ID to use when searching for a match.
1 - 4 character AUV ID optionally followed by a comma (,) and a relational operator. If no relational operator is provided EQ is assumed.
Wildcards are supported.
If left blank, activities for all SSID are considered a match.
Examples:
AUV1
Matches events from systems with AUV ID of AUV1
AUV1,EQ
Matches events from systems with AUV ID of AUV1
AUV%,EQ
Matches events from systems with AUV ID prefix of AUV
SYSPLEX
Indicates the z/OS sysplex name to use when searching for a match.
1 - 8 character sysplex name, optionally followed by a comma (,) and a relational operator. If no relational operator is provided, EQ is assumed.
Wildcards are supported.
If left blank, activities from all sysplex names are considered a match.
Examples:
SYSPLEX1
Matches events from systems on SYSPLEX1
SYSPLEX1,EQ
Matches events from systems on SYSPLEX1
SYSPLEX%,EQ
Matches events from systems on a plex beginning with SYSPLEX
Program
Indicates the program name to use when searching for a match.
1 - 8 character program name, optionally followed by a comma (,) and a relational operator. If no relational operator is provided, EQ is assumed.
Wildcards are supported.
If left blank, activities from all programs are considered a match.
Examples:
IDCAMS
Matches events that are accessed from IDCAMS
IDCAMS,EQ
Matches events that are accessed from IDCAMS
IDCAM%,EQ
Matches events that are accessed from programs beginning with IDCAM
Group ID
Indicates the group ID to use when searching for a match.
1 - 8 characters representing the security system group ID, optionally followed by a comma (,) and a relational operator. If no relational operator is provided, EQ is
assumed.
Wildcards are supported.
If left blank, then activities from all groups are considered a match.
Examples:
GROUP1
Matches events that are caused by someone within GROUP1
GROUP1,EQ
Matches events that are caused by someone within GROUP1
GROUP%,EQ
Matches events that are caused by someone within a group ID beginning with GROUP
Data Set Name
Indicates the data set name to use when searching for a match.
1 - 44 character data set name, optionally followed by a comma (,) and a relational operator. If no relational operator is provided, EQ is assumed.
Wildcards are supported.
If left blank, activities for all data set names are considered a match.
Examples:
HLQ1.MLQ1.LLQ1
Matches events on HLQ1.MLQ1.LLQ1
HLQ1.MLQ1.LLQ1,EQ
Matches events on HLQ1.MLQ1.LLQ1
HLQ%.MLQ%.LLQ%,EQ
Matches events with the data set name mask HLQ%.MLQ%.LLQ%
%.%%,EQ
Matches all data sets with more than one qualifier
%,EQ
Matches all data sets with one qualifier
DD Name
Indicates the DD name to use when searching for a match.
1 - 8 character DD name, optionally followed by a comma (,) and a relational operator. If no relational operator is provided, EQ is assumed.
Wildcards are supported.
If left blank, activities for all DD names are considered a match.
Examples:
PAYFILE
Matches events that are accessed by DD name PAYFILE
PAYFILE,EQ
Matches events that are accessed by DD name PAYFILE
PAYFIL%,EQ
Matches events that are accessed by DD names beginning with PAYFIL
Job Name
Indicates the job name to use when searching for a match.
1 - 8 character name representing the job for which activity must be collected, optionally followed by a comma (,) and a relational operator. If no relational operator
is provided, EQ is assumed.
Wildcards are supported.
If left blank, then activities from all jobs are considered a match.
Examples:
JOBA
Matches events from the job that is named JOBA
JOBA,EQ
Matches events from the job that is named JOBA
JOB%,EQ
Matches events from jobs with names beginning with JOB
Key
Indicates the keys to consider when searching for a match.
Only applies to VSAM record level monitoring collection.
One or more keys in plain text or hexadecimal format, representing the key for which to match event data during record level monitoring processing.
Multiple keys must be delimited by a comma (,). The list can optionally be followed by a comma (,) and a relational operator. If no relational operator is provided, EQ is
assumed.
Plain text keys can be 1 - 255 characters long.
Hexadecimal keys can be 2 - 510 characters long and must always have an even number of characters.
An individual key must be surrounded by double quotation marks (").
If the key is in hexadecimal format, it must be prefixed with x' and suffixed with a single quotation mark ('). It must be placed inside double quotation marks, for
example: "x'F0F0F1'"
A backslash (\) can precede any character to escape the character. For example:
"\x'0123'"
Matches the plain text key "x'0123'" instead of a hexadecimal key. Both types can be supplied together.
Wildcards are supported. If a wildcard is supplied with a hexadecimal key, the wildcard must be in hexadecimal (6C for '%', 6E for '?').
If a provided key is longer than the actual VSAM key, the key is truncated. If the provided key is shorter than the VSAM key, it is padded with
hex zeroes.
If the Key and Key Range fields are blank, activities for all keys are considered a match.
Examples:
"KEY01"
Matches record level monitoring events with a key of KEY01
"KEY01","KEY02"
Matches record level monitoring events with a key of KEY01 or KEY02
"x'F0F0'"
Matches record level monitoring events with a key that contains the hexadecimal value F0F0
"x'F0F0'","x'F0F1'"
Matches record level monitoring events with a key that contains the hexadecimal value of F0F0 or F0F1
"KEY01","x'F0F1'"
Matches record level monitoring events with a key of KEY01 or a key with the hexadecimal value of F0F1
"KEY0%"
Matches record level monitoring events with a key beginning with KEY0.
"x'F06C'"
Matches record level monitoring events with a key with a hexadecimal value beginning with F0
"\x'F06C'"
Matches record level monitoring events with a key of x'F06C'
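To make the key-format rules concrete, here is a hedged sketch of parsing a key specification and fitting it to the VSAM key length; the use of EBCDIC code page cp037 for plain-text keys is an assumption for illustration:

```python
import binascii

# Hypothetical handling of a key specification: a key written as "x'F0F0'"
# is hexadecimal, a leading backslash escapes that form (treating it as
# plain text), and the result is truncated or padded with hex zeroes to
# the actual VSAM key length.
def parse_key(spec):
    if spec.startswith("\\"):                  # "\x'F0F0'" is plain text
        return spec[1:].encode("cp037")        # cp037 EBCDIC is an assumption
    if spec.startswith("x'") and spec.endswith("'"):
        return binascii.unhexlify(spec[2:-1])  # hexadecimal key
    return spec.encode("cp037")

def fit_to_key_length(key, length):
    return key[:length].ljust(length, b"\x00") # truncate or pad with x'00'

k = parse_key("x'F0F1'")
print(fit_to_key_length(k, 4))   # b'\xf0\xf1\x00\x00'
```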
Key Range
Indicates the range of keys to consider when searching for a match.
Only applies to VSAM record level monitoring collection.
A pair of keys in plain text, or a pair of keys in hexadecimal, representing the range to match for record level monitoring. This must be specified as <key1>,<key2>.
A pair of keys must both be in plain text, or both be in hexadecimal. Each plain text key in a plain text key pair can be 1 - 255 characters long. Each hexadecimal key
in a hexadecimal key pair can be 2 - 510 characters long and must have an even number of characters.
If the keys are in hexadecimal, they must begin with x' and end with a single quotation mark ('). All keys must be enclosed in double quotation marks.
A backslash (\) can precede any character to escape the character.
There must be an even number of keys in this field.
All key pairs must have the smaller key in the first value and the larger key in the second value; otherwise the key pairs will be rejected.
Wildcards are not supported in this field.
If a provided key is longer than the actual VSAM key, the provided key is truncated. If the provided key is shorter than the VSAM key, it is
padded with hex zeroes.
If the Key Range and Key fields are blank, activities for all keys are considered a match.
Examples:
"KEY01","KEY09"
Matches record level monitoring events where the key is between KEY01 and KEY09
"KEY01","KEY09","KEY11","KEY19"
Matches record level monitoring events where the key is between KEY01 and KEY09 or between KEY11 and KEY19
"x'F0F0'","x'F0F9'"
Matches record level monitoring events where the key has a hexadecimal value between F0F0 and F0F9
"x'F0F0'","x'F0F9'","x'F1F0'","x'F1F9'"
Matches record level monitoring events where the key has a hexadecimal value between F0F0 and F0F9 or between F1F0 and F1F9
"\x'F0F0'","\x'F0F9'"
Matches record level monitoring events where the key is between x'F0F0' and x'F0F9'
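The pairing rules for Key Range (even number of keys, smaller key first, inclusive matching) can be sketched like this; the function names are illustrative:

```python
# Hypothetical key-range check: keys come in pairs, the smaller key must be
# first in each pair, and an event key matches if it falls inside any pair
# (inclusive on both ends).
def validate_ranges(keys):
    if len(keys) % 2 != 0:
        raise ValueError("key ranges require an even number of keys")
    pairs = list(zip(keys[0::2], keys[1::2]))
    for lo, hi in pairs:
        if lo > hi:
            raise ValueError(f"range rejected: {lo!r} > {hi!r}")
    return pairs

def in_ranges(key, pairs):
    return any(lo <= key <= hi for lo, hi in pairs)

pairs = validate_ranges(["KEY01", "KEY09", "KEY11", "KEY19"])
print(in_ranges("KEY05", pairs))   # True
print(in_ranges("KEY10", pairs))   # False
```

Note that this sketch compares Python strings; the product compares EBCDIC byte values, which agree here only because the sample keys differ solely in their digits.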
RLM Event
Indicates what type of record level monitoring events should be considered for a match.
Only applies to VSAM record level monitoring collection.
Must contain zero or more of the following values, separated by a comma (,): RINS|RDEL|RWRT|RGET|ALL|SKIP, where:
RINS
A record insert within a data set of a supported type
RDEL
A record delete within a data set of a supported type
RWRT
A record update within a data set of a supported type
RGET
A record read within a data set of a supported type
ALL
Returns all record level monitoring events
SKIP
Returns no record level monitoring events
If left blank, then SKIP is the default and nothing is considered a match
Examples:
RINS
Matches record level monitoring events where the operation was a record insert
RINS,RDEL
Matches record level monitoring events where the operation was a record insert or a record delete
Data Set Event
Indicates what type of data set events should be considered for a match.
Must contain zero or more of the following values, separated by a comma (,):
DSCLI|DSCLO|DSOP|DSCL|DSUP|DSDL|DSRN|DSCR|DSALT|DSRAL|DSRCN|DSRRD|
DSRUP|DSRDF|MADD|MREP|MREN|STOWI|ALL|SKIP
where:
DSOP
An OPEN event against a supported data set type
DSCL
A CLOSE event against a supported data set type
DSCLI
A CLOSE event against a supported data set type that was opened for input
DSCLO
A CLOSE event against a supported data set type that was opened for output
DSUP
An UPDATE event against a supported data set type
DSDL
A DELETE event against a supported data set type
DSRN
A RENAME event against a supported data set type
DSCR
A DEFINE or NEW ALLOCATION event of a supported data set type
DSALT
An ALTER of the attributes of a supported data set type
DSRAL
A security facility ALTER access of a supported data set type
DSRCN
A security facility CONTROL access of a supported data set type
DSRRD
A security facility READ access of a supported data set type
DSRUP
A security facility UPDATE access of a supported data set type
DSRDF
A security facility DEFINE access of a supported data set type
MADD
A member add event against a supported data set type
MREP
A member replace event against a supported data set type
MREN
A member rename event against a supported data set type
MDEL
A member delete event against a supported data set type
STOWI
A STOW initialize event against a supported data set type
ALL
Returns all data set level events
SKIP
Returns no data set level events
If left blank, ALL is the default and all types are considered a match.
Examples:
DSOP
Matches data set events where an open occurred.
DSOP,DSCL
Matches data set events where an open or a close occurred.
Relational operators
The following relational operators can be specified:
EQ (Equals)
NE (Does not equal)
GE (Greater than or equal to)
LE (Less than or equal to)
Note:
If you are using a relational operator with the Group of Values list, you must ensure that the operator is appended to the last field in the list, otherwise it will
be treated as an additional value for that field.
To use individual values along with those listed in the Group of Values list, the relational operator must be appended to the last field in the Group of Values
list, rather than to the individual field.
String comparisons are performed in lexicographical order. Because the strings are in EBCDIC, the order is lowercase, uppercase, and then numeric. Special
character positions depend on the hexadecimal value of the special character itself in relation to the other characters.
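The EBCDIC collation described above (lowercase, then uppercase, then numeric) can be reproduced by comparing encoded bytes rather than Unicode strings; this sketch assumes code page cp037, one common EBCDIC code page:

```python
# Reproduce EBCDIC lexicographical order by sorting on cp037-encoded bytes:
# in EBCDIC, lowercase letters sort before uppercase, which sort before digits.
def ebcdic_key(s):
    return s.encode("cp037")   # cp037 is an assumed EBCDIC code page

words = ["1", "A", "a"]
print(sorted(words, key=ebcdic_key))   # ['a', 'A', '1']
```

This is the reverse of ASCII ordering, where digits sort before uppercase and uppercase before lowercase, so relational operators such as GE and LE can behave differently than an ASCII-trained intuition expects.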
Data Set Type
Indicates the type of data sets that should be considered for a match.
Must contain zero or one of the following values:
VSAM|NONVSAM|ALL, where:
VSAM
VSAM data sets
NONVSAM
Non-VSAM data sets
ALL
Both VSAM and non-VSAM data sets
If nothing is specified, then only VSAM data set types are collected.
CICS User ID
Indicates the CICS logon user ID to use when searching for a match
1 - 8 character CICS logon user ID to match
The user ID can be followed by a comma (,) and a relational operator. If no relational operator is specified, then EQ is assumed.
Wildcards are supported.
If left blank, then activities for all CICS logon user IDs are considered a match
Examples:
CICUSR01
Matches events that are caused by CICS logon user CICUSR01
CICUSR01,EQ
Matches events that are caused by CICS logon user CICUSR01
CICUSR%,EQ
Matches events that are caused by CICS logon users with the prefix CICUSR
CICS Transaction ID
Indicates the CICS transaction ID to use when searching for a match
1 - 4 character CICS transaction ID to match
The transaction ID can be followed by a comma (,) and a relational operator. If no relational operator is provided, then EQ is assumed.
Wildcards are supported.
If left blank, then activities for all CICS transaction IDs are considered a match.
Examples:
VTAP
Matches events that occur within CICS transaction ID VTAP
VTAP,EQ
Matches events that occur within CICS transaction ID VTAP
VT%,EQ
Matches events that occur within CICS transaction IDs starting with the prefix VT
CICS Terminal ID
Indicates the CICS terminal ID to use when searching for a match
1 - 4 character CICS terminal ID to match
The terminal ID can be optionally followed by a comma (,) and a relational operator. If no relational operator is provided, then EQ is assumed.
Wildcards are supported.
If left blank, then activities for all CICS terminal IDs are considered a match.
Examples:
VTAP
Matches events that occur on CICS terminal ID VTAP
VTAP,EQ
Matches events that occur on CICS terminal ID VTAP
VT%,EQ
Matches events that occur on CICS terminal IDs starting with the prefix VT
CICS Region ID
Indicates the CICS region ID to use when searching for a match.
Wildcards are supported.
If left blank, then activities for all CICS region IDs are considered a match.
Examples:
CICA
Matches events that occur within the CICS region with an ID of CICA
CICA,EQ
Matches events that occur within the CICS region with an ID of CICA
CIC%,EQ
Matches events that occur within the CICS regions with a prefix of CIC
CICS Program ID
Indicates the CICS program ID to use when searching for a match.
Wildcards are supported.
If left blank, then activities for all CICS program IDs are considered a match.
Examples:
PAYROLLA
Matches events that occur under control of the program that is named PAYROLLA
PAYROLLA,EQ
Matches events that occur under control of the program that is named PAYROLLA
PAYROLL%,EQ
Matches event