Livelink ECM - Archive Server - Administration Guide
AR0906-ACN-EN
Important
The Livelink ECM - Archive Server is not identical to the Livelink
Enterprise Server.
2 Further information
This manual: The manual is delivered in PDF and HTML format on the software and
documentation CDs, or it can be downloaded from the ESC. You can print the PDF file
if you prefer to read longer texts on paper. Information on using and printing the
documentation is available on the documentation CD.
Online help: For all administration clients (Archive Administration, Document Pipeline
Info, Archive Web Monitor and Server Configuration), online help files are available.
You can open the online help via the Help menu, the Help button, or F1.
Other manuals: In addition to this Administration Guide, use the IXOS® ECR Server -
Configuration Parameters (AR-RCP-EN) guide as a reference for all configuration
parameters that can be modified in the Server Configuration.
To learn about Document Pipelines and their usage in document import scenarios,
refer to the guide Livelink ECM - Archive Server - Document Pipelines and Import
Interfaces (AR-CDP-EN).
3 Conventions
Please read the following conventions before you use this documentation.
Typography In general, this product documentation uses the following typographical
conventions:
• New terms
This format is used to introduce new terms, emphasize particular terms,
concepts, and long product names, and to refer to other documentation.
• User interface
This format is used for elements of the graphical user interface (GUI), such as
buttons, names of icons, menu items, names of dialog boxes and fields.
• Filename, command, sample data
This format is used for filenames, paths, URLs and commands in the command
line. It is also used for example data, text to be entered in text boxes, and other
literals.
• Key names
Key names appear in ALL CAPS, for example:
Press CTRL+V.
• <Variable name>
The brackets < > are used to denote a variable or placeholder. Enter the correct
value for your situation, for example: Replace <server_name> with the name of
the relevant server, like serv01.
• Hyperlink and Weblink (http://www.ixos.com)
These formats are used for hyperlinks. In all document formats, these are active
references to other locations in the documentation (hyperlink) and on the Web
(weblink).
Important
Important information is identified in this way. If you ignore such
information, you may encounter major problems.
Caution
Cautions contain very important information that, if ignored, may cause
irreversible problems. Read this information carefully and follow all
instructions!
• Internal cross-references
Clicking the colored part of a cross-reference takes you directly to the target of
the reference. This applies to cross-references in the index and in the table of
contents as well.
• External cross-references in PDF documents
In PDF documents, external cross-references are references to other manuals. For
technical reasons, these external cross-references often do not refer to specific
chapters but to the manuals in general.
• External cross-references in HTML documents
External cross-references in HTML documents on the Documentation CD-ROM for
systems based on Livelink Enterprise Archive usually point to a specific chapter
within the referred manual and are active hyperlinks. Clicking a hyperlink takes
you to the particular chapter.
This also applies to HTML documents on other CD-ROMs or in the ESC, as long
as the manual that is referenced is on the same CD-ROM or in the same channel
of the ESC, respectively. In all other cases, external cross-references are similar to
those within PDF documents.
Tip: If a cross-reference in a PDF document does not lead to the required
information, refer to the corresponding HTML version on the Documentation CD-
ROM for systems based on Livelink Enterprise Archive.
Archive database
The Document Service stores all the "technical" document attributes and media
attributes in the archive database. The Administration Server saves the following
data here:
• Configuration data for logical archives and pools
• Job data: scheduling, parameters and execution log
• Relations to other Archive Servers and leading applications
• Other configuration settings
Monitor Server
The Monitor Server gathers information from other server components
regarding the status of relevant processes, the file system, the size of the
database and available resources. The collected information can be displayed in
Archive Web Monitor.
Notification Server
The Notification Server sends notifications when certain server events occur. You
can define these notifications in the Archive Administration, Notifications tab.
Administration Server
The Administration Server administers all the data that is of importance for the
Archive Server itself: configured structures, users, relations to other servers and
the execution of recurrent tasks. The Administration Server consists of a single
server process (admSrv under UNIX or admsrv.exe under Windows) and uses
the server database to perform data backups. The Administration Server is
controlled via Archive Administration Client.
HTTP Interface Server (Web Server)
The Web Server is the HTTP interface to the Administration Server, the Monitor
Server (via Archive Web Monitor) and the accounting data. During Archive
Server installation, initial customizing is performed via this interface. In
addition, customer-specific projects can be implemented via this interface. This
interface uses the ports 4060 (HTTP) and 4061 (HTTPS).
The Document Service uses a separate HTTP interface via ports 8080 and 8081.
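As a minimal illustration of the port assignments above (the helper itself and the host name serv01 are hypothetical, not part of the product):

```python
# Illustrative helper only: maps the interfaces and ports named above
# (4060/4061 for the administration interface, 8080/8081 for the
# Document Service) to base URLs. The function name is an assumption.
ADMIN_PORTS = {False: 4060, True: 4061}        # HTTP / HTTPS
DOC_SERVICE_PORTS = {False: 8080, True: 8081}  # HTTP / HTTPS

def base_url(host, use_ssl=False, doc_service=False):
    """Return the base URL for the chosen HTTP interface on the given host."""
    ports = DOC_SERVICE_PORTS if doc_service else ADMIN_PORTS
    scheme = "https" if use_ssl else "http"
    return f"{scheme}://{host}:{ports[use_ssl]}"
```

For example, base_url("serv01") yields http://serv01:4060, and base_url("serv01", use_ssl=True) yields https://serv01:4061.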
Document Pipeline
In the Document Pipeline a preparation process is executed for certain
documents before they are forwarded to the Document Service. During this
preprocessing stage, these documents are stored in the Document Pipeline
directory. The Document Pipeline consists of the following services: DocTools
(each DocTool executes a single process task) and the DocumentPipeliner (which
coordinates the process operation). The Document Pipeline is described in detail
in the Livelink ECM - Archive Server - Document Pipelines and Import Interfaces
(AR-CDP-EN) manual.
Storage Manager
The Storage Manager controls the connections with the storage system or
jukeboxes:
• It controls the movement of media within the jukebox.
• It communicates via SCSI with drives in the jukeboxes.
• It performs Document Service write and read requests.
• It generates the proprietary WORM file system structure.
• It handles virtual jukeboxes in storage systems.
Livelink ECM - Archive Administration Utilities
To administrate, configure and monitor the components mentioned above, you
use the following tools (not shown in the picture):
• Livelink ECM - Archive Administration (short term: Archive Administration) is
used to configure the Archive Server and monitor jobs.
• Livelink ECM - Document Pipeline Info (short term: Document Pipeline Info)
monitors the processes in the Document Pipeline.
• Livelink ECM - Archive Web Monitor (short term: Archive Web Monitor)
monitors the Archive Server resources.
• Livelink ECM - Server Configuration (short term: Server Configuration) is used
to adapt the parameters stored in the registry or in configuration files easily
and quickly within a graphical user interface.
The first three clients are described in this documentation. The Server
Configuration is described in IXOS® ECR Server - Configuration Parameters (AR-RCP).
Important
At the release time of version 9.6.0, the IFS pool type is not supported by
any storage system. For detailed and up-to-date information on
supported storage systems, refer to the Hardware Release Notes
https://esc.ixos.com/1138018759-507.
Figure 2-1: Dependencies between pool types, storage media and systems
Disk buffer: All pool types except for HDSK (Write Thru) use a disk buffer. The disk buffer is a
special hard disk partition where the data is collected until the Write job writes it to
the final storage medium. In ISO and IFS pools, the documents are collected until
the amount of data is sufficient to write an ISO image. The Write job regularly
checks the amount of data and writes the image if there is sufficient data in the
buffer. In other pools, the Write job writes all data that has arrived in the buffer
since the last run of the job. Sufficient free storage space must be available in the
disk buffer in order to accommodate new incoming documents. The documents
which have already been written to the storage media must therefore be deleted
from the disk buffer at regular intervals. This is usually done by the Purge Buffer
job.
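The Write job behavior described above can be sketched as follows; this is an illustrative model with hypothetical names and values, not the actual implementation:

```python
# Illustrative model of the Write job decision described above.
# Pool type strings and the iso_image_size parameter are assumptions.
def bytes_to_write(pool_type, buffered_bytes, iso_image_size):
    """Return how many bytes the Write job would write on this run."""
    if pool_type in ("ISO", "IFS"):
        # ISO and IFS pools: collect until a full image can be written.
        return buffered_bytes if buffered_bytes >= iso_image_size else 0
    # Other buffered pools: write everything that arrived since the last run.
    return buffered_bytes
```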
Components of documents: Documents often consist of a number of components: the
body (the document itself), notes and annotations. Normally all document components
are stored together in one pool, that is, on the same type of medium. However, it is
also possible to separate the components and store them in different pools. For
example, you can assign an IXW pool to the logical archive FI for the documents, and
a hard disk pool for the notes. Components are assigned to the pool as application
types. Another application type is Migration, which is used for document migration
within the archive.
Caching: Caches are used to speed up read access to documents. The Archive Server
can use two cache types: the local cache partitions and the Archive Cache Server. The
local cache resides on the Archive Server host and can be configured, while the
Archive Cache Server is intended to reduce data transfer and speed up access in a
WAN. It is installed on its own host in a separate subnet.
3.1 Partitions
Local hard disk partitions on the Archive Server are used for disk buffers, local
caches and local storage partitions. First, you create these partitions on the
operating system level. The number and size depend on many factors and are usually
defined together with Open Text/IXOS experts or partners when the installation is
prepared. Important factors are:
• Leading application and scenario
• Number and size of documents to be archived and accessed, per time unit
• Frequency of read access
• If the partition will be used as a buffer:
• Pool and media type, in particular if ISO images are written:
The buffer must be large enough to accommodate the entire storage capacity
of the ISO image, and additionally the amount of data that has to be stored in
the buffer between two Write jobs.
• If the partition will be used as a cache:
If documents are retrieved after archiving, e.g. in Early Archiving scenarios,
they should stay on the hard disk for a while. You can configure and
schedule the Purge Buffer job accordingly (see “Local cache” on page 36), or
you can copy the data to cache. In the latter case there is no effect on buffer
size.
Partition name
Unique name of the partition.
Create as replicated partition
Enable this option if the partition is to become a replicated partition.
Select Partition
This button is only available if the option Create as replicated partition has
been enabled. Click here to select the name of the associated original
partition. The Select Replicated Partition dialog box opens (see “Select
Replicated Partition dialog box” on page 295).
Mount path
Mount path of the partition in the file system. The mount path is a drive
under Windows and a partition directory under UNIX/LINUX. Enter a
backslash in front of the root directory if you're using volume letters.
Disc device type
Select the storage medium or storage system to ensure correct handling of
documents and their retention.
hdsk
Local hard disk partition, documents are only written from the buffer to
the partition.
hdskro
Local hard disk partition read-only, documents are written from the
buffer to the partition and the read-only attribute is set.
netapp, samfs, trinity
Documents are written from the buffer to the corresponding storage
system with system-specific setting of the retention period.
6. Click OK.
Create as many hard disk partitions as you need. In the Devices directory under the
HardDisk device, you can see all hard disk partitions independently of their usage.
Further steps:
• “Creating a disk buffer” on page 34
• “Creating an HDSK pool” on page 57
3.2 Buffers
Disk buffers (short: buffers) are required for all pool types except for local HDSK
(Write Thru) pools. Documents are collected in the buffer before they are finally
written to the storage medium.
Preconditions The hard disks are partitioned on the operating system level and then created in the
Archive Administration, see “Creating a hard disk partition” on page 32.
Though you can also create a buffer when you create a pool, we recommend creating
partitions and buffers separately to keep track of the system configuration.
See also:
• “Pool type” on page 25
• “Disk buffer” on page 27
Buffer name
Name of the disk buffer. The name cannot be modified later.
Required avail. space
Minimum available storage space. The Purge Buffer job deletes data from the
buffer until the required percentage of storage space is available. This applies to
every hard disk partition that is assigned to the buffer.
If it is not possible to delete sufficient documents from the disk buffer because
these have not yet been written to storage media, the Purge Buffer job is
terminated without a message and the required minimum amount of storage
space is not available. You can check the free space in the disk buffers using the
Archive Web Monitor.
Cache before purging
The documents are written to the cache before being removed from the disk
buffer. This ensures fast access if documents still must be processed or retrieved
but have not yet been cached by the Write job.
See also: “Local cache” on page 36
Clear archived documents older than ... days
Specifies after which period of time - after being written to a storage medium -
documents are removed from the disk buffer.
Note: If both conditions Required avail. space and Clear archived documents
older than ... days are specified, the job runs in a way which satisfies both
conditions to the greatest possible extent. Documents which are older than n
days are also deleted even if the required storage space is available.
Conversely, documents which are more recent than n days are deleted until the
required percentage of storage space is free.
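The interplay of the two conditions can be modeled for a single document as follows; this is an illustrative sketch, not the actual Purge Buffer implementation, and all names are hypothetical:

```python
from datetime import date, timedelta

# Illustrative model of the two Purge Buffer conditions described above,
# applied to one document. A document may be purged only after it has
# been written to a storage medium; then the age limit applies regardless
# of space, and space pressure purges even younger documents.
def purge_document(written_to_medium, written_on, today,
                   max_age_days=None, space_needed=False):
    if not written_to_medium:
        return False  # unwritten documents are never deleted from the buffer
    too_old = (max_age_days is not None
               and today - written_on > timedelta(days=max_age_days))
    return too_old or space_needed
```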
See also:
• “Creating a disk buffer” on page 34
If no cache path is configured and assigned, the default cache partition is used. The
default cache partition is usually created during installation (initial configuration). If
neither a default cache partition nor a cache path is configured, no caching is
possible. You can modify the default cache partition: assign additional partitions, or
delete one.
Depending on the time when you want to cache documents, you select the
configuration setting:
• Caching for the logical archive: Caching option in the archive configuration, see
“Editing the archive configuration” on page 49
• Caching when the document is written: Copy to Cache flag in the Write job
configuration, see “Write configuration for ISO pools” on page 58, “Write
configuration for IXW pools” on page 61, “Write configuration for IFS pools” on
page 63, “Write configuration for Single File (FS) pools” on page 67, “Write
configuration for Single File (VI) pools” on page 66
• Caching when the buffer is purged: Cache before purging option in the Purge
Buffer job configuration, see “Purge Buffer settings” on page 34
See also:
• “Adding a default cache partition” on page 39
• “Configuring a cache path” on page 38
• “Assigning a cache path” on page 54
1. Create the partitions for the cache on the operating system level.
2. In the Servers tab, right-click the Cache Pathes directory and choose Add Cache
Path.
To edit an existing cache path, right-click it and choose Edit Cache Path.
3. Define the cache path:
Name
Name for the cache path that is used to assign it to the archive(s).
Volumes
Click Add volume to select a partition that was created in step 1. You can
add several partitions and also remove them.
Important
If you change the partition assignment to a cache path, the content of
the cache is lost.
Next step:
• “Assigning a cache path” on page 54
See also:
• “Local cache” on page 36
• “Configuring a cache path” on page 38
• “Assigning a cache path” on page 54
Important
You can configure most storage systems for container file storage as well as
for single file storage. The configuration is completely different.
See also:
• Supported media, jukeboxes and storage systems: Hardware Release Notes
https://esc.ixos.com/1138018759-507
• Configuring jukeboxes with optical media: “Automatic detection of physical
jukeboxes” on page 40
• Configuring storage devices: https://esc.ixos.com/1139823053-661
The jukeboxes are listed with their name, robot ID, SCSI connection and
manufacturer data. In the Configured column you find the jukebox's
configuration status:
Configured
This jukebox has already been configured and can be reconfigured, e.g.
when an additional drive has been inserted.
Not configured
This jukebox was newly connected and has not yet been configured.
Unavailable
The jukebox is listed in the Storage Manager's configuration but no
connection could be made to it. If this jukebox is no longer used, you can
remove it from the configuration using Delete jukebox.
4. Select the newly connected or changed jukebox and click (Re)Configure
jukebox.
5. Click OK.
6. Enter the name and description of the jukebox and select the startup type:
Automatic
The jukebox is automatically activated when the server starts.
Manual
After each server restart, the jukebox must be activated manually (see
“Activating and deactivating jukeboxes” on page 125).
7. Click OK and insert a test medium into the jukebox.
The status of the jukebox drive detection process is displayed in a message
window. As a result, a list of the drives and their SCSI IDs is displayed.
8. Click OK.
The jukebox configuration is saved and the Storage Manager is started again.
You can display the stored configuration in the Server Configuration under
Storage Manager / Device Configuration / Devices. Here you can only change
the description and the startup type.
Troubleshooting
The following problems can lead to errors in detecting the jukeboxes:
• Poor SCSI connection
• Faulty drives
• Problems with SCSI drivers
• Control problems on the jukebox concerning the robot arm movement, export,
import etc.
• Two jukebox drives are connected to the same SCSI bus and have the same ID.
• Signatures, SSL and restrictions for document deletion define the conditions for
document access.
• Timestamps and certificates for authentication ensure the security of documents.
• Compliance mode, retention and deletion define the end of the document
lifecycle.
Some of these settings are pure archive settings. Other settings depend on the
storage method which is defined in the pool type. The decision criterion for their
definition is: Single file archiving or container archiving.
Setting          Default value  Value for single file archiving  Value for container archiving
                                (pool types: HDSK, Single        (pool types: ISO, IXW, IFS)
                                file (FS), Single file (VI))
Blobs            Off            Off                              On (possible)
Single instance  Off            Off                              On (possible)
Retention        Off            On (possible)                    Off (recommended)
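The table above can be expressed as a small lookup; this sketch is for illustration only, and the pool-type strings and function name are assumptions:

```python
# Illustrative lookup mirroring the table above; not product code.
SINGLE_FILE_POOLS = {"HDSK", "FS", "VI"}  # single file archiving
CONTAINER_POOLS = {"ISO", "IXW", "IFS"}   # container archiving

def recommended_settings(pool_type):
    """Return the recommended setting values for the given pool type."""
    single_file = pool_type in SINGLE_FILE_POOLS
    return {
        "blobs": "Off" if single_file else "On (possible)",
        "single_instance": "Off" if single_file else "On (possible)",
        "retention": "On (possible)" if single_file else "Off (recommended)",
    }
```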
See also:
• “Archives and pools” on page 25
Important
• The method is not available for archives containing pools for single file
archiving (HDSK, FS, VI).
• If you want to use SIA together with retention periods, consider
“Retention” on page 46.
If necessary, you can exclude application types and components from Single
Instance Archiving in the Server Configuration under Document Service (DS) /
Component settings / Single Instance Archiving / Component or Application
types that prohibit(!) single instance archiving. MS Exchange e-mails are excluded
by default because they are unique, but the attachments are archived with SIA.
4.1.3 Retention
Various regulations require the storage of documents for a defined retention period.
During this time, documents must not be modified or deleted. When the retention
period has expired, documents can be deleted mainly for two reasons:
• to free storage space and thus to save costs,
• to get rid of documents that might create liability for the company.
To facilitate compliance with regulations and meet the demand of companies, the
Archive Server can handle retention of documents in cooperation with the leading
application and the storage subsystem. The leading application manages the
retention of documents, and the Archive Server executes the requests or passes them
to the storage system. Thus, retention is handled top down and ensures that a
document cannot be deleted or modified during its retention period. Notes and
annotations can still be added; they are add-ons and do not change the document itself.1
The retention period - more precisely the expiration date of the retention period - is
a property of a document and is stored in the database and additionally – if possible
– together with the document on the storage medium. The document gets the
retention period in one of these ways:
• The client of the leading application sends the retention period explicitly.
• The retention period is set for the logical archive on Archive Server side.
• If both are given, the leading application has priority.
If supported by the storage subsystem, the retention period is propagated to it.
Changes of the retention period in the archive settings do not influence the archived
documents.
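The priority rule above can be sketched as follows; this is an illustration under the stated rules, and the function and parameter names are hypothetical:

```python
from datetime import date, timedelta

# Illustrative sketch of deriving a document's retention expiration date.
# A retention period sent by the leading application has priority over
# the period configured for the logical archive.
def retention_expiration(archiving_date, archive_retention_days=None,
                         client_retention_days=None):
    days = (client_retention_days if client_retention_days is not None
            else archive_retention_days)
    if days is None:
        return None  # no retention applies
    return archiving_date + timedelta(days=days)
```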
When the retention period has expired, the Archive Server allows the client to delete
the document. The leading application must send the deletion request. Two
retention independent settings can prevent deletion: document deletion settings for
the logical archive (see “Document deletion” on page 49) and the maintenance level
of the Archive Server (see “Setting the runlevel” on page 272). The deletion process
has two aspects:
• Delete the document logically, that means: Delete the information on the
document from the archive database so that retrieval is not possible any longer.
Only the information that the document was deleted is kept. This step is
executed as soon as the delete request arrives.
• Delete the document physically from the storage media. The time of this action
depends on the storage method:
• Documents that are stored as single files can be deleted immediately.
• Documents that are stored in containers (ISO images, blobs, finalized and
non-finalized IXW partitions) can be deleted physically only when the
retention period of all documents in the container has expired and all
documents are deleted logically. The Delete_Empty_Partitions job checks
for such partitions and removes them, if the underlying storage system does
not prevent it.
1 All components that are defined as add-ons and that can be modified during the retention period are listed in the Server
Configuration under Document Service (DS) > HTTP settings > Configuration of IXOS addon components.
Notes:
• If you use retention for archives with Single Instance Archiving (SIA), make
sure that documents with identical attachments are archived within a short
timeframe and the documents in one archive have similar retention periods.
See also: “Single instance” on page 45.
• You cannot export partitions containing at least one document with
unexpired retention, or import partitions that are logically empty.
• As regulations may change in the course of time, you can adapt the
retention period of documents by means of a complete document
migration, see “Migration” on page 193.
See also:
• “Editing retention settings” on page 52
• “When the retention period has expired” on page 135
Next steps:
• Create at least one pool
• Configure the archive
Signature required to
This setting is required if the archive system is configured to support signed
URLs (SecKeys) and the archive is used by a leading application using URLs
with SecKeys.
The settings determine the access rights to documents in the selected archive
which were archived without a document protection level, or if document
protection is ignored. The document protection level is defined by the
leading application and archived with the document. It defines for which
operations on the document a valid SecKey is required.
See also: “Protection levels” on page 80 and “SecKeys / Signed URLs” on
page 79
Select the operations that you want to protect. Only users with a valid
SecKey can perform the selected operations. If an operation is not selected,
everybody can perform it.
Use SSL
Specifies whether SSL is used in the selected archive for authorized,
encrypted HTTP communication between the Imaging Clients and Archive
Server (including cache servers).
• use: SSL must be used.
• don't use: SSL is not used.
• may use:
The use of SSL for the archive is allowed. The behavior depends on the
client's configuration parameter HTTP UseSSL (see the Livelink® Archive
Windows Viewer and Livelink® DesktopLink - Configuration Parameters (CL-
RCP) manual).
The Java Viewer does not support SSL.
See also: “SSL communication” on page 84
Document deletion
Here you decide whether deletion requests from the leading application are
performed for documents in the selected archive, and what information is
given. You can also prohibit deletion of documents for all archives of the
Archive Server. This central setting has priority over the archive setting.
See also: “Setting the runlevel” on page 272
allow
Documents are deleted on request, if no maintenance mode is set and the
retention period is expired.
causes error
Documents are not deleted on request even if the retention period is
expired. A message informs the administrator about deletion requests.
ignore
Documents are not deleted on request even if the retention period is
expired. No information is given.
Caching
Activates the caching of documents to the DS cache at read access.
Compression
Activates data compression for the selected archive.
See also: “Data compression” on page 45
Blobs
Activates the processing of blobs (binary large objects).
Very small documents are gathered in a meta document (the blob) in the disk
buffer and are written to the storage medium together. The method
improves performance. If a document is stored in a blob, it can be destroyed
only when all documents of this blob are deleted. Thus, blobs are not
supported in single file storage scenarios and should not be used together
with retention periods.
Single instance
Enables single instance archiving.
See also: “Single instance” on page 45.
Encryption
Activates data encryption to prevent unauthorized persons from accessing
archived documents.
See also: “Encrypted document storage” on page 85.
ArchiSig timestamps
Activates the assignment of one timestamp to a number of documents that
are represented by a hash tree. Using timestamps, the system can verify that
documents have not been altered in the archive.
See also: “Timestamps” on page 87
Important
To ensure consistent usage of timestamps, you can only enable this
setting but not disable it later.
Timestamps (old)
This option is only available if timestamps have been used for this archive
with a former Archive Server version. It activates the assignment of
timestamps to document components when they are archived in the selected
archive. You can clear this option after migration to ArchiSig timestamps.
See also: “Timestamps” on page 87
Important
To ensure consistent usage of timestamps, you can only disable this
setting but not enable it later again.
Timestamp verification
Defines the verification of timestamps at document access for document
timestamps and ArchiSig timestamps.
Strict
The timestamps are verified. If the timestamp is not valid or does not
exist, the administrator is informed and the document is not delivered to
the client.
Relaxed
The timestamps are verified. If the timestamp is not valid, the
administrator is informed and the document is delivered in spite of this.
None
No timestamp verification
Deferred archiving
Select this option if the documents should remain in the disk buffer until the
leading application allows the Archive Server to store them on final storage
media.
Example: The document arrives in the disk buffer without a retention period
and the leading application will provide the retention period shortly after.
The document must not be written to the storage media before it gets the
retention period. To ensure this processing, enable the Event based
retention option in the Edit Retention dialog box, see “Editing retention
settings” on page 52.
No retention
Use this option if the leading application does not support retention, or if
retention is not relevant for documents in the selected archive. Documents
can be deleted at any time if no other settings prevent it.
Event based retention
This method is used if a retention period is required but, at the time of
archiving, it is unknown when the retention period will start. The leading
application must send the retention information after the archiving request.
When the retention information arrives, the retention period is calculated by
adding the given period to the event date. Until the document gets its
calculated retention period, it is secured with maximum (infinite) retention.
You can use the option in two ways:
Together with the Deferred archiving option
The leading application sends the retention period separately from and
shortly after the archiving request (for example, in upcoming
Livelink® DocuLink for SAP® Solutions versions). The documents
should remain in the disk buffer until they get their retention period.
They are written to final storage media together with the calculated
retention period when the leading application requests it. To ensure this
scenario, enable the Deferred archiving option in the Edit Configuration
dialog box, see “Editing the archive configuration” on page 49. Regarding
storage media and deletion of documents, the scenario does not differ
from that with a given Retention period of x days.
Without Deferred archiving
The retention period is set a longer time after the archiving request, and
the document should be stored on final storage media during this time.
For example, in Germany, personnel files of employees must be stored
for 5 years after the employee has left the company. The files are archived
on storage media immediately, and the retention period is set on the leaving
date. This scenario is only supported for archives with an HDSK pool or a
Single File (VI) pool (if supported by the storage system). In all other
pools, the documents would be archived with infinite retention, and the
retention period cannot be changed after archiving (only with migration).
For the same reason, do not use blobs in this scenario.
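The event-based calculation described above can be sketched as follows. The function and field names are illustrative assumptions, not the server's internal model:

```python
# Sketch of event-based retention: until the event arrives, the document is
# secured with infinite retention; once the leading application reports the
# event, the expiry date is the event date plus the configured period.
# All names here are illustrative, not the Archive Server's internal model.
from datetime import date, timedelta

INFINITE = date.max  # stand-in for "maximum (infinite) retention"

def retention_expiry(event_date, period_days):
    """Return the date until which the document must be kept."""
    if event_date is None:        # event not yet reported by the application
        return INFINITE
    return event_date + timedelta(days=period_days)

# Personnel file: keep 5 years (simplified to 5 * 365 days) after leaving date
print(retention_expiry(None, 5 * 365) == date.max)   # True
print(retention_expiry(date(2006, 3, 31), 5 * 365))  # 2011-03-30
```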
Retention period of x days
Enter the retention period in days. The retention period of the document is
calculated by adding this number of days to the archiving date of the
document. It is stored with the document.
Infinite retention
Documents in the archive can never be deleted. Use this setting for
documents that must be stored for a very long time.
Use compliance mode
If this mode is enabled, documents cannot be deleted, even by the DS
administrator, before the retention period has expired. Documents with an
unknown retention period (event-based retention before the event) cannot be
deleted either. Auditing of the document lifecycle is activated (see “Configuring
auditing” on page 233).
If this mode is disabled, an administrator can delete documents with internal
commands even if the retention period has not expired.
Important
To ensure consistent usage of the compliance mode, you can only
enable this setting; you cannot disable it later.
Destroy unrecoverable
This additional option is only relevant for archives with HDSK pools on
local hard disks; storage systems do not support this method. If enabled, the
system first overwrites the file content several times and then deletes the
file.
Important
Documents with an expired retention period are only deleted if:
• Document deletion is allowed, see “Defining access to the archive” on
page 47, and
• No maintenance mode is set, see “Setting the runlevel” on page 272.
See also:
• “Retention” on page 46
• “When the retention period has expired” on page 135
See also:
• “Local cache” on page 36
4.2 Pools
At least one pool belongs to each logical archive. A pool contains physical storage
media that are written in the same way. The physical storage media are assigned
to the pool either automatically or manually.
The procedure for creating and configuring a pool depends on the pool type. The
main differences in the configuration are:
• Usage of a disk buffer. All pool types except for HDSK (Write Thru) pools
require a buffer.
• Settings of the Write job. The Write job writes the data from the buffer to the final
storage media.
For more information on pools and pool types, see “Pool” on page 25.
10. If you want to attach further pools to this archive, repeat steps 1 to 8. In step 2,
choose the application type that this pool should receive (see “Application
type” on page 58).
Pool name
Unique, descriptive name for the pool. Follow the naming conventions; see
“Naming rule for archive components” on page 43.
Pool type
The pool type depends on the storage system and media in use, see “Pool type”
on page 25 and the Hardware Release Notes https://esc.ixos.com/1138018759-
507.
Application type
Refers to the document component that is to be archived in this pool:
• default: generally used for all document components
Copy to Cache
If this option is enabled, the documents are copied to the local cache before the
data is deleted from the disk buffer and after they are written to storage media.
Enable this option if you require fast access to the data.
See also: “Local cache” on page 36
Backup
Enable this option if the partitions of a pool are to be backed up locally in a
second jukebox of this Archive Server. During the backup operation, the
Local_Backup job only considers the pools for which backup has been enabled.
See also: “Backup of ISO media” on page 176
Exception:
• For a local backup of optical ISO media, the Write job is already
configured in such a way that multiple ISO media are written in the
same jukebox. The Backup option is not required.
Backup Job / Number of Drives
Number of write drives that are available on the backup jukebox. The setting is
only relevant for physical jukeboxes.
Media Handling
In this section, you specify the media type and its configuration.
Allowed Media Type
Here you specify the permitted media type. ISO pools support:
DVD-R For the supported DVD-R types, see the Hardware Release Notes.
WORM For the supported WORM types, see the Hardware Release Notes.
HD-WO HD-WO is the media type supported by many storage systems. An HD-WO
medium combines the characteristics of a hard disk and a WORM: fast
access to documents and secure document storage. Also enter the maximum
size of an ISO image in MB, separated by a colon.
For some storage systems, the maximum size is not required; refer to the
documentation of your storage system (https://esc.ixos.com/1139823053-
661).
Original Jukebox
Select the original jukebox.
Backup Jukebox
Select the backup jukebox. For virtual jukeboxes with HD-WO media, we
strongly recommend configuring the original and backup jukeboxes on
physically different storage systems.
See also:
• “Creating a pool with buffer” on page 56
• “Pool type” on page 25
Copy to Cache
If this option is enabled, the documents are copied to cache before the data is
deleted from the disk buffer and after they are written to storage media. Enable
this option if you require fast access to the data.
Backup
This option must be enabled if the partitions of a pool are to be backed up locally
in a second jukebox of this Archive Server. During the backup operation, the
Local_Backup job only considers the pools for which backup has been enabled.
Write Job / Number of Drives
Number of write drives that are available on the original jukebox.
Backup Job / Number of Drives
Number of write drives that are available on the backup jukebox.
Media Handling
In this section, you specify the media type and its configuration.
Auto Initialization
Select this option if you want to initialize the IXW media in this pool
automatically, see also “Initializing storage media” on page 129.
Allowed Media Type
The media type is always WORM, for both WORM and UDO media.
Partition Name Pattern
Defines the pattern for creating partition names.
$(PREF)_$(ARCHIVE)_$(POOL)_$(SEQ) is set by default. $(ARCHIVE) is the
placeholder for the archive name, $(POOL) for the pool name, and $(SEQ) for an
automatic serial number. The prefix $(PREF) is defined in the Server
Configuration in the folder Administration Server (ADMS) > Jobs and Alerts >
Automatic initialization of media. You can define any pattern; only the
placeholder $(SEQ) is mandatory. You can also insert fixed text. The
initialization of the medium is started by the Write job.
Click Test Pattern to view the name planned for the next partition on the basis of
this pattern.
Number of Backups
Here you specify the number of backup media.
Original Jukebox
Select the original jukebox.
Backup Jukebox
Select the backup jukebox.
Auto Finalization
Select this option if you want to finalize the IXW media in this pool
automatically, see also “Finalizing storage media” on page 131.
Last Write Access
Defines the number of days since the last write access.
See also:
• “Creating a pool with buffer” on page 56
• “Pool type” on page 25
Copy to Cache
If this option is enabled, the documents are copied to cache before the data is
deleted from the disk buffer and after they are written to storage media. Enable
this option if you require fast access to the data.
Backup
Enable this option if the partitions of a pool are to be backed up locally in a
second jukebox of this Archive Server. During the backup operation, the
Local_Backup job only considers the pools for which backup has been enabled.
Image Handling
Configure the name schema and size of the ISO images.
Name Pattern
Defines the pattern for creating image names.
$(IMAGEPREF)_$(ARCHIVE)_$(POOL)_$(IMAGESEQ) is set by default.
$(ARCHIVE) is the placeholder for the archive name, $(POOL) for the pool
name, and $(IMAGESEQ) for an automatic serial number of the image. The
prefix $(IMAGEPREF) is defined in the Server Configuration in the folder
Administration Server (ADMS) > Jobs and Alerts > Automatic
initialization of media. You can define any pattern; only the placeholder
$(IMAGESEQ) is mandatory. You can also insert fixed text. The image is
named by the Write job.
Click Test Pattern to view the name planned for the next image on the basis
of this pattern.
Minimum Amount of Data
Minimum amount of data to be written in MB. At least this amount must
have been accumulated in the disk buffer before any data is written to storage
media.
Maximum Amount of Data
If you want to limit the size of the ISO images, enter their maximum size in
MB. Data in the disk buffer that exceeds this limit will be written to the next
image.
Write Job / Number of Drives
Number of write drives that are available on the original virtual jukebox.
Backup Job / Number of Drives
Number of write drives that are available on the backup virtual jukebox.
Media Handling
In this section, you specify the media type and the configuration of the partitions
which are filled with ISO images.
Allowed Media Type
The media type is HD-WO.
Partition Name Pattern
Defines the pattern for creating partition names.
$(PREF)_$(ARCHIVE)_$(POOL)_$(SEQ) is set by default. $(ARCHIVE) is the
placeholder for the archive name, $(POOL) for the pool name, and $(SEQ) for an
automatic serial number. The prefix $(PREF) is defined in the Server
Configuration in the folder Administration Server (ADMS) > Jobs and Alerts >
Automatic initialization of media. You can define any pattern; only the
placeholder $(SEQ) is mandatory. You can also insert fixed text. The
initialization of the medium is started by the Write job.
Click Test Pattern to view the name planned for the next partition on the basis of
this pattern.
Number of Backups
Number of backup media that are written in the backup jukebox. For virtual
jukeboxes (HD-WO media), the number of backups is restricted to 1.
Original Jukebox
Select the original jukebox.
Backup Jukebox
Select the backup jukebox. For storage systems (virtual jukeboxes with HD-WO
media), the original and backup jukeboxes must be configured on physically
different storage systems.
Auto Finalization
Select this option if you want to finalize the partitions in this pool automatically,
see also “Finalizing storage media” on page 131.
Last Write Access
Defines the number of days since the last write access.
Filling Level of partition
Defines the filling level in percent at which the partition should be finalized.
The Storage Manager automatically calculates and reserves the storage space
required for the ISO file system. The filling level therefore refers to the space
remaining on the partition.
Note: The partition list of an IFS pool shows only the partitions, not the ISO
images. To list the images, use the List Images on an IFS Partition utility.
See also:
• “Creating a pool with buffer” on page 56
• “Pool type” on page 25
Copy to Cache
If this option is enabled, the documents are copied to cache before the data is
deleted from the disk buffer and after they are written to storage media. Enable
this option if you require fast access to the data.
Write Job / Documents written in parallel
Number of documents that can be written at once.
See also:
• “Creating a pool with buffer” on page 56
• “Pool type” on page 25
Mount path
Mount path in the file system. The mount path is a drive under Windows and
a partition directory under UNIX/Linux.
Disc device type
Select the storage medium or storage system to ensure correct handling of
documents and their retention.
hdsk
Local hard disk partition, documents are only written from the buffer to
the partition.
hdskro
Local hard disk partition read-only, documents are written from the
buffer to the partition and the read-only attribute is set.
netapp, samfs, trinity
Documents are written from the buffer to the corresponding storage
system with system-specific setting of the retention period.
See also:
• “Creating a pool with buffer” on page 56
• “Pool type” on page 25
4.3 Jobs
A job is a recurring task that is started automatically according to a time
schedule, or when certain conditions are met. Jobs related to the Archive Server are
set up during installation. Pool-related jobs (Write jobs, Purge_Buffer jobs) are
configured when the pool is created.
Command Description
Write_CD Writes data from disk buffer to storage media as ISO images, belongs
to ISO pools
Write_WORM Writes data incrementally from disk buffer to WORM and UDO, belongs
to IXW pools
Write_IFS Writes ISO images incrementally from disk buffer to certain storage
systems, belongs to IFS pools
Write_GS Writes single files from disk buffer to a storage system through the
interface of the storage system (vendor interface), belongs to Single
File (VI) pools
Write_HDSK Writes single files from disk buffer to the file system of an external
storage system, belongs to Single File (FS) pools
Purge_Buffer Deletes the contents of the disk buffer according to conditions, see
“Purge Buffer settings” on page 34
backup_pool Performs the backup of all partitions of a pool
Compress_HDSK Compresses the data in a HDSK pool
Migrate_Volumes Controls the operation of the Migration service that performs media
migration, see “Migration” on page 193.
compare_backup_worms Checks one or more backup IXW partitions. Enter the partition
name(s) as argument. You can use the * wildcard. If no argument is
set, all backup IXW partitions in all jukeboxes are compared.
dmsremotecmd Required only on the primary Archive Server for the Context Server
in PDMS and BPM scenarios:
The job starts scripts on the Context Server.
Some arguments: reorgjob updateStat
See also: Section 2.6.3 "Define the scheduling PDMS Context jobs" in
Livelink® PDMS Context Server - Administration Guide (AR-ACX).
hashtree Builds the hash trees for ArchiSig timestamps, see “ArchiSig
timestamps” on page 87.
pagelist Creates the index information for SAP print lists (pagelist)
start<DPname> Starts the Document Pipelines for the import scenarios:
• import content (documents / data) with extraction of attributes
from content (CO*),
• import content (documents / data) and attributes (EX*),
• import forms (FORM).
See Livelink ECM - Archive Server - Document Pipelines and Import
Interfaces (AR-CDP) for more information.
Name
Unique name of the job that describes its function so that you can distinguish
between jobs having the same command. Do not use blanks or special
characters. You cannot modify the name later.
Command / Argument
Job command to be executed together with the necessary arguments. The entries
in the Arguments field are limited to 250 characters. Only arguments can be
modified later.
Scheduling
To use the schedule settings, select the Use Scheduling option. Define the start
time of the job, specified by month, day, hour, and minute. Thus you can define
daily, weekly, and monthly jobs.
Additionally, you can define a stop time for the job, the Time Limit. Thus you
can prevent collisions between jobs that use the same resources. If you set the
time limit to 01:00, the job is stopped at 01:00 at night.
Some commands are specified as uninterruptible, for example, the Write_CD
command. You receive an error message when you try to set a time limit for
these commands.
Start condition
To specify start conditions for a job, select the Use Conditions option.
Previous Action
Here you specify the type of action that is to be performed before the job is
started. You can choose between starting the Administration Server and
another job.
The return value indicates the result of a job run. If a job finishes successfully,
it usually returns the value 0. To start a job only when the previous job
finished successfully, enter 0 into the Check Return Value field.
Time frame
Here you can specify a start and stop time for the execution of the write job.
All job dependencies are listed in the table Job dependencies in the Jobs tab.
Important
You can assign another buffer to the pool. If you do, make sure that
• all data from the old buffer is written to the storage media,
• the backups are completed,
• no new data can be written to the old buffer.
Data that remains in the buffer will be lost after the buffer change.
You can also modify the scheduling of the write job in the Edit Job tab (see “Job
command and scheduling” on page 72).
Select... to print...
Archives the complete archive and pool configuration with the associated
partitions, Write jobs and Purge_Buffer jobs with their
settings and schedule.
See also: “Archives directory” on page 276
Cache partitions all defined cache partitions.
See also: “Cache Paths directory” on page 289
Servers all settings in the Servers tab: archives and known servers
with their cache server settings and SAP configurations, the
server's hard disk buffers and the connected SAP systems and
SAP gateways.
See also: “Servers tab” on page 275
Devices all settings in the Servers tab, Devices directory: devices with
their device information and the available partitions.
See also: “Devices directory” on page 290
Buffers all settings in the Servers tab, Buffers directory: hard disk
buffers with the attached partitions and the Purge_Buffer job
settings.
See also: “Buffers directory” on page 290
Vaults all defined vaults with their name, type, server and last access
time.
See also: “Vaults directory” on page 295
Jobs all configured jobs with their names, commands, schedule and
activation status.
See also: “Jobs tab” on page 299
Job protocol log messages for all jobs, with or without messages for each
log entry.
See also: “Job protocol” on page 198
Notifications all settings in the Notifications tab: events together with their
associated notifications and an overview of all the available
notifications.
See also: “Notifications tab” on page 301
Alerts notifications of type Admin Client Alert.
See also: “Buttons” on page 268 and “Configuring and assigning
notifications” on page 203
Archive modes all settings in the Archive Modes tab: configuration of the
archive modes with the scan workstations and the associated
archive modes.
See also: “Archive Modes tab” on page 302
Users Groups Policies users with their group membership and the assigned policies;
user groups with their users and the assigned policies;
existing policies with the associated rights, all or user-defined
only.
See also: “Users tab” on page 302
Security all settings in the Security tab.
See also: “Security tab” on page 303
3. Click:
• Print to print the configuration settings, or
• Preview to view the file on the screen.
You can print the file or save it in HTML format.
signature for the relevant parameters in the URL and the expiration time, and signs it
with a private key. The Archive Server verifies the signature with the public key and
only accepts requests that have a valid signature and whose SecKey has not
expired.
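The SecKey principle can be sketched as follows. The real mechanism uses an asymmetric (PKCS#7) signature created with a private key and verified with a public key; this sketch substitutes an HMAC with a shared secret purely to keep the example self-contained, and all names are hypothetical:

```python
# Hedged sketch of the SecKey idea: sign the relevant URL parameters plus an
# expiration time, and accept a request only if the signature is valid and
# the SecKey has not yet expired. HMAC is a stand-in for the asymmetric
# signature used in the real mechanism.
import hashlib
import hmac
import time

SECRET = b"demo-key"  # stand-in for the private/public key pair

def make_seckey(params: str, expires_at: int) -> str:
    """Sign the parameter string together with the expiration time."""
    msg = f"{params}|{expires_at}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_seckey(params: str, expires_at: int, sig: str, now=None) -> bool:
    """Accept only unexpired requests with a matching signature."""
    now = time.time() if now is None else now
    if now >= expires_at:
        return False  # SecKey expired
    expected = make_seckey(params, expires_at)
    return hmac.compare_digest(expected, sig)

sig = make_seckey("docId=42&op=read", 2_000_000_000)
print(verify_seckey("docId=42&op=read", 2_000_000_000, sig,
                    now=1_000_000_000))  # True
```

Tampering with the parameters or reusing the SecKey after its expiration time both make `verify_seckey` return False.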
Certificates The certificates with public keys are related to the logical archives. You can use
different keys for different archives if you have more than one leading application
or document types with different security requirements. You can also use one
certificate for several or all archives.
Remote Standby In a Remote Standby environment, the Synchronize_Replicates job replicates the
certificates for authentication. Only enabled certificates are copied. The certificate on
the Remote Server is disabled after synchronization; enable it as described in the
procedure “To enable a certificate” on page 81.
Protection levels Whether SecKeys are verified or not is defined on several levels.
For the Archive Server, i.e. all archives on the server:
In the Server Configuration under Document Service (DS) / Security Settings /
Global security settings for HTTP: Security Behaviour
If No Security is set, the Archive Server does not verify the SecKeys; other
settings are not relevant.
See also: “To activate SecKeys” on page 81.
For the archive:
The Signature required to settings in the Archive Access dialog box.
See also: “To set security settings for the archive” on page 81.
For the document:
The leading application can archive the document together with the document
protection level. It defines for which actions on the document (create, read,
update, delete) a valid SecKey is required.
By default, the document protection has higher priority than the archive protection.
The administrator can reverse the priority by enabling Ignore Document Protection
in the Server Configuration under Document Service (DS) / Security Settings /
Global security settings for HTTP.
Caution
Do not use the Ignore Document Protection setting on a working server!
Take care to enable the Signature required to settings for the archives.
Otherwise, protected documents can be used without a valid SecKey.
Tasks The administrator must send or import the certificate with the public key to the
Archive Server. This procedure depends on the requesting leading application or
component. On the Archive Server, the administrator must configure the usage of
SecKeys:
• “Configuring SecKeys on the Archive Server” on page 81
To activate SecKeys
1. Choose File > Server Configuration.
2. Navigate to Document Service (DS) / Security Settings / Global security
settings for HTTP.
3. Set Security Behaviour to PKCS7 signatures (SEC_R3L2).
See also: “Protection levels” on page 80.
To enable a certificate
1. Click the Servers tab.
2. Select the archive for which the certificate was sent or imported before.
See also: “SecKeys from SAP ” on page 81
“SecKeys from other leading applications and components ” on page 82
3. In the Assigned certificates for authentication table, check the fingerprint and
view the certificate.
Details: “View Certificate” on page 93
4. Right-click the certificate and choose Enable.
Before verification is possible, the SAP system must send the public key to the
Archive Server as a certificate with the transaction OAHT. There, you enter the target
Archive Server and the archives for which the certificate is valid.
To verify the authenticity of the transmitted certificate, the system administrators of
the SAP system and the Archive Server compare the fingerprints of the sent and the
received certificates. If the fingerprints match, the archive administrator enables the
certificate.
Important
For security reasons, limit the read permission for these directories to
the system user (Windows) or the archive user (UNIX).
For the <archive> variable, enter the logical archive on the Archive Server for
which the certificate is relevant. Replace the <file> variable with the name of
the certificate, for example, cert.pem.
If you need the certificate for several archives, call the command again for
each archive.
d. Quit the program with exit.
The Archive Server now recognizes the certificate and the certificate is
shown in the Assigned certificates for authentication table of the archive in
the Servers tab.
5. In the Servers tab, select the archive. Enable the certificate.
Archive
Type the name of the archive for which the certificate is relevant. Leave the field
empty if you want to use it for all archives. Start the utility again if you need it
for several but not all archives.
Path to the certificate file
Enter the path including the complete file name.
Enable certificate
If you are sure that the certificate is correct, select this option. You can also
enable it later, after checking the fingerprint.
ID
The IDs of the existing certificates are shown in the Security tab.
Caution
Do not overwrite the file ixoscert.pem or the server will not be able to
decrypt encrypted documents anymore!
4. Activate SSL communication for the logical archives in the Servers tab, Security
area, Edit.
Details: “Defining access to the archive” on page 47.
Important
In the period of time between archival and running the relevant job, there is
unencrypted data on the hard disk partition!
For document encryption, a symmetric key (system key) is used. You create this key
initially. The system key is encrypted on the Archive Server with the Archive Server's
public key, and can then only be read with the help of the Archive Server's private
key. RSA is used to exchange the system key between the Archive Server and the
backup server.
Encryption can be set for each archive. By default, it is disabled.
Caution
Be sure to store this key securely, so that you can reimport it if necessary. If
the key gets lost, the documents that were encrypted with it can no longer
be read!
Do not delete any key if you set a newer one as current; it is still used for
decryption.
The Synchronize_Replicates job updates the keys and certificates first, before it
synchronizes the documents. The system keys are transmitted encrypted.
If you do not want to transmit the system keys through the network, you can also
export them from the original server to an external data medium and reimport them
on the backup server (see “Exporting and importing the key store” on page 94).
5.4 Timestamps
Timestamps Using timestamps, you can verify that documents have not been altered since
archiving time. An additional timestamp server is required for this. Creating a
timestamp means: The computer calculates a unique number - a cryptographic
checksum or hash value - from the content of the document. The timestamp server
adds the time and signs the checksum with the private key. The signature is stored
together with the document component. When a document is requested, the
Archive Server verifies whether the component was modified after storage by
looking at the signature. It needs the public key of the timestamp server certificate
for verification. The Windows Viewer and Java Viewer can display the verification
result. The Archive Server can use timestamps in two ways:
• Document timestamps (old)
• ArchiSig timestamps
Document timestamps Each document component gets a timestamp when it arrives in the archive - more
precisely, when it arrives in the disk buffer and is known to the DS. This (old) method
requires a huge number of timestamps and can be very expensive, depending on the
number of documents. Thus it is available only for archives that used timestamps in
former Archive Server versions. You can migrate these timestamps to ArchiSig
timestamps.
ArchiSig timestamps With ArchiSig timestamps, the timestamps are not added per document but for
containers of documents represented by hash trees:
A job builds the hash tree that consists of hash values of as many documents as
configured, and signs with the timestamp. Thus you can collect, for example, all
documents of a day in one hash tree. Only one timestamp per hash tree is required.
The verification process needs only the document and the hash chain leading from
the document to the timestamp but not the whole hash tree:
ArchiSig timestamps are less expensive and can be easily renewed. Open Text/IXOS
recommends using this method.
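The hash-tree idea can be illustrated with a common Merkle-tree construction; this is a textbook sketch, not the exact ArchiSig format:

```python
# Illustrative sketch of the ArchiSig idea: collect document hashes in a hash
# (Merkle) tree, timestamp only the root, and verify a single document with
# just its hash chain up to the root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(docs):
    """Return the list of tree levels, from leaf hashes up to the root."""
    level = [h(doc) for doc in docs]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def audit_path(levels, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))
        index //= 2
    return path

def verify(doc, path, root):
    """Recompute the root from one document and its hash chain."""
    node = h(doc)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

docs = [b"doc-a", b"doc-b", b"doc-c", b"doc-d"]
levels = build_tree(docs)
root = levels[-1][0]            # only this root gets a (signed) timestamp
print(verify(b"doc-b", audit_path(levels, 1), root))  # True
```

Only the root is signed, so one timestamp covers all documents in the tree, and verification touches only one hash chain, not the whole tree.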
Renewal of timestamps Electronically signed documents can lose their validity in the course of time,
because the availability and verifiability of certificates is limited (depending on
the regional laws) and the key lengths, certificates, and cryptographic and hash
algorithms may become unsafe. Therefore you can renew the timestamps of long-
term stored documents. You should renew the timestamps before:
• the certificate becomes invalid,
• the key length becomes unsafe,
• the cryptographic or hash algorithm becomes unsafe.
Important
Once you have decided to use ArchiSig timestamps, you cannot go back to
document timestamps.
If you use both methods in parallel, the document timestamp secures the document
until the hash tree is built and signed. As this time period is short, an inexpensive
timestamp is sufficient for the documents, while the hash tree gets a timestamp
created with a certificate of an accredited provider. This trusted certificate is used
for verification.
Certificates The Archive Server gets the certificates required for verification in different ways:
Timeproof timestamp server and IXOS timestamps
The certificate is automatically stored on the Administration Server during the
first signing process. Thus, the certificates are only shown in the Security tab
after several documents have been signed. If you want the certificates to be
shown before the signing starts, enter in the command line:
dsSign -t.
5. In the Jobs tab, create jobs to build the hash trees. You need one job for each
archive that uses timestamps.
See also: “Jobs” on page 69.
Command
hashtree
Arguments
archive name
Scheduling
If you use ArchiSig timestamps, schedule a nightly job. If the hash trees are
written to a storage system, make sure that the job is finished before the
Write job starts.
Important
You can migrate document timestamps only once! Never disable ArchiSig
timestamps after starting migration.
3. Call the hash tree creation tool for each archive with migrated timestamps:
dsHashTree <archive name>
The tool calculates hash values from the existing timestamps, builds hash trees
and gets a timestamp for each tree.
1. Configure a new certificate on your timestamp server, make it available for the
Archive Server, and enable it in the Security tab.
Details: “Timestamps” on page 87.
3. In the resulting list, find the distinguished subject name(s) of your timestamp
service (subject of the service’s certificate).
4. In a command line, enter:
dsHashTree -s <DistinguishedNameOfOldCertificate>
The process finds all timestamps that were created with the certificate indicated in
the command. It calculates hash values for the timestamps and builds new hash
trees. Each hash tree is signed with a new timestamp.
5.5 Checksums
Checksums are used to recognize and reveal unwanted modifications to the
documents on their way through the archive. The checksums are not signed, as the
methods used to reveal modifications are directed towards technical failures and not
malicious attacks.
The Enterprise Scan generates checksums for all scanned documents and passes
them on to the Document Service. The Document Service verifies the checksums and
reports errors that occur (see “Simplify monitoring with notifications” on page 199).
On the way from the Document Service to the STORM, the documents are provided
with checksums as well, in order to recognize errors when writing to the media.
The leading application, or some client, can also send a timestamp instead of the
checksum. The verification can check timestamps as well as checksums. The
certificates for those timestamps must be known to the Archive Server and enabled
in the Security tab before the timestamp checksums can be verified.
You can activate the use of checksums for Document Pipelines on the local server in
the Server Configuration under General eCONserver Settings / Client Certificate
Settings / Use checksum in DS communication.
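A minimal sketch of the unsigned checksum verification described above, assuming SHA-256 (the actual algorithm used by the components is not specified here):

```python
# Compute a checksum when a document enters the pipeline and compare it
# again after a transfer step; a mismatch reveals accidental corruption.
# SHA-256 is an assumption for illustration only.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"scanned page content"
sent = checksum(original)            # generated by the scanning component

received = original                  # what arrived at the next component
ok = checksum(received) == sent      # verification by the Document Service
print(ok)  # True
```

Because the checksum is unsigned, this guards against technical failures during transfer and writing, not against deliberate tampering.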
store. For security reasons, Open Text/IXOS recommends to obtain and import your
own certificate instead of using the delivered one.
1. In the Security or Servers tab, right-click the certificate and choose View.
2. Check the certificate:
General tab
This tab provides detailed information to unambiguously identify the
certificate: the certificate's issuer, the duration of validity, and the
fingerprint.
Certification Path tab
Here you can follow the certificate's path from the root to the current
certificate. A certificate can be created from another certificate. The path
shows the complete derivation chain. You can also view the parent certificate
information from here.
E
Exports the contents of the key store. Use the export in particular to store the
system keys for document encryption.
The user must log on and specify a path for the export files. The option -t NN:MM
splits the contents of the key store into MM files (maximum 8); at least NN of
these files must be reimported in order to restore the complete key store.
sunny:~> /usr/ixos-archive/bin/recIO E -t 3:5
recIO 5.0 (C) 2001 IXOS Software AG built May 14 2001
Please authenticate!
User :dsadmin
Password :
Writing keystore with 3 system-keys to 5 token-files (3 required to restore)
Token[1/5] (default = /floppy/ixoskey.pem )
File (CR to accept above) : p1.pem
Token[2/5] (default = /floppy/ixoskey.pem )
File (CR to accept above) : p2.pem
Token[3/5] (default = /floppy/ixoskey.pem )
File (CR to accept above) : p3.pem
Token[4/5] (default = /floppy/ixoskey.pem )
File (CR to accept above) : p4.pem
Token[5/5] (default = /floppy/ixoskey.pem )
File (CR to accept above) : p5.pem
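The -t NN:MM option gives the key store a threshold property: any NN of the MM token files restore the whole store. The guide does not document the splitting scheme recIO actually uses; the sketch below uses Shamir secret sharing purely to illustrate that "any NN of MM" property.

```python
# Hypothetical illustration of an NN-of-MM threshold split (NOT recIO's
# actual scheme): Shamir secret sharing over a prime field.
import random

PRIME = 2**127 - 1  # field modulus (a Mersenne prime)

def split(secret: int, need: int, total: int):
    """Split `secret` into `total` shares; any `need` of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(need - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, total + 1)]

def restore(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, need=3, total=5)   # like recIO E -t 3:5
assert restore(shares[:3]) == 123456789      # any 3 of the 5 files suffice
```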
V
Verifies the contents of the key store against the exported files.
The user must log on and specify the path for the exported data. Then the
exported data is compared with the key store on the Archive Server.
sunny:~> /usr/ixos-archive/bin/recIO V
recIO 5.0 (C) 2001 IXOS Software AG built May 14 2001
Please authenticate!
User :dsadmin
Password :
Token[1/?] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p2.pem
D
Displays the information on the exported files. The information is shown in a
table.
sunny:~> /usr/ixos-archive/bin/recIO D
recIO 5.0 (C) 2001 IXOS Software AG built May 14 2001
Token[1/?] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p2.pem
Token[3/3] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p3.pem
idx ID created origin
---------------------------------------------------
1 EA03BDAF9ABB85A1 2001/01/18 17:26:01 sunny
2 1EE312C064A27F73 2000/11/03 14:28:08 hausse
3 BEEB5213EF5FFABF 2000/11/08 09:26:36 emma
I
Imports the saved key store.
The user must log on and specify the path for the exported data. The data in the
key store is restored, encrypted with the Archive Server's public key and sent to
the administration server. The results are displayed. Keys already contained in
the Archive Server's store are not overwritten.
sunny:~> /usr/ixos-archive/bin/recIO I
recIO 5.0 (C) 2001 IXOS Software AG built May 14 2001
Please authenticate!
User :dsadmin
Password :
Token[1/?] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p2.pem
Token[3/3] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p3.pem
ID:BEEB5213EF5FFABF created:2000/11/08 09:26:36 origin:emma
Key already exists
ID:276CBED602BDFC25 created:2001/01/18 12:09:32 origin:arthomasa
Key successfully imported
1. In the Servers tab, right-click the archive and choose Utilities > Analyze
Security Settings.
2. The selected archive is already entered in the Archive to analyze field. Click
OK.
3. A dialog with all security settings opens. Click Refresh to see the complete
information.
Important
For security reasons, you can deactivate this function: In the Server
Configuration, under Document Service (DS) / Security Settings / Global
security settings for HTTP, deactivate the Allow the server to print reports
about security settings variable.
You can also print all settings that are shown in the Security tab, see “Printing the
configuration settings” on page 76.
Subnet mask
Specifies the sections of the IP address that are evaluated. You can restrict
the evaluation to individual bits of the subnet address.
Subnet addr.
Specifies the address for the subnet in which an Archive Server or Enterprise
Scan is located. At least the first part of the address (NNN.0.0.0) must be
specified. A gateway must be established for each subnet.
R/3 system ID
Three-character system ID of the SAP system (SAP_SID) for which the
gateway is configured. If this is not specified, then the gateway is used for all
SAP systems for which no gateway entry has been made. If subnets overlap,
the smaller network takes priority over the larger one. If the networks are of
the same size, the gateway to which a concrete SAP system is assigned has
priority over the default gateway that is valid for all the SAP systems.
R/3 Gateway
Name of the server on which the SAP gateway runs. This is usually the SAP
database server.
Gateway number
Two-digit instance number for the SAP system. The value 00 is usually used
here. It is required for the sapgwxx service on the gateway server in order to
determine the number of the TCP/IP port (xx = instance number; e.g.
instance number = 00, sapgw00, port 3300).
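Taken together, the gateway rules above (a smaller subnet takes priority over a larger one, a concrete SAP SID takes priority over the default entry, and the TCP port is 3300 plus the instance number) can be sketched as follows. This is a hypothetical illustration: the entries, host names and the CIDR representation of subnet address and mask are invented for brevity, not product behavior.

```python
# Sketch of gateway selection and port derivation (entries are invented).
import ipaddress

GATEWAYS = [  # (subnet, SAP SID or None for the default entry, host, instance)
    ("10.0.0.0/8",  None,  "sapdb-default", "00"),
    ("10.1.0.0/16", "PRD", "sapdb-prd",     "01"),
]

def gateway_port(instance: str) -> int:
    """sapgw<xx> listens on TCP port 3300 + xx (sapgw00 -> 3300)."""
    return 3300 + int(instance)

def select_gateway(host_ip: str, sap_sid: str):
    addr = ipaddress.ip_address(host_ip)
    matches = [g for g in GATEWAYS
               if addr in ipaddress.ip_network(g[0])
               and g[1] in (None, sap_sid)]
    # Smaller network (longer prefix) first; concrete SID before default.
    matches.sort(key=lambda g: (ipaddress.ip_network(g[0]).prefixlen,
                                g[1] is not None), reverse=True)
    return matches[0] if matches else None

gw = select_gateway("10.1.2.3", "PRD")   # picks the PRD-specific entry
port = gateway_port(gw[3])               # 3301 for instance "01"
```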
1. In the Servers tab, right-click R/3 Systems and choose Add R/3 System or Edit
R/3 system.
2. Enter the configuration:
R/3 SID
Three-character system ID of the SAP (SAP_SID) system with which the
administered server communicates.
Server name
Name of the SAP database server on which the logical archives are set up in
the SAP system.
R/3 client
Three-digit number of the SAP client in which archiving occurs.
R/3 feedback user
Feedback user in the SAP system. The cfbx process sends a notification
message back to this SAP user after a document has been archived using
asynchronous archiving. A separate feedback user (type CPIC) should be set
up in the SAP system for this purpose.
Password of R/3 feedback user
Password for the R/3 feedback user. This is entered, but not displayed,
when the SAP system is configured. The password for the feedback user
must be identical in the SAP system and in Archive Administration.
2. Right-click in the R/3 system config. table at the bottom of the details area and
choose Add or Edit.
3. Enter the configuration:
R/3 system ID
Three-character system ID of the SAP system with which the logical archive
communicates (SAP_SID).
ArchiveLink Version
Currently, ArchiveLink version 4.5 is used for SAP R/3 version 4.5 and
higher.
Protocol
Communication protocol between the SAP application and Archive Server.
Fully configured protocols which can be transported in the SAP system are
supplied with the SAP products of Open Text/IXOS.
Default destination
Selects the SAP system to which the return message with the barcode and
document ID is sent in the Late Storing with Barcode scenario. This setting is
only relevant if the archive is configured on multiple SAP applications, e.g.
on a test and a production system.
Thus you can create several archive modes, e.g. if you want to assign document
types to different archives.
Name
Name of the archive mode. Do not use spaces. You cannot change the name of
the archive mode after creation.
Scenario
Name of the archiving scenario (also known by the technical name Opcode).
Scenarios apply to leading applications.
Archive
Name of the logical archive to which the document is sent.
Pipeline host
In project solutions only, the Document Pipelines together with the Remote
Pipeline Interface can be installed on a separate computer. The pipeline is
accessed via an HTTP interface (libdph). For those solutions, click Connect to
Pipeline, select the protocol, and enter the name and port of the computer where
the Document Pipeline is installed.
Conditions
These archiving conditions are available:
BARCODE
If this option is activated, the document can only be archived if a barcode was
recognized. For Late Archiving, this is mandatory. For Early Archiving, the
behavior depends on your business process:
• If a barcode or index is required on every document, select the
BARCODE condition. Enterprise Scan makes sure that an index value is
present before archiving. The barcode is transferred to the leading
application.
• If no barcode is needed, or it is not present on all documents, do not select
the BARCODE condition. In this case, no barcode is transferred to the
leading application.
ENDORSER
Special setting for certain scanners. Only documents with a stamp are stored.
R3EARLY
Early archiving with SAP.
PILE_INDEX
Sorts the archived documents into piles for indexing according to certain
criteria. For example, the pile can be assigned to a document group in
Enterprise Scan, and the access to a document pile in a leading application
like PDMS can be restricted to a certain user group.
Workflow
Name of the workflow that will be started in Livelink ECM - BPM Server when
the document is archived. For details concerning the creation of workflows, see
the Livelink ECM - BPM Server documentation.
Note: For internal purposes, the Archive Server converts the workflow
name to Base64 (a simple ASCII format). Use the Base64
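The Base64 conversion mentioned in the note can be illustrated with the Python standard library (the workflow name below is invented):

```python
# Base64 round-trip for an illustrative workflow name.
import base64

name = "InvoiceApproval"
encoded = base64.b64encode(name.encode("ascii")).decode("ascii")
decoded = base64.b64decode(encoded).decode("ascii")
assert decoded == name
# encoded == "SW52b2ljZUFwcHJvdmFs"
```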
Scan Host
Name of the Enterprise Scan workstation that is used to reference it in the
network. Spaces are not permitted. You can check the validity of the name by
sending a ping to the scan workstation. The name must be entered in exactly the
same way as it has been defined at operating system level.
Archive Mode
Archive mode assigned to that scan host.
Default
If this option is enabled, the archive mode is preset as the default at the
corresponding Enterprise Scan workstation. Only one archive mode can be set as
default.
8.1 Concept
Modules
To keep administrative effort as low as possible, the rights are combined in policies
and users are combined in user groups. The concept consists of three modules:
Policies
A policy is a set of rights, i.e. actions that a user with this policy is allowed to
carry out. You can define your own policies in addition to using predefined and
unmodifiable policies.
User groups
A user group is a set of users who have been granted the same rights. Users are
assigned to a user group as members. Policies are also assigned to a user group.
The rights defined in the policy apply to every member of the user group.
Users
A user is assigned to one or more user groups and is allowed to perform the
functions that are defined in the policies of these groups. It is not possible to
assign individual rights to individual users.
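The three-module model can be sketched as a data structure, under the assumption that a user's effective rights are simply the union of the rights of all policies assigned to all groups the user belongs to. The policy and group names below appear in this guide; the individual right names are invented.

```python
# Sketch of the rights model: policies -> rights, groups -> policies + members.
# Right names are illustrative; the union rule is an assumption.
policies = {
    "ALL_ADMS": {"admin.start", "admin.stop", "accounting.view"},
    "Backups":  {"backup.run"},
}
groups = {
    "aradmins":    {"policies": {"ALL_ADMS"}, "members": {"dsadmin"}},
    "aroperators": {"policies": {"Backups"},  "members": {"dsadmin", "op1"}},
}

def effective_rights(user: str) -> set:
    """Union of all rights granted via group memberships."""
    rights = set()
    for group in groups.values():
        if user in group["members"]:
            for policy in group["policies"]:
                rights |= policies[policy]
    return rights

assert "backup.run" in effective_rights("op1")
assert "admin.start" not in effective_rights("op1")  # no individual rights
```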
Notes:
• Users, user groups and policies can only be defined on the original archive
server. They cannot be created or modified if you are working on a remote
standby server.
Standard users
During the installation of the Archive Server, standard users and user groups are
preconfigured:
dsadmin in aradmins group
This is the administrator of the archive system. The group has the policy
ALL_ADMS and can perform all administration tasks, view accounting
information, and start/stop the Spawner. After installation, the password is
empty; change it as soon as possible with Edit User, see “User settings” on page
116.
dpuser in dpusers group
This user controls the DocTools of the Document Pipelines. The group has the
policy DPinfoDocToolAdministration. The password is set by the user dsadmin
with Edit User, see “User settings” on page 116.
dpadmin in dpadmins group
This user controls the DocTools of the Document Pipelines and the documents in
the queues. The group has the policy ALL_DPINFO. The password is set by the
user dsadmin with Edit User, see “User settings” on page 116.
For more information on tasks of the users dpuser and dpadmin, see
“Documents in Document Pipelines” on page 229.
Note for users of previous versions:
Due to improvements in internal processes, the number of internal users and
groups has been reduced. All scan-related users, groups and rights are no longer
needed. If you need the old users for some specific reason, you can recreate
them as described in “Configuring users and their rights” on page 112:
User aradmin: policy ALL_ADMS, group aradmins.
User aroperator: policies Backups and MediaHandling, group aroperators.
User scan: policy ALL_SCAN, group scanusers.
User scanadmin: policy ALL_SCAN, group scanadmins.
You use the Available Policies function to check, create, modify and delete policies.
1. Choose the Users tab.
2. Right-click on the Groups list and choose Available Policies.
3. To check a policy, select it in the left-hand list. The assigned rights are shown in
a hierarchical structure in the right-hand list.
Details: “Policy overview” on page 113
4. To create a new policy, click Add Policy.
5. Enter a name and description for the new policy, and assign the rights.
Details: “Policy configuration” on page 114
Tip: If the new policy is going to resemble an existing one, copy the existing
policy with Copy Policy, specify a new name and description, and then modify
this policy with Edit Policy.
A self-defined policy can be modified with Edit Policy. You can add and remove
rights, and change the description. You proceed in exactly the same way as when
creating a policy. The name of the policy cannot be changed.
To delete a self-defined policy, select it and click Delete Policy. The rights
themselves are not lost, only the set of them that makes up the policy.
See also:
• “Concept” on page 111
• “Adding users” on page 115
• “Setting up user groups” on page 117
Policies
All available policies are listed here. A number of standard policies are supplied
on installation. These cover the vast majority of your working requirements. If
you select a policy, you see the rights assigned to this policy in the Assigned
rights table.
Assigned rights
The rights assigned to the selected policy are listed here. The rights are
organized into a hierarchy based on Archive Server components, tasks and
commands.
See also:
• “Concept” on page 111
• “Creating and checking policies” on page 112
• “Policy configuration” on page 114
Name
Name of the policy. Spaces are not allowed. The name cannot be modified.
Description
Short description of the role the user can assume by means of this policy.
Assigned Rights
The rights that are currently associated with the policy. Click the Remove >>
button to remove the selected right or category of rights from this box.
Non-assigned Rights
All available rights that are not assigned to this policy. Click <<Add to assign the
selected right to the policy.
Rights are sorted on a hierarchical basis into categories. To select all rights in a
category, simply select the name of the category.
See also:
• “Concept” on page 111
• “Creating and checking policies” on page 112
• “Policy overview” on page 113
See also:
• “Concept” on page 111
• “Creating and checking policies” on page 112
• “Setting up user groups” on page 117
Username
User name for Archive Server. The name may be a maximum of 14 characters in
length. Spaces are not permitted. This name cannot subsequently be changed.
Password
User password for the specified user.
Confirm password
Enter exactly the same input as you have already entered under Password.
Global user
Obsolete setting
Rights to Document Service
Here you define the user's rights in the Document Service.
Member of
List of the groups of which the user is a member. To remove a user from one or
more groups, select the groups and click Remove >>.
Non-member of
List of the groups of which the user is not a member. To include a user in one or
more groups, select the groups and click <<Add.
See also:
• “Concept” on page 111
• “Adding users” on page 115
• “Setting up user groups” on page 117
6. Right-click the new group and choose Members. Users are assigned in the same
way as policies.
Details: “Group's members” on page 119
To modify the assignments of a group, repeat steps 4 and 6.
To delete a user group, right-click it and select Delete Group. Neither users nor
policies are lost, only the assignments are deleted.
See also:
• “Concept” on page 111
• “Creating and checking policies” on page 112
• “Adding users” on page 115
See also:
• “Concept” on page 111
• “Setting up user groups” on page 117
• “Group's policies” on page 118
• “Group's members” on page 119
Name
Name of the selected user group. You cannot edit the specification entered here.
Global group
Obsolete setting.
Implicit group
In an implicit group, all users are automatically members. You cannot change
this setting.
Assigned policies
The policies that are currently assigned to the group. Click the Remove >>
button to remove the selected policy from this box.
Other available policies
All available policies that are not assigned to the group. Click the <<Add button
to assign the selected policy to the user group.
See also:
• “Concept” on page 111
• “Setting up user groups” on page 117
• “Add Group settings” on page 118
• “Group's members” on page 119
Name
Name of the selected user group. You cannot edit the specification entered here.
Global group
Obsolete setting.
Implicit group
In an implicit group, all users are automatically members. You cannot change
this setting.
Members
The users who are currently assigned to the group. Click the Remove >> button
to remove the selected users from this box.
Non-members
All available users who are not assigned to the group. Click the <<Add button to
add the selected user to the group.
See also:
• “Concept” on page 111
• “Setting up user groups” on page 117
• “Add Group settings” on page 118
• “Group's policies” on page 118
vol_jbd, vol_dirty
The file vol_jbd contains a list of all media known to the jukebox
daemon, including media that have been taken from the jukebox but not
exported. The file vol_dirty contains details of the media that have to
be examined before they are used in the archive system. The
files are located in the directory
<IXOS_ROOT>\config\storm.
JournalFile1, JournalFile2
Contain the records of all actions started by the jukebox daemon but not yet
completed. The files are located in the directory
<IXOS_ROOT>\config\storm.
size { 1300 }
mode { file }
}
}
}
Note: The required directories must be created manually; they are not
created automatically.
To avoid this problem, limit the number of jobs per jukebox: configure the
jobnum parameter in the parameters section of the server.cfg file.
Caution
Under Windows, writing signatures to media with the Windows Disk
Manager is not allowed. These signatures make the medium unreadable for
the archive.
Details:
• “Write configuration for IXW pools” on page 61
• “Write configuration for IFS pools” on page 63
• “Pool type” on page 25
Partition name
Name of the partition. The maximum length is 32 characters. You can only use
letters (no umlauts), digits and underscores. Give a unique name to every
partition in the entire network. This is a necessary precondition for the
replication strategy in which the replicates of archives and partitions must have
the same name as the corresponding originals. The following name structure is
recommended: <archive-name>_<pool-name>_<serial-number>_<side>.
If the partition is intended to be a replicate or backup of another partition, do not
enter anything here. In these cases, the name is derived from the name of the
original partition.
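The recommended naming rule (letters without umlauts, digits and underscores, at most 32 characters, structure <archive-name>_<pool-name>_<serial-number>_<side>) can be checked programmatically. The regular expression and the zero-padded serial format below are my reading of the rule, not part of the product.

```python
# Sketch of a validator for the recommended partition naming convention.
import re

NAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")  # letters, digits, underscores

def partition_name(archive: str, pool: str, serial: int, side: str) -> str:
    """Build <archive>_<pool>_<serial>_<side> and validate it."""
    name = f"{archive}_{pool}_{serial:03d}_{side}"  # zero-padding is assumed
    if not NAME_RE.fullmatch(name):
        raise ValueError(f"invalid partition name: {name!r}")
    return name

print(partition_name("FI", "WORM1", 7, "A"))  # FI_WORM1_007_A
```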
Create as replicated partition
Click this check box if the partition will be a replicate partition on a remote
standby server. Click Select Partition to select the name of the associated
original partition. The Select Replicated Partition dialog box opens (see “Select
Replicated Partition dialog box” on page 295).
Backup
Click this check box if the partition will be a backup partition. Click Select
Partition to select the name of the associated original partition. The Partitions to
Backup dialog box opens where you can select the original partition.
Note: You can also automate initialization for a pool, see “Automatic
initialization and assignment” on page 129.
Procedure:
• “Manual initialization and assignment” on page 130
• In the list of partitions of a virtual IFS jukebox, the file system ifs and the status
fin are displayed.
Backups
Backup partitions should be finalized when the corresponding original partition is
finalized and the backup is completed. Therefore, finalization is included in the
backup jobs. If a backup job recognizes that the original partition is finalized, it
performs the backup as usual. When done, it calls the finalization program for the
backup medium. The High Sierra name of the partition is not changed. It is not
possible to finalize backup partitions manually.
See also:
• “Manually finalizing a partition” on page 132
• “Automatic finalization” on page 132
See also:
• “Manually finalizing a pool” on page 132
• “Automatic finalization” on page 132
Pool
Name of the pool that is to be finalized.
Archive
Name of the logical archive to which the pool belongs.
Last write access
Defines the number of days since the last write access.
Filling level of partition
Defines the filling level in percent at which an IXW or IFS partition should be
finalized. For IXW partitions, the Storage Manager automatically calculates
and reserves the storage space required for the ISO file system. The filling
level therefore refers to the space remaining on the IXW partition.
The finalization conditions are independent of the pool parameters and do
not override them.
3. Click OK.
All partitions of the pool meeting these conditions are finalized.
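The two conditions can be sketched as follows. This is a hypothetical illustration: the guide does not state whether the conditions combine with OR or AND (OR is assumed here), and all names and thresholds are invented.

```python
# Sketch of the finalization check: idle long enough OR full enough.
from datetime import date

def should_finalize(last_write: date, fill_percent: float, today: date,
                    max_idle_days: int = 30,
                    fill_threshold: float = 95.0) -> bool:
    """Whether a partition meets the (assumed OR-combined) conditions."""
    idle_days = (today - last_write).days
    return idle_days >= max_idle_days or fill_percent >= fill_threshold

now = date(2024, 1, 31)
assert should_finalize(date(2023, 12, 1), 10.0, now)      # idle 61 days
assert should_finalize(date(2024, 1, 30), 97.5, now)      # 97.5% full
assert not should_finalize(date(2024, 1, 30), 10.0, now)  # neither condition
```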
See also:
• “Manually finalizing a partition” on page 132
• “Automatic finalization” on page 132
1. In the Servers tab, open the Devices directory and select the jukebox.
2. Right-click the IXW partition to which you want to assign the error status, and
choose Utilities > Set Finalization Status.
3. Check the settings.
Partition
Name of the partition.
Server name
Name of the STORM server.
Device name
Name of the jukebox.
Slot
Slot number in the jukebox.
Side
Number of the medium side.
4. Click OK.
The error status fin_err is inserted in the Final status column of the partition
list.
Note: The failure of the finalization does not affect the security of the data on
the medium!
2 Deletion of components works differently: if the storage system cannot delete a component physically, the component
remains; it is not deleted logically.
vidually, neither from finalized partitions (which are ISO partitions) nor from
non-finalized partitions using the IXW file system information.
Delete empty partitions
If documents with retention periods are stored in container files, the container
partition gets the retention period of the document with the longest retention. The
The partition – and the content of all its documents – can be deleted only if all
documents are deleted from the archive database. The partition is purged by the
Delete_Empty_Partitions job. It checks for logically empty partitions meeting the
conditions defined in the Server Configuration under Administration Server > Jobs
and Alerts > Delete Empty Partitions and automatically deletes these partitions.
IXW partitions are only considered if they are physically full at the given level and
logically empty. In IFS pools, empty images are deleted. You can schedule the job
and run it automatically, or use the List Empty Partitions/Images utility to display
the empty partitions first and then start the deletion job manually.
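The selection logic described for the Delete_Empty_Partitions job can be sketched as follows. Attribute names and the threshold are invented; the grounding is only that IXW partitions must be both logically empty and physically full at the configured level, while in IFS pools empty images qualify.

```python
# Sketch of the job's candidate selection (attribute names are illustrative).
def deletable(partition: dict, full_level: float = 90.0) -> bool:
    """Whether Delete_Empty_Partitions would consider this partition."""
    if partition["type"] == "IXW":
        # IXW: logically empty AND physically full at the given level.
        return (partition["logical_docs"] == 0
                and partition["fill_percent"] >= full_level)
    if partition["type"] == "IFS":
        # IFS pools: empty images are deleted.
        return partition["logical_docs"] == 0
    return False  # other pool types are handled outside this job

assert deletable({"type": "IXW", "logical_docs": 0, "fill_percent": 95.0})
assert not deletable({"type": "IXW", "logical_docs": 0, "fill_percent": 50.0})
```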
Important
To ensure correct deletion, you must synchronize the clocks of the Archive
Server and the storage subsystem, including the devices for replication.
Storage mode     Pool type      Delete from   Delete content physically       Destroy content
                                archive DB
----------------------------------------------------------------------------------------------
Single file      HDSK           x             x                               x (Destroy
storage                                                                       unrecoverable)

                 FS and VI      x             x                               —

Container file   ISO, IXW on    x             Delete partition when the       x (destroy
storage          optical media                last document is deleted:       media)
                                              Delete_Empty_Partitions job

                 ISO on         x             Delete partition when the       —
                 storage system               last document is deleted:
                                              Delete_Empty_Partitions job

                 IFS            x             Delete ISO image when the       —
                                              last document is deleted:
                                              Delete_Empty_Partitions job.
                                              The IFS partition must be
                                              read-only.
                                              Delete the IFS partition when
                                              all images are deleted and
                                              the partition is closed or
                                              sufficiently full:
                                              Delete_Empty_Partitions job.
                                              The administrator deletes it
                                              physically by means of the
                                              storage system tools.
Notes:
• Not all storage systems release the space of the deleted partitions (see
documentation for your storage system).
• Blobs are handled like container file archiving.
Important
On double-sided media, check that both partitions are deleted.
Partition name(s)
Name of the partition(s) to be exported. You can use wildcards to export
multiple partitions at the same time.
Export from database
Enable this option when you export a defective partition. It causes the
database to be searched for entries for this partition, and the entries relating
to the contents of the partition are deleted. The partition itself is not
accessed.
If this option is disabled, the command searches the partition directly and
deletes the associated entries from the database. Intact partitions which are
no longer needed are exported in this way. The partition must be in the
jukebox.
4. Click OK.
The export process may take some time. To update the progress display, click
Refresh.
5. If the medium is a double-sided optical one, export the second partition in the
same way.
6. Remove the optical medium from the jukebox with Eject.
Details: “Removing optical media from jukebox” on page 155
Partitions on storage systems can be deleted by means of the storage system
administration if provided.
STORM server
Name of the STORM server by which the imported partition is managed.
Partition name(s)
Enter the name of the partition(s) to be imported. Use the existing name.
Additional command-line arguments for 'dsTools'
Not required for normal import, only for special tasks like moving
documents to another logical archive. Contact Open Text Customer Support.
Set backup flag after import
The partition is imported as a backup partition and entered in the list of
partitions as a Backup type.
5. Click OK.
The import process may take some time. A message box shows the progress of
the import. To update the display, click Refresh.
STORM server
Name of the STORM server by which the imported IXW medium is
managed.
Partition name(s)
Name of the partition(s) to be imported.
Additional commands ...
Not required for normal import, only for special tasks like moving
documents to another logical archive. Contact Open Text Customer Support.
Import original partitions
The partitions are imported as original partitions.
Import backup partitions (for use in replicate archives only!)
The partitions are imported as backup partitions and entered in the list of
partitions as backup type.
Note: Do not select both Import... options for the same import process.
Set read-only flag after import
The partition is imported as a write-protected partition.
5. Click OK.
The import process may take some time. A message box shows the progress of
the import. To update the display, click Refresh.
Base directory
Mount path of the partition.
Partition
Name of the hard disk partition to be imported.
Backup
The partition is imported as a backup partition and entered in the list of
partitions as a backup type.
Read-only
The partition is imported as a write-protected partition.
3. Click OK.
The import process may take some time. A message box shows the progress of
the import. To update the display, click Refresh.
4. In the Servers tab, open the Archives directory.
Base directory
Mount path of the partition.
Partition
Name of the hard disk partition to be imported.
Read-only
The partition is imported as a write-protected partition.
3. Click OK.
The import process may take some time. A message box shows the progress of
the import. To update the display, click Refresh.
4. In the Servers tab, open the Archives directory.
5. Select the archive and the pool.
6. Right-click in the list of partitions, and choose Attach Partition to attach the
imported partition to the VI pool.
Partition
Name of the partition that is to be checked.
try to copy document/component from other partition
The utility attempts to find the missing component on another partition. If
the component is found, it is copied to the checked partition. If not, the
component entry is deleted from the database, i.e. the component is
exported.
export component
The database entry for the missing component on the checked partition is
deleted.
Important
Use this repair option only if you are sure that you do not need the
missing documents any longer! You may lose references to
document components that are still stored somewhere in the archive.
If in doubt, contact Open Text Customer Support.
Repair, if needed
Check this box if you really want to repair the inconsistencies.
If the option is deactivated, the test is performed and the result is displayed.
Nothing is copied and no changes are made to the database.
3. Click OK. The utility starts and a protocol window opens.
4. Click Refresh to view the current messages. Once the test has been completed,
click Close.
Partition
Name of the partition that is to be checked.
Import documents if they are not in the database
Missing document or component entries are imported into the database.
3. Click OK. The utility starts and a protocol window opens.
4. Click Refresh to view the current messages. Once the test has been completed,
click Close.
DocID
Type the document ID according to the Type setting.
You can determine the string form of the document ID by searching for the
document in the application (e.g. on document type and object type) and
displaying the document information in Windows Viewer or in Java Viewer.
Type
Select the type of document ID. The ID can be entered in numerical (Number)
or string (String) form.
Repair document, if needed
Check this box if you want to repair defective documents. The utility
attempts to copy the document from another partition. If this option is
deactivated, the utility simply performs the test and displays the result.
Important
Use this repair option only if you are sure that you do not need the
missing documents any longer! You may lose references to
document components that are still stored somewhere in the archive.
If in doubt, contact Open Text Customer Support.
Replicate partitions are attached to a replicate buffer or replicate hard disk pool on
the remote standby server in the same way.
Important
Take care not to detach the partition until all the documents are stored on
the storage media.
STORM server
Name of the STORM server.
Device name
Name of the device, entered in the Devices directory.
Slot list
Numbers of the slots to be tested. The following entry syntax applies:
7        Specifies slot 7.
3,6,40   Specifies slots 3, 6, and 40.
3-7      Specifies slots 3 to 7 inclusive.
2,20-45  Specifies slot 2 and slots 20 to 45 inclusive.
4. Click OK.
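The slot-list syntax above ("7", "3,6,40", "3-7", "2,20-45") can be parsed as shown below. This is a sketch; the product's own parser is not documented here.

```python
# Sketch of a parser for the slot-list syntax described above.
def parse_slots(spec: str):
    """Expand a slot list like "2,20-45" into individual slot numbers."""
    slots = []
    for part in spec.split(","):
        if "-" in part:                      # a range like "20-45"
            lo, hi = part.split("-")
            slots.extend(range(int(lo), int(hi) + 1))
        else:                                # a single slot like "7"
            slots.append(int(part))
    return slots

print(parse_slots("2,20-22"))  # [2, 20, 21, 22]
```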
1. In the Servers tab, right-click the Servers directory and choose Add Vault.
To edit an existing vault, right-click it and choose Edit Vault.
2. Enter the vault information:
Name
Name of the vault. The maximum length is 32 characters. Spaces, umlauts
and special characters are not allowed.
Description
Description of the vault. Preferably use its location, for example, “safe in
room 317”. The maximum length is 255 characters. Umlauts are not allowed.
Shared
Enable this option if you want to share the vault with other known Archive
Servers. The vault data is exchanged by means of the job
SYS_REFRESH_ARCHIVE.
3. Click OK.
Name
Exact name of the partition which is to be assigned to the vault.
Type
Define whether the partition is an original or a backup partition.
Barcode
Barcodes are currently not supported.
4. Click OK.
1. In Archive Administration, click the Servers tab and open the Devices
directory.
2. Right-click in the list of devices, and choose Unavailable.
In the Unavailable Partitions dialog box, you see the names of the partitions
which have been unsuccessfully accessed and the number of access attempts.
3. Note the names of the required partitions, and click Close.
Use Clear to delete selected partitions from this list. This also clears the display
of these unavailable partitions in the Archive Web Monitor.
4. Insert the medium (or the media) that holds the required partition into the
jukebox, right-click the jukebox in the Devices directory, and choose Insert.
The partition gets the online status and the documents are available again.
There are several parts that have to be protected against data loss:
Partitions
All hard disk partitions that may hold the only instance of a document must be
protected against data loss by RAID. The partitions that have to be protected are
listed in the Installation overview chapter of the installation guides for the Archive
Server.
Document Pipelines
The Document Pipeline on the Enterprise Scan has to be protected against data
loss. For details please refer to Section 19 "Backing up" in Livelink® Enterprise
Scan - User and Administration Guide (CL-UES).
Archive database
The archive database with the configuration for logical archives, pools, jobs and
relations to other Archive Servers and leading applications has to be protected
against data loss. The process depends on the type of database you are using (see
“Backup of the database” on page 160).
Optical media
Optical storage media have to be protected against data loss. The process differs
depending on whether you use ISO or IXW media (see “Backup and recovery of
optical media” on page 175).
Storage Manager configuration
The IXW file system information and the configuration of the Storage Manager
must be saved, see “Backup of the Storage Manager configuration” on page 180.
Data in storage systems
Data that is archived on storage systems such as HSM, NAS, or CAS also needs a
backup, either by means of the storage system itself or with Archive Server tools, see
“Backup for storage systems” on page 180.
Important
If you have installed BPM and/or PDMS, database backups are required for
all databases of the system: the Archive Server, the Context Server and the
User Management Server. Note that the storage media contain no data of the
Context Server database or the User Management Server database; that is, you
cannot restore these databases by importing from media.
The database backup procedures are very similar.
Important
During the configuration phase of installation, you can either select default
values for the database configuration or configure all relevant values. To
make sure that this user guide remains easy to follow, the default values are
used below. If you configured the database with non-default values, replace
these defaults with your values.
1. Encrypt the new password with the command line tool enc:
enc -s <decrypted password>
2. Copy the content of the first line without the surrounding brackets - in this
example G7F187E050632E85D - and paste it into the registry key.
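The two steps could be sketched as follows. The output format of enc shown here is an assumption based on the example above; the sed call simply strips the surrounding brackets.

```shell
# Step 1 would be: enc -s <decrypted password>
# Assume (hypothetically) that it prints the encrypted value in brackets:
enc_output='<G7F187E050632E85D>'
# Step 2: strip the surrounding brackets before pasting the value
# into the registry key.
encrypted=$(printf '%s' "$enc_output" | sed 's/^<//; s/>$//')
echo "$encrypted"
```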
Caution
Make sure that there is always enough free storage space for the archived
redo log files; otherwise the Oracle database will come to a halt and cease
functioning. No error message is displayed if this occurs. Choose a directory
that is large enough (min. 1 GB) and monitor the amount of available
storage space at regular intervals in the Archive Web Monitor (see “Using
the Archive Web Monitor” on page 209).
The archived redo logs must be backed up to tape every day. If large volumes of
data are archived every day, more frequent backups are recommended. The
database does not have to be shut down for this.
1. Back up the archived redo logs from the archive directory (confirm its location
using the parameter log_archive_dest in the initECR.ora file) to tape at least
once a day.
2. Check that the tape backup can be read.
3. Delete the archived redo logs from the archive directory at regular intervals
(confirm its location using the parameter log_archive_dest in the initECR.ora
file). To increase security, do not delete the archived redo logs until after the
next, or even the second-next, complete backup has been completed.
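The daily routine above can be sketched in shell. The paths are assumptions (confirm the real archive directory via log_archive_dest in initECR.ora), and a plain directory stands in for the tape device.

```shell
#!/bin/sh
# Sketch of steps 1-2 above: copy the archived redo logs to a backup
# location and verify the copy. Paths passed in are assumptions; a
# plain directory stands in for the tape device.
backup_redo_logs() {
  archive_dir=$1
  backup_dir=$2
  mkdir -p "$backup_dir"
  cp "$archive_dir"/*.arc "$backup_dir"/
  # Verify that the backup can be read: compare file counts.
  src=$(ls "$archive_dir"/*.arc | wc -l)
  dst=$(ls "$backup_dir"/*.arc | wc -l)
  [ "$src" -eq "$dst" ] && echo "backup verified"
  # Step 3 (deleting the old archived redo logs) should run only after
  # a later complete backup has succeeded, so it is not shown here.
}
```

A daily cron entry could then call, for example, `backup_redo_logs /oracle/ECR/arch /backup/arch` (both paths are examples only).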
Caution
The NOARCHIVELOG mode should not be used! In this mode, the online
redo log files are cyclically overwritten and not archived to a separate
directory. This means that complete logging of all operations is not possible.
UNIX only: <ORACLE_HOME>/dbs/ Static part of the database configuration and pass-
word file (back up using an offline backup):
orapwECR
You can find the default settings for the password and service in Table 12-1 on
page 161.
3. If you do not know where the files which are to be backed up are located, find
out the corresponding directories:
for the data files: select name from v$datafile;
for the control files: select name from v$controlfile;
for the redo log files: select member from v$logfile;
4. Shut down the database:
shutdown normal
!del /f e:\backup\ctl_bak.dbf
alter database backup controlfile to 'e:\backup\ctl_bak.dbf';
select sequence# from v$log where status='CURRENT';
-- please remember this number as <end_log#>
alter system switch logfile;
-- backup all archived log files with number from <start_log#> to <end_log#>
-- to complete the online backup of the tablespaces
spool off
exit
An SQL Server database is generally backed up online, that is, while the database is
running. Perform the backup during periods of low system activity, since it reduces
the performance of the database.
To make sure that an SQL Server database can be fully restored whenever necessary,
the data described here must be backed up separately. The exact procedure is
described below.
Configuring the database: master database
The master database contains all the information about the structure of the SQL
Server database, i.e. about all the employed databases and users. The master
database must be backed up:
• after initial installation,
• whenever a database has been created, modified or deleted,
• whenever logins or users have been added or deleted,
• whenever backup devices have been added or deleted,
• after reconfiguration of the SQL server,
• and whenever the physical (files) and logical (file groups) structure has been
modified, e.g. after ALTER DATABASE, CREATE DATABASE, DISK INIT, DISK
RESIZE.
The transaction log for the master database is then also backed up automatically.
Scheduling database actions: msdb database
The msdb database contains all the scheduling information, e.g. concerning
regular backup processes. It must be backed up regularly, and preferably every
week. An additional backup is required after every scheduling modification.
Data: ECR database
This database contains all data of the server. It must be backed up regularly,
every week to every day depending on the amount of activity. To increase data
security, the data and index devices should also be protected by RAID1 or
RAID5.
Transaction logs
Transaction logs record all database activities. Each database has its own
transaction log. A transaction log bridges the period between the last backup of
its database and a possible crash. Transaction logs must be backed up at least
once a day. Following the backup, the number of entries in the transaction log is
automatically reduced and space is created for further records (transaction log
dump). The space reserved for the transaction logs must never fill up completely;
otherwise the database will cease operating. Transaction logs should also be
protected by RAID1.
Recording the database structures
The procedure sp_helpdb [database] displays the structures of all the databases
running on the SQL server. You should record the output in a list that should
also be backed up regularly.
Important
Perform these backups at regular intervals. A backup job does not simply
back up the data but also deletes all the entries in the transaction log
(transaction log dump) relating to transactions that have been completed
(inactive part). Without this, the log directory would fill up and the database
would no longer be able to function. Check the amount of free storage space
in the Archive Web Monitor at regular intervals (see “Using the Archive
Web Monitor” on page 209).
5. In the context-sensitive menu, click All Tasks > Backup Database and then
select the Backup tab.
6. In the dialog box, enable the following options:
Transaction Log
Append to media/Overwrite existing media
Enter the settings required for your backup strategy (see “Sample backup
strategy” on page 173).
In the Destination section, select the created backup device ECRLOG.
7. Click Schedule and enter the job parameters.
4. Select the following in the Enterprise Manager structure: SQL Server Group >
<computer name> > Management > Backup.
5. Right-click and choose Backup a Database.
6. Enter the options shown in the dialog box:
• The next character is a digit from 1 to 7 which specifies the day of the week.
• The next character is a letter from A to Z which specifies the order of the tapes in
the backup sequence.
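The naming convention can be sketched as follows. The "ECR" prefix is a hypothetical stand-in, since only the day digit and the sequence letter are described above.

```shell
# Build a tape label from the convention above. The "ECR" prefix is a
# hypothetical stand-in; only the day digit (1-7, day of the week) and
# the sequence letter (A-Z, order in the backup sequence) come from
# the described scheme.
tape_label() {
  day=$1     # 1..7
  order=$2   # A..Z
  printf 'ECR%s%s\n' "$day" "$order"
}
tape_label 3 B
```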
Important
You can also use a Remote Standby Server for backing up data. For details
refer to “Remote Standby” on page 185.
Notes:
• The Local_Backup job considers all pools for which the Backup option is
set. The backup_pool job considers only the pool for which it is created.
You can schedule additional backups of a pool by configuring both jobs, or
configure the pool backup separately.
• If problems occur, have a look in the protocol of the relevant job (see
“Checking the execution of jobs” on page 198).
1. Choose Eject to remove the defective ISO medium from the jukebox.
2. If the original medium is damaged, insert the backup copy in the jukebox using
Insert. It is now used as the original ISO medium without any further
configuration.
3. Right-click this partition in the partition list and choose Utilities > Backup
Partition.
The Backup Partition dialog box opens and the chosen partition name is
entered.
4. Click OK.
The utility starts and creates a backup medium in the same pool. Execution is
logged in a message box. Click Refresh to view the current messages.
5. For double-sided media, repeat steps 3 and 4 to back up the second side of the
medium.
6. Store the new backup in a secure place.
Automatic backup
Normally the backup of IXW media is done asynchronously by the Local_Backup
job.
1. Click the Servers tab.
2. Right-click the IXW pool and choose Edit Pool.
The dialog Edit Pool Configuration opens.
3. Check the option Backup.
4. Check the option Auto Initialization for complete automatic backup.
5. Set the value for Number of Backups to n>0 and choose the required Backup
Jukebox.
6. Configure the Local_Backup job according to your needs (see “Job command
and scheduling” on page 72).
According to the scheduling, the Local_Backup job updates the oldest backup
media. The job writes only one backup partition per instance.
Note: If problems occur, have a look in the protocol of the Local_Backup
job (see “Checking the execution of jobs” on page 198).
Semi-automatic backup
With this method, you initialize the original and backup partitions manually in
the corresponding jukebox devices. The backup partition must have the same
name as the original one. To initialize the media, proceed as described in
“Manual initialization and assignment” on page 130. The configuration
procedure is the same as for automatic backup, except for steps 4 and 5: do not
enable Auto Initialization, and do not set Number of Backups or select a
Backup Jukebox. The backup job finds the backup partitions by their names.
1. In the partition list of the pool, right-click the partition that is to be backed up
and choose Utilities > Backup Partition.
This causes the local backup partition to be updated. Execution is logged in a
message box. Click Refresh to view the current messages. If it is not possible to
back up the data, stop here and contact Open Text Customer Support.
2. If the second side of the original IXW medium has also been written, set the
status of the second side to W (write locked), start the utility again and enter the
partition name of the second side of the IXW medium.
3. In the Devices folder, right-click the defective IXW medium and choose Eject.
4. Remove the defective IXW medium from the jukebox and label it clearly as
defective.
5. Right-click the backup IXW partition and choose Restore.
This makes the backup partition available as original. If a partition has already
been written to the second side of the defective IXW medium, restore it in
exactly the same way.
6. Create a new backup partition (see “Manual backup of one partition” on page
178).
Note: If an IXW backup partition is damaged, remove the medium with Eject
and create a new backup partition (see “Manual backup of one partition” on
page 178).
Number of Partitions 1
Number of Backups 1
Backup Jukebox Must be different from Original Jukebox
Backup On for Local_Backup job
backup {
list { dest1 }
backuproot { dest1 }
dest1 { path { m:\backup1 } size { 1800 } }
}
After a successful backup, you can save the backup files to another medium.
In the event of disk space problems or if the IXW file system information is
extended, the backup directories can also be spread over several hard disks, e.g.
backup {
list { dest1 dest2 dest3 }
backuproot { dest1 }
dest1 { path { m:\backup1 } size { 1350 } }
dest2 { path { n:\backup2 } size { 300 } }
dest3 { path { o:\backup3 } size { 300 } }
}
If you use this type of distribution, please note the following: The files are always
copied in their entirety to a backup directory; they are not split up. The job sorts the
files by size and searches automatically for a directory that will be big enough for
each file (starting with the largest file). The size of the directories must therefore be
chosen so that all the files can be accommodated.
Moreover, the backup directories must be somewhat larger than the files configured
in the ixworm section, and one directory must be larger than the inode file. Every
backup file is approximately 1 MB larger than its original. If there is not enough
storage space available, the backup job terminates with an error message. The
backup files are given automatically generated names; files with the same name
from an earlier backup that are still held in the target directory are overwritten. The
backup files in the backup directory are not explicitly deleted prior to the backup.
Note: The required directories must be created separately. Directories are not
automatically created.
The distribution of the files in the backup directories is written to a special file
backup.map, which is created under backuproot. There can only be one
backuproot.
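The placement rule described above (files copied whole, sorted by size, first directory with enough room) can be sketched as follows. The directory layout and the .capacity bookkeeping file are assumptions for illustration; the real job checks actual free disk space.

```shell
#!/bin/sh
# First-fit placement sketch: sort files largest first and copy each one
# whole into the first backup directory that can still hold it. The
# .capacity file is a stand-in for the real free-space check.
distribute() {
  src=$1; shift
  for f in $(ls -S "$src"); do
    size=$(wc -c < "$src/$f")
    for d in "$@"; do
      free=$(cat "$d/.capacity")
      if [ "$size" -le "$free" ]; then
        cp "$src/$f" "$d/"
        echo $((free - size)) > "$d/.capacity"
        echo "$f -> $d"     # analogous to a backup.map record
        break
      fi
    done
  done
}
```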
All files that are needed to restore a consistent state of the JBD at a later time are
backed up:
• inode, hashname, hashfile, hashdir files (IXW only)
• hw_errors.txt
• vol_jbd
• dev files, toc files and status files of configured jukeboxes (in server.cfg)
• destroyedVols (information on destroyed volumes)
• server.cfg
• runtimefile
• backup.map
Executable files, log files, journal files, vol_dirty file, devices save files, unused dev
files (not configured in server.cfg) and persistent ISO cache are not backed up.
Note: Where ISO media are used as the storage media, no file system
information has to be backed up. This means that the server.cfg file need not
contain an ixworm section. In this case, a backup directory of 100 to 200 MB is
generally sufficient.
12.4.2 Backup
The job Save_Storm_Files backs up the configuration of the Storage Manager. The
IXW file system requires the most space for the backup.
The job executes the following steps:
• Setting Storage Manager to online backup status
• Checking for running write jobs
• Copying configuration files to the backup directories
• Setting Storage Manager back to normal operation status
• Performing a consistency check of the backed-up files. The consistency check can
take some time. Functioning of the archive is not affected by it.
Important
A backup of the STORM configuration files using operating system means
or other tools may lead to inconsistencies during the restoration or even to
data loss. Therefore use only the Save_Storm_Files job for the backup.
The Save_Storm_Files job is configured as a standard job, but is disabled by default.
Note: While the configuration files are being copied to the backup directories,
no IXW write job can be executed.
The Save_Storm_Files job checks first whether any write jobs are running. If so, it
does not start and an error message is generated.
Any write jobs that are started during backup of the files are terminated with an
error message. When scheduling the jobs, make sure that the write jobs have been
completed by the start time for Save_Storm_Files, and that no write jobs are
started during the backup.
IXW backup jobs and Save_Storm_Files can run in parallel. However, make sure
that the jukebox has enough drives to serve read requests, because the JBD delays
imports of IXW media that are already started in a drive until the Storage Manager
is set back to normal operation.
During your regular checks of the job log you can see whether Save_Storm_Files
has been successfully completed, along with the consistency check. When this is the
case, you should save the backup directories to tape. We recommend planning a
backup cycle here and taking backups at regular intervals (once a week or once a
month, depending on the volume of data to be archived).
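Saving the backup directories to tape can be sketched with tar. The backuproot path and the output archive are assumptions; a real setup would typically write to a tape device such as /dev/st0 instead of a file.

```shell
#!/bin/sh
# Archive the STORM backup directories once Save_Storm_Files and its
# consistency check have completed. Both arguments are assumptions;
# in production the output would typically be a tape device.
save_backuproot() {
  backuproot=$1
  out=$2
  tar cf "$out" -C "$(dirname "$backuproot")" "$(basename "$backuproot")"
  tar tf "$out"   # list the archive to verify it is readable
}
```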
Description of parameters:
<config_root>: path of the standard configuration files (e.g. server.cfg,
normally in <IXOS_ROOT>\config\storm)
<backuproot>: path that has been configured for the directory specified as
backuproot.
All new and modified documents are asynchronously transmitted to the Remote
Standby Server by the Synchronize_Replicates job. The job physically copies the
data on the storage media between these two servers. Therefore the Remote Standby
Server provides more data security than the local backup of media (see “Backup and
recovery of optical media” on page 175).
The Remote Standby Server has the following advantages:
• Increased availability of the archive, since the Remote Standby Server is accessed
when the original server is not available.
• Backup media are located at a greater distance from the original Archive Server,
providing security in case of fire, earthquake, and other catastrophes.
Nevertheless there are also disadvantages:
• Only read access to the documents is possible; modifying and archiving
documents directly is not possible.
• A document may have been stored or modified on the original server but not
yet transmitted to the Remote Standby Server.
• No minimization of downtime with regard to archiving new documents, since
only read access to the Remote Standby Server is possible.
Note: The usage of a Remote Standby Server depends on your backup strategy.
Contact Open Text Global Services to develop a backup strategy that fits your
needs.
13.1 Installation
The installation of a Remote Standby Server is a normal Archive Server installation
and is described in the installation guides of the Archive Server in the ESC
https://esc.ixos.com/1082377796-624.
For complex installations contact Open Text Global Services.
13.2 Configuration
You have to perform several configuration steps on both the original server and the
Remote Standby Server.
The replicated archive now appears also under the Remote Standby Server. You
are now asked to configure the pools of the replicated archive (see “Backups on
a Remote Standby Server” on page 188).
Note: If the original server does not appear in the server list, click the
Synchronize Servers button.
4. Set the server priorities for the replicated archive server:
a. In the Known Servers or in the Archives folder, right-click the replicated
archive and choose Server Priorities.
b. Specify the sequence used to search for documents in this archive.
The order should be: On the original archive server: first the original then
the remote standby server. On the remote standby server: first the remote
standby then the original server.
Details: “Server Priorities dialog box” on page 281
5. Repeat these two steps for each archive you want to replicate.
6. Replicate the disk buffer(s):
a. In the Known Servers folder, right-click the disk buffer you want to
replicate and choose Replicate.
The replicated disk buffer now appears also under the Remote Standby
Server.
b. Repeat this step for each disk buffer you want to replicate.
7. Set up the media for the replicated data:
For disk buffers and hard disk pools, you have to define which partition you
want to use for the replicated data.
Important
These partitions must have the same names as the original partitions.
The replicate partitions need at least the same amount of disk space.
f. Repeat this for each hard disk pool and disk buffer you want to replicate.
8. Define the replication job:
Set the time and date for the replication job Synchronize Replicates in the
Jobs tab (see “Job command and scheduling” on page 72).
Note: On the original server, the old backup jobs can be disabled if no
additional backup is to be written.
9. Connect to the original server and define the server priorities for the replicated
archives as described in step 4.
a. Choose the Servers tab and select the replicated partition in the Device
folder.
b. Choose Utilities > Export Partition in the context-sensitive menu.
c. If both sides of the IXW medium were already initialized you must also
export and remove the second partition.
d. Select the replicated partition and choose Eject in the context-sensitive
menu.
6. Export also the IXW partition(s) from IXW file management. The procedure
depends on the operating system you are using.
a. In the command line, change to directory <IXOS_ROOT>\bin
b. Determine the ID of the IXW medium:
cdadm survey -n +uoi
Important
Do not execute this job during office hours, because this may lead to
problems.
14. Check whether the job ran successfully (see “Checking the execution of jobs” on
page 198).
Important
Do not execute this job during office hours, because this may lead to
problems.
Archive Administration
• check the success of scheduled jobs, in particular of the Write and Backup
jobs
• check for notifications according to your configuration (e-mails, alerts,
execution of files, see “Simplify monitoring with notifications” on page 199)
• check free disk space
2. Check the component groups of the hosts for error indications and warnings. If
an error or warning occurs, identify the component or components that have
caused it. For further details refer to “Using the Archive Web Monitor” on page
209.
Time The date and time when the job was started.
Name The user-specific name of the job.
ID Identifies the execution of a job instance. The number appears on job
initialization and is repeated on job execution.
Status INFO indicates that the job was completed successfully. ERROR indicates
that the job was terminated with an error.
Command The system command and arguments executed by the job.
Message The message generated by Archive Server. It provides more detailed
information about how the job was terminated in case of an error.
To sort the table according to one of these attributes, click on the column title. Use
the buttons to browse the protocol. You can also display the detailed messages for
job instances, and delete protocol entries.
4. Click OK.
See also:
• “Examples of event definitions” on page 201
• “Configuring and assigning notifications” on page 203
Name
A self-explanatory name.
Severity
Specifies the importance. Possible entries:
• Fatal Error
• Error
• Warning
• Important
• Information
• (<any>): All severities are recorded.
Message class
Classifies and characterizes events.
• Database: Database event (not used in this version)
• Server: Events at servers
• Administration: Events which affect administration.
• (<any>): All classes are recorded.
Component
Specifies the software component which issued the message. If nothing is
specified here, all components are recorded (<any>). The most important
components are:
• Administration Server: mainly monitors the execution of the jobs.
See also:
• “Defining events” on page 199
• “Examples of event definitions” on page 201
• “Configuring and assigning notifications” on page 203
HA reattach jukebox
The event occurs in case of a jukebox error, when the system tries to find another
jukebox or SCSI device.
ISO partition has been written
The event occurs when an ISO partition has been written successfully.
IXW partition has been initialized
The event occurs when automatic initialization of an IXW partition has finished
successfully.
Jukebox error: Jukebox detached
The event occurs when the STORM cannot access the jukebox.
More blank media required in jukebox
This event occurs when new optical media have to be inserted in a jukebox.
User-defined Additionally, you can define other events to get notifications if they
events occur. Useful events are:
Job Error
This event records errors that are listed in the job protocol and notifies you with
a particular message. Use this configuration:
Severity: Error
Message class: Server or <any>
Component: Administration Server
Message code: 1
See also:
• “Event Specification” on page 200
• “Configuring and assigning notifications” on page 203
See also:
• “Using variables in notifications” on page 206
• “Assigning notifications to events” on page 206
Name
The name should be unique and meaningful.
Type, Parameter
Select the type of notification and enter the specific parameters. The following
notification types and parameters are possible:
Admin Client Alert
Alerts are simple notifications. They are displayed in the Archive
Administration when clicking the Alerts button and are automatically
deleted after a specific period. The alerts refer to the connected Archive
Server. No parameters are required.
Send an e-mail
E-mails can be sent to respond to an event immediately or during standby
time. If you want to send it via SMS, consider that the length of the SMS text
(which includes Subject and Additional text) is limited by most providers.
Enter the following parameters:
• Mailhost: Name of the target mail server. The connection to the mail
server is produced using SMTP. The entry is mandatory.
• Sender address: E-mail address of the sender. It appears in the from field
in the inbox of the recipient. The entry is mandatory.
• Recipient address: E-mail address of the recipient. If you want to specify
more than one recipient, separate them by a semicolon. The entry is
mandatory.
• Subject of the mail, $-variables can be used. If not specified, the subject is
$SEVERITY message from $HOSTNAME/$USERNAME($TIME).
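The $-variables in the default subject expand in the obvious way. As a sketch, with variable names taken from the text above and sample values that are purely hypothetical:

```shell
# Expand the default subject line. The variable names come from the
# text above; the sample values are hypothetical.
SEVERITY=Error HOSTNAME=arch01 USERNAME=dsadmin TIME="12:30"
subject="$SEVERITY message from $HOSTNAME/$USERNAME($TIME)"
echo "$subject"
```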
See also:
• “Configuring and assigning notifications” on page 203
• “Using variables in notifications” on page 206
• “Assigning notifications to events” on page 206
See also:
• “Notification settings” on page 204
To assign a notification
1. Click the Notifications tab.
2. In the Events list, select the event.
3. In the All available notifications list, right-click the notification and choose
Assign to Event.
Calling this URL opens the Server start page. Click on Archive Web Monitor to
open the client.
You can specify a number of parameters with the URL in order to customize
Archive Web Monitor to meet your requirements (see “Customizing the Archive
Web Monitor” on page 213).
Button bar
The button bar contains buttons to configure the Archive Web Monitor. All
settings here apply only to the current browser session. If you want to reuse your
settings, pass them as parameters when you start the program (see “Customizing
the Archive Web Monitor” on page 213).
Left column: Monitored servers
Here you find a list of the monitored Archive Servers. Click a name. The current
status of this Archive Server is displayed in the other two columns. If you click
on the name again, the status is checked at the Monitor Server and the display in
the Archive Web Monitor is updated if necessary.
Otherwise the status of the components is updated after the specified refresh
interval (see “Refresh interval” on page 212). If it is not possible to establish a
connection to a Web server, a corresponding icon is displayed in front of the
server name.
Tip: If you want to compare the status of different servers, open Archive
Web Monitor for each of them and use the task bar to switch between the
different instances.
Middle column: Components
In a hierarchical structure, you see the groups of components that run on the
interrogated host. Below each component group, you see the associated
components. Click a component to display its current status in the third column.
Click the icon to display the status of the component group on the right. For
information on the components and the possible messages, refer to “Component
status display” on page 213.
The icon in front of the component group name represents a summary of the
individual statuses of the components in the group. If you move the mouse
pointer to an icon in front of a component, abbreviated status information is
displayed in a tooltip even if the detailed information is not displayed in the
third column. In this way you can compare the statuses of two components.
Right column: Detailed information and status
This column contains detailed status information on the selected components or
component groups. If the third column is too narrow to display the information,
move the mouse pointer to the icon to display the status information in a tooltip.
Status line
Provides information on the status of the initiated processes.
Status icons The icons identify the system status at a glance. To configure the icons, click the Icon
Type button. You can choose between bulbs, LEDs, faces, signs, and traffic lights.
The possible statuses are:
• Available without restriction.
• Warning, storage space problems are imminent. You can continue working for
the present but the problem must be resolved soon.
• Error, component not available.
Note: To refresh the display of the host status manually, click on the name of
the host in the left column. In Internet Explorer, you can also refresh the
display with F5 or CTRL+R.
To remove a host
1. Click the Remove Hosts button.
2. Select one or more Archive Servers that you no longer want to monitor and click
OK.
16.2.1 DP space
Monitors the storage space for the Document Pipelines that are used for the
temporary storage of documents during the archiving process. A special directory
on the hard disk is reserved for the Document Pipelines. You can find its location in
the Server Configuration, under General Archive Server settings (COMMON) >
DP settings.
During archiving, the documents are temporarily copied to this directory and are
then deleted once they have been successfully saved. The directory must be large
enough to accommodate the largest documents, e.g. print lists generated by SAP.
The possible status displays are Ok, Warning and Error. In Details you can see the
free storage space in MB, the total storage space in MB and the proportion of free
storage space in percent. The values refer to the hard disk partition in which the
DPDIR directory was installed.
A warning or error message is issued if insufficient free storage space is available.
Possible causes are:
Error during the processing of documents in the Document Pipeline
Normally the documents are processed rapidly and deleted immediately. If
problems occur, the documents may remain in the pipeline and storage space
may become scarce. Check the status of the DocTools (group DP Tools in the
Monitor) and the status of the Document Pipelines in Document Pipeline Info.
Document is larger than the available storage space
If no separate partition is reserved for the Document Pipeline, the storage space
may be occupied by other data and processes. In this case, the partition should
be cleaned up to create space for the pipeline. To avoid this problem, reconfigure
the Document Pipeline and locate it in a separate partition. The partition must be
larger than the largest document that is to be archived.
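The free-space values that the Monitor reports for DP space can also be checked directly on the server. The following is a minimal sketch, assuming DPDIR is set to the Document Pipeline directory of your installation (the path shown under DP settings); the default of /tmp is only a placeholder for demonstration:

```shell
# Sketch: report the free space (in MB) on the partition holding the
# Document Pipeline directory.
DPDIR=${DPDIR:-/tmp}
df -P -k "$DPDIR" | awk 'NR==2 {printf "%d MB free\n", $4/1024}'
```

Compare this figure with the size of the largest document you expect to archive.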
<jukebox_name>
Provides an overview of the partitions for each attached jukebox. The possible
status specifications are Ok, Warning or Error. Warning means that there are no
writeable partitions or no empty slots in the jukebox. Error is displayed if at
least one corrupt medium is found in a jukebox (display -bad- under Devices in
Archive Administration).
The following information is displayed under Details:
16.2.4 DS pools
The Monitor checks the free storage space which is available to the pools (and
therefore the logical archives). The pools and buffers are listed. The availability of
the components depends on two factors: partitions must be assigned, and there must
be sufficient free storage space in the individual partitions.
• The status Ok specifies that partitions are present and sufficient storage space is
available.
• The status Error together with the message No volumes present means that a
partition (WORM or hard disk) needs to be linked to this buffer or pool (see
“Attaching an additional partition to a disk buffer or pool” on page 149 and
“Manual initialization and assignment” on page 130).
• The status Error with the message No writable partitions refers to WORM
partitions and means that the available partitions are full or write-protected.
Initialize and assign a new partition (see “Manual initialization and assignment”
on page 130) or remove the write-protection.
• The status Full refers to disk buffers or hard disk pools and means that there is
no free storage space on the partition. In the case of a hard disk pool, create a
new partition and assign it to this pool (see “Attaching an additional partition to
a disk buffer or pool” on page 149).
In the case of a disk buffer, check whether the job Purge_Buffer has been
processed successfully and whether the parameters for this job are set correctly.
DS DP Tools
The availability of each DocTool in the DS DP is displayed. Under normal
circumstances, the DocTools are started by the spawner when the archive is
started and continue to run for the entire archive session. The status is
Registered if the DocTool has been started. Under Details you can see
whether the DocTool is processing documents (active) or whether it is
unoccupied (lazy).
DS DP Queues
Monitors the Document Service DocTools queues and specifies the number of
documents in each of them. Normally the documents are processed very quickly
and the queues are empty.
DS DP Error Queues
Monitors the Document Service DocTools error queues and specifies the number
of documents in each of them.
DP Tools
The Monitor checks the availability of the DocTools. The status is Registered if the
DocTool has been started. Various messages may appear under Details for the
status:
lazy
The DocTool is unoccupied. There are no documents available for processing.
active
The DocTool is processing documents.
disabled
The DocTool has been locked. To check this status, start Document Pipeline Info.
Here all the queues that are associated with a locked DocTool are identified by
the locked symbol. In general, a DocTool is only locked if an error has occurred.
Once the problem has been analyzed and eliminated, restart the DocTool in
Document Pipeline Info.
Not registered
The DocTool has not been started.
DP Queues
Monitors all queues of the Document Pipelines and specifies the number of
documents in each queue. Precisely one DocTool is assigned to each queue. One
DocTool may be assigned to multiple queues. You can find the same queues in
Document Pipeline Info but with different names.
Usually the documents are processed very quickly by the associated DocTool and
the queues are empty. The status Empty is specified. If there are documents in the
queue, the status is set to Not empty. Under Details, you find the number of
documents in the queue. To analyze this situation, check the availability of the
DocTool under DP Tools and use the functions provided in Document Pipeline Info.
DP error queues
Monitors the error queues and specifies the number of documents in each queue.
There is an error queue for each ordinary queue. Documents in error queues cannot
be processed because of an error. The processing DocTool is specified for each
queue. You can find the corresponding queues in Document Pipeline Info but with
different names.
The error queues are usually Empty. If a DocTool cannot process a document, the
document is moved to the error queue. The status is set to Not empty. Under
Details you can see the number of unprocessed documents. If the same error
occurs for all the documents in this pipeline, then all the documents are gathered in
the error queue. The documents cannot be processed until the error has been
eliminated and the documents have been transferred for processing again with
Restart in Document Pipeline Info.
...provide
The Archive Server could not supply a document to the SAP host.
• In the DocService component group, check the rc component. If error is
displayed, the Archive Server is not available and must be restarted.
• The network connection to the SAP host has been interrupted.
• Check that there is sufficient free storage space in the exchange directory.
...cpfile
A document could not be copied from the SAP host to Archive Server.
• Check the component group DP Space to determine whether there is
sufficient free space available for the document pipeline. Consider one of the
explanations above.
• The network connection to the SAP host has been lost.
• Problems with the exchange directory (shared file transfer directory that
must be available before the SAP host can be accessed).
...caracut
A collective document (outgoing document, OTF) could not be subdivided into
single documents. Check the component group DP Space to determine whether
there is sufficient free space available for the pipeline. Consider one of the
explanations above.
...doctods
One or more documents could not be archived.
• In the DocService component group, check the wc component. If error is
displayed, the Archive Server is not available and must be restarted.
• Check the DS Pools component group. If warning or error is displayed for
the logical archive in which the document is to be archived or for the
corresponding disk buffer, there is no storage space available for archiving.
Please note the comments on DS Pools above.
...wfcfbc and ...notify
These DocTools are used to subdivide collective documents into single
documents. It is unusual for errors to occur here.
...cfbx
The response could not be sent to the SAP system.
• The connection to the SAP system could not be established. Check the log file
cfbx.log for information on possible error causes.
...docrm
The temporary data in the pipeline could not be deleted following the correct
execution of all the preceding DocTools. Start Document Pipeline Info and
remove the documents in the corresponding error queue. You require special
access rights to do this.
1. Windows: From the Start menu, choose All Programs > Livelink ECM -
Archive Server > Livelink ECM - Document Pipeline Info.
UNIX: From the shell, change the directory to <IXOS_ROOT>/bin and enter
startIxdpinfo.
2. In the Document Pipeline Info window, select the Document Pipeline host, e.g.
the Archive Server or Enterprise Scan workstation, from the Host Name list. If
the host is not in the list, enter its name directly in the field.
3. To create the connection:
• click the Connect button, or
• click Connect on the Host menu.
Tip:
To connect to the current host the next time you start Document Pipeline Info,
select Preferences from the Options menu. Select Connect when started.
You may connect to only one host at a time. To connect to a different host, select the
host from the Host Name list or enter the host name directly in the field, and click
the Connect button. The previous connection is discontinued and the new
connection established. To monitor multiple hosts simultaneously, start as many
instances as required of the Document Pipeline Info and connect them to different
hosts.
Document Pipelines
The list in the Document Pipeline Info window contains the following elements:
Input column
This column displays the number of documents waiting in the processing queue.
Usually, documents flow through the pipeline rapidly. For that reason, the
numbers in this column are usually zero.
Error column
This column displays the number of documents in the error queue. These are
documents that could not be processed by the associated DocTool. The archiving
process for the pipeline is not interrupted because faulty documents are removed
from the pipeline and placed here.
<Document Pipeline>
The top level of the hierarchy displays the Document Pipelines, e.g. Store
scanned documents with barcode into SAP (R3SC). Each Document Pipeline
corresponds to a different archiving scenario. A pipeline is made up of
processing queues. They are listed in the order in which the documents are
processed. Each queue is associated with its corresponding DocTool. A DocTool
is a program that performs a particular processing step.
Removed documents
This is a special Document Pipeline. Documents to be permanently deleted are
deposited here. As a rule, these are documents that could not be processed due
to, for example, a customizing error. To move a document to the recycle bin, a
special access right is required.
Queue is enabled
When a queue is enabled, it passes the documents sequentially to the associated
DocTool for processing. A processing queue contains the documents that are
waiting to be processed by the associated DocTool.
Queue is disabled
When a queue is disabled, it does not pass the documents to the associated
DocTool for processing. A queue is disabled if the associated DocTool is
disabled. As several queues can use the same DocTool, all these queues are
disabled if the DocTool is disabled.
<Document>
Documents in a queue are displayed only for users with the appropriate access
right. Expand the list to see the documents. The document name is the name of
the document directory, which is a subdirectory of the pipeline directory on the
host (DPDIR), and contains all components of that document.
Status line
The status line appears in the lower border of the window. When a queue is
selected, the status line displays the technical name of the corresponding
DocTool. If some Document Pipelines are hidden, the message Hidden Pipelines
appears in the lower right of the status line.
If an error occurs, a small red dot will appear on the Document Pipeline symbol. The
queue where the error has occurred will be marked with a small red arrow. The
documents affected will be deposited in the associated error queue.
Menus and toolbar
Above the main display of the Document Pipelines, you find the toolbar and
menus. They contain the following functions:
Host > Refresh Updates the information for the selected queue.
All information in the Document Pipeline Info
window is also updated at regular intervals.
Host > Statistics... Displays a window containing general statistics
for the Document Pipelines running on the host.
See also “Statistics” on page 227
Host > Restart all Resubmits all documents in error queues to the
queues from which they were removed due to
errors. In other words, it restarts the processing
of documents at the point where processing was
interrupted.
Host > Pipelines view... Opens the dialog for showing and hiding the
Document Pipelines.
See also “Pipelines view” on page 227
Host > Print report... Prints information displayed in the Document
Pipeline Info window. You can also print to a
file. Thus you can log the current status of the
Document Pipelines.
See also “Print report” on page 228
17.1.3 Preferences
To specify the basic settings for Document Pipeline Info, open the Options menu
and click Preferences.
In the General tab you find the following settings:
Supported languages
Select the language of the Document Pipeline Info window. Your selection will
be effective after you restart Document Pipeline Info.
Network timeout in seconds
Specify how long the Document Pipeline Info will wait for a response from the
host after it has sent a query.
Connect when started
Select this option to enable Document Pipeline Info to reconnect to the host to
which it was last connected.
The dialog box displays all Document Pipelines configured for the connected host.
Hidden pipelines are marked with a small red 'X' on their symbols. To hide or show
pipelines, right-click the pipeline and choose the appropriate option. Your choices
become active as soon as you click OK.
17.2 Statistics
The Statistics option in the Host menu displays general statistical information about
the pipelines on the connected host since the session began. Date and time of the
session start are given, followed by the number of documents processed, the
number of documents currently being processed, and the number of changes that
were made manually to the processes.
In both the toolbar of the statistics window and the File menu, you find the
following options:
Save as
Saves the file in HTML format. Enter a name and the path for the file.
Refresh
Updates the information.
Print
Prints the statistics.
Close
Closes the dialog box.
No documents
The report does not include the names of documents located in the queues.
All documents
The report includes the names of all documents located in each queue. Special
access rights and logon are required; see “Data security and access rights” on
page 229.
Only visible documents
The report includes only the names of documents currently visible in the
Document Pipeline Info window. To display documents in the window, use
Documents - Show; see “Options for queues” on page 230.
Hidden pipelines
The report includes information on hidden pipelines. These pipelines are not
visible in the Document Pipeline Info window.
Collapsed pipelines
The report includes information on collapsed pipelines. These pipelines are
marked with a plus sign in front of the pipeline symbol.
DocTool-info
The report includes technical information about the DocTools of each queue.
This information corresponds to the detailed information given in the status line.
If you want to save the report in a file, click Preview and save the file.
17.4.2 Logon
Document Pipeline Info asks you to log on the first time you invoke a rights-
protected command. Once you log on, you will not be asked to log on again for the
remainder of the session with that host.
To log on, the following entries are required:
Username
A user name registered on the Administration Server. Depending on the task,
the default user is dpuser or dpadmin.
Password
The password registered on the Administration Server for the specified user.
If the logon operation succeeds, a message informs you about granted rights.
Documents
Displays a submenu with options that affect all documents in the processing
queue or in the error queue. Executing these options requires special access
rights.
Caution
The Remove options permanently delete the documents. The action is
irreversible. Before you select this option, make sure that the documents
involved are in fact irreparable. It is strongly recommended that you
contact Open Text Customer Support before you use this option.
Show
Disables the queue and displays a list of the documents currently in the
processing queue and in the associated error queue. The name displayed for a
document is the name of the directory in which all components of an
individual document are stored during processing. The document directories
reside on the host in a directory specified by the variable DPDIR. If you select
a document, the full path is displayed in the status bar.
Remove all (documents) on input
This option removes the documents from the normal queue (the Input
column displays how many there are) and deposits them in the Removed
documents pipeline. From this pipeline, the documents will be permanently
deleted. Use this option only when those documents are not to be processed
further.
Remove all (documents) in error
This option removes the documents from the error queue (the Error column
displays how many there are) and deposits them in the Removed documents
pipeline. From this pipeline, the documents will be permanently deleted. Use
this option only when the documents cannot be further processed due to their
faulty condition.
Caution
Remove permanently deletes the selected document. The action is
irreversible. Before you select this option, make sure the document
involved is in fact irreparable. It is strongly recommended that you
contact Open Text Customer Support before you use this option.
17.4.5 Protocol
Users with the right to disable queues may also show documents and display their
protocol. This protocol is a history of the processing performed on an individual
document by the DocTools.
In both the toolbar of the protocol window and the File menu, you find the
following options:
Print
Prints the protocol.
Save as
Saves the file in HTML format. Enter a name and the path for the file.
Refresh
Updates the protocol with the latest processing information.
18.1 Auditing
The auditing feature of the Archive Server traces two kinds of events:
• It records the document lifecycle, or history of a document - when the document
was created, modified, migrated, deleted etc. These are the events of the
Document Service.
• It records administrative activities - who changed the configuration of the system
and when by using the Archive Administration and the Server Configuration.
Important
Administrative changes are only recorded if they are done with the Archive
Server administration utilities. To get complete audit trails, make sure that
other configuration methods, such as editing the Registry or files directly,
cannot be used. At the very least, such activities must be logged by other means.
The auditing data is collected in separate database tables and can be extracted
from there to files with the exportAudit command; the files can then be evaluated
in different ways.
You can define the timeframe for data extraction. Without these dates, you get all
audit data up to the current date and time.
-s date start date and time of the timeframe, date has the format
YYYY/MM/DD:HH:MM:SS
-e date end date and time of the timeframe, date has the format
YYYY/MM/DD:HH:MM:SS
-S Extracts the document information. Without any other options, you get two
files in SAP printlist format, containing all information on documents that
have been deleted in the given timeframe. The filename is STR-<begin
date>-<end date>-DEL.<ext>, where the extensions are .prt and .idx.
The extracted data remains in the database.
-A Extracts the audit information of administration activities. The resulting file
is ADM-<begin date>-<end date>.txt in CSV format, and the data is separated
by semicolons if no other options are set.
With further options, you can adapt the output to your needs.
-a Only relevant for document lifecycle information (-S is set). Extracts data
about all document-related activities in the given timeframe. The generated
file name reflects this option with the ALL indicator: STR-<begin date>-
<end date>-ALL.<ext>.
-x Deletes data from the database after successful extraction. This option is not
supported if -a is set, so only information on deleted documents can be
removed from the database after extraction.
-o ext Defines the file format. For example, with -o csv you get a .csv file for
evaluation in Excel, independently of the extracted data.
-h Adds a header line with column descriptions to the output file.
-c sepchar Defines the separator character directly (e.g. -c , ) or as an ASCII number in
0x<val> syntax (e.g. -c 0x7c). The default separator is the semicolon.
Consider changing the separator if it does not fit your Excel settings.
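Taken together, a typical extraction call can be assembled as follows. This is a hedged sketch: the exportAudit command is only available on an Archive Server, so the example only assembles and prints the command line; adapt the dates and options to your needs.

```shell
# Hypothetical exportAudit invocation: deleted-document information for
# the first half of 2006, written as a CSV file with a header line.
START="2006/01/01:00:00:00"
END="2006/06/30:23:59:59"
echo "exportAudit -s $START -e $END -S -o csv -h"
# The date format required by -s and -e can be produced with:
date "+%Y/%m/%d:%H:%M:%S"
```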
18.2 Accounting
For smaller firms, it may be more economical to lease an archive from a service
provider than to buy one. The provider must be able to list and analyze all data
concerning the costs of using the archive in order to create a precise invoice.
Providing this data is the purpose of the accounting functionality.
Accounting is performed in the following steps:
Enable Accounting
Activates the logging of accounting data in accounting files.
Path to accounting data files
Defines the directory in the file system where the accounting data will be stored.
Since the accounting data is updated frequently during runtime, you should
define a fast local partition with sufficient capacity.
Accounting library
Displays the employed library. You cannot edit this field.
Separator for columns in accounting files
Defines the separator between the columns in the accounting files. The default
separator is the tab. You cannot edit this field.
Pool for the accounting data
If you want to archive old accounting data, you need a pool where the data is
stored. This pool is defined here. Enter the pool name in this syntax:
<Archive>_<Pool>, for example, DD_HD for the HD pool in the DD archive. If the
field is not displayed, select View > Display undefined values in the Server
Configuration. Then right-click the field and choose Define value. Now you can
enter the pool name.
Days until organizing accounting files
Defines the number of days after which the old data is archived or deleted. You
should only delete or archive the old data after you have evaluated it. The job
processes only data which is older than this number of days.
Method to organize old accounting files
Defines the behavior of the Organize_Accounting_Data job:
let them stay in the file system
The old data is kept in the file system, nothing happens.
archive into given pool
The accounting job archives the old data in the pool you specified in the field
Pool for the accounting data.
delete them
The old data is deleted irretrievably. Make sure that you have evaluated it beforehand.
Expiration time of cookies (in ASCII format)
Defines the expiration time of the cookies that were set.
SEARCH 14
SEARCHFREE 15
DGET 16
GETATTR 17
ANALYZE_SEC 34
RESERVEID 35
SETDOCFLAG 36
If you archive the old accounting data, you can also access the archived files. The
Organize_Accounting_Data job writes the DocIDs of the archived accounting files
into the ACC_STORE.CNT file which is located in the accounting directory (defined in
Path to accounting data files).
To restore archived accounting files, you can use the command
dsAccTool -r -f <target directory>
The tool saves the files in the <target directory> where you can use them as usual.
18.3.1 Configuration
The configuration parameters for the STORM statistics are defined in the Server
Configuration under Storage Manager (STORM) > Parameters for STORM
Statistics. You can define the following settings:
Path to files containing the statistics information
Enter the path of the directory where the statistics files should be created. The
default value is <IXOS_ROOT>\var\stats.
Enable Statistics
Check this option to enable STORM statistics.
Additional values can be set in the 'statistics' section of server.cfg. Reasonable
default values ensure that the statistics feature is available even if this section is
missing.
The following additional keys can be set:
[ box ]
id = CDsim
changer = 72596 1 0 0 0
drv_0 = 72597 9 18
drv_1 = 72597 0 0
drv_2 = 72597 0 0
drv_3 = 72596 0 0
[ endsection box ]
[ box ]
id = WORMsim
changer = 72595 3 0 0 0
drv_0 = 72598 9 11
drv_1 = 72598 0 0
drv_2 = 72597 0 0
drv_3 = 72597 0 0
[ endsection box ]
[ volumes ]
d0 = "IXOS_A1_cd_0004B" 3 1034927362 72596 0 0 0 0
e338d009 = "IXOS_A1_cd_0005A" 4 1034927844 72596 0 0 0 0
[ general ]
build = IXOS-eCONserver STORM 5.5A (Revision 5.5.2002.1023)
uptime = 72601
[ endsection general ]
[ statistics ]
version = 1
period = 600
path = "D:\IXOS-eCON/var/stats\jbd_stat"
created = 1035535135
intermediate = TRUE
[ endsection statistics ]
Statistics section
version        layout version identifier for parsing clients
period         interval at which statistics are collected
created        timestamp when these statistics were created
path           absolute path to the statistics file
intermediate   written by the DPC (as opposed to at shutdown)
Volumes section
Crc32ofUVI "HS-name" vid creationTime onlineTime to_medium_requests in_medium_requests I/O_to I/O_from
Box section
id             unique name of the jukebox
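As an illustration of how such a statistics file can be consumed, the period value of the [statistics] section can be extracted with standard tools. This parser is only a sketch, not part of the product; the sample reproduces the excerpt shown above:

```shell
# Write a minimal sample of the [statistics] section and extract the
# "period" key with awk. Adapt the file path to your installation.
cat > /tmp/storm_sample.cfg <<'EOF'
[ statistics ]
version = 1
period = 600
intermediate = TRUE
[ endsection statistics ]
EOF
awk '/\[ statistics \]/ {in_s=1}
     in_s && /^period/ {print $3}
     /\[ endsection statistics \]/ {in_s=0}' /tmp/storm_sample.cfg
```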
Archive Administration Utilities
The Archive Administration Utilities are the Archive Web Monitor, the Document
Pipeline Info and the Archive Administration. All three programs are explained in
detail in this documentation. You can find a short summary of their use in
“Everyday monitoring of the archive system” on page 197.
System tools
The most important error messages are also displayed in the Windows Event
Viewer or in the UNIX syslog. This information is a subset of the information
generated in the log files. Use these tools to see the error messages for all
components in one place.
You can prevent the transfer of error messages to the system tools in general or for
single components with the setting Write error messages to Event Log / syslog,
see “Log settings for the Archive Server components (except STORM)” on page 257.
To start the Windows Event Viewer, click
Start > Control Panel > Administrative Tools > Event Viewer.
The syslog file for UNIX is configured in the file /etc/syslog.conf.
Important
Stop the Spawner before you delete the log files!
On client workstations, other log files are used. For more information, refer to the
Livelink Imaging documentation.
UNIX
$ORACLE_HOME/network/log/listener.log (log file)
$ORACLE_HOME/network/trace (trace file)
$ORACLE_HOME/rdbms/log/*.trc/* (trace files)
Starting
Windows Services
To start the Archive Server using the Windows Services, proceed as follows:
Stopping
Windows Services
To stop the Archive Server components using the Windows Services, proceed as
follows:
1. On the desktop, right-click the My Computer icon and choose Manage.
The Computer Management window now opens.
2. Open the Services and Applications directory and click Services.
3. Right-click the following entries in the given order and choose Stop:
• IXOS Spawner (archive components)
• Oracle<ORA_HOME>TNSListener (Oracle database)
• OracleServiceDS (Oracle database) or MSSQLSERVER (MS SQL database)
Command line
To stop the Archive Server components from the command line, enter the following
commands in this order:
You can also subsequently start or shut down individual components with these
specific scripts. <IXOS_ROOT> is the Archive Server installation directory. Under
UNIX, this directory can also be accessed with the soft link /usr/ixos-archive.
Starting
Use the commands listed below to restart the Archive Server after the archive
system has been stopped without shutting down the hardware.
1. Log on as root.
2. Start the archive system including the corresponding database instance with:
HP-UX /sbin/rc3.d/S910ixos start
Stopping
Enter the commands below to terminate the Archive Server manually.
1. Log on as root.
2. Terminate the archive system and the database instance with:
HP-UX /sbin/rc3.d/S910ixos stop
2. Check the status of the process with spawncmd status (see “Analysing
processes with spawncmd” on page 253).
3. Enter the command:
spawncmd {start|stop} <process>
Description of parameters:
{start|stop}
To start or stop the specified process.
<process>
The process you want to start or stop. The name appears in the first column of
the output generated by spawncmd status.
Important
You cannot simply restart a process if it was stopped, regardless of the
reason. This is especially true for Document Service, since its processes must
be started in a defined sequence. If a Document Service process was
stopped, it is best to stop all the processes and then restart them in the
defined sequence. Inconsistencies may also occur when you start and stop
the monitor program or the Document Pipeliner this way.
• stopall
You can execute the commands startall, stopall, exit and status in the
Archive Administration, with the corresponding commands in the File > Spawner
menu.
Process status
To check the status of the processes, do one of the following:
• In the Archive Administration, on the File menu, choose Spawner > Status.
• Enter spawncmd status in the command line.
A brief description of some processes is listed here:
• T means the process was terminated. This is the normal status of the
processes chkw (check worms), stockist, and dsstockist; and under
Windows, additionally db and testport. If any other process has the status
T, it indicates a possible problem.
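As a sketch, this check can be scripted. The two-column sample below merely stands in for real spawncmd status output, whose exact column layout may differ on your system; adjust the awk fields accordingly:

```shell
# Flag processes reported with status T, ignoring processes for which
# T is the normal status. Replace the heredoc sample with real output,
# e.g.: spawncmd status | awk '...'
cat <<'EOF' | awk '$2 == "T" && $1 !~ /^(chkw|stockist|dsstockist|db|testport)$/ {print "check process:", $1}'
docserver R
chkw T
stockist T
jbd T
EOF
```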
1. Check in the Archive Web Monitor in which component of the Archive Server
the problem has occurred.
2. Locate the corresponding log file in Explorer. The protocol is written
chronologically and the last messages are at the end of the file.
Note: The system might write several log files for a single component, or
several components are affected by a problem. To make sure you have the
most recent log files, sort them by the date.
Log file analysis
When analyzing log files, consider the following:
• The message class - that is the error type - is shown at the beginning of a log
entry.
• The latest messages are at the end of the file.
Note: In jbd.log, old messages are overwritten if the file size limit is
reached. In this case, check the date and time to find the latest messages.
• Messages with identical time labels normally belong to the same incident.
• The final error message denotes which action has failed. The messages before it
often show the reason for the failure.
• A system component may fail due to a previous failure of another component.
Check all log files that have been changed at the same or similar time. The time
labels of the messages help you to track the causal relationship.
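Sorting log files by date and tailing the newest one, as suggested above, can be sketched as follows; the demo directory and messages are placeholders for the real log directory of the Archive Server:

```shell
# Find the most recently changed log file and show its last messages.
LOGDIR=${LOGDIR:-/tmp/demo_logs}
mkdir -p "$LOGDIR"
printf 'ERR old message\n'    > "$LOGDIR/a.log"
sleep 1
printf 'ERR latest message\n' > "$LOGDIR/b.log"
newest=$(ls -t "$LOGDIR"/*.log | head -n 1)
tail -n 20 "$newest"
```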
• permanent log levels that cannot be changed; messages are always logged,
• log levels for troubleshooting,
• log levels for internal development.
These levels are not described in this documentation.
Permanent log levels
The following incidents are always written to the log files, and usually also to the
Event Viewer or Syslog. You cannot switch off the corresponding log levels.
• Fatal errors indicate fatal application errors that mostly lead to server crashes
(message type FTL).
• Important errors (message type IMP).
• Security errors indicate security violations such as invalid signatures (message
type SEC).
• Errors indicate serious application errors (message type ERR). This level can be
switched off for the BASE package.
• Warnings indicate potential problem causes (message type WRN). This level can
be switched off for the BASE package and the Document Service.
Log levels for troubleshooting
The following log levels are relevant for troubleshooting. You can change them in
the Server Configuration, see “Setting log levels” on page 256.
Important
Higher log levels can generate a large amount of data and can even slow
down the archive system. Reset the log levels to the default values as soon as
you have solved the problem. Delete the log files only after you have
stopped the spawner.
Time setting
In addition to the log levels, you can define the time label in the log file for each
component. Normally, the time is given in hours:minutes:seconds. If you select
Log using relative time, the time elapsed between one log entry and the next is
given in milliseconds instead of the date, in addition to the normal time label. This
is used for debugging and fine tuning.
The STORM uses trace levels 0 to 4: 0 means no logging, and level 4 is the highest.
Level 1 is the default trace level for most components. Level 4 is the default for the
last words log files. Contact Open Text Customer Support before you change
any trace level!
Archiving in DS: wc.log
Windows
Click Start > Programs > Livelink ECM - Archive Server > Livelink
ECM - Archive Administration.
UNIX
In the shell, go to the directory <IXOS_ROOT>/bin and issue the
command startIxadm. Condition: the X-Terminal must be available,
the environment variable DISPLAY must be set, and the IXOS profile
must be active.
There is no logon window for the start window; you log on later, when you connect
to a server.
The Archive Administration start window opens.
In this window, you add the Archive Servers for administration. To structure the list
for a better organization, you can create meaningful groups.
Usage Most of the commands in the Archive Administration are available on context
menus (shortcut menus). Some commands are also available on menus in the menu
bar, via buttons or via access keys. The shortcut menu is the preferred method to
access commands.
Context menu The following context menu items are available in the start window:
Add
To extend the servers list:
Host
Adds an Archive Server to the selection.
Group
Adds a server group to simplify your administration tasks.
Edit
You can change the name of the group or the server and the associated
description, depending on which item you selected.
Cut
Copies the specifications of the selected server or group to the clipboard.
Paste
Inserts the specifications of the server or group from the clipboard at the cursor
position.
Delete
Deletes the selected server or group.
Connect
Establishes the connection to the selected Archive Server.
To add a group
1. Right-click the Default group, or the group to which the new group will
belong, and choose Add > Group.
2. Enter:
Name
Name of the group. Recommendation: maximum length of 32 characters, no
umlauts.
Description
Description of the group. Recommendation: maximum length of 64
characters, no umlauts.
To add a server
1. Right-click the group and choose Add > Host.
2. Enter:
Host
Exact name of an existing Archive Server. The name is resolved with DNS;
you can also use the IP address or the fully-qualified host name.
Description
Description of this server, for example, what it is used for, or where it is
located. Recommendation: maximum length of 64 characters, no umlauts.
Utilities menu
Contains the utilities defined for the administered Archive Server, see “Utilities”
on page 305.
Help menu
Opens the help and shows the version information.
Buttons These buttons are available in the administration window:
Login information: To the right of the buttons, you see the name and operating system of the
administered Archive Server and the current user's name.
Notifications Configuration of the message service. Here you define events (e.g.
errors, warnings) and message types (e.g. e-mail) and specify which
message is to be sent when a specific event occurs (see “Notifications
tab” on page 301).
Users Administration of users, user groups and rights (see “Users tab” on
page 302).
Status bar Displays the status of the current task and the system time at the administration
client. The system time is primarily used for job planning (see “Job protocol” on
page 198). You can define the system time in different ways (see “Defining general
settings” on page 269).
Show warnings
Select this check box to display the warning Long time consuming task. The
warning appears when you print job protocols or policies. You can disable this
warning directly in the message with the option Don't show this dialog in the
future.
Save GUI settings
Select this check box to save the settings of the graphical user interface of the
Archive Administration.
Use vaults on eject
Select this option if you want to enter the vault information immediately when
you eject an optical medium.
Job protocol page size
Specify the number of job protocol entries to display per page. Each page of
the job protocol displays this number of entries. This only affects the number
of entries displayed, not the total number of entries in the protocol.
No maintenance mode
No restrictions to access the server.
Documents cannot be deleted, errors are returned
Deletion is prohibited for all archives, no matter what is defined for the archive
access, and a message informs about deletion requests.
left pane (Directory area): All functions that are important for configuration
operations and information purposes are located below the name of the
administered server.
Archives
Logical archives as well as the media pools and assigned hard disk
buffers and jobs for each archive.
Cache Partitions
Cache partitions of the Document Service cache.
Buffers
Disk buffers for all pools except for HDSK pool.
Devices
Storage devices used by Archive Server: physical jukeboxes, virtual
jukeboxes of storage systems, hard disks.
R/3 Systems
Connection to the SAP systems.
R/3 Gateways
Subnet gateway addresses via which the Archive Server accesses the SAP
systems.
Vaults
Physical storage locations of media that have been removed from the
jukebox.
Known Servers
Other Archive Servers which are known to the administered Archive
Server and whose documents can also be accessed.
right pane (Detail area): Detailed information for the directory or item that is
selected in the directory area.
If you expand the Archives directory in the directory area you see the following
items:
Context menu
Jobs The commands relating to the selected job are displayed in a context
menu. These commands can also be found in the Jobs tab.
Edit
Modifies the scheduling, see “Job command and scheduling” on
page 72.
Enable
Activates a deactivated job. The job is then executed as scheduled.
Disable
Deactivates an activated job. The job is not executed.
Run Now
Starts the selected job immediately, irrespective of whether it is
activated or deactivated.
Messages
Displays the log messages for the selected job.
Utilities Choose Analyze Security Settings to display the security settings for
the selected archive, see “Analyze Security Settings” on page 95.
Description
Detailed specification of the selected archive. Click Edit to modify the
description.
Security
Security settings for the selected archive. Click Edit to modify the configuration.
See also: “Defining access to the archive” on page 47
Configuration
Document Service configuration of the selected archive. Click Edit to modify the
configuration.
See also: “Editing the archive configuration” on page 49.
Retention
Retention settings and handling of deleted documents. Click Edit to modify the
configuration.
See also: “Editing retention settings” on page 52
Assigned certificates for authentication
These certificates are used for leading applications and components sending
URLs to the Archive Server. For secure communication you can use signed
URLs: SecKeys. To verify the SecKey, a certificate with a public key is required.
These certificates are sent from the leading application or imported, stored in the
key store and displayed in this table. You can enable, disable, delete, and view
the certificates.
See also: “SecKeys / Signed URLs” on page 79.
Assigned cache path
Cache servers
Indicates the cache servers that the selected archive can work with. The Archive
Cache Server stores archived documents on its local disk and forwards these to
the client on request. The table columns are explained in the description of the
Add Cache Server dialog box (see “Connection to a cache server” on page 282).
R/3 system config.
Indicates the SAP systems for which the Archive Server is configured (see “SAP
system configuration” on page 98).
Archive
The name of the selected archive.
Original server
Archive Server containing the original archive.
Use servers in this order
The server at the top of this list is accessed first when a document is requested. If
access is refused, the request is routed to the second server in the list. The list
must contain at least one server. The original server must be present in this list.
You can specify that a server first searches in its own replicated archives before
searching in the original archive on the original server.
Other avail. servers
The servers that are also configured as remote standby servers and hold
replicates of this archive. If they are not selected in this list, they are not
used when documents are requested from this archive.
Subnet mask
Specifies the sections of the IP address that are evaluated. Thus you can restrict
this evaluation to individual bits of the corresponding address section.
Subnet address
Specifies the complete subnet address where the cache server is located.
Cache server
Name of the Archive Cache Server which is to be accessed.
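The mask-and-address evaluation described above can be illustrated with Python's ipaddress module. This is only a sketch of the matching principle, not the Archive Server's implementation, and all addresses are hypothetical examples:

```python
import ipaddress

def matches_subnet(client_ip: str, subnet_address: str, subnet_mask: str) -> bool:
    """Check whether a client IP falls into the configured subnet.
    The subnet mask selects which bits of the address are evaluated."""
    network = ipaddress.IPv4Network(f"{subnet_address}/{subnet_mask}", strict=False)
    return ipaddress.IPv4Address(client_ip) in network

# Hypothetical configuration: clients in 10.1.2.0/24 use the cache server.
print(matches_subnet("10.1.2.77", "10.1.2.0", "255.255.255.0"))  # True
print(matches_subnet("10.1.3.77", "10.1.2.0", "255.255.255.0"))  # False
```

A mask of 255.255.255.0 evaluates only the first three address sections, so all clients in that subnet are routed to the same cache server.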
original
Original partition
backup
Backup of an original partition
replicate
Replicate partition on a remote standby server
missing
Replicate partition for a replicated archive is missing and must be
initialized
Priority Displays the sequence in which partitions are to be written. Only
relevant for IXW and hard disk pools.
Status Displays the current status of the partition. The individual statuses can
also be combined.
• M (Modified): This is set automatically if documents have been written
to the partition since the last status change.
• L (Locked): The partition is read- and write-protected. The
administrator sets this status in the context menu with Change Status.
• W (Write-locked): The partition is write-protected. The flag is set
automatically when the partition is full. The administrator can also set this
status with Change Status.
• O (Offline): This is set automatically if the partition is not currently
available in the archive.
• F (Full): This is automatically set if the partition is full.
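Because the statuses can be combined, a partition status such as MWF is effectively a set of flags. A small illustrative model in Python (not the server's internal representation):

```python
# Status letters as described above; a partition can carry several at once.
STATUS_NAMES = {
    "M": "Modified",
    "L": "Locked",
    "W": "Write-locked",
    "O": "Offline",
    "F": "Full",
}

def describe_status(status: str) -> list[str]:
    """Expand a combined status string such as 'MWF' into readable names."""
    return [STATUS_NAMES[flag] for flag in status]

print(describe_status("MWF"))  # ['Modified', 'Write-locked', 'Full']
```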
To sort the partition list on any column in ascending or descending order, click the
corresponding column title. A small arrow displayed next to the column name
indicates whether the list is sorted in ascending or descending order.
Detach Partition Removes a partition from the pool or disk buffer. This is used if a
different partition is to be used for a disk buffer. Any data present is not
lost when the detach operation is performed. Instead, it remains
available for processing until it is stored on an optical medium.
Change Status Changes the access status of partitions. The current status is displayed
in the Status column.
Clear Backup Status Deletes the entry for the selected replicated or backup partition on the
original server. The command is used if the partition was defective on
the remote standby server and has been removed from the jukebox.
The entry reappears in the list if a new backup partition is successfully
written with the next backup job.
Filter Filters on partition names. You can use wildcards (* and ?) when
specifying the name. The filter mechanism distinguishes between
uppercase and lowercase.
Utilities Various utilities are available depending on the selected pool type.
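The filter semantics described above, case-sensitive matching with * and ? wildcards, behave like Python's fnmatch.fnmatchcase. A small illustration with hypothetical partition names:

```python
import fnmatch

partitions = ["A1_0001", "A1_0002", "B2_0001", "a1_0003"]

# fnmatchcase matches case-sensitively, like the partition filter;
# * matches any sequence of characters, ? exactly one character.
print([p for p in partitions if fnmatch.fnmatchcase(p, "A1_*")])
# ['A1_0001', 'A1_0002']  -- 'a1_0003' is excluded by case sensitivity
print([p for p in partitions if fnmatch.fnmatchcase(p, "??_000?")])
# ['A1_0001', 'A1_0002', 'B2_0001', 'a1_0003']
```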
default
Contains all document components which are not assigned explicitly.
ASCII_ANNO
Common annotations. Annotations are the only document components
which the client user can delete from the archive at any time if he has the
required rights.
OLE2_ANNO
Annotations that are appended via OLE 2 applications
notice
Notes
Migration
For migration of documents within one archive.
Available partitions
List of unassigned partitions.
Priority
Shows the next assignable priority. If you select more than one partition, they are
automatically assigned priorities in sequence. The priority of the partitions
specifies the sequence in which they are written to. A partition with priority 1 is
written first. When this is full, a partition with priority 2 is written and so on. Do
not assign successive priorities to the two sides of a single IXW medium.
Documents might still be accessed from the full partition for reading while other
documents have to be written to the other side. The IXW medium must be
physically turned and this increases the access times.
Write-locked
Prevents write access to the partition. Use it if you suspect a defective
medium to prevent further writing to it.
Locked
Prevents both read and write access to the partition.
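The write sequence that the priorities and lock statuses imply, write to the lowest-numbered partition that is neither full nor locked, can be sketched as follows. The partition data is hypothetical; the actual scheduling is done by the Archive Server:

```python
def next_write_partition(partitions):
    """Return the writable partition with the lowest priority number.
    Partitions that are full ('F'), write-locked ('W') or locked ('L')
    are skipped."""
    writable = [p for p in partitions
                if not set(p["status"]) & {"F", "W", "L"}]
    return min(writable, key=lambda p: p["priority"], default=None)

partitions = [
    {"name": "P1", "priority": 1, "status": "MF"},  # full, skipped
    {"name": "P2", "priority": 2, "status": "M"},
    {"name": "P3", "priority": 3, "status": ""},
]
print(next_write_partition(partitions)["name"])  # P2
```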
Cache Partitions
List of the default cache partitions.
Cache Paths
List of the cache paths.
Note: Any changes to this list render the current cache contents invalid, i.e. the
cached files are no longer retrievable. The changes take effect on the next
Archive Server restart.
See also:
• “Local cache” on page 36
Create Buffer Creates a new disk buffer. A disk buffer can also be created directly
when a pool is set up (see “Purge Buffer settings” on page 34).
Edit Buffer Modifies the parameters and scheduling of the Purge_Buffer job which
is used to purge the selected disk buffer (see “Purge Buffer settings” on
page 34). This command is also available in the Archives directory.
Delete Buffer Deletes the selected disk buffer.
If you select a buffer, the buffer partitions are listed in the detail area. The table
columns and the context menu are identical with those available for a pool partition
table (see “Selected pool” on page 283).
Unavailable Lists the partitions which were not available when a user tried to
access them.
See: “Re-establishing document access” on page 157
Configure SCSI controlled / fibre channel connected jukeboxes
If you want to connect a new jukebox or add additional slots to
an existing jukebox, reconfigure the connection to the Storage
Manager. Choose the option that corresponds to your hardware.
For a detailed description of automatic jukebox detection, refer to
“Automatic detection of physical jukeboxes” on page 40.
Start / Stop STORM Starts or stops the Storage Manager.
Properties Displays information on the selected jukebox, such as device
type, number of slots and used slots, jukebox status, STORM
server name and run level (mode).
Utilities The following utilities are available for jukeboxes containing
optical media:
• Import not tested media, see “Offline import” on page 153
• Test Slots in Jukebox, see “Testing jukebox slots” on page 153
If you right-click a partition in the detail area, the following commands are
available:
Create Creates a new hard disk partition in the archive system, i.e. a partition
which has already been set up at operating system level is declared to the
Archive Server. Once created, a hard disk partition can be assigned to a hard
disk pool or disk buffer.
See: “Creating a hard disk partition” on page 32
A hard disk partition can also be created when a pool is created. In this case,
the partition is automatically assigned to the pool or disk buffer.
Rename Renames a partition. The new name has no effect on documents that are
stored on this partition. This option is used if you want to import a
replicated partition.
See: “Rename HD Partition” on page 294
Filter Filters on partition names. You can use wildcards (* and ?) when specifying
the name. The filter mechanism distinguishes between uppercase and
lowercase.
Utilities You can select the utilities relevant for the selected partition.
23.4.2 Jukeboxes
Physical jukeboxes for optical media and virtual jukeboxes for storage systems are
handled in the same way. Look at the Media column to identify the jukebox type.
Slot Number of the slot in which the partition is located. The slots are
numbered differently depending on the manufacturer. In general
the slot number is encoded in the jukebox hardware.
Media The media format is indicated here. HD-WO indicates a virtual
jukebox of a storage system.
If you have to manage many partitions, you can use the filter options by clicking
Define Filter, see “Defining the filter” on page 294. If you have more partitions than
shown on one page, use the Browse buttons below the list to navigate between
pages.
The following commands are available for partitions:
You can either store the partitions in a vault automatically when a medium is
removed from the jukebox or assign the partitions manually with Add partition, see
“Assigning partitions to a vault” on page 156.
Replicate This command is only required if you are working with a remote
standby server. It duplicates the structure of an archive or a disk
buffer (but not yet the data it contains) from the original server to
the remote standby server. Here the original server is the known
server and the remote standby server is the administered server.
The buffer or the archive and its associated pools are copied to the
administered server; when this is done, an identical structure is
created. The partitions must be assigned separately on the
administered remote standby server. After this, the
Synchronize_Replicates job is used to transfer the contents of the
archives and buffers.
Delete Replicate Removes the selected replicate of a disk buffer or archive from a
remote standby server. When an archive is deleted, so too are its
pools. Partitions which were assigned to these pools are separated
from the pools but are not deleted. They can be attached to a
different pool.
Server Priorities In a remote standby configuration, documents can be requested
from both the original server and the remote standby servers. You
use this command to define the sequence in which the servers are
accessed for each replicated archive. It is usually quickest and most
efficient to access the nearest server.
It is not important on which server you specify the server access
sequence. The setting affects all the entered servers.
Jobs table
The list presents the name, the associated command, the scheduling and the runtime
limitations of each job. The icon at the start of the line indicates a job's Scheduler
status. The overall status of the scheduler is displayed above the job table (running
or stopped).
The job instance is waiting for a resource that is currently being used by
another job instance and for which no (additional) parallel accesses are possible.
This icon is displayed when you have chosen Stop Archive Processes from
the menu. The server itself determines the time at which a controlled
termination of the job is possible.
You can add, edit and delete jobs. Additionally, the following context menu
commands are available:
Some jobs cannot be interrupted, and the menu command is not available:
• backup
• backup_pool
• synchronize
• Write_CD
Protocol Displays the log messages for all jobs, see “Job protocol” on page 198
Stop/Start Deactivates or activates the Scheduler, i.e. the component which performs
Scheduler job scheduling. If the scheduler is active, all active jobs are executed as
scheduled.
See also:
• “Simplify monitoring with notifications” on page 199
• “Defining events” on page 199
• “Configuring and assigning notifications” on page 203
The left-hand table displays the scan workstations with the associated archiving
modes. The right-hand table displays the existing archiving modes together with the
archive where the documents are stored.
See also:
• “Configuring archiving for Enterprise Scan” on page 103
• “Creating an archive mode” on page 103
• “Archive mode settings” on page 104
• “Assigning an archive mode to an Enterprise Scan workstation” on page 109
• “Scan host assignments” on page 109
In the left-hand table you see the user groups, and in the right-hand table the users.
The user-to-group assignment is not directly visible in the list.
See also:
• “Concept” on page 111
• “Creating and checking policies” on page 112
• “Adding users” on page 115
• “Setting up user groups” on page 117
Enable Activates the selected certificate. The authenticity of the certificate must
be checked with the help of the fingerprint before activating it.
Disable Locks the selected certificate, e.g. when it has been revoked.
Delete Deletes the selected certificate. Do not delete any certificate, except for
those that were imported by mistake.
View Opens a dialog with a detailed description of the certificate (see “View
Certificate” on page 93).
Utilities You can import existing certificates for different purposes, see
“Set Encryption Keys” on page 92 and “Importing certificate for
authentication” on page 83.
This utility can help to solve problems in scan scenarios with Livelink ECM - BPM
Server where a workflow is defined in the archive mode (see “Workflow” on page
105).
Name
Enter the workflow name in UTF-8 and press Encode.
The name is converted to Base64 format. Use this feature if you want to
manipulate the COMMANDS file or if you are looking for entries in the log files.
Base64 encoded
Enter the Base64 encoded workflow name and press Decode.
The name is converted to UTF-8 format.
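The same conversion can be reproduced outside the utility, for example when searching log files directly on the server. A minimal Python sketch; the workflow name used here is only an illustrative example:

```python
import base64

def encode_workflow_name(name: str) -> str:
    """Encode a UTF-8 workflow name as Base64, as the Encode button does."""
    return base64.b64encode(name.encode("utf-8")).decode("ascii")

def decode_workflow_name(encoded: str) -> str:
    """Decode a Base64-encoded workflow name back to UTF-8."""
    return base64.b64decode(encoded).decode("utf-8")

# Hypothetical workflow name:
encoded = encode_workflow_name("Invoice Approval")
print(encoded)                        # SW52b2ljZSBBcHByb3ZhbA==
print(decode_workflow_name(encoded))  # Invoice Approval
```

The Base64 form is what you would look for in the COMMANDS file or in the log files.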
ADMS
See: Administration Server (ADMS)
Annotation
The set of all graphical additions assigned to individual pages of an archived
document (e.g. coloured marking). These annotations can be removed again.
They simulate hand-written comments on paper documents. There are two
groups of annotations: simple annotations (lines, arrows, highlighting etc.) and
OLE annotations (documents or parts of documents which can be copied from
other applications via the clipboard).
See also: Notes.
Archive ID
Unique name of the logical archive.
Archive mode
Specifies the different scenarios for the scan client (such as late archiving with
barcode, preindexing).
ArchiveLink
The interface between the SAP system and the archive system.
Buffer
Also known as “disk buffer”. It is an area on hard disk where archived
documents are temporarily stored until they are written to the final storage
media.
Burn buffer
A special burn buffer is required for ISO and IFS pools in addition to a disk
buffer. The burn buffer is required to physically write an ISO image. When the
specified amount of data has accumulated in the disk buffer, the data is prepared
and transferred to the burn buffer in the special format of an ISO image. From
the burn buffer, the image is transferred to the storage medium in a single,
continuous, uninterruptible process referred to as “burning” an ISO image. The
burn buffer is transparent for the administration.
Cache
Memory area which buffers frequently accessed documents.
The archive server stores frequently accessed documents in a hard disk partition
called the Document Service cache. The client stores frequently accessed
documents in the local cache on the hard disk of the client.
Cache Server
Separate machine on which documents are stored temporarily. That way, the
network traffic in the WAN is reduced.
Device
Short term for storage device in the Archive Server environment. A device is a
physical unit that contains at least storage media, but can also contain additional
software and/or hardware to manage the storage media. Devices are:
• local hard disks
• jukeboxes for optical media
• virtual jukeboxes for storage systems
• storage systems as a whole
Digital Signature
Digital signature means an electronic signature based upon cryptographic
methods of originator authentication, computed by using a set of rules and a set
of parameters such that the identity of the signer and the integrity of the data can
be verified. (21 CFR Part 11)
Disk buffer
See: Buffer
DocID
See: Document ID (DocID)
DocTools
Programs that perform single, discrete actions on the documents within a
Document Pipeline.
Document ID (DocID)
Unique string assigned to each document with which the archive system can
identify it and trace its location.
DP
See: Document Pipeline (DP)
DPDIR
The directory in which the documents are stored that are being currently
processed by a document pipeline.
DS
See: Document Service (DS)
Enterprise Scan
Workstation for high volume scanning on which the Enterprise Scan client is
installed and to which a scanner is connected. Incoming documents are scanned
here and then transferred to the Archive Server.
mount path on the operating system level before they can be referred to in the
Archive Administration.
Hot Standby
High-availability Archive Server setup, comprising two identical Archive
Servers tightly connected to each other and holding the same data. Whenever the
first server fails, the second one immediately takes over, thus
enabling (nearly) uninterrupted archive system operation.
ISO image
An ISO image is a container file containing documents and their file system
structure according to ISO 9660. It is written at once and fills one partition.
Job
A job is an administrative task that you schedule in the Archive Administration
to run automatically at regular intervals. It has a unique name and a start
command, which is executed together with any arguments required by the command.
Known server
A known server is an Archive Server whose archives and disk buffers are known
to another Archive Server. Making servers known to each other provides access
to all documents archived in all known servers. Read-write access is provided to
other known servers. Read-only access is provided to replicate archives. When a
request is made to view a document that is archived on another server and the
server is known, the inquired Archive Server is capable of displaying the
requested document.
Log file
Files generated by the different components of the Archive Server to report on
their operations providing diagnostic information.
Log level
Adjustable diagnostic level of detail on which the log files are generated.
Logical archive
Logical area on the Archive Server in which documents are stored. The Archive
Server may contain many logical archives. Each logical archive may be
configured to represent a different archiving strategy appropriate to the types of
documents archived exclusively there. An archive can consist of one or more
pools. Each pool is assigned its own exclusive set of partitions which make up
the actual storage capacity of that archive.
Media
Short term for “long term storage media” in the Archive Server environment. A
medium is a physical object: optical storage media (CD, DVD, WORM, UDO), hard
disks and hard disk storage systems with or without WORM feature. Optical
storage media are single-sided or double-sided. Each side of an optical medium
contains a partition.
MONS
See: Monitor Server (MONS)
Notes
The list of all notes (textual additions) assigned to a document. An individual
item of this list is designated as a “note”. A note is a text that is stored
together with the document. This text has the same function as a note clipped to
a paper document.
Partition
A partition is a memory area of a storage medium that contains documents.
Depending on the device type, a device can contain many partitions (e.g. real
and virtual jukeboxes), or is treated as one partition (e.g. storage systems w/o
virtual jukeboxes). Partitions are attached - or better, assigned or linked -
logically to pools.
Pool
A pool is a logical unit, a set of partitions of the same type that are written in the
same way, using the same storage concept. Pools are assigned to logical archives.
RC
See: Read Component (RC)
Remote Standby
Archive Server setup scenario including two (or more) associated Archive
Servers. Archived data is replicated periodically from one server to the other in
order to increase security against data loss. Moreover, network load due to
document display actions can be reduced since replicated data can be accessed
directly on the replication server.
Replication
Refers to the duplication of an archive or buffer resident on an original server on
a remote standby server. Replication is enabled when you add a known server to
the connected server and indicate that replication is to be allowed. That means,
the known server is permitted to pull data from the original server for the
purpose of replication.
Servtab files
Configuration files of the spawner which specify which processes to start.
Slot
In physical jukeboxes with optical media, a slot is a socket inside the jukebox
where the media are located. In virtual jukeboxes of storage systems, a slot is
virtually assigned to a partition.
Spawner
Service program which starts and terminates the processes of the archive system.
Storage Manager
Component that controls jukeboxes and manages storage subsystems.
Tablespace
Storage space in the database. If there is not sufficient free storage space
available, no further archiving is possible.
Timestamp Server
A timestamp server signs documents by adding the time and signing the
cryptographic checksum of the document. To ensure evidence of documents, use
an external timestamp server like Timeproof or AuthentiDate. The IXOS
Timestamp Server is a software that generates timestamps.
Volume
Volume is a technical collective term with different meaning in STORM and
Document Service (DS). A DS volume is a virtual container of partitions with
identical documents (after the complete backup is written). A STORM volume is
a virtual container of all identical copies of a partition. For ISO partitions, there is
no difference between DS and STORM volumes. Regarding WORM (IXW)
partitions, the STORM differentiates between original and backup; they are
different volumes, while DS considers original and backup together as one
volume.
WC
See: Write Component (WC)
Windows Viewer
Component for displaying, occasional scanning with Twain scanners and
archiving documents. The Windows Viewer can attach annotations and notes to
the documents.
WORM
WORM means Write Once Read Multiple. An optical WORM disk has two
partitions. A WORM disk supports incremental writing. On storage systems, a
WORM flag is set to prevent changes in documents. UDO media are handled like
optical WORMs.
Write job
Scheduled administrative task which regularly writes the documents stored in a
disk buffer to appropriate storage media.
H
Hard disk partition
  Attaching 216
  Creating 292
HDSK pool 26
  Creating 57

I
IFS partition
  Write job configuration 63
IFS pool 26
  Creating 56
Implicit user 118
Import Certificate for Timestamp Verification 305
Importing
  Damaged media 143
  Partitions 140
Installation directory 15
Initializing
  Automatic 151
  Manual 216
ISO media
  Backups 180
  Recovery 306
ISO partition
  Write job configuration 58
ISO pool 25
  Creating 56
IXOS Support Center 13
IXOS_ROOT 15
ixoscert.pem 84
IXW file system information 124
IXW media
  Backups 177
  Recovery 179
IXW partition
  Write job configuration 61
IXW pool 26
  Creating 56

J
Jobs 91
  Changing scheduling 138
  Modifying Write jobs 278
  Purge_Buffer 75
  Types 69
Jukeboxes
  Activating 125
  Automatic detection 291
  Parallel import 125

L
Leading application 19
  Retrieval clients 20
Livelink ECM - BPM Server
  Troubleshooting 306
Livelink Imaging Clients 19
Log files
  Location 255
  Most important files 260
  STORM 259
Log levels
  Archive Server except STORM 257
  STORM 259
  Where and how 256
Log settings
  Archive Server except STORM 257
  STORM 259
Logical archive 25
Logical archives 43
  Naming conventions 58
Lost&Found 143

M
Monitor Server 21
Monitoring
  Accounting 235
  Configuring notifications 301
  Document Service 215
  Resources 197
MONS 313

N
Naming conventions 58
Notification Server 21
Notifications 301
  Assigning 206
  Cancel assignment 206
  Configuring 301
  Event examples 201
  Event specification 301
  Events 301
  Tab 301
  Types and settings 204

U
User groups 111
  Assigned policies 118
  Setting up 303