Content Server 5.3 SP6 Release Notes
Revision History:
May 2008: Initial Release.
Content Server is the core of the EMC Documentum content management platform. Content Server
governs the content repository and enables a rich set of content management services for controlling
both content and processes throughout distributed enterprises. Documentum Content Server lets you
store, manage, and deploy all types of content, including HTML and XML, graphics, and multimedia.
Content Server provides services such as integrated workflow, lifecycle, and process automation;
version control, robust security, and a data dictionary for capturing and configuring business rules.
With EMC Documentum Content Server, users can share and reuse trusted content on demand within
and between business units. Administrators can define, organize, automate, and monitor all the
functions and tasks of even complex business processes.
The SP3 release adds to these capabilities. For example, the release adds high availability support for
fulltext indexing. For a listing of all new features, refer to Chapter 2, New Features and Changes.
The SP6 release is a full release. It does not need to be applied to an existing 5.3 installation.
This section lists new features in the Content Server 5.3 set of releases.
10. Ability to use digital shredding to remove content files stored in file store storage
areas. (This feature requires a Trusted Content Services license.)
11. A change to the connection pooling implementation that alters how the
connect_recycle_interval value in the dmcl.ini file is interpreted.
12. The Content Storage Services license, an optional license for Content Server that
allows you to define storage and migration policies for content files (5.2.5 SP2)
13. The Collaborative Services license, an optional license for Content Server that
provides support for the Collaborative Editions of EMC Documentum clients.
14. Support for Retention Policy Services, an optional product that allows you to define
retention policies to govern document retention in the repository.
15. During installation, on UNIX and Linux platforms, most required environment
variables are now set by a script. Environment variables (UNIX and Linux), page
76, describes this change in more detail.
16. Adding an attribute or type to a repository does not require reindexing the
repository. Refer to Full-text indexing and adding types or adding attributes to a
type, page 127 for a description of this feature.
The first policy, Throttle, has three values. The first value indicates the number of active
ACS requests that must be present for this policy to take effect. The second value
indicates how many new threads are allowed per user once the policy has taken
effect, that is, the maximum number of parallel threads a single user is
allowed. The third value dictates how many threads a user may hijack in order to
satisfy a content transfer request. This value should almost always be set to 0.
The second policy, ReplaceRequest, comes into play when the ACS server has
reached a certain level of activity.
The first value indicates the threshold at which the policy is enforced. When there
are 50 active threads in the example above, then the ReplaceRequest policy will be
activated. The second value indicates how many threads a user may hijack from
other individual content transfers in order to perform their own content transfer.
For example, if 5 users were each consuming 10 threads to perform their export
operations, and a 6th user came along, that user could take only one thread. The
thread is taken from the user who is consuming the largest number of threads. In
our example, where all 5 previous users had 10 threads each, one of them
loses one thread. If there were more than 50 concurrent content transfers for that
ACS server, then each client would be allocated only a single thread until the level
of activity had decreased to a more manageable level.
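The two policies described above can be pictured as a single admission function. The sketch below is illustrative only: the function name, the tuple layout of the policy values, and the threshold defaults are assumptions for the example, not the actual ACS implementation.

```python
def allocate_threads(active_by_user, requested, throttle=(20, 2, 0), replace=(50, 1)):
    """Illustrative sketch of the ACS Throttle/ReplaceRequest policies.

    active_by_user: dict mapping user name -> threads currently held.
    requested: number of threads the new request asks for.
    throttle: (activation threshold, new threads per user, hijack count).
    replace: (activation threshold, threads to hijack).
    Returns (granted, taken_from), where taken_from is the user a
    thread was taken from, or None.
    """
    total = sum(active_by_user.values())
    t_threshold, t_per_user, _t_hijack = throttle
    r_threshold, r_hijack = replace

    if total >= r_threshold and r_hijack > 0:
        # ReplaceRequest: take threads from the heaviest consumer.
        victim = max(active_by_user, key=active_by_user.get)
        active_by_user[victim] -= r_hijack
        return r_hijack, victim
    if total >= t_threshold:
        # Throttle: cap the new request at t_per_user parallel threads.
        return min(requested, t_per_user), None
    return requested, None
```

With 5 users holding 10 threads each (50 active), a 6th request is granted a single thread taken from one of the heaviest consumers, matching the example in the text.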
By default, a single UCF client is permitted to use a maximum of 5 threads or streams
during content transfer.
To configure a maximum number of streams that an individual client can consume,
add the max.parallel.download.streams parameter to the ucf.client.config.xml file as
follows:
<option name="max.parallel.download.streams">
<value>10</value>
</option>
When determining whether a file should be segmented into multiple streams, the
UCF client also considers the setting for min.parallel.segment.size, also configured in
the ucf.client.config.xml.
<option name="min.parallel.segment.size">
<value>1048576</value>
</option>
<option name="measurement.time.interval">
<value> 300</value>
</option>
<option name="single.thread.throughput">
<value>131072</value>
</option>
This value for min.parallel.segment.size specifies the smallest segment that can be
requested. Therefore, files that are smaller than this value will not be segmented.
In addition, when large files are segmented, and if the final chunk is smaller than
this value, it will not be requested. The previous request will continue reading the
remaining bytes of the file.
The default value for min.parallel.segment.size is 131072 bytes (128 KB).
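The segmentation rules above (no segmentation below the minimum size, and an undersized final chunk folded into the previous request) can be sketched as follows. This is an assumed model for illustration, not the UCF client's code; the function name and the exact splitting arithmetic are inventions of the example.

```python
def plan_segments(file_size, min_segment=131072, max_streams=5):
    """Illustrative sketch of splitting a file into parallel download
    segments. Returns a list of (offset, length) pairs.
    """
    if file_size < min_segment:
        return [(0, file_size)]  # smaller than the minimum: one stream
    # No more streams than max_streams, each at least min_segment long.
    n = min(max_streams, file_size // min_segment)
    base = file_size // n
    segments = [(i * base, base) for i in range(n)]
    # Fold any remainder into the last segment rather than issuing a
    # final request smaller than min_segment.
    remainder = file_size - n * base
    offset, length = segments[-1]
    segments[-1] = (offset, length + remainder)
    return segments
```

For a 1,000,000-byte file with the defaults, this yields five 200,000-byte segments; a 100,000-byte file stays a single stream.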
The values for measurement.time.interval and single.thread.throughput are used to
determine if the WAN conditions warrant the enabling of parallel streaming. Due to
the disk I/O cost associated with concurrently downloading multiple streams of data,
it is sometimes faster to download the content as a single stream rather than break it
into multiple streams. Parallel streaming will be turned on if the number of bytes
specified in single.thread.throughput is greater than the number of bytes actually
transferred in the number of milliseconds specified by measurement.time.interval.
For example, if single.thread.throughput is set to 1048576 (1 MB), and
measurement.time.interval is set to 500, the UCF client will opt to turn on parallel
streaming if less than 1 MB has been transferred in the first 500 ms of the content
transfer operation.
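The throughput probe described above reduces to a single comparison. The sketch below is illustrative (the function name is an invention; the defaults are taken from the sample configuration values shown earlier):

```python
def should_parallelize(bytes_transferred, elapsed_ms,
                       single_thread_throughput=131072,
                       measurement_time_interval=300):
    """Illustrative sketch of the UCF parallel-streaming decision.

    After measurement_time_interval milliseconds have elapsed, parallel
    streaming is enabled only if the single-stream transfer moved fewer
    bytes than single_thread_throughput in that interval.
    """
    if elapsed_ms < measurement_time_interval:
        return False  # still within the measurement window
    return bytes_transferred < single_thread_throughput
```

So a slow link that moves 100,000 bytes in the first 300 ms triggers parallel streaming, while a fast link that moves 200,000 bytes does not.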
Note:
The ucf.client.config.xml file is located on the client machine at
<ucfInstallsHome>\<HostMachineName>\<appId>\config.
The values of “ucfInstallsHome” and “appId” can be fetched from ucf.installer.config.xml
(inside <appRoot>\wdk\contentXfer on the application server).
Typically, ucfInstallsHome = $java{user.home}\Documentum\ucf and appId = shared.
1. Support is added for the Content Services for EMC Centera license on the HP
Itanium platform.
2. CSEC support for application registration
With this release, the Content Server name and the server’s version level will be
recorded in the Centera SDK log files when Content Server communicates with
the Centera SDK.
3. Support for RSA Access Manager for user authentication
Refer to Support for RSA Access Manager, page 118, for complete information.
4. Introduces a new server.ini key, owner_xpermit_flag.
For information, refer to New server.ini option for extended permissions, page 118.
8. A new Centera SDK variable for all UNIX platforms is implemented and set
automatically to provide enhanced performance for the Centera plug-in.
The variable is named FP_OPTIONS_MAXCONNECTIONS.
9. The restriction against executing an Assemble method inside an explicit transaction
is lifted, with one exception.
The exception occurs if the Assemble method is issued with the interrupt_freq
argument. If this argument is included, then the method may not be executed inside
an explicit transaction.
1. New dmcl.ini keys to control DMCL trace file size and backups.
Refer to Job trace files, page 94 for a description of this feature.
2. New Database ID and Session ID for -osqltrace and a new server tracing option.
Refer to New tracing options, page 95 for a description of this feature.
3. New eSign Audit trail electronic sign operations.
Refer to eSign Audit trail electronic sign operations, page 111 for a description of
this feature.
4. Supports using the memory map interface to write files.
Refer to Using memory map interface to write files, page 128 for a description
of this feature.
5. Supports full-text version 4.3.1.
Refer to Full-text version in 5.3 SP4 is 4.3.1, page 129
6. Release 5.3 SP4 includes the following new features for the full-text indexing
capabilities:
• New DQL hint that may be passed to the query plugin to turn grammatical
normalization on or off for the query.
The query plug-in hint, page 129, describes this new hint.
• A new DQL hint that directs Content Server to try the query first as an FTDQL
query and if that times out, to retry the query against the repository metadata
tables.
The TRY_FTDQL_FIRST hint, page 137, describes this new hint.
• Performance improvements for non-FTDQL queries
Configuring batched returns for non-FTDQL queries, page 130, describes these
performance improvements.
• Improved large file handling.
Refer to Large file handling, page 131 for a description of this feature.
• Support for a load balancer process in high availability configurations.
• Improved implementation resulting in fewer query timeouts.
Note: The 5.3 SP4 release also supports EMC Documentum Content Server OEM
Edition, a separate product available only to our partners. Please contact your sales
representative for more information.
This release note also contains additional information about the hardware requirements
for full-text indexing and querying the full-text index. For complete information, refer to
Choosing the correct hardware for full-text indexing, page 70.
• -batch_size, which allows users to configure the size of the batches used to
process the content files being replicated
• -source_servers, which allows users to identify the servers to which the job is to
connect
9. A set of scripts supporting high availability. The scripts provide running or stopped
status information about Content Server, the connection broker, and the index agent.
The scripts can be integrated into commercial monitoring packages. For a list of the
scripts, refer to the Content Server Administrator’s Guide.
10. Dmclean has a new argument, -clean_aborted_wf. If specified, the method removes
all aborted workflows in the repository.
11. Saveasnew API has a new argument to indicate where to store new copies of content.
12. Lock API has a new argument.
The argument, validate_stamp, controls whether the vstamp value is validated
before a lock is placed on an object.
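The validate_stamp check on the Lock API is a standard optimistic-concurrency pattern: refuse the lock if the object's version stamp has changed since the caller last fetched it. The sketch below is an assumed model of that pattern, not Content Server internals; the class, function, and field access are illustrative.

```python
class VstampMismatch(Exception):
    """Raised when the object changed since the caller last fetched it."""

def lock_object(obj, user, expected_vstamp=None):
    """Illustrative sketch of a vstamp-validated lock.

    obj is a dict standing in for a repository object, with
    'i_vstamp' and 'r_lock_owner' fields.
    """
    if obj.get("r_lock_owner"):
        raise RuntimeError("object is already locked")
    if expected_vstamp is not None and obj["i_vstamp"] != expected_vstamp:
        # The object was saved by another session since it was fetched.
        raise VstampMismatch(
            f"expected vstamp {expected_vstamp}, found {obj['i_vstamp']}")
    obj["r_lock_owner"] = user
    return obj
```

Passing expected_vstamp=None corresponds to the unvalidated lock behavior; supplying a value makes a stale fetch fail fast instead of silently locking a changed object.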
Changed features
This section lists the features that have changed in the product.
1. For a listing of the changes to the object type hierarchy, DQL, and DMCL API that
support the new features, refer to the EMC Documentum 5.3 System Migration Guide.
Removed features
This section lists the features that are removed from the product.
This section identifies problems and limitations, documented in the release notes of the applicable
minor version release or prior service pack(s), that have since been resolved.
Within two weeks of a release, a more comprehensive list of fixed bugs is posted on the Powerlink
site (http://Powerlink.EMC.com). Navigate to Support > KnowledgeBase Search >
Documentation and White Papers Search, then select Fixed Bugs Lists in the Select Document
Type(s) dropdown and the name and version of your product. You must have a software support
agreement to log in and access this list of fixed bugs.
If any queue items fail with this message, resubmit them to the index agent using
Documentum Administrator.
This problem affects the procserver binary on HP-UX ia64 only. To work around the
problem, enable lazy swap allocation for the procserver binary. If lazy swap allocation
is enabled, the procserver processes will allocate swap space as required instead of
reserving swap space ahead of time.
where binary is the path to the procserver binary. You must have write access to the
binary and there cannot be any instances of the binary running when you issue
the command.
1. After deleting the index agent, fetch the object ID of its dm_ftindex_agent_config
object.
2. Use a Destroy API method to delete the object.
This indicates that the server has attempted to run the headstart script before the server
is completely started. To work around the problem, re-execute the panel.
This chapter lists the hardware (machine) requirements and software versions supported with
this release. Machine resources, non-Documentum software components (Operating System, Java
Runtime Environment, and so forth), and other EMC Documentum products determine the unique
environment for each EMC Documentum product. Hardware requirements, page 41, addresses basic
machine resource requirements. Software requirements, page 42, addresses specific software versions
that are required for the installation platform.
Content Server operates on a single environment (refer to Table 4–3, page 43), with a second server
used to manage full-text indexing.
Hardware requirements
This section lists nominal machine resources required for product installation and use.
Your individual machine requirements vary depending on factors such as the number
of products installed, size of your deployment, number of users, and network latency.
The Documentum System Sizing Tool dynamically generates estimates of your hardware
resource requirements based on your user and hardware profile.
You can download the Documentum System Sizing Tool from the Powerlink website
(http://Powerlink.EMC.com) by navigating to: Support > Technical Documentation and
Advisories > Software ~ D ~ Documentation > Documentum Systems > Systems Sizing.
A minimum of 6 GB of RAM is required (2 GB for Content Server and 4 GB for the index server). For
best performance, we recommend that you run the index server and Content Server on
separate host machines.
Note: The following notes apply to the Content Server machine requirements table:
[1] The amount of RAM that is available after taking into consideration all other
RAM utilization requirements.
[2] If you are installing Content Server on a UNIX or Linux system, 300 megabytes of
space are required in the /var/tmp directory.
[3] Additional resources are needed for better performance if a global repository is also housed on the
same host, along with other Documentum services deployed on that machine (such as
CIS or SCS).
Note: The following notes apply to the index server machine requirements table:
[1] The amount of RAM that is available after taking into consideration all other
RAM utilization requirements.
Software requirements
This section provides information on supported software environments.
The tables in this section reflect the latest versions of third-party products, upon which
the EMC Documentum product depends, that are supported at the time of this release.
For information on currently supported environments and future updates, refer to
Exceptions
• eSignature manifestations for Trusted Content Services do not support Adobe PDF
8.0.
Notes
• For Content Server running with Oracle 10g (10.1.0.x) databases, the minimum
required version is 10.1.0.3.
• AIX version 6.0, version 7.0 (with all current APARs applied), and version 8.0 C++
runtimes are supported.
• According to IBM, DB2 8.1 FixPak 7a is functionally equivalent to DB2 8.2, and DB2
8.1 FixPak 8 is functionally equivalent to DB2 8.2 FixPak 1.
• The following Japanese, Korean, and Simplified Chinese localized RDBMS are
supported: Oracle, SQL Server, and DB2.
• The database can be either a local or a remote installation. We support only those
operating system and database combinations that are listed in the table when the
database is installed locally. If a particular database version is not available on a
particular operating system, use the version specified in the 5.3 release notes. For
example, Oracle 10g Release 1 (10.1.0.4) is not available on AIX as of SP1, but Oracle
10g Release 1 (10.1.0.3) is supported because we supported it with our 5.3 release.
The database can be installed on any operating system supported by the database
vendor, provided the database client can be installed on the Content Server host. For
example, Content Server can be installed on a Windows host and use a database
installed on a Solaris host. When the database is installed remotely, verify that you
can connect to the database by using a database client from the system where you
intend to install Content Server.
• We also support 100% Intel-compatible processors. Customers must obtain
the 100% compatibility statement and guarantee from the processor and
OS vendors and log bugs directly with those vendors if any compatibility issues
are discovered.
• XWindows is required for the GUI installer on UNIX operating systems.
• The following Windows 2000 editions are supported: Server, Advanced Server,
Data Center Server
• The following Windows 2003 editions are supported: Standard, Enterprise, Data
Center
• A private copy of the Tomcat servlet container is installed by Content Server. This
application server is required for LDAP user synchronization, Java Lifecycles, and
the ACS server.
• We support 64-bit versions of Red Hat and SuSE Linux through the 32-bit compatibility
mode with a 100% Intel-compatible processor such as AMD64 or Intel EM64T.
• The EMC Centera SDK version 3.1 SP1 is included with the installer. For information
on the EMC Centera Cluster versions supported with this Centera SDK version, see the
product documentation for EMC Centera.
• LDAP Servers:
— Active Directory for Windows 2000 SP4 & 2003 SP1
— Oracle Internet Directory Release 9.2.0.1
— Oracle Internet Directory 10g R2 (10.1.2) & 10g R3 (10.1.4)
— Sun Java System Directory Server 5.2 P5 & 6.0
• The RDBMS may be installed on a different machine from Content Server. Refer to
the Content Server Installation Guide for detailed descriptions of the Content Server
and RDBMS installation environment.
• The operating systems listed in this table include virtualized versions of the
operating system running in any version of the VMware Intel-architecture products
(VMware ESX Server, GSX Server, and Workstation).
Exceptions
• The index server cannot be installed on the Microsoft Cluster Services environment,
but if installed on a separate host, the index server can index and respond to queries
from a repository installed on Microsoft Cluster Services.
• No multinode partitioning support.
Notes
Exceptions
Notes
None
Exceptions
For the additional operating environments supported with this release, the following
exceptions apply:
• No support for eTrust Siteminder on HP-UX 11i version 2 Update 2 for Itanium.
Notes
Cross-product dependencies
The following table lists optional and required versions of products that this release
depends on to enable additional features.
You may have to install some of the products listed in Table 4–7, page 54 on separate
host or client machines due to differences in the DFC versions included with those
products. Before installing a product, check the product’s Installation Guide for supported
installation configurations.
By default, Content Server with Collaboration Services or RPS enabled can accept only
version 5.3 or 5.3 SPx clients. Refer to the product documentation for instructions
on how to change this setting.
For information about the supported configurations for products listed in this table,
refer to the Release Notes for the product.
** This version of DFC works properly with any EMC Documentum client product
with version number 5.2.5 or 5.3, with the exception of the following DFC features,
which work only when DFC is accessing Content Server version 5.3 or 5.3 SPx:
• Fetching service-based object (SBO) implementations from a repository’s global
registry (Pre-5.3 Documentum systems do not have global registries.)
• Web Services 5.3 or 5.3 SPx
This section identifies problems and limitations that may affect your use of the product.
Note: This section and the Technical Notes section may refer to platforms or features that are
not supported for this release of your product. Check Chapter 4, Environment and System
Requirements to verify requirements.
The latest information about customer-reported issues and known
problems is posted on the Powerlink site (http://Powerlink.EMC.com). You must have a software
support agreement to log in and access the list of issues.
Known problems
This section describes known defects in EMC Documentum software that may affect
your use of the product.
After setting this option, execute the ’reinit’ command or restart the server.
Caution: Set the option in the index position [1] or higher. Do not overwrite the
connection string in a_storage_params[0].
Createaudit API
The Createaudit method, used to create audit trail entries for custom events, currently
does not record attribute values in the audit trail entry’s attribute_list attribute. To work
around this issue, you can use the Create, Set, and Save API methods to create the audit
trail entry.
If the total length of the attribute names and values that you want to record is greater
than the size of the attribute_list attribute, use Create, Set, and Save to also create a
dmi_audittrail_attrs object to store the overflow. If you require an audittrail attrs object
for the overflow, you must create and save that before you save the audittrail object.
Here is the sequence of actions needed:
API>create,c,dm_audittrail
...
<audittrail_obj_id>
API>set,c,l,attribute_list
SET>’attr1=value’,...’attr14=value’
...
OK
API>Create,c,dmi_audittrail_attrs
...
<audittrail_attrs_obj_id>
API>set,c,<audittrail_attrs_obj_id>,attribute_list
SET>’attr15=value’,’attr16=value’
...
OK
API>set,c,<audittrail_attrs_obj_id>,audit_obj_id
SET><audittrail_obj_id>
...
OK
API>save,c,<audittrail_attrs_obj_id>
...
OK
API>set,c,<audittrail_obj_id>,attribute_list_id
SET><audittrail_attrs_obj_id>
...
OK
API>save,c,<audittrail_obj_id>
querying for English may not work as expected. For instance, if you search for the word
’individuals’, it will not find a match for the word ’individual’ within the document.
be updated with information about the docbroker. If you want to change your docbroker
after the installer has run then you must do so manually.
Limitations
This section describes limits on the usability of current functionality. The limitations
may be part of the product design or may result from issues with associated third-party
products.
Note: Sybase 15.0 is supported on Windows, Red Hat Enterprise Linux, and SuSE Linux
Enterprise. Sybase 12.5.4 is supported on SuSE Linux Enterprise, Red Hat Enterprise
Linux, and Solaris.
This section provides configuration and usability notes for current product features. The following
subsections are included:
• Installation notes, page 65
• Configuration notes, page 84
• Usability notes, page 120
• New object types supporting email archiving, page 138
Installation notes
This section contains information about installation requirements and installation
procedures that supplements the information in Content Server Installation Guide. The
following additional information is presented:
• Platform-independent installation and upgrade notes, page 67, including
— Full release, page 67
— Installation manuals, page 67
— Content Server upgrade paths, page 68
— Upgrading the remote servers in a distributed configuration, page 68
— ACS servers in an upgrade, page 68
— Content Server and index server on the same host, page 69
— Installing remotely, page 70
— Full-text indexing software not supported on VMWare, page 70
— Consolidated full-text indexing and high-availability indexing, page 70
— Choosing the correct hardware for full-text indexing, page 70
— Upgrading DFC, page 71
Full release
Content Server 5.3 SP6 is a full release that can be used to create new installations.
Content Server 5.3 SP6 does not need to be applied to an existing 5.2.x or 5.3
installation.
Installation manuals
Content Server upgrade paths
The upgrade paths to Content Server 5.3 SP5 are the same as for 5.3 SP4, with the
addition that you can upgrade to 5.3 SP5 from 5.3 SP4. For more information on the
supported upgrade paths, refer to the Content Server Installation Guide.
If you upgrade to 5.3 SP1, SP2, SP3, SP4 or SP5 from an earlier Content Server version,
the acs.properties file is not populated correctly and the ACS server does not function
correctly. End users may not be able to access content files, Surrogate Get does not
work, and there are performance problems when content files are retrieved. You must
manually populate the acs.properties file on each host where Content Server is installed.
5. Set the repository.login parameter to the user login name of the Documentum
installation owner:
repository.login=install_owner_user_login_name
For example:
repository.name=iolanthe.rcs3
8. Save the acs.properties file.
9. Restart Tomcat and the ACS server.
10. Navigate to $DOCUMENTUM_SHARED/logs (UNIX) or %DOCUMENTUM%\logs
(Windows) and examine the AcsServer.log file to confirm that the ACS server is projecting
correctly to the connection brokers defined in the acs config object or server config
object.
11. Use the following command to confirm that the connection broker projections are
correct:
dmqdocbroker -t connection_broker_host -i -a -c
getservermap repository_name
If the Content Server and index server are installed on the same host, the values in the
hardware requirements tables for the two components must be added together. For
example, the Content Server requires a minimum of 512 MB of RAM and the index
server requires a minimum of 4 GB of RAM. If they are installed on the same host, a
minimum of 4.5 GB of RAM is required.
Installing remotely
Displaying the installer remotely across platforms is not supported. For example,
remotely displaying from Solaris to Solaris usually works, but displaying from Solaris
to an HP-UX server or to Exceed is not supported.
The index agent and index server are not currently supported on VMWare. Statements in
the supported environments chapter indicating otherwise are incorrect.
1. Create the first consolidated indexing configuration with index agent A and index
server A.
2. Follow the high-availability instructions to create the duplicate indexing queue
and indexing user.
3. Create the second consolidated indexing configuration with index agent B and index
server B.
Indexing and querying the indexes are both high-I/O processes. Choosing the correct
hardware to support a full-text indexing installation is critical for adequate performance
and avoiding query time-outs. If the hardware does not have sufficient capacity, indexing
may take too long and full-text queries may time out.
The following restrictions apply to the hardware used for storing the index and
associated FIXML:
• Network Attached Storage devices (NAS) are not supported for index or FIXML
storage.
• Do not place the index or FIXML on a volume that is mounted as an NFS or CIFS
share or any other NAS protocol.
Use the following guidelines in determining the correct hardware for your full-text
indexing installation:
• Choose a disk system that provides high rates of I/O.
A large full-text index, reaching tens or hundreds of gigabytes in size, requires
disk I/O capacity of thousands of I/Os per second. A RAID or a high-performance
SAN-based disk array such as EMC Symmetrix is suitable for indexing.
• Choose network hardware, such as fiber, to provide the highest possible bandwidth
between the host and the SAN.
• If possible, increase the amount of memory cache before or associated with the disk.
Upgrading DFC
Upgrading DFC from the version installed by a particular Content Server version is
not supported.
If you install or upgrade Content Server with either a Collaborative Services license or a
Retention Policy Services license, the procedure sets:
• dm_docbase_config.oldest_client_version attribute to 5.3
• dm_docbase_config.check_client_version attribute to T
These settings mean that Content Server will accept connection requests from
Documentum client applications only if the applications are at or above the 5.3 version
level.
The oldest_client_version attribute identifies the lowest version level of a Documentum
client that is expected to connect to the repository. This value is used by the DFC to
determine which XML chunking algorithm to use for XML content when the content is
saved to the repository. (Refer to The oldest_client_version attribute, page 90 for more
information about this.) It is also used in conjunction with the check_client_version
attribute setting to control access to the repository in general.
When check_client_version is T, Content Server checks the value in oldest_client_version
and does not accept connection requests from Documentum client applications older
than the version specified in oldest_client_version.
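The interaction between check_client_version and oldest_client_version is a simple version gate. The sketch below is an assumed illustration of that comparison, not the actual server code; real Documentum client version strings may include service pack suffixes that this simple numeric parse does not handle.

```python
def accepts_connection(client_version, oldest_client_version="5.3",
                       check_client_version=True):
    """Illustrative sketch of the connection gate described above.

    When check_client_version is True, a client is accepted only if its
    version is at or above oldest_client_version.
    """
    if not check_client_version:
        return True  # no gating: any client may connect
    def parse(version):
        # Assumes plain dotted-numeric versions, e.g. "5.2.5".
        return tuple(int(part) for part in version.split("."))
    return parse(client_version) >= parse(oldest_client_version)
```

With the installation defaults described above (oldest_client_version 5.3, check_client_version T), a 5.2.5 client is refused while a 5.3 or later client connects.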
Caution: You can change the settings. However, allowing users to connect from pre-5.3
clients will allow users to bypass the retention controls enforced by Retention Policy
Services (RPS) if you have RPS enabled and retention policies defined and assigned to
objects. Additionally, pre-5.3 clients will not recognize the access controls imposed by
rooms (a feature of Collaboration Services).
When users start a client such as Desktop Client or Webtop, the DMCL is initialized and
started. As part of that process, the DMCL creates a client-side type cache. The cache is
held as long as the DMCL continues to run. The DMCL continues to run while the user is
connected. Even if a user’s repository session (in Desktop Client or Webtop) times out or
the user disconnects from the repository, the DMCL continues—it does not stop.
If a repository upgrade occurs while the DMCL is running, the client-side type cache
becomes inconsistent with the repository. Users will receive errors when they attempt to
access the repository after the upgrade.
To resolve this issue, after the repository is upgraded, users on Windows client hosts
must log out of Windows and application servers must be rebooted.
Bug number 67875 has been fixed. An AEK.key file created on one platform, such as
Windows, can be used on a UNIX platform, and an AEK.key file created on a UNIX
platform can be used on a Windows platform. This makes it possible to move a
repository between platforms without AEK errors.
In the Japanese environment, some jobs and methods do not run correctly if the
server_os_codepage attribute of the server config object and the client_codepage key
of the dmcl.ini file are set differently. When you install the server in the Japanese
environment, ensure that client_codepage in the dmcl.ini file on the server host is set to
the same code page as the server_os_codepage server config attribute.
During server installation or upgrade, the change checker process runs once per minute
by default. The process updates type caches as types are created or altered.
If you are upgrading, ensure that the database_refresh_interval key is set to 1 minute or
remove it from the server.ini file.
On Windows, user accounts are not case-sensitive, but Content Server installation fails if
you connect to the host using the incorrect case in the user name. For example, if the
account is set up as JPSmith and you connect as jpsmith, you can log in to the host, but
server installation fails.
When you are installing Content Server on a Windows machine in the Japanese locale,
automatically installing the data dictionary information for the Korean locale does not
function properly.
To install the Korean locale, you must run the data dictionary population script
from a remote client which is running on Korean Windows. The script is found in
%DM_HOME%\bin. The command line to execute the script is:
dmbasic -fdd_population.ebs -eLoadDataDictionary --
docbase_name docbase_owner owner_password
data_dictionary_ko.txt
If you are upgrading a distributed configuration on Windows, do not reboot the remote
hosts using Terminal Services. Reboot the remote hosts directly from those hosts.
When Tomcat application server is started on a UNIX platform, it now passes the
following to Java:
-Djava.library.path=$DOCUMENTUM_SHARED/dfc
The command line that starts repository configuration program has changed from prior
releases. Refer to the Content Server Installation Guide for the new command line.
With this release, SSL communications are no longer controlled by the Trusted Content
Services license. Because of this change, an <service_name>_s entry in the etc/services file
is now required for all repositories. For information about setting up this service entry,
refer to the Content Server Installation Guide.
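For illustration only (the service name and port numbers are hypothetical and depend on your installation), the etc/services entries for a repository whose service name is dm_myrepo might look like:

```
dm_myrepo      1489/tcp   # repository native connections
dm_myrepo_s    1490/tcp   # repository secure (SSL) connections
```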
If you are on Content Server 5.2.5 or any 5.2.5 Service Pack version of the server and the
database is DB2 8.1.5, use the following upgrade order:
If you are installing Content Server on an AIX host, version 6.0 or version 7 of the C++
runtime library must be installed on the AIX host.
With this release, there is no longer a requirement for AIX to be in 32-bit mode. (The
Content Server Installation Guide released in March, 2005, incorrectly lists 32-bit mode as a
requirement for Content Server on AIX.)
Installing on HP-UX when the device name for the temp directory is
longer than 15 characters
InstallShield cannot reliably install on HP-UX when the device name for the temp
directory is longer than 15 characters. To work around this problem, start the installer
using the -is:tempdir flag and point the installer to a temp directory on a device that
has a shorter path. The syntax is:
installer_name -is:tempdir new_directory_name
where installer_name is the name of the installer executable and new_directory_name is the
temp directory you have chosen. For example:
serverHPUXSuiteSetup.bin -is:tempdir /export/plecomet1/mydir
If you must create a new index for the repository, use the following procedure:
Caution: If you previously applied Fulltext hotfixes to SP2 and you mistakenly upgrade
the SP2 Fulltext components, you’ll have to reapply those hotfixes.
Configuration notes
This section contains information about the configuration of your installation. The
following topics are included:
• Setting the base URL for the ACS server, page 85
• New server.ini parameter — deferred_update_queue_size, page 86
• Additional steps for enabling thesaurus searching, page 86
• Content Storage Services license requirement, page 87
• DNS requirement for web-based client hosts in distributed environment, page 87
• Apache Tomcat application server, page 87
• Tracing change for dm_LDAPSynchronization job, page 88
• Content Server failover change, page 88
• Index needed for Retention Policy Services, page 88
• New user authentication attributes, page 88
• Host machine requirements for Surrogate Get and content replication, page 89
• Generating compatible login tickets in mixed Content Server version environments,
page 89
• Restart Content Server after importing or resetting a login ticket key, page 89
• The vpd.properties File, page 89
• New directories in installation, page 90
• The oldest_client_version attribute, page 90
• Surrogate Get change, page 90
• DMCL exception handling on UNIX platforms, page 91
• Storage policy updates, page 93
• Setting default_app_permit in docbase config , page 93
• Changes to the dm_event_sender script arguments, page 93
• Job trace files, page 94
• New tracing options, page 95
• Content-addressed storage notes, page 96
• Note regarding the Shutdown method and Windows platforms, page 98
• Updating the federation methods, page 98
• Migrating to Documentum Content Services for EMC Centera (CSEC) 5.3 from CSEC
1.2c or Prior, page 99
• Auditing content migration in lifecycle actions, page 101
• Fix for bug 118794 — Backwards compatibility problem in generated Docbasic code
for validations and workflow expressions, page 101
• Ability to set the default retention period as a number of days, page 104
Use the API or DQL to modify the value of the r_host_name portion of the URL.
1. Log in to the index server host as the user who installed the software.
2. Navigate to the $FASTSEARCH/etc directory (UNIX or Linux) or
%FASTSEARCH%\etc (Windows).
3. Back up the NodeConf.xml file by copying it to NodeConf.xml.init.
4. Copy the original NodeConf.xml file to NodeConf.xml.mod.
The first section of the file is:
<?xml version="1.0"?>
<!DOCTYPE nodes SYSTEM "NodeConf.dtd">
<nodes>
<node host="localhost">
<!-- Global options -->
<global>
<portrange base="13000" count="4000"/>
<shutdown_on_exit>true</shutdown_on_exit>
<startorder>
<proc>nameservice</proc>
<proc>httpd</proc>
<proc>logserver</proc>
<proc>configserver</proc>
<proc>contentdistributor</proc>
<proc>cachemanager</proc>
<proc>indexer</proc>
<proc>search-1</proc>
<proc>qrserver</proc>
<proc>statusserver</proc>
<proc>anchorserver-storage</proc>
<proc>nctrl</proc>
The Tomcat instance must be running in order to run the LDAP Synchronization job
and the ACS server.
This setting directs the Content Server to generate login tickets that are backwards
compatible with the 5.2x Content Server.
A basic Content Server 5.3 installation now includes the following directory
not found in the directory structure of previous releases. This directory is
%DOCUMENTUM%\fulltext ($DOCUMENTUM/fulltext).
A basic Content Server 5.3 installation contains an additional directory not found in
the directory structure of previous releases. This directory is %DM_HOME%\Oracle
($DM_HOME/Oracle). This directory will contain the language files needed by Oracle.
During installation, the environment variable, ORA_NLS33 is set to that location. Do not
remove that directory or reset that variable.
now writes information on each content file it retrieves into the method server log file.
The messages are pre-pended with identifying information to distinguish them from
messages written by other methods.
On UNIX platforms, the DMCL data structures are in the global process heap.
Consequently, there is no way to validate the DMCL after an exception. Therefore, on
UNIX, the DMCL is always allowed to continue on exception. Typically, if this happens,
the DMCL encounters more exceptions and terminates after some number of exceptions.
(The actual number depends on how the exception_count and exception_count_interval
attributes are set. Refer to the System Administrator’s Guide for information about those
attributes.)
When a DMCL exception occurs, a description of the exception and a DMCL stack
trace is written to a file. The file is named dmcl_err_pid<pid_number>_<date>_<time>.txt. (For
information about the format of the name and where the file is stored, refer to the
documentation in the Content Server Administrator’s Guide.)
On some UNIX platforms, additional configuration, not described in the Content Server
Administrator’s Guide, is required to obtain the stack trace. Those platforms are:
• Solaris 9
• Solaris 8
• AIX 5.2 or 5.3
The following sections describe how to obtain DMCL stack traces on those platforms.
On Solaris 9, a DMCL stack trace is not included in the error report file
generated by the DMCL by default. To include a stack trace, set the
DM_ENABLE_DMCL_STACK_TRACING environment variable to 1 in the application
server environment:
DM_ENABLE_DMCL_STACK_TRACING=1
On Solaris 8, you cannot include a stack trace in the file that contains the exception
description. Use the following procedure to obtain a stack trace on Solaris 8.
On AIX 5.2 or 5.3, a DMCL stack trace is not included in the error report file generated by
the DMCL by default. To produce a stack trace in that file, use the following procedure.
New dmcl.ini keys to control DMCL trace file size and backups
The 5.3 SP4 release introduces two new dmcl.ini keys that control the size of the
DMCL trace file: max_file_size and max_backup_index. These keys let you specify
a maximum log file size and how many log files to keep. The keys are used when
the trace_file key is set to a folder path, and are specified in the
DMAPI_CONFIGURATION section of the dmcl.ini file.
The max_file_size key defines a maximum size for the DMCL trace file, in megabytes.
The max_backup_index key controls how many backup files are retained for each
DMCL trace file. These keys work together, and are only effective when both are set and
trace_file is set to a folder path.
For example, suppose a dmcl.ini file contains the following entries in the
DMAPI_CONFIGURATION section:
trace_level=10
trace_file=c:\temp\dmcl
max_file_size=100
max_backup_index=5
The trace_level value turns on full DMCL tracing. The generated file is stored in
c:\temp\dmcl. The file can reach a maximum size of 100 MB. When the file reaches that
size, it is backed up. The file may have a maximum of 5 backup files. For example, suppose
the file is named dmcl_trace_230515. When the file reaches 100 MB, it is renamed
dmcl_trace_230515.1 and a new dmcl_trace_230515 is started. When that file reaches 100
MB, the first backup file is renamed from dmcl_trace_230515.1 to dmcl_trace_230515.2,
the current dmcl_trace_230515 file is renamed to dmcl_trace_230515.1, and another
dmcl_trace_230515 file is started. The file named dmcl_trace_230515 always has the
most recent trace information and the file named dmcl_trace_230515.1 is the most
recent backup file.
If trace_file is set to an actual file name, max_file_size and max_backup_index are
ignored and the default behavior occurs — the trace is written to the specified file
and the size of the file is controlled at the operating system level.
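The backup rotation described above can be sketched as follows; the file name and the explicit rotate() call are illustrative, since the DMCL performs the renaming internally when the trace file reaches max_file_size:

```python
# Sketch of the DMCL trace backup rotation, assuming max_backup_index=5.
# The set of file names stands in for the files on disk.

MAX_BACKUP_INDEX = 5

def rotate(files, base="dmcl_trace_230515"):
    """Shift existing backups up by one index and restart the base file."""
    # The oldest backup is dropped once the index limit is reached.
    files.discard(f"{base}.{MAX_BACKUP_INDEX}")
    # Rename .4 -> .5, .3 -> .4, ..., .1 -> .2, from highest index down.
    for i in range(MAX_BACKUP_INDEX - 1, 0, -1):
        if f"{base}.{i}" in files:
            files.discard(f"{base}.{i}")
            files.add(f"{base}.{i + 1}")
    # The current trace file becomes the most recent backup (.1)
    # and a fresh trace file is started under the base name.
    files.discard(base)
    files.add(f"{base}.1")
    files.add(base)
    return files

files = {"dmcl_trace_230515"}
rotate(files)
print(sorted(files))  # ['dmcl_trace_230515', 'dmcl_trace_230515.1']
```

After repeated rotations the set stabilizes at the base file plus five backups, matching the max_backup_index=5 example in the text.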
Release 5.3 SP4 adds Session ID and Database ID information to -osqltrace output,
allowing Content Server to print the session ID and database connection ID for each SQL
statement. SessionID is the Documentum session ID and DBID is the database connection
ID (assigned by the database). Currently, DBID is reported for Oracle, SQL Server, and Sybase.
• -osqltrace gives more details for SQL Server:
SessionID: DBID:51 Fetched 2 with batch hint 20
SessionID: DBID:51 SELECT GB_.r_object_id FROM dbo.dm_acl_s GB_
WHERE (GB_.owner_name=? AND GB_.object_name=?)
SessionID: DBID:51 :p00:agboan
SessionID: DBID:51 :p01:dm_4500014e80000500
rpctrace
Release 5.3 SP4 introduces a new server tracing option, rpctrace, that allows you to trace
RPC calls. The trace information is recorded in the server log file. There are two ways
to turn on RPC tracing:
• Specify -orpctrace on the server start-up command line.
• Use the SET_OPTIONS administration method as follows:
— apply,c,NULL,SET_OPTIONS,OPTIONS,S,rpctrace,VALUE,B,T
• To turn off RPC tracing, use SET_OPTIONS and specify the value as F:
— apply,c,NULL,SET_OPTIONS,OPTIONS,S,rpctrace,VALUE,B,F
The content of dm_plugin objects created for ca store plug-ins must be assigned to a file
store storage area. The storage area can be encrypted. Do not store such content in
a ca store storage area.
Setting clocks and time zones for Centera hosts and Content
Server hosts
The actual retention date stored in the Centera host for a content file is calculated using
the clock on the Centera host machine. Consequently, to ensure calculation of correct
retention periods, the time zone information and the internal clocks on Centera host
machines and Content Server host machines must be set to matching times (within the
context of their respective time zones). For example, if the Content Server host is in
California and the Centera host machine is in New York, when Content Server’s time is
1:00 p.m. PST, the time on the Centera host should read 4:00 p.m. EST.
Failure to synchronize the times may result in incorrect retention dates for the stored
content.
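As a quick sanity check of the example above, the standard zoneinfo module (Python 3.9+) confirms that 1:00 p.m. in California and 4:00 p.m. in New York are the same instant; the date chosen is arbitrary:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

pacific = ZoneInfo("America/Los_Angeles")
eastern = ZoneInfo("America/New_York")

# Content Server host clock reads 1:00 p.m. Pacific time.
server_time = datetime(2008, 5, 1, 13, 0, tzinfo=pacific)

# The same instant expressed on the Centera host in New York.
centera_time = server_time.astimezone(eastern)

print(centera_time.hour)  # 16, i.e. 4:00 p.m. Eastern
```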
When a ca store is created for a Centera storage system, index position 0 of the
a_storage_params attribute is set to the IP address of the Centera node. With Centera
SDK 2.1 (packaged and installed with Content Server), a_storage_params[0] can contain
a comma-separated list of IP addresses, each representing a Centera node. When that
list is passed to the appropriate SDK function, the SDK connects to the first available
Centera node in the list.
The behavior of the Setfile and Setcontent methods was changed in release 5.2.5 SP3.
Previously, a user or application could execute these methods against an existing
document in a content-addressed storage area to replace a current page in the
document without checking out the document. That is no longer true. A document in a
content-addressed storage area must be checked out before executing either method to
replace an existing content page.
Similarly, you must now check out a document to execute a Removerendition method to
remove a rendition of the document that is stored in a content-addressed storage area.
Note: It is not necessary to update the method_verb attribute for the methods in
Docbases with versions of 5.2.5 SP1, 5.2.5 SP2, 5.2.5 SP3, or 5.3.
Caution: Performing the migration procedure described here does not remove the prior
version of CSEC. It is strongly suggested that you do not continue to use the old version
to store content in the Centera storage system after migrating to CSEC 5.3 SP1. If you
do, you must re-run the migration and full-text scripts to migrate and index any content
stored using the old version, as that content cannot be handled using CSEC 5.3. To
ensure that content is not archived using the older version of CSEC, stop the CSEC
Archiver that is part of the older product.
Migrating to CSEC 5.3 SP1 does not actually move any content. The migration simply
updates the dmr_content objects and the SysObjects that contain that content to reflect
the implementation of CSEC for 5.3 SP1.
All the operations performed during migration are performed within a single transaction.
The transaction is committed if all operations succeed or aborted if any operation fails.
For each content object to be migrated, the migration operation
1. Retrieves the r_object_id and set_file attribute values of the content object.
2. Uses the set_file attribute value to create a relative path to the content.
The Documentum CSEC 5.3 SP1 plug-in uses the relative content path to access the
content.
3. Updates the dmr_content object.
The i_contents attribute is set to the relative content path, the storage_id attribute is
set to the object ID of the CA storage area, and the data_ticket attribute is set to 1.
4. Determines which SysObject objects have that content as page zero and, for those
meeting that criteria, sets the a_storage_type attribute to the name of the CA store
object and increments the i_vstamp attribute.
The SysObjects are checked out prior to updating and unlocked after being updated.
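Steps 3 and 4 above can be sketched as attribute updates; the dictionaries, store name, ID values, and path below are hypothetical stand-ins for the actual objects, and the real migration runs inside a single Content Server transaction:

```python
def migrate_content_object(content, sysobjects, relative_path,
                           ca_store_id, store_name):
    """Apply the attribute updates described in steps 3 and 4.

    content:    dict standing in for the dmr_content object
    sysobjects: dicts for SysObjects holding this content as page zero
    """
    # Step 3: point the content object at the CA storage area.
    content["i_contents"] = relative_path
    content["storage_id"] = ca_store_id
    content["data_ticket"] = 1

    # Step 4: update each affected SysObject.
    for obj in sysobjects:
        obj["a_storage_type"] = store_name
        obj["i_vstamp"] += 1
    return content, sysobjects

content = {"r_object_id": "0600014e80000123"}
docs = [{"i_vstamp": 3}]
migrate_content_object(content, docs, "00/00/01/23.pdf",
                       "2800014e80000100", "ca_store_01")
```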
Use a text editor to set the constant to any positive integer to migrate only that number of
objects when the script is executed. For example, suppose you set the constant as follows:
Const MaxMigrate As Long = 1000
When the script runs, only 1000 content objects are migrated.
Note: You can direct the output of the script to a text file. To do so, append the
filename to the end of the command line using the following format:
>filename.txt
Argument Description
docbasename Name of the Docbase that contains the storage areas.
username User name used by the script to connect to the
Docbase. This is the user’s user_os_name value. This
must be the Docbase owner.
password Password for the user account identified in username
Argument Description
current_storage_name Storage name of the storage used by CSEC 1.2c (or
prior)
If you turn on tracing for the method, the trace file is named dm_migrate_to_ca_store_
trace.out and is stored in the current working directory.
referred to a value greater than 2**16. In 5.3 SP2, the fix for bug 104218 resolved the
issue by changing Content Server so that the generated Docbasic code used the Long
datatype. However, this caused backwards compatibility problems with expressions
that were generated prior to 5.3 SP2. To resolve the backwards compatibility issue,
reported in bug 118794, in 5.3 SP3, the changes for 104218 are backed out and a new
resolution is provided.
In 5.3 SP3, whether the generated Docbasic code uses Integer or Long is dependent on
the setting of a new environment variable called DM_DOCBASIC_COND_EXPR_DATA_
TYPE. If this variable is set to LONG, the code is generated using the Long datatype. If
the variable is not set or is set to anything other than LONG, the code is generated using
the Integer datatype. This environment variable is not set by default. Consequently, the
default is to use Integer in the generated code.
Using the Long datatype has the following consequences:
• You must upgrade all EMC Documentum clients to at least version 5.3 because
DFC-based clients at version 5.2.x or earlier will not be able to perform validation
if there are any constraint expressions.
• To allow DFC 5.3-based Documentum clients to perform validation successfully, you
must migrate all constraint expressions to Java.
The use of Integer or Long in the generated Docbasic code in a repository should be
consistent. All generated code should use Integer or all should use Long. To resolve any
inconsistencies for sites that have generated code under varying release versions, the 5.3
SP3 release provides a script that you can run to recreate all generated code to use either
Integer or Long datatype. The script is named dm_recreate_expr.ebs, and it is found in
the .../install/admin directory. The script provides the following update options:
• All generated Docbasic code for expressions in workflow transitions and attribute
value validations
• All generated Docbasic code for expressions in workflow transitions only
• All generated Docbasic code for expressions in attribute value validations only
This option operates on the attribute value validation expressions defined for
user-defined object types (those types whose names do not start with ’dm’ prefix).
Note: Only user-defined custom object types can have attribute value validation
expressions defined in the data dictionary. Therefore, when the script is run to update
these expressions, only custom types are affected.
Depending on DM_DOCBASIC_COND_EXPR_DATA_TYPE, the script will regenerate
the Docbasic code to use either Integer datatype or Long datatype. If the environment
variable is not set or is set to any value other than LONG, the script regenerates
the Docbasic code using Integer datatype. If the variable is set to LONG, the script
regenerates the code using Long datatype.
that the associated Content Server will use for write operations and read operations.
The specified secondary cluster is the cluster the server will use if an attempt to read
content from the primary cluster fails.
The connection string is specified in a_storage_params[0], in the ca store object. The
format for the connection string when you are identifying primary and secondary
clusters for one or more Content Servers is:
srv_config_name="primary=cluster_id,secondary=cluster_id[?Centera_
profile]"{,srv_config_name="primary=cluster_id,secondary=
cluster_id[?Centera_profile]"}
where
• The primary cluster_id is the name or IP address of the Centera cluster to which
the Content Server will write
• The secondary cluster_id is the name or IP address of the Centera cluster from which
the Content Server will read if it cannot read from the specified primary cluster
Note: Including a Centera profile specification is optional.
The a_storage_params attribute is limited to 1024 characters. Consequently, you must assign
names to the Centera cluster nodes that are short enough to allow the full connection
string to fit within the attribute.
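A minimal sketch of assembling the connection string in the format above and checking it against the 1024-character limit of a_storage_params; the server config names and cluster addresses are hypothetical, and the optional Centera profile is omitted:

```python
def build_connection_string(servers):
    """servers: list of (srv_config_name, primary, secondary) tuples."""
    parts = [
        f'{name}="primary={primary},secondary={secondary}"'
        for name, primary, secondary in servers
    ]
    conn = ",".join(parts)
    # a_storage_params holds at most 1024 characters.
    if len(conn) > 1024:
        raise ValueError("connection string exceeds the 1024-character limit")
    return conn

conn = build_connection_string([
    ("sc1", "centera_ny1", "centera_sf1"),
    ("sc2", "centera_sf1", "centera_ny1"),
])
print(conn)
```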
The ca store plug-in must be in a storage area that is accessible to the Content Server or
servers using the plug-in. If the Content Servers are all on one host machine, the ca store
plug-in may be stored in any file store storage area accessible to all the servers. If the
Content Servers are on different host machines, either:
• Store the plug-in in a distributed storage area (The plug-in object’s a_storage_type
attribute should be set to dm_distributedstore) in which each Content Server has at
least one near component
• Store the plug-in in a file store that is shared by all Content Servers
Example of use
Suppose you have a single-repository distributed configuration with two Content Servers
at different sites and a Centera cluster at each site. The server config names are:
• sc1 for Content Server 1
• sc2 for Content Server 2
The names for the Centera cluster nodes are:
Figure 6-1. Content Server and Centera cluster configuration in single-repository distributed environment
not be large enough. If the buffer is full, Centera pages out the overflow to files on the
local disks. This can be a performance issue. Release 5.3 SP3 introduces a pool option
setting that allows you to resize the buffer.
The pool option is specified in the a_storage_params attribute of the ca store object (at
index position 1 or greater). The format of the specification is:
pool_option:clip_buffer_size:integer
where integer is the buffer size in kilobytes. For example,
suppose you want to enlarge the buffer to 200 kilobytes. You would specify the following
in a_storage_params:
pool_option:clip_buffer_size:200
by relevance ranking, highest to lowest, the first batch will contain the most relevant
results. The full-text indexing engine makes use of the temp_table_batch_size option, if it
is set, if a query meets the following conditions:
• The query must be the “outermost” query.
If the query contains a subquery or subselect, the temp_table_batch_size parameter
value is not applied to any fulltext search clause in that subquery or subselect.
• The query does not contain a selected value or clause that operates on the entire
set of returned values
For example, the count function requires the full set of returned rows. Similarly, the
GROUP BY clause operates on the full set of returned rows. Consequently, if either
of these, or any other that operates on the full set of returned rows, is included in the
query, batching results does not occur.
• The query is not an FTDQL query.
Use the temp_table_remove_dup_size parameter to control whether duplicate rows are
removed from full-text query results. The value directs the full-text engine to remove
duplicate rows in any result set whose total number of result rows is less than or equal to
the size specified in the parameter. For example, if you set temp_table_remove_dup_size
to 2000, then duplicate rows are removed from all result sets that are <= 2000 rows. If a
result set has more than 2000 rows, duplicates are not removed.
These new options operate independently of each other. You can set one or the other or
both.
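The temp_table_remove_dup_size rule can be sketched as follows; the row representation is illustrative, since the actual deduplication happens inside the full-text engine:

```python
def apply_dedup_rule(rows, temp_table_remove_dup_size=2000):
    """Remove duplicates only when the result set is small enough."""
    if len(rows) <= temp_table_remove_dup_size:
        # Remove duplicate rows, preserving the original order.
        return list(dict.fromkeys(rows))
    return rows  # larger result sets keep their duplicates

small = ["a", "b", "a"]
print(apply_dedup_rule(small))        # duplicates removed

large = ["a", "a"] * 1500             # 3000 rows, above the threshold
print(len(apply_dedup_rule(large)))   # duplicates kept
```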
The retry_interval property in an ldap config object defines the time interval between
attempts by Content Server to contact the LDAP server represented by the ldap config
object. This value is used when the LDAP server is chosen as the primary server for a
user’s authentication if the first attempt to contact the server fails. By default, the value is
set to 5 seconds. So, if retry_count is set to 3 (its default), Content Server waits 5 seconds
between each attempt to contact the primary LDAP server.
You can change the default. You can also override the property setting to define different
intervals between each attempt.
To override the property setting, you set two environment variables:
• LDAP_RECONNECT_TIME_SECONDS
• LDAP_RECONNECT_INCREMENT_SECONDS
If the second attempt fails, Content Server waits 7 seconds before attempting for a third
time:
3+(1*4)=7
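Based on the 3+(1*4)=7 example, the wait intervals appear to follow this pattern, assuming LDAP_RECONNECT_TIME_SECONDS=3 and LDAP_RECONNECT_INCREMENT_SECONDS=4; the exact semantics of the two variables are an assumption here:

```python
def retry_wait_seconds(retry_number, base=3, increment=4):
    """Seconds to wait before retry number 1, 2, 3, ...

    base      stands in for LDAP_RECONNECT_TIME_SECONDS
    increment stands in for LDAP_RECONNECT_INCREMENT_SECONDS
    """
    return base + (retry_number - 1) * increment

waits = [retry_wait_seconds(n) for n in (1, 2, 3)]
print(waits)  # [3, 7, 11]
```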
• Creates an Audit Record when the e-sign operation succeeds with the relevant
justification provided by the user
• Creates an Audit Record when the e-sign operation fails with the relevant error
message(s) and or informational messages
• All informational, warning and error messages are logged in the Docbase log so that
even if Auditing is not enabled, errors will be logged in a centralized file
• Removes redundant logging from esign_pdf.ebs file
• Removes confusing informational message from esign_pdf.ebs files
• ‘dm_addesignature’ Audit Entry is created for a successful ‘addesignature’ API Call
• ‘dm_addesignature_failed’ Audit Entry is created for an unsuccessful ‘addesignature’
API Call
• ‘dm_addesignature_failed’ Audit Entry is not created for the following categories:
— DM_SYSOBJECT_E_ESIGN_CANT_CREATE_TEMP_DIRECTORY
— DM_SYSOBJECT_E_ESIGN_CANT_CREATE_TEMP_FILE
— DM_SYSOBJECT_E_ESIGN_CANT_WRITE_TO_TEMP_FILE
— DM_SYSOBJECT_E_ESIGN_CANT_DELETE_TEMPORARY_FILE
Reason: The above error messages are not audited because their higher-level
error messages are audited.
— DM_SYSOBJECT_E_ESIGN_PRIMARY_CONTENT_NOT_FOUND
— DM_SYSOBJECT_E_ESIGN_CONTENT_NOT_FOUND
— DM_SYSOBJECT_E_ESIGN_CONTENT_HASH_FAILED
Reason: Content is required to calculate the hash, even to create a
dm_addesignature_failed event. Because the content-related operation itself
has failed, the dm_addesignature_failed audit event cannot be created for
the above error messages.
— DM_SYSOBJECT_E_ESIGN_CURSOR_ERROR
— DM_SYSOBJECT_E_ESIGN_OBJECT_UNSIGNED
Reason: A database-related error has occurred.
— DM_SYSOBJECT_E_ESIGN_AUDIT_RECORD_NULL_ID
— DM_SYSOBJECT_E_ESIGN_AUDIT_RECORD_UNSIGNED
— DM_SYSOBJECT_E_ESIGN_CANT_VERIFY_AUDIT_RECORD
— DM_SYSOBJECT_E_ESIGN_CANT_GET_VERIFY_AUDIT_RESULTS
— DM_SYSOBJECT_E_ESIGN_INVALID_AUDIT_RECORD
— DM_SYSOBJECT_E_ESIGN_INVALID_HASH_FORMAT_AUDIT
— DM_SYSOBJECT_E_ESIGN_AUDIT_HASH_MISMATCH
— DM_SYSOBJECT_E_ESIGN_INVALID_HASH_FORMAT_AUDIT
— DM_SYSOBJECT_E_ESIGN_AUDIT_HASH_MISMATCH
— DM_SYSOBJECT_E_ESIGN_SIGNATURE_NUMBER_ERROR
— DM_SYSOBJECT_E_ESIGN_SIGNATURE_NUMBER_MISSING
— DM_SYSOBJECT_E_ESIGN_UNSUPPORTED_HASH_ALGORITHM
— DM_SYSOBJECT_E_ESIGN_VERIFICATION_FAILED
Reason: The above error messages are not audited because the associated
audit records are inconsistent or invalid.
• dm_addesignature_failed Audit Trail record will be created for the following error
conditions:
— DM_SYSOBJECT_E_ESIGN_OBJECT_NOT_CHECKED_IN
— DM_SYSOBJECT_E_ESIGN_CANT_FIND_METHOD
— DM_SYSOBJECT_E_ESIGN_USER_MISMATCH
— DM_SYSOBJECT_E_ESIGN_CANT_DECRYPT_PASSWORD
— DM_SYSOBJECT_E_ESIGN_CANT_AUTHENTICATE_USER
— DM_SYSOBJECT_E_NO_RELATE_ACCESS
— DM_SYSOBJECT_E_ESIGN_INVALID_PRE_SIGNATURE_HASH_FORMAT
— DM_SYSOBJECT_E_ESIGN_PRE_SIGNATURE_HASH_MISMATCH
— DM_SYSOBJECT_E_ESIGN_PREEXISTING_SIGNATURE_INVALID
— DM_SYSOBJECT_E_ESIGN_SIGNATURE_SOURCE_ALREADY_EXISTS
— DM_SYSOBJECT_E_ESIGN_CANT_WRITE_CONTENT_TO_FILE
— DM_SYSOBJECT_E_ESIGN_CANT_UPDATE_CONTENT
— DM_SYSOBJECT_E_ESIGN_SIGNATURE_METHOD_NOT_RUN
— DM_SYSOBJECT_E_ESIGN_SIGNATURE_METHOD_FAILED
If the option is set to F or not set, the single-instancing configuration defined in the
Centera system is used.
The max_connections option is used to configure the maximum number of socket
connections that the Centera SDK can establish with the Centera storage system. The
default number is 99. If an application needs to establish more connections than that, you
can reset the maximum, up to 999. To do so, add the following in the a_storage_params
property:
pool_option:max_connections:integer
Enhancements to MIGRATE_CONTENT
The MIGRATE_CONTENT administration method has been enhanced with the
following changes:
• The log file now records performance metrics. The metrics show the amount of time
spent in the repository and in the storage area. These metrics are recorded for each
object successfully migrated, and as accumulated totals for every 100 objects and a
grand total at the completion of the method.
• A new argument is added that allows you to specify a dm_sysobject as the target of a
DQL query to select items to migrate.
Refer to MIGRATE_CONTENT now supports subtypes as predicate target, page
115, for details.
For example, suppose you want to migrate all content of a subtype called
“loan_documents”, and that subtype has a property named customer_state. The
following MIGRATE_CONTENT statement executes against all loan_documents
objects and moves the content for those belonging to customers in the state of California (CA):
EXECUTE migrate_content
WITH target_store='storage_005',
query='customer_state=''CA''',
sysobject_query=T,
type_to_query='loan_documents'
This optional configuration parameter forces the permission checks carried out against
search results to be conducted against the repository. The system will check the ACL
identified in the acl_name and acl_domain properties in the repository for each returned
object.
This parameter is not set by default. Setting it to true ensures that the permissions used
to determine an object’s accessibility are up-to-date, but does incur some performance
overhead for the search.
dm_format.mime_type lengthened
The mime_type property in dm_format is lengthened from string(64) to string(256).
where value is the number to which you wish to reset the limit.
You must restart Content Server after setting the variable.
If the option is not set, an object’s owner has all the extended permissions available to an
object owner by default.
Values in this key are specified in seconds. The default expiration interval is 20
minutes. Unused sessions in the pool are released from the DFC session pool after the
expiration interval.
LIST_SESSIONS output
The description of LIST_SESSIONS output is given below:
• typelockdb_session_id
Database session id for type locking
• tempdb_session_ids
List of temporary database sessions
• last_rpc
The last RPC that the server ran for the session
• current_rpc
The current RPC being run by the server for the session.
Note:
After the RPC is completed, the server will not clear this field. Therefore, last_rpc
and current_rpc will be the same in the situation where the server has completed an
RPC for the session and has not yet executed another RPC for the session. However, if
last_rpc and current_rpc are the same, it could also mean that the server is running
the same RPC again.
• last_completed_rpc
The last time the server completed an RPC for the session
Usability notes
This section contains miscellaneous items regarding the use and usability of the product.
The following topics are included:
• Adding DocProcessors to the index server, page 121
• Using WF_PromoteLifecycle method in automatic workflow activities, page 122
• New administration method—FIX_LINK_CNT, page 122
• String datatype maximum length on SQL Server, page 123
• Dmbasic lifecycles and Retention Policy Services, page 123
• DQL hints and FTDQL queries, page 123
• Remote hosts failed or ACS not available messages, page 123
• Workaround for bug 81615, page 123
• WORKFLOW_AGENT_DIED error, page 124
• Tracing default for surrogate get is changed, page 124
• Supported versions in repository federations, page 124
• Java method server, page 125
• Printing from lifecycle programs, page 125
• Indexable formats, page 125
• Index agent warning message, page 126
• Excluding object types from indexing, page 126
• Note on using LDAP directory servers with multiple Content Servers, page 126
• Swedish grammatical normalization, page 127
• New server.ini key, page 127
• Full-text indexing and adding types or adding attributes to a type, page 127
• Netegrity plug-in use, page 128
• Change to dm_retention_managers group, page 128
• Support for Sybase ASE 15.0 on Windows and Linux platforms, page 128
• Content encryption in TCS, page 128
• Full-text version in 5.3 SP4 is 4.3.1, page 129
• Using memory map interface to write files, page 128
You see the following, in which the DocProcessor is referred to by its alternate
name, procserver:
Connecting to Node Controller at localhost:port_number...
Attempting to add the following processes: procserver
On UNIX and Linux systems, you must ensure that the libjvm.so or libjvm.sl and
libverify.so or libverify.sl files are in the shared library path.
left set to T. (It should be set to F at that point.) The remote_pending attribute designates
which queue items need to be copied to a remote repository. In this case, when the work
items are first generated, the attribute is set to T so that any users in the group whose
home repository is a remote repository receive the task. After the task is completed, it
should be reset to F.
When the attribute is T, the dm_QueueMgt administration job, which is responsible for
removing old and unneeded queue items, does not remove those queue items.
To work around this problem, ensure that the dm_DistOperations job is scheduled to run
before the dm_QueueMgt job. By default, the dm_DistOperations job runs continuously
and polls for queue items to distribute every 5 minutes, while the dm_QueueMgt job
runs once a day. The dm_DistOperations job changes the remote_pending attribute to F
on these left-behind queue items, and the dm_QueueMgt job then removes them.
To ensure that dm_DistOperations runs prior to dm_QueueMgt, you can create a job
sequence that runs dm_DistOperations before running dm_QueueMgt.
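Before adjusting the schedule, a DQL query along the following lines can list the queue items being held back. This is a sketch that assumes the standard dmi_queue_item type; remote_pending is the attribute described above:

```sql
-- Queue items still flagged for distribution to remote repositories;
-- dm_QueueMgt does not remove these until remote_pending is reset to F.
SELECT r_object_id, item_id, name, date_sent
FROM dmi_queue_item
WHERE remote_pending = TRUE
```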
WORKFLOW_AGENT_DIED error
When a Content Server is started, the procedure may log the following error:
[DM_WORKFLOW_W_AGENT_DIED]
This error means that the workflow agent stopped and was restarted during the server
startup process. As long as the agent is running after Content Server startup completes,
you can ignore this error in the log.
• Prior releases do not support dynamic groups. Consequently, any dynamic groups
defined in the governing repository are propagated to any pre-5.3 members as
standard, non-dynamic groups.
• Similarly, prior releases do not support access-restricting entries (AccessRestriction,
ExtendedRestriction) in ACLs. If the federation is replicating ACLs with those kinds
of entries to pre-5.3 member repositories, those entries are ignored by pre-5.3
Content Servers.
• The restricted_folder_ids attribute for users (introduced in release 5.3) is a local
attribute. This means that any restricted users in the governing repository are
propagated as unrestricted users in the member repositories. (If the member is a 5.3
repository, you can set that attribute locally if desired.)
Indexable formats
Format objects in the repository define which file formats Content Server recognizes.
These format objects also identify which formats are indexable (through the can_index
attribute). However, the format objects often represent multiple versions of the same
format. For example, the format object for the MS Word format may represent multiple
versions of MS Word. In some instances, not all versions of the format can be indexed. For
a list of indexable formats and versions, refer to the Content Server Administrator’s Guide.
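To see which formats are flagged as indexable in a particular repository, you can also query the format objects directly. This sketch assumes the standard dm_format type and the can_index attribute mentioned above:

```sql
-- Formats that Content Server will submit to the index server
SELECT name, description
FROM dm_format
WHERE can_index = TRUE
ORDER BY name
```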
Change to dm_FTCreateEvents
In previous releases, the queueperson argument of the dm_FTCreateEvents tool was
hard-coded to dm_fulltext_index_user. Because high-availability configurations
require the existence of more than one full-text user in the repository, the value of the
queueperson argument can now be explicitly specified when the tool is used. If not
specified, the value defaults to dm_fulltext_index_user.
1. Install the Content Server software and create a Content Server and repository.
2. Install the LDAP directory server and follow the directions to properly configure
the directory server and repository for LDAP authentication.
3. Create the nonprimary Content Servers.
4. Using Documentum Administrator, connect to one of the nonprimary Content
Servers.
5. Navigate to the existing ldap config object.
6. Re-enter the Binding Name and Binding Password for the LDAP directory server.
If grammatical normalization is turned on globally, but you want to execute one query
without using that feature, add the following to the end of the query:
ENABLE(dm_fulltext('qtf_lemmatize=0'))
For example:
SELECT r_object_id, owner_name FROM dm_document
SEARCH DOCUMENT CONTAINS 'specification'
ENABLE(dm_fulltext('qtf_lemmatize=0'))
Setting the hint to 0 directs the query plug-in not to perform a grammatically normalized
search, but only return results that match the search term or phrase exactly.
If grammatical normalization is turned off globally, but you want to execute a query
using that feature, add the following to the end of the query:
ENABLE(dm_fulltext('qtf_lemmatize=1'))
For example:
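The following query mirrors the earlier example, with the hint set to 1 to force a grammatically normalized search for that query only:

```sql
SELECT r_object_id, owner_name FROM dm_document
SEARCH DOCUMENT CONTAINS 'specification'
ENABLE(dm_fulltext('qtf_lemmatize=1'))
```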
If the ENABLE option also includes the FTDQL or NOFTDQL hints, these hints are
ignored.
The batch size used to populate the temporary table for result processing is controlled
by the temp_table_batch_size parameter for the full-text engine configuration. This
parameter is set in the dm_ftengine_config object. The parameter name is set in
param_name property and the value is set in the param_value property. These are
repeating properties, so you must set the name and value at the same index position
within the property. The value is an integer number representing the number of results
in each batch.
Reinitialize Content Server after setting this parameter.
If this parameter is not set, the default batch size is 20000. Note that setting this
parameter to 0 disables batching of the returned results.
The size of each batch of results processed for duplicate checking is controlled by
the temp_table_remove_dup_size parameter for the full-text engine configuration.
This parameter is set in the dm_ftengine_config object. The parameter name is set in
param_name property and the value is set in the param_value property. These are
repeating properties, so you must set the name and value at the same index position
within the property. The value is an integer number representing the number of results
in each batch.
Reinitialize Content Server after setting this parameter.
If this parameter is not set, the default batch size is 20000.
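Both parameters are set the same way. The following DQL sketch is an illustration only: the index position 2 is a placeholder, and you should use an unused (or the parameter's existing) index position in your own dm_ftengine_config object, then reinitialize Content Server:

```sql
-- Set the name and value at the same index position in the repeating
-- param_name/param_value properties (index 2 is a placeholder).
UPDATE dm_ftengine_config OBJECT
SET param_name[2] = 'temp_table_batch_size',
SET param_value[2] = '5000'
```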
where objectID is the object ID of the document containing the rejected content.
Do not reset the rejection threshold unless told to do so by EMC Documentum Support
Services.
If you receive that error and have a high-availability environment, check whether the
RTS search components on both index servers are down.
According to Microsoft: “Although you receive this error message in the SQL Server
error log, you can safely ignore this error message.”
The rollback operation should have completed successfully without causing any
problems. The bug that causes this error is expected to be fixed in SQL Server 2005 SP2.
...
01800000 12288 12288 12288 - 4M rwx-- [ heap ]
...
Note: You may need to add the /usr/lib directory (or the corresponding location for
mpss.so.1) to the list of trusted directories by issuing the following command from
the O/S as well:
$ crle -u -s /usr/lib
Then restart the connection broker and Content Server processes. If either process is
already running, first shut both down, set the environment variables as described above,
and then restart the connection broker and Content Server. Afterwards, verify using the
pmap command again:
# pmap -sx 11966
11966: ./documentum -docbase_name naveed_solora53oski
-security acl -init_fil
Address Kbytes RSS Anon Locked Pgsz Mode Mapped File
...
020F0000 384 384 352 - 8K rwx-- [ heap ]
Additional information regarding MPSS, optional pagesize values and the effect
of modifying this parameter can be found at Sun’s website. One such reference is
http://www.sun.com/software/solaris/performance.jsp
Figure 6–3, page 136, shows the compression figures for medium-size files in various
formats.
Figure 6–4, page 136, shows the compression figures for large files in various formats.
Caution: These object types are used by certain Documentum clients and must not
be modified or deleted.
dm_message_address
Supertype: Persistent Object
Subtypes: None
Internal Name: dm_message_address
Object type tag: 00
A message address object records a unique email address found in the header of an email
message. A unique address is an address not currently represented in the repository
by another message address object. Each time a unique address is found in an archived
message, a message address object is created to record the address. Message address
objects are primarily used by personal and compliance archiving applications.
The following attributes are defined for this type:
dm_message_archive
Supertype: Document
Subtypes: None
Internal name: dm_message_archive
Object type tag: 09
A message archive object is used to store an email message. The header information in
the message is stored in the object’s attributes and the actual content of the message
is stored as content. Message archive objects are primarily used by personal and
compliance archiving applications.
The following attributes are defined for the type:
0, email message
1, contact entry
2, calendar entry
3, task entry
4, posted note
5, journal entry
6, sticky note
7, schedule entry
8, document
9, delivery report
255, unknown
0, normal
1, low
3, medium
5, high
Attribute             Datatype     S/R   Description
message_link_count    integer      S     Link number in a SysObject chain
message_sensitivity   string(1)    S     Specifies the sensitivity of the message.
                                         Valid values are 1 to 255.
message_size          integer      S     Size of the complete email message, including
                                         headers and routing information, in bytes
message_subject       string(256)  S     Subject line of the message
parent_message_id     string(24)   S     Message identifier of the root message if this
                                         message is an embedded message
receive_date          date         S     GMT date and time at which the message
                                         was received
dm_message_attachment
Supertype: Persistent Object
Subtypes: None
Internal name: dm_message_attachment
Object type tag: 00
A message attachment object records the names of attachments sent with email messages.
The attachments represented by this object type are bound to the parent email message
by the value in the message_object_id attribute. Message attachment objects are created
when an email message that has an attachment is archived. Message attachment objects
are primarily used by personal and compliance archiving applications.
The following attributes are defined for the type:
dm_message_route
Supertype: Persistent Object
Subtypes: None
Internal name: dm_message_route
Object type tag: 00
A message route object is created when an email message is archived. Message route
objects store the routing information found in the To, From, bcc, and cc lists. Each
object records one address in the message. Message route objects are primarily used by
personal and compliance archiving applications.
The following attributes are defined for the type:
1, meaning To
2, meaning From
3, meaning cc
4, meaning bcc
dm_message_user_data
Supertype: Persistent Object
Subtypes: None
Internal name: dm_message_user_data
Object type tag: 00
A message user data object records application-specific information about a user
referenced in an archived message. Message user data objects are created when an
email message is archived. The objects are primarily used by personal and compliance
archiving applications.
The following attributes are defined for the type:
Table 6-8. Attributes defined for the message user data type
As part of this refocusing effort, the information about creating workflows using the API
has been removed, and the workflows and lifecycle chapters are rewritten from the
context of the user interface products (Workflow Manager, Business Process Manager,
and Lifecycle Editor) more typically used to create and manage these features.
The appendix called “Using DQL” has been moved to the DQL Reference Manual.
Chapter 5, Server Internationalization, incorrectly refers to a document named
"Managing XML Content in Documentum". The correct name of that document is "XML
Application Development Guide".
Under “Evaluating the starting condition” in the section How execution proceeds (page
242), the content should read “When a workflow is created, the trigger_threshold
value is copied to the r_trigger_threshold attribute in the workflow object.” instead of
“When a workflow is created, these values are copied to the r_trigger_threshold and
r_trigger_event attributes in the workflow object.”
The disk space requirements for full-text indexing and installing the software are as
follows:
• Sufficient space to install the indexing software
Refer to the Content Server release notes for this space requirement.
• On UNIX and Linux, a minimum of 1GB of free space in the /tmp directory during
installation.
• If the index server and full-text index are on the same drive, a minimum of 3 GB
of free space is required on that drive.
• If the index server and full-text index are on different drives, a minimum of 3 GB of
free space is required on each drive (6 GB total).
• Sufficient space for the full-text index
Depending on the content being indexed, this may vary from approximately
one-third the space taken up by the content files to several times the amount of
space taken up by the content files. For example, 10 GB of content may produce an
index ranging in size from 3 GB to 35 GB.
• Transient space for full-text indexing operations
During the time period when the index server is adding entries to an index, a copy
of the index is used for querying operations. After the index entries are added,
the updated index is used for querying and the copy is deleted. Before the copy is
deleted, the disk space used by the index may increase by as much as 50% over the
disk space used before the index updates began.
The following two attributes of the dm_workflow object type are obsolete:
• r_pre_timer
• r_post_timer
When a repository is upgraded, the dm_wfTimer_upgrade.ebs script is executed.
This script finds all existing dm_workflow objects that have values for r_pre_timer,
r_post_timer, or both and creates timer objects representing those values.
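If you want to see which workflows the upgrade script will touch, a query along these lines can help. This is a sketch, not part of the upgrade script, and it assumes the obsolete attributes are single-valued integers:

```sql
-- Workflows that still carry values in the obsolete timer attributes
SELECT r_object_id, object_name
FROM dm_workflow
WHERE r_pre_timer <> 0 OR r_post_timer <> 0
```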
Addnote API
The documentation of the Addnote API is unclear about the required permissions to use
this method. To clarify:
If you are adding a note to a document that is not part of a workflow, you must have
Relate permission on the document. However, if the document is part of a package in a
workflow, you must not only have Relate permission on the document but must also
be a performer in the workflow.
Some languages use accents and diacritical marks on some characters or syllables in
words. Searches are insensitive to accent and diacritical marks. When you search on a
word or phrase, the search returns all objects that contain the word or phrase, even if
some matches also contain an accent or diacritical mark. Similarly, when you search on a
word or phrase that contains such marks, the search ignores the marks and returns all
objects that contain the word or phrase, spelled with or without the accent or diacritical
mark.
For example, suppose you issue the following query:
SELECT owner_name, r_creation_date FROM dm_document
SEARCH DOCUMENT CONTAINS 'cote'
The query returns all documents that contain, in metadata or content, the word cote,
including those with instances of the word with accents or diacritical marks (côte, côté,
and so forth).
Now, suppose you issue the following query that specifies a search term that includes
an accent:
SELECT owner_name, r_creation_date FROM dm_document
SEARCH DOCUMENT CONTAINS 'coté'
That query also returns all documents that contain, in metadata or content, the word
cote, including those with instances of the word with accents or diacritical marks (côte,
côté, and so forth).
This section describes the media in which the software is available, the organization of the product
components in the available media, and the file names for all available product components which
can be downloaded.
Software Media
This product is available as an FTP download from the Powerlink site
(http://Powerlink.EMC.com). You should have received instructions through email
regarding how to download products.
Organization
The Powerlink site (http://Powerlink.EMC.com) provides access to https://emc.
subscribenet.com/control/dctm/index where a complete listing of products is available
for download.
Files
The following modules/files comprise the contents of this release:
• bofcollaborationSetup.jar
• bofworkflowSetup.jar
• consistency_checker.ebs
• dfcoperatingsystemSetup.jar
• serveroperatingsystemSetup.jar
This section contains instructions for installing the product. If your system meets the requirements
listed in Chapter 4, Environment and System Requirements, you are ready to install the software.
The Content Server Installation Guide, version 5.3 SP3 and Content Server Full-Text Indexing System
Installation and Administration Guide, version 5.3 SP4, contain the instructions for new installations and
upgrades. The topic Installation notes, page 65 contains additional important installation information.
Note: The Content Server configuration program now includes a dialog box for enabling Records
Manager in the repository. You must provide a license key for Records Manager.
Documentum’s technical support services are designed to make your deployment and management of
Documentum products as effective as possible. The Customer Guide to EMC Software Support Services
provides a thorough explanation of Documentum’s support services and policies. You can download
this document from the Powerlink site (http://Powerlink.EMC.com) by navigating to: Support >
Request Support > Software Customer Guide and Offerings.