
EMC Centera®

Version 4.0 Patch 2

Global Services Release Notes


REV A36

June 11, 2008

These release notes contain supplemental information about CentraStar
version 4.0 including 4.0 patch 2. Topics include:
◆ Product description
◆ Fixed in this patch
◆ New features and changes
◆ Fixed problems
◆ Known problems and limitations
◆ Environment and system requirements
◆ Technical notes
◆ Documentation
◆ Software media, organization, and files
◆ Installation
◆ Troubleshooting and getting help


Product description
These release notes support EMC® CentraStar® version 4.0 including
4.0 patch 2 and supplement the EMC Centera® documentation. Read
the entire document for additional information about
CentraStar version 4.0 including 4.0 patch 2. It may describe potential
problems or irregularities in the software and contains late changes to
the product documentation.

Fixed in this patch


This patch contains the following fixes:
◆ 35398CEN
◆ 35399CEN
Refer to Fixed problems for a description of the two fixes
introduced in this release as well as the fixes from the previous
releases included in this patch (4.0 and 4.0 patch 1).


New features and changes


The CentraStar 4.0 release greatly enhances the scalability,
predictability, and configurability of the EMC Centera system. At the
same time it reduces the disk space overhead that is used for internal
processes.
The major technology improvement driving the gains in scalability
and predictability is the introduction of the CentraStar Hyper
Technology. The Hyper Technology optimizes the disk, file system,
and network I/O operations in order to accelerate the internal
self-healing operations and to limit the interactions between them.
The CentraStar 4.0 release also increases the level of configuration
that can be applied to the EMC Centera system. The configuration
enhancements include options to:
◆ Use different network segments to separate management traffic
from data traffic and to distinguish application data traffic from
replication data traffic.
◆ Schedule, run on demand, or temporarily halt Garbage Collection
tasks to best suit the application duty cycle in a particular usage
environment.
◆ Apply rules for the creation of passwords for administrative
access.
◆ Filter EMC Centera email messages according to their type and
importance in a particular customer environment.
◆ Set EMC Centera-specific notification messages, which will be
shown to all administrative users when logging on to the system.
◆ Increase the object count beyond 50M per node for environments
that have the appropriate combination of hardware and
application usage patterns.
◆ Increase the command set and capabilities for on-site service
personnel. Many procedures that previously required an
escalation to complete can now be fully executed on site.
◆ Use an automated upgrade procedure for upgrading from any
CentraStar 3.1 release, which reduces the risk of user error and
allows the service engineer to leave the site while the upgrade is
proceeding (any failures after this point are alerted).


New CentraStar Hyper Technology


The CentraStar 4.0 release introduces the CentraStar Hyper
Technology to the Location Cache, Disk Regeneration, and Garbage
Collection components.
Hyper Technology is an advanced technology to enhance the
scalability of EMC Centera. It is also a promising technology for
further enhancements in future CentraStar versions.
This section details the Hyper Technology advantages for the current
release.

Improved Location Cache
The Location Cache (or ranges) is distributed across all the nodes in
the cluster and is used to quickly determine the location of stored
objects regardless of cluster size or object count on the nodes.
Improvements to the Location Cache include:
◆ The optimization of the cache filling algorithms. This reduces the
cache initialization time by a factor of 100, reduces the impact on
cluster performance during the initialization, and reduces the
space requirements for the cache. The resulting gain in space can
be used for storing data objects.
◆ The use of bulk update operations for cluster operations which
affect the location of multiple objects, such as regeneration and
Garbage Collection. The bulk operations are more resource
efficient for both the client and the cache. This
contributes to faster regeneration and Garbage Collection
processes and a reduced overhead for managing the location
cache.
◆ The use of appropriate cache reinitialization techniques to limit
the impact of cache rebuilds on mainline read and write
performance.
◆ A gradual migration of the location cache to nodes that are added
to the cluster. This further streamlines the process of adding
capacity to an EMC Centera cluster and avoids any performance
impact during a capacity upgrade.
◆ All size clusters now use an improved reliable lookup query
(RLQ) as fall-back for cache database lookup.
◆ The reliability of the ranges indicates the chance of finding the
location in this cache. It is expected and normal that this number
will degrade over time. Ranges will be initialized periodically
and when the reliability drops below the defined threshold.


Additionally, EMC Service can initiate a range initialization at
any time using Centera Viewer > Commands > Ranges >
Overview > GenerateNewRangeLayout.
◆ When nodes are added or replaced they will not automatically
participate in the ranges. Although it is not required that all
nodes participate in the ranges, EMC Service can generate a new
range layout in Centera Viewer and include the added/replaced
nodes. Note that when the range layout is changed, this may
temporarily affect read performance.

Faster Blob Index initialization and Disk Regeneration
CentraStar utilizes a Blob Index (BI) which is located on each disk in a
node and contains information about each object stored on that disk
(Gen2 and Gen3 nodes have only one index for the entire node).
When CentraStar detects an issue with the consistency of a Blob
Index it will reinitialize the Index (a self-healing operation). If a disk
fails, the data on the disk will be re-protected on other disks in the
system (Disk Regeneration).
CentraStar 4.0 improvements to the Blob Index initialization and Disk
Regeneration include:
◆ The application of the Hyper Technology to the Blob Index
reinitialization in combination with Disk Regeneration. The
synergy of those two self-healing operations results in one
predictable, deterministic self-healing operation. While
concurrent self-healing operations are more resource intensive
and take longer to complete than individual operations, the
application of the Hyper Technology allows combined
self-healing tasks to complete with less resources and time than
the sum of the individual tasks.
◆ A faster initialization of the Blob Index in combination with Disk
Regeneration (up to 10 – 25 times faster depending on the
hardware generation).

Improved Disk Regeneration
Improvements to Disk Regeneration include:
◆ A new regeneration dashboard in Centera Viewer which shows
more detailed information about the status and progress of
individual disk regenerations. Additionally, the new dashboard
provides the possibility to resolve, on a per-disk basis, dual fault
situations that lead to incomplete cluster integrity.



◆ The regeneration manager will retry stuck disk regenerations.
Retries are automatically done when nodes come back on-line
and when read-only databases are enabled for write operations
again. EMC Service can also manually initiate a retry using the
new regeneration dashboard in Centera Viewer.
◆ The addition of a new cluster-wide consistency check (conditional
regeneration) to validate the completeness and correctness of data
on disks or nodes which are reintroduced into the cluster. Any
instances of corruption or missing data that are identified will
be fixed immediately. Conditional regenerations are shown in the
regeneration dashboard in Centera Viewer and in Centera
Console as a data healing operation.
◆ If a disk failure occurs, a disk regeneration is scheduled. The
regeneration starts after the disk regeneration delay, which is 20
seconds by default. If the disk is back on line before the delay, the
regeneration will be cancelled. If the disk is back on line after the
delay, the regeneration continues with conditional regeneration
(see the sketch after this list).
◆ The processing of a node regeneration as four disk regeneration
operations. If a node failure occurs, a regeneration is scheduled
for each disk. The disk regeneration starts after the node
regeneration delay, for which the default depends on the cluster
size. If the node comes back on line before the delay, the
regeneration will be cancelled. If the node is back on line after the
delay, the regeneration continues with conditional regeneration.
◆ When a node is in maintenance mode, no node regeneration will
be started. Disk regenerations will ignore maintenance mode.
◆ Updates to the Blob location Cache will no longer be done during
regeneration.
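The delay-and-cancel behavior in the disk and node regeneration bullets above can be summarized as a small decision rule. The following sketch is illustrative only; the function name, parameters, and data layout are hypothetical and are not part of CentraStar.

    DISK_REGEN_DELAY_S = 20  # default disk regeneration delay, in seconds

    def plan_regeneration(failure_type, delay_s, back_online_after_s=None):
        """Return the action for a failed disk or node.

        failure_type: "disk" or "node"; a node failure is processed as
        four disk regenerations, one per disk.
        delay_s: the configured regeneration delay for this failure type.
        back_online_after_s: seconds until the disk/node returned, or None.
        """
        count = 4 if failure_type == "node" else 1
        if back_online_after_s is not None and back_online_after_s < delay_s:
            return "cancelled"  # came back before the delay expired
        if back_online_after_s is not None:
            # came back after the delay: continue as conditional regeneration
            return "%d conditional regeneration(s)" % count
        return "%d regeneration(s)" % count  # never came back

    print(plan_regeneration("disk", DISK_REGEN_DELAY_S, back_online_after_s=5))  # cancelled
    print(plan_regeneration("node", 600, back_online_after_s=900))  # 4 conditional regeneration(s)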

Improved Garbage Collection
Garbage Collection (GC) is the process that reclaims space by
cleaning up blobs (user data) that are no longer referenced because
the corresponding C-Clips have been deleted by the application.
Improvements to Garbage Collection include:
◆ The introduction of new Garbage Collection (GCii) as a
replacement of the Incremental and Full Garbage Collection
processes used in previous CentraStar versions.
◆ A significantly faster and more deterministic reclamation of space
taken by unreferenced user data (on an average cluster GC will
complete in less than a day).


◆ New scheduling options. By default, GC runs every four weeks.
However, you can set the schedule to the configuration you want
(for example, to run as frequently as every week on a particular
day or time). GC can run on demand to meet unusual conditions
in the duty cycle where a faster turnaround is desired.
Additionally, you can adjust the execution speed to achieve the
best balance between speed and performance impact on the
regular cluster operations.
Progress and completion statistics for GC are available in Centera
Viewer and the daily Health Report, including details on how much
space has been reclaimed by the last completed run.
By default, new Garbage Collection will run for the first time four
weeks after the upgrade has completed. If desired, the system
administrator can schedule the first run to run earlier, provided that
the upgrade has completed more than two weeks beforehand.
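As a rough illustration of the first-run scheduling rule above (a default of four weeks after upgrade completion, with an earlier run allowed once the upgrade has completed more than two weeks beforehand), consider the following sketch. The helper name and dates are hypothetical.

    from datetime import datetime, timedelta

    def first_gc_run_window(upgrade_completed):
        """Return (earliest, default) dates for the first Garbage Collection run."""
        earliest = upgrade_completed + timedelta(weeks=2)  # may be scheduled from about here
        default = upgrade_completed + timedelta(weeks=4)   # runs here if not rescheduled
        return earliest, default

    earliest, default = first_gc_run_window(datetime(2008, 6, 11))
    print("earliest:", earliest.date())  # 2008-06-25
    print("default:", default.date())    # 2008-07-09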

Upgrade notes
The implementation of the new Hyper Technology in all affected
components requires an additional upgrade completion step after the
software upgrade to CentraStar v4.0.
The upgrade completion will enable the following new/improved
features:
◆ Location Cache — Once enabled, the cache will be initialized,
which may affect read performance at the start of the
initialization. Read performance can drop by as much as 50%
for up to 48 hours and is gradually restored while the
cache is being populated.
◆ Blob Indexes Initialization
◆ Garbage Collection — You can now schedule this feature.
◆ Regeneration buffer hard-stop policy.
Enabling these new features will typically take less than 20 minutes;
however, the completion of the cache initialization may take longer.
If the upgrade completion step is not run immediately after the
upgrade, you will be asked to schedule this step within a week after
the upgrade. You will receive one of the following alerts:
Ready to schedule completion at convenient time
◆ 1.1.12.1.02.01 — The upgrade completion is not yet scheduled.
Use the CLI command set cluster upgradecomplete to schedule
the upgrade completion.


Completion not scheduled yet


◆ 1.1.12.1.02.02 — The upgrade completion is not yet scheduled.
The upgrade completion will automatically start within three
days. Use the CLI command set cluster upgradecomplete to
schedule the upgrade completion at a more convenient time if
needed.
If you receive the following alert, EMC Service needs to check the
condition of your cluster because it appears not to be ready for the
upgrade completion:
Not ready for completion
◆ 1.1.12.1.01.01 — The setting of the new capacity reservations
failed. An EMC Service action is required.

Note: Alerts 1.1.12.1.01.01, 1.1.12.1.02.01, and 1.1.12.1.02.02 are not shown in
Centera Console.

Improved utilization of raw capacity


The CentraStar version 4.0 release introduces the following
improvements to reduce the system overhead, which results in
additional available capacity to store user content.

Reduced reservation requirements
CentraStar utilizes disk capacity to track, manage, and heal stored
objects. The amount of space necessary for these purposes grows
linearly with the number of objects that are stored. CentraStar will
reserve the space necessary for the supported object count (that is, the
maximum number of objects that can be stored on a node).
With CentraStar 4.0 the following reduction in reservation
requirements is achieved:
◆ On Gen4 and Gen4LP nodes the reservations are reduced by
approximately 40% (approximately 80GB per node) at a
supported object count of 50M per node.
◆ On Gen 2 and Gen3 nodes the reservations are reduced by
approximately 15% (approximately 20GB per node) at a
supported object count of 30M per node.
As a result more than 95% of the raw capacity on Gen4LP nodes is
available to store and protect user content (assuming that the default
supported object count is set to 50M objects per node).


Configurable supported object count
CentraStar 4.0 doubles the maximum supported number of objects
(up to 25M per disk for Gen4/Gen4LP nodes; older hardware remains
at its current maximum supported object count). In addition, the
maximum number of objects that can be stored on a node is now
configurable by EMC Service (supported object count). The
supported object count reflects the capacity that is reserved to store
that number of objects.
You can view the supported object count with the Centera Viewer
Node list and the CLI command show objects detail. For previous
CentraStar versions, only a fixed maximum object count is shown
(known as the hard-stop object count); new writes to the node are
limited by this maximum object count.
These are the improvements and implications related to the
introduction of supported object count:
◆ Setting the supported object count directly impacts the capacity
reserved for internal data processing. This allows for
optimization of the raw capacity usage for a given average user
file size and protection scheme.
◆ For each new setting of the supported object count, the capacity
reservations are adjusted by approximately 2.5GB per 1M
objects (see the sketch after this list).
◆ By setting a new supported object count, the hard-stop limit for
writing will also be adjusted to the same value as the supported
object count.
◆ The maximum supported object count for Gen4 and Gen4LP
nodes running CentraStar 4.0 is increased from 50M to 100M
objects per node (or 25M objects per disk). The maximum for
Gen2 and Gen3 nodes remains 30M objects per node.
◆ The factory default for supported object count is 50M objects per
node (or 12.5M objects per disk).
◆ EMC Service can configure the supported object count using Centera
Viewer > Tools > Service > Configuration > Set Supported
Object Count.
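As a worked example of the figure of approximately 2.5GB per 1M objects quoted in the list above, the sketch below estimates the change in per-node reservation when the supported object count is changed. The function name is hypothetical; the numbers come from the text.

    GB_PER_MILLION_OBJECTS = 2.5  # approximate reservation per 1M supported objects per node

    def reservation_delta_gb(old_millions, new_millions):
        """Approximate change in per-node capacity reservation (GB) when the
        supported object count moves from old_millions to new_millions."""
        return (new_millions - old_millions) * GB_PER_MILLION_OBJECTS

    # Raising a Gen4/Gen4LP node from the 50M factory default to the 100M maximum
    # reserves roughly an additional 125GB on that node.
    print(reservation_delta_gb(50, 100))  # 125.0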

Upgrade notes
The upgrade to CentraStar 4.0 does not change the current capacity
reservations. CentraStar 4.0 will base the supported object count on
the capacity that was reserved by the CentraStar version from which
the upgrade is done. As a result, the supported object count on
upgraded 4.0 nodes may be different from what is seen on new
systems. For example, upgrading from CentraStar 3.0 may result in a
supported object count of nearly 30M per Gen4 node, while
upgrading from CentraStar 3.1 may result in nearly 50M per Gen4
node. EMC Service can change the supported object count if required,
provided that there is sufficient capacity to do this.
Because specific hardware generations may have a different
maximum object count, it is possible that nodes in a cluster show a
different supported object count.
After an upgrade the maximum object count (also known as the
hard-stop object count) is not changed and remains at 50M
objects per node. However, when EMC Service changes the supported
object count, the maximum object count will become the same as the
supported object count.

Network segmentation
Previous CentraStar versions required that application, replication,
and management traffic all use the same physical network, with
limited options for segregating or controlling the network traffic
according to its type. CentraStar 4.0 supports the use of multiple
physical networks allowing each traffic type to be segregated,
monitored, and managed according to the appropriate per-site
policies. For environments where such separation of traffic is not
necessary, a single physical network may still be used for all network
traffic.

New node roles
To support network segmentation, CentraStar 4.0 introduces two new
node roles: the management role and the replication role. By assigning the
individual node roles to distinct nodes in the cluster, the different
types of network traffic will be segregated. For example:
◆ Separate management traffic from data traffic: Assign the access
and replication roles to a set of nodes and the management role to
a different set of nodes. All application traffic (SDK operations
initiated by applications) will flow to the nodes with the access
role, and all replication traffic (user data being replicated to a
remote cluster for disaster recovery purposes) will flow to the
nodes with the replication role. Management traffic (management
commands from management applications such as Centera
Console, CLI, or Centera Viewer, SNMP events, emails, and more)
will be sent through the nodes with the management role. As the
management nodes are distinct from the nodes which transfer
user data, the management traffic can flow over a different
physical network and can be controlled and managed separately.
◆ Separate replication traffic from application traffic: Assign the
access role to a set of nodes and the replication role to a different
set of nodes. All application traffic will flow to the nodes with the
access role, while the replication traffic will flow over the nodes
with the replication role. Both data flows can potentially use
different physical networks (for example, over a
site-to-site network link dedicated to disaster recovery traffic).
The three node roles (access, management, and replication) are
collectively referred to as external node roles, as all require a network
connection to systems outside the EMC Centera cluster.
Each external node role type must be on the same network and there
can only be one separate network per external node role. For
example, a separate replication network is possible, but it is not
allowed to assign replication roles to nodes that are configured for
different networks. The same applies to the management and access
roles.
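The same-network rule above can be thought of as a simple validation pass over the node list, as in the hypothetical sketch below. The data layout and node names are made up for illustration; on a real cluster the check is performed with the configuration warnings described in the next paragraph.

    EXTERNAL_ROLES = ("access", "management", "replication")

    def check_external_roles(nodes):
        """nodes: list of dicts such as {"name": "n01", "roles": {"access"}, "network": "net-A"}.
        Return one warning per external role that is spread over more than one network."""
        warnings = []
        for role in EXTERNAL_ROLES:
            networks = {n["network"] for n in nodes if role in n["roles"]}
            if len(networks) > 1:
                warnings.append("%s role spans networks %s" % (role, sorted(networks)))
        return warnings

    nodes = [
        {"name": "n01", "roles": {"access", "management"}, "network": "net-A"},
        {"name": "n02", "roles": {"replication"}, "network": "net-B"},
        {"name": "n03", "roles": {"replication"}, "network": "net-C"},  # breaks the rule
    ]
    print(check_external_roles(nodes))  # ["replication role spans networks ['net-B', 'net-C']"]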
The CLI and Centera Viewer have been updated to support the
management of the new node roles. Use Centera Viewer >
Commands > Configuration warnings or the CLI command show
config warnings to check if the external node roles have been
assigned correctly. To change the node roles use the Centera Viewer
nodelist or the CLI commands set node role add or set node role
remove.

Configuration notes
◆ During the upgrade to CentraStar 4.0, existing nodes with the
access role will automatically be assigned both the management
and replication role. There will be no changes for the storage role.
◆ For clusters with only Gen2 and Gen3 hardware, a license for the
storage on access feature is required to be able to assign an external
node role to a Gen3 node that has a storage role.
◆ For customers that have the RPQ solution Management Network
on Storage Node, it is required to undo this configuration before
the upgrade and to reconfigure it after the upgrade using the
CentraStar 4.0 management role functionality. Refer to Procedure
Generator for more details.

◆ For the configuration of nodes with the management and
replication role, the same rules for selecting nodes apply as for the
configuration of the access role.
◆ CE+ clusters will not allow a management connection over the
network to nodes which are also used for access and/or
replication traffic. Monitoring functionality such as email home
(SMTP), syslog, and SNMP will however be possible on these
nodes using the management role.
◆ Ensure that you still meet your compliance requirements when
configuring node(s) on CE+ clusters with only a management role
to manage the cluster from the network.

New regeneration buffer policy


Previous CentraStar versions provided the ability to use the
regeneration buffer in alert-only mode. This mode could be used to
allow data to be written even though the buffer threshold had been
reached. Although an alert was sent when the threshold was reached,
this mode required manual intervention in order to prevent that the
cluster could reach a state where it could not complete self-healing
operations after a disk failure. The best practice was thus not to use
the alert-only mode. With CentraStar 4.0 the alert-only option has
been removed completely.

Upgrade notes
◆ Nothing changes for clusters that already used the hard-stop
policy (best practice).
◆ Clusters that used the alert-only mode may notice a change in
available capacity and/or a change in the regeneration buffers
(the buffer may be reduced to the factory default of one disk, or
even to 0, if the available capacity is very limited and not
sufficient to switch to hard stop). These changes occur when the
upgrade completion step is performed.
◆ If the change to hard-stop mode causes the available capacity to
drop to 5% or below, it can cause the regeneration buffer to be set
to the factory default of one disk or even to 0. If this occurs, you
must add new capacity as soon as possible and set the
regeneration buffer back to the recommended number of disks.


◆ The recommendation is that customers change the regeneration
buffer mode to hard-stop prior to the upgrade and check the
regeneration buffers after the upgrade.

Node regeneration on 4-node clusters


Entry-level clusters with 4 nodes do not perform complete node
regenerations if a node fails.
CentraStar 4.0 makes it possible to enable node regeneration on
4-node clusters. This setting has to be enabled by EMC Service.
Customers may consider this setting for remote locations with 4-node
clusters that can reserve sufficient available capacity for node
regeneration.

Note: The option to enable and disable node regeneration is only possible on
4-node clusters. On larger clusters you cannot disable node regeneration.

Use the CLI command set cluster config to enable and disable node
regeneration on 4-node clusters (to disable regeneration, specify
infinite for the regeneration timeout).

Upgrade automation
With previous releases, CentraStar upgrades were controlled by the
platform. Before an upgrade could be performed, a large number of
manual checks and actions were needed. If not executed correctly,
these required complicated recovery procedures. With CentraStar 4.0,
the upgrade control is done by the server (Filepool), which makes it
possible to automate the manual checks and procedures.
The upgrade automation provides the following improvements:
◆ A new upgrade manager built in the server, replacing the
platform upgrade process.
◆ Inclusion of the automatic node upgrade capabilities already
introduced in CentraStar version 3.1.
◆ More visibility of the cluster state and conditions for a more
predictable cluster upgrade behavior.


◆ The possibility for each upgrade path to have its own unique set
of checks and actions for an upgrade. The new upgrade manager
performs these unique checks and actions automatically
providing efficiency, eliminating erroneous user input, and
increasing reliability.
◆ Alerting of failed checks and actions at the completion of the
upgrade. EMC Support will diagnose the reported problems and
correct any failures.
◆ The ability for the service engineer to start a non-disruptive
upgrade and leave the site before the upgrade has completed.
◆ New CLI commands:
• install
• activate
• set cluster upgradecomplete
• show upgrade status
• show upgrade actions
• show upgrade nodehistory
Note: Upgrades from CentraStar version 3.0 and lower still require the
platform-controlled upgrade with the manual checks and actions. Upgrades
from CentraStar version 3.1 and higher will use the upgrade automation.

Other enhancements
In addition to the improvements and changes already mentioned in
the sections above, the following enhancements have been made to
Centera Viewer, CLI, and the monitoring and reporting tools:

ConnectEMC
This release includes the following improvements to ConnectEMC:


◆ The subject line now identifies the type of message and allows
filtering of specific notification types by the email client (see the
sketch after this list). The following identifiers are added to the
subject line of a ConnectEMC notification:
• HR for Health Report messages
• ALERT for alerts
• FIELD for messages sent by the service engineer
◆ The ability in Centera Viewer to clear the current ConnectEMC
outbox of all pending messages and then restart the ConnectEMC
service.
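As an illustration of the filtering that these identifiers make possible (referenced in the first bullet above), the sketch below sorts messages by the identifier found in the subject line. The exact position of the identifier within the subject is assumed for the example.

    def classify_connectemc_subject(subject):
        """Map a ConnectEMC subject line to a coarse notification type."""
        words = subject.split()
        if "HR" in words:
            return "health_report"
        if "ALERT" in words:
            return "alert"
        if "FIELD" in words:
            return "field_message"
        return "other"

    print(classify_connectemc_subject("ConnectEMC ALERT cluster01"))  # alert
    print(classify_connectemc_subject("ConnectEMC HR cluster01"))     # health_report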


Health Report
If your cluster is configured with Email Home, an HTML Health
Report will also be sent by email. The following enhancements have
been made to this HTML Health Report:

◆ The protection scheme values are now presented as follows (see
the sketch after this list):
• M2 as CPM
• R61 as CPP
• Unknown values will be displayed as is.
◆ The following information has been added:
• Storage On Access: yes/no
• ICMP enabled: yes/no
• SymmIP: installed yes/no and version
• Updates to reflect changed and added features in this release
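A minimal sketch of the display rule referenced in the first bullet above: M2 is shown as CPM, R61 as CPP, and anything else is shown unchanged. The function name is hypothetical.

    PROTECTION_SCHEME_LABELS = {"M2": "CPM", "R61": "CPP"}

    def display_protection_scheme(value):
        """Return the label shown in the HTML Health Report for a protection scheme value."""
        return PROTECTION_SCHEME_LABELS.get(value, value)

    for raw in ("M2", "R61", "M3"):
        print(raw, "->", display_protection_scheme(raw))  # M2 -> CPM, R61 -> CPP, M3 -> M3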

New Disk Protected alert
CentraStar sends an alert when a disk fails. EMC Service will wait
until the failed disk is re-protected before replacing it with a new
disk.
From CentraStar 4.0 onwards an additional alert will be sent when
the failed disk is successfully re-protected indicating that all of the
data is fully protected and that the disk can be replaced. The new
alert has symptom code 2.1.2.1.04.01.

Note: Alert 2.1.2.1.04.01 is not shown in Centera Console.

CentraStar already uses an alert (with symptom code 2.1.2.1.03.01)
that will be sent if the disk regeneration may not be able to complete.

Message of the day
You can now set a message of the day that will be displayed to all
users that access the cluster using EMC Centera Viewer, CLI, or EMC
Centera Console (version 2.2), and to EMC Service/Support using
service access methods.
EMC Service can set an additional message only visible to EMC
Service users to communicate special service conditions to all service
users that access the cluster.
The commands set motd and show motd have been added to the CLI
to support this feature.


Password complexity rules
It is now possible to set rules for the creation of new or updated
profile passwords. The commands set password rules and show
password rules have been added to the CLI to support this feature.
Notes
◆ Existing passwords will not be affected unless they are updated.
◆ The rules do not apply to generated passwords.
◆ After the upgrade a default password rule will be applied that is
more strict than any restrictions to password creation before the
upgrade. Verify after the upgrade that the updated password
restriction rule meets your needs or change it accordingly.

Blob Information Viewer
The Blob Information Viewer is a new feature in Centera Viewer to
query details on a specific C-Clip or blob. The following information
is available and can be queried using any of the location lookup
methods that the EMC Centera uses internally:
◆ Metadata stored in the Blob Index database
◆ Information related to the file system
◆ Replication details
◆ Blob ID check
◆ Locations of the fragments
◆ Blobs referenced by a C-Clip
Additionally a repair can be initiated (submit for BFR).

Regeneration Dashboard
Centera Viewer provides a dashboard (Centera Viewer > Commands
> Regeneration) to monitor the ongoing and historic regeneration
activity. This dashboard allows you to:
View per regenerated disk
◆ Status, progress, and ETA
◆ History
◆ Detail task view per regeneration
◆ Integrity details
◆ Stuck regeneration details
Controls
◆ Pause and resume a regeneration
◆ Force a regeneration to success


◆ Retry a regeneration that does not progress (a stuck regeneration)


◆ Remove cyclic dependencies between regenerations
Cluster-wide integrity states
The state is given per protection scheme (CPM and CPP):
◆ Complete (fully protected)
◆ Vulnerable (not fully protected, but fully available)
◆ Incomplete (not fully available)
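The three cluster-wide integrity states above map onto two conditions (fully protected, fully available). The sketch below is one illustrative reading of that mapping, not dashboard code.

    def integrity_state(fully_available, fully_protected):
        """Classify a protection scheme's cluster-wide integrity."""
        if fully_protected:
            return "Complete"    # fully protected
        if fully_available:
            return "Vulnerable"  # not fully protected, but fully available
        return "Incomplete"      # not fully available

    print(integrity_state(True, True))    # Complete
    print(integrity_state(True, False))   # Vulnerable
    print(integrity_state(False, False))  # Incomplete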

Note: Obsolete Regeneration and Integrity MAPI/scripts are no longer
available.

Additionally, you can view regeneration status with Centera Viewer
> Commands > Statistics > RegenerationManager statistics.

Range Information Viewer
A new Range Information Viewer (Centera Viewer > Commands >
Ranges) is shown when connected to a CentraStar 4.0 cluster,
providing:
◆ Status of the Ranges
◆ Manual start of Range initialization
◆ The possibility to create a new Range Layout (for the
participation of nodes that were added since the last Range
Layout).
To view the Hyper Range statistics use Centera Viewer > Commands
> Statistics > RangeLocationManager statistics.
The Health Report contains two new entries related to Ranges:
Cluster.RangeReliability and Cluster.RangeReliabilityThreshold.

Lifeline Viewer
The Lifeline Viewer (Centera Viewer > Commands > Lifeline)
displays the lifeline of the cluster. Each node has two neighbors for
which it has to check the connectivity.

Statistics Viewer
The Statistics Viewer (Centera Viewer > Commands > Statistics) is
new in Centera Viewer 4.0 and provides:
◆ The possibility to navigate through the statistics available for each
node in the cluster
◆ The option to export specific statistics or portions of a tree to a file


Service menu changes
The Service Menu in Centera Viewer has been changed to provide
logical and hierarchical access to the available service options.
Previously, the service options were all listed under one item, which
was less user friendly.

Additional security for service scripts
Security has been enhanced for service scripts that require additional
privileges. These scripts are now signed to ensure their authenticity
before they are executed.

New platform commands for emcservice
The EMC Service CE is now able to perform the following actions
that previously required emcexpert access with earlier releases:
◆ View drive properties using the “smartctl” system command
(FPsmartctl)
◆ Kill Integrity Checker processes (pkill IC4.pl)
◆ Run platform API report (platformAPIReport.sh)

Capacity alerting
The thresholds for the alerts with symptom codes 5.2.2.1.03.01
(warning level) and 5.2.2.1.03.02 (error level) can now be set with the
CLI command set capacity alerts. EMC highly recommends setting the
warning level to the percentage of total capacity that reflects the
capacity usage for the coming 6 months and the error level to the
percentage of total capacity that reflects usage for the coming 3
months. Refer to the EMC Centera Online Help for more information.
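One possible reading of the sizing guidance above is to fire the warning when the remaining capacity covers roughly six more months of growth and the error when it covers roughly three. The sketch below follows that reading; the growth figure is a hypothetical input and the function is not part of the CLI.

    def capacity_alert_thresholds(monthly_growth_pct_of_total):
        """Return (warning_pct, error_pct) thresholds as percentages of total capacity."""
        warning = max(0, 100 - 6 * monthly_growth_pct_of_total)  # about 6 months of headroom left
        error = max(0, 100 - 3 * monthly_growth_pct_of_total)    # about 3 months of headroom left
        return warning, error

    # A cluster consuming about 4% of its total capacity per month:
    print(capacity_alert_thresholds(4))  # (76, 88)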

Governance Edition rebranding
With CentraStar version 3.0, Compliance Edition was re-branded as
Governance Edition. The CLI, the Health Report, and Centera Console
reporting still referred to Compliance Edition (CE). With this release,
all reporting and documentation will refer to Governance Edition
(GE). There are no changes to Compliance Edition Plus.

Two node Gen4LP expansion
CentraStar 4.0 supports adding capacity expansions as small as two
Gen4LP nodes at a time to cubes consisting of Gen4 and/or Gen4LP
nodes. Before this release the smallest capacity expansion was four
Gen4LP nodes.

Discontinued support
From this release onwards the following features and functionality
will no longer be supported.

Replication to CentraStar 3.0 and below
Clusters running CentraStar version 4.0 do not support replication to
clusters running CentraStar version 3.0 and lower. Note that EMC
Centera is designed to run the same CentraStar version on replicating
clusters. EMC Centera supports replication and restore between
different CentraStar versions only to support CentraStar upgrade
procedures and restore use cases. Refer to the EMC Centera Online
Help for more information.

CenteraMonitor
The CenteraMonitor tool does not support clusters running
CentraStar version 4.0 or higher.

Profile-Driven Metadata tool
The Profile-Driven Metadata (PDM) tool is no longer required to set
profile metadata. CentraStar version 3.1.3 already introduced the CLI
command update metadata to set the profile metadata. The
stand-alone PDM tool does not support clusters running CentraStar
version 4.0 or higher.

FPshutdown
The FPshutdown utility is no longer required to shut down an EMC
Centera from scripted procedures. CentraStar version 3.1.3 already
introduced a CLI command to shut down an EMC Centera which can
be scripted. The stand-alone FPshutdown utility does not support
clusters running CentraStar 4.0 or higher.


Fixed problems
This release includes fixes to the following problems:

Monitoring

Issue Number 29598CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Access node may reboot when 5 or more users access Centera Console at the same time

Symptom Access node may reboot when 5 or more users access Centera Console at the same time.

Fix Summary Fixed

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Fixed in Version 4.0.0

Impact Level 3 - Low

Issue Number 34011CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Email client may not show the correct time and/or time zone

Symptom ConnectEMC assumes an EST time zone instead of UTC. This causes the header of the email
sent to display incorrect time zone information causing the email client to display the wrong time
that the email was sent. However, the HTML message itself shows the correct time the Health
Report was generated.

Fix Summary Fixed


Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 3 - Low

Issue Number 33870CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem ConnectEMC may stop functioning with certain mail servers after upgrade to CentraStar 3.1.3

Symptom After upgrading to CentraStar 3.1.3 ConnectEMC may stop functioning with certain mail servers.
This problem was reported for a Sun Java System Messaging Server 6.1, but other mail servers
may be affected as well.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 2 - Medium

Issue Number 32526CEN

Fix Number n.a.

Host OS Linux

Host Type Any Host

Problem ConnectEMC may send a large number of email alerts with symptom code 5.2.5.7.02.01

Symptom ConnectEMC may send a large number of email alerts with symptom code 5.2.5.7.02.01 for one
problem because of a bug in the creation of the alert.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2


Fixed in Version 4.0.0

Impact Level 1 - Critical

Issue Number 32425CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem A false alert indicates that the internal network switch is down

Symptom When there is a problem retrieving the state of a network switch when CentraStar starts, a false
alert can be generated indicating that the switch is down although it is up and running. Use the
CLI command show health to check the actual status of the network switch.

Fix Summary

Found in Version 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 3 - Low

Issue Number 28746CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Swapped fields in regeneration corruption report

Symptom In the regeneration corruption report, the values of the fields "stuckCurrent" and
"stuckCumulative" have been swapped.

Fix Summary

Found in Version 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2,
3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2


Fixed in Version 4.0.0

Impact Level 3 - Low

Issue Number 24232CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Incorrect FROM email address is used by ConnectEMC

Symptom When ConnectEMC is configured with the CLI command set notification or set cluster
notification and the From email address field is not filled in, ConnectEMC uses an incorrect
default value for the From email address (<not configured> instead of ConnectEMC@emc.com).

Fix Summary The default FROM email address is now 'ConnectEMC@emc.com'

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 3 - Low

Issue Number 24080CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem The health report may not report the cpu temperature for access nodes that were configured to
storage on access

Symptom For clusters running CentraStar 3.0 or 3.1 the health report may not report the cpu temperature
for access nodes that were configured to storage on access.

Fix Summary

Found in Version 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2,
3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2


Fixed in Version 4.0.0

Impact Level 3 - Low

Configuration

Issue Number 35398CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Adding new nodes may lead to undetermined system behavior

Symptom When new nodes are added to a cluster with a lower sealed version than the factory version for
the new nodes, not all configuration parameters of the new nodes will be properly updated and
reflect the actual state of the rest of the cluster. This can lead to undetermined system behavior
of the new nodes.

Fix Summary Prior to adding new nodes to an existing 4.0 cluster, the cluster should be upgraded to 4.0 patch
2 or higher

Found in Version 4.0.0, 4.0.0p1

Fixed in Version 4.0.0p2

Impact Level 1 - Critical

Server

Issue Number 33467CEN

Fix Number n.a.

Host OS Linux

Host Type Any Host

Problem Blob Index read-only alerts while there is sufficient capacity


Symptom Under rare conditions CentraStar may start on the node while not all capacity is mounted and
available. In such a case, insufficient capacity may be detected and may put the Blob Index in
read-only.

Fix Summary CentraStar will not be allowed to start in such a case

Found in Version 3.1.3p2

Fixed in Version 4.0.0

Impact Level 2 - Medium

Issue Number 32176CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Location of the BLC could not be found

Symptom After removing a disk from a node, the system would sometimes not rediscover the location for
its BLC database. Attempts to set the location manually could result in the node no longer
starting up.

Fix Summary Fixed

Found in Version 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 3 - Low

Issue Number 32038CEN

Fix Number n.a.

Host OS Linux

Host Type Any Host

Problem Problems with locating blobs due to duplicate VolumeIDs in nodesetup file


Symptom Duplicate VolumeIDs in the nodesetup file may cause problems locating blobs. As a result it will
take longer than usual to read the blob or the blob cannot be read at all. If the blob cannot be
found, the API returns an error indicating that the blob does not exist.

Fix Summary When the blob partitions are mounted, the mounted volume list is validated in the nodesetup file

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 3 - Low

Centera CLI

Issue Number 33440CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Update of IP restrictions for anonymous profile has no effect

Symptom The CLI command update ip restrictions allows you to update the IP restrictions for the
anonymous profile. It is in fact not possible to enforce these restrictions to the anonymous profile
so no changes are made.

Fix Summary The CLI no longer allows you to set IP restrictions for the anonymous profile.

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 3 - Low

Self Healing

Issue Number 35399CEN

Fix Number n.a.

Host OS Any OS


Host Type Any Host

Problem The optimized self-healing functionality in 4.0 may run in a degraded mode after a disk
replacement

Symptom The optimized self-healing mechanisms introduced in 4.0 may not interpret the configuration
correctly after a disk in a node is replaced and fails to initialize properly. In combination with
simultaneous failures in the future, this may lead to a degraded level of protection.

Fix Summary Upgrade to 4.0.0 Patch 2

Found in Version 4.0.0, 4.0.0p1

Fixed in Version 4.0.0p2

Impact Level 1 - Critical

Issue Number 33381CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem CPP regeneration does not take into account Safe Free Capacity (SFC) or object count

Symptom CPP regeneration may regenerate data to nodes which have no more safe free capacity or
which have already exceeded their supported object count. As a result, nodes may go beyond
the supported object count or write into the reserved storage space.

Fix Summary CPP regeneration avoids regenerating to nodes with no available capacity or remaining object count

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 1 - Critical

Issue Number 31985CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host


Problem DBinit progressing very slowly

Symptom The initialization speed reported by CV (Statistics > ProtectionLocationManager > initmodule >
initializationunit_X > pull or Task List > ProtectionInitializationTask_<disk identifier>) may
indicate that a DBinit is progressing very slowly for CentraStar 3.1.3. This is due to a suboptimal
file verification at file system level.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 2 - Medium

Issue Number 31421CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem EDM requests repair for a drive that is dead

Symptom EDM continues to request a repair for a drive even after it has been marked dead.

Fix Summary

Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 2 - Medium

Issue Number 31327CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Regeneration is cancelled when a failed node is removed from the nodelist


Symptom When a node is removed from the nodelist while an active regeneration is running for that node,
the regeneration is cancelled although the node still needs to be regenerated.

Fix Summary Regeneration will not be cancelled when removing a node from the nodelist.

Found in Version 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 1 - Critical

Issue Number 30114CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Unnecessary self healing of full database volume

Symptom When a database volume is full while a node is started it may happen that the database is
cleared and a self-healing task is triggered for it. The database should be put to read-only
instead.

Fix Summary Database volume will be put to read only.

Found in Version 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2,
3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 3 - Low

Issue Number 27261CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem EDM may terminate unexpectedly after marking a disk as dead


Symptom The Enhanced Disk Manager (EDM) internal process may terminate unexpectedly after a disk has
been marked as dead because of the improper handling of an internal data structure.

Fix Summary

Found in Version 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 2 - Medium

Issue Number 25678CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem The use of large embedded blobs may occasionally cause node reboots

Symptom The use of large embedded blobs may occasionally cause node reboots because of limitations
in the C-Clip parser.

Fix Summary The parsing of C-Clips has been improved.

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 1 - Critical

Upgrades

Issue Number 33350CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem FPshred may block the upgrade process


Symptom The FPshred process may block the progress of an upgrade because FPgrub-install is trying to
sync the disks. As a result the upgrade will be (temporarily) slower.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 2 - Medium

Issue Number 32373CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Upgrade fails after the pre-upgrade script has run

Symptom When a node upgrade is aborted after the pre-upgrade script has run, the script version in the
/etc/upgrade_script_version file is not the same as the actual script that has run. As a
consequence, a subsequent execution of the install command will be based on incorrect
information.

Fix Summary The incorrect upgrade_script_version file will now be removed.

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 2 - Medium

Issue Number 32336CEN

Fix Number n.a.

Host OS Linux

Host Type Any Host

Problem Pre-upgrade scripts will fail if the BLC value in the safecapacity.conf file is missing

Symptom The pre-upgrade scripts that are run via Centera Viewer before an upgrade will fail if the BLC
value is missing from the safecapacity.conf file.


Fix Summary The pre-upgrade scripts now test for the existence of the BLC value and will create a default
value if it is missing.

Found in Version 3.1.3p2

Fixed in Version 4.0.0

Impact Level 2 - Medium

Issue Number 26624CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem CentraStar does not start on a node with a non-functional disk after an upgrade

Symptom CentraStar does not start on a node with a non-functional disk after an upgrade.

Fix Summary The CentraStar software has been modified so that it can start even though a disk is not
functioning

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 2 - Medium

Centera SDK

Issue Number 24948CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Reading CentraStar 3.1 audit C-Clips with the SDK will fail


Symptom Audit C-Clips written by CentraStar version 3.1 have CDF content that cannot be read by SDK
versions compatible with CentraStar 3.1. This will cause FPClip_getname() and
FPClip_fetchnext() to fail when trying to read audit C-Clips. This affects tools such as c:get
and CAScopy when reading audit C-Clips.

Fix Summary Fixed

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Fixed in Version 4.0.0

Impact Level 2 - Medium

Issue Number 32291CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Possible read failure after upgrading to CentraStar 3.1.2

Symptom After upgrading to CentraStar 3.1.2, reading C-Clips from an application pool may fail
occasionally (returning error -10021, FP_CLIP_NOT_FOUND_ERR) because the Blob Index
does not contain the pool ID for the C-Clip.

Fix Summary

Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 1 - Critical

Support

Issue Number 31708CEN

Fix Number n.a.

Host OS Linux

Host Type Any Host


Problem Failed disks still contain the service and support directories

Symptom A failed disk that has not been mounted still contains the service and support directories in the
/mnt/* directory. This gives the incorrect impression that service tools can store content in those
directories.

Fix Summary The service and support directories now only exist on mounted disks.

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2

Fixed in Version 4.0.0

Impact Level 2 - Medium


Known problems and limitations


The following are known issues for this release:

Replication

Issue Number 34543CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Replication cannot be disabled if the replication roles on source or target are removed

Symptom If you want to disable replication completely, you first need to disable replication with the CLI
command set cluster replication before you remove the replication roles of the source and target
cluster. In case replication cannot be disabled because all replication roles are removed, first
add the replication role to two nodes on the source and target cluster and then disable
replication.

Fix Summary

Found in Version 4.0.0

Impact Level 3 - Low

Issue Number 33896CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem CentraStar may not replicate mutable metadata for an XSet

Symptom CentraStar may not replicate mutable metadata for an XAM XSet when the user has deleted this
XSet on the source cluster and the Global Delete feature is disabled.

Fix Summary


Found in Version 4.0.0

Impact Level 1 - Critical

Issue Number 31715CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem If a C-Clip is re-written without being changed and the C-Clip has triggered an EBR event or has
a litigation hold set, the C-Clip is replicated again

Symptom If a C-Clip is re-written to the cluster without being changed (no blobs added, no metadata
added or changed) and the C-Clip has triggered an EBR event or has a litigation hold, the C-Clip
is replicated again, although this is not necessary. Besides the extra replication traffic, there is no
impact.

Fix Summary

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 30225CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem The reported number of C-Clips to be replicated may show a higher number than what actually is
still due to be replicated

Symptom In certain situations the reported number of C-Clips to be replicated may show a higher number
than what actually is still due to be replicated. This is caused by organic self-healing cleaning up
redundant C-Clip fragments before replication has processed them. Organic self-healing does
not update the number of C-Clips to be replicated when it is cleaning up.

Fix Summary


Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 25151CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem No apparent replication progress with CLI command show replication detail

Symptom When replication of deletes is enabled and many (100,000s) deletes are issued in a short time
period it appears as if replication is not progressing when monitored with the CLI show
replication detail command. Replication is, in fact, processing the deletes.

Fix Summary

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 24883CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Increasing Replication Lag

Symptom When the Global Delete feature is enabled and C-Clips are deleted soon after they have been
written, the Replication Lag value may increase.

Fix Summary Workaround: disable Global Delete or increase the time span between creating and deleting
the clip

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 1 - Critical

Issue Number 18384CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Replication with failed authentication returns the wrong error message in some cases

Symptom When replication is started with a disabled anonymous profile, the SDK returns the error code
FP_OPERATION_NOT_ALLOWED (-10204) to the application and replication pauses with
paused_no_capability. When replication is started with a disabled user profile, the SDK returns
the error code FP_AUTHENTICATION_FAILED_ERR (-10153) and replication pauses with
paused_authentication_failed. This does not affect the operation of the application.

Fix Summary Consider both error messages as valid for this use case
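
The following minimal C sketch shows one way an application can apply that advice, treating either code as the same "profile disabled" condition. The numeric values are the ones quoted in the symptom above; a real application would use the constants from the Centera SDK headers rather than redefining them.

    /* Sketch: treat both SDK error codes as the same condition when replication
       starts against a disabled profile. Values are taken from the symptom text;
       the Centera SDK headers define the corresponding constants. */
    #define FP_OPERATION_NOT_ALLOWED        (-10204)
    #define FP_AUTHENTICATION_FAILED_ERR    (-10153)

    static int replication_profile_disabled(int sdkError)
    {
        return sdkError == FP_OPERATION_NOT_ALLOWED ||
               sdkError == FP_AUTHENTICATION_FAILED_ERR;
    }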

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 18261CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Replication does not pause when global delete is issued and target cluster does not have delete
capabilities granted

Symptom When the replication profile has no delete capability granted and a global delete is issued, the
deleted C-Clips go to the parking lot. Replication does not get paused.

Fix Summary An alert will be sent when the parking lot is almost full

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Tools

Issue Number 34506CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Execution of service scripts that need to download and run code on the CentraStar server is
refused

Symptom From CentraStar version 4.0 onwards, service scripts are signed. When these scripts download
and try to run code from the CentraStar server, the digital signature of the script is checked. Part
of the validation is a check whether the certificates used are still within their validity period. If the
time is incorrectly set on the cluster, it is possible that the time falls outside the validity period of
the certificate and the script cannot be executed.

Fix Summary 1. Change the time on the cluster if possible (Primus solution emc103181). 2. If the time cannot
be changed for some reason, issue a service certificate that is valid according to the time on the
cluster.

Found in Version 4.0.0

Impact Level 3 - Low

Issue Number 31650CEN

Fix Number n.a.

Host OS Linux

Host Type Any Host

Problem When DBMigration runs simultaneously with a cluster upgrade, data unavailability (DU) is possible

Symptom Two nodes might become unavailable at the same time when the DBMigration tool runs
simultaneously with a cluster upgrade. This may cause temporary data unavailability.

Fix Summary Stop DB/Disk migration before starting a cluster upgrade.

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 29616CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Disk-to-Disk Tool does not handle bad write errors on the target node

Symptom In the unlikely event that a write error occurs on a new target hard disk, the bad sector will not be
overwritten nor fixed. Even if the corresponding sector on the source disk is good, the data on
the target disk will remain corrupt.

Fix Summary

Found in Version 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2,
3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 28653CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem IntegrityChecker tool can fill the database volume

Symptom If the output of IntegrityChecker is generated on the volume or partition that contains the
database, it may fill up this volume or partition completely, leaving no space for the database to
grow. In CentraStar 2.4.2, 3.0.2, and 3.1.1 and lower this may result in database self-healing
(Range init and/or DBinit) events. CentraStar 2.4.3, 3.0.3, and 3.1.2 and higher will put the
database in read-only mode when this occurs and send an alert.

Fix Summary

Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Upgrades

Issue Number 33998CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Upgrade can be stuck in paused state when nodes go offline unexpectedly

Symptom An upgrade may remain paused when a storage node goes down after all access nodes have
failed to upgrade. This occurs on clusters where the first upgraded node was a spare node. To
resolve the issue, bring the storage node back online.

Fix Summary

Found in Version 4.0.0

Impact Level 1 - Critical

Issue Number 33836CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Upgrade completion does not work correctly when a node is removed while downgrading

Symptom The upgrade completion does not work when a node is removed while an automatic node
upgrade is downgrading from a 4.0 version to a lower 4.0 version. The advanced ranges stay
enabled. As a workaround, restart the principal node.

Fix Summary

Found in Version 4.0.0

Impact Level 3 - Low

Issue Number 33307CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem The 'install' and 'show config version' commands can fail sporadically when the cluster is under
heavy load

Symptom When the cluster is under heavy load, the CLI commands 'install' and 'show config version' may
sporadically fail due to timeouts while processing installation images.

Fix Summary

Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 31925CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem FPupgrade_v2 command cannot find upgrade images

Symptom When the FPupgrade_v2 tool is used to activate an image, it can fail, reporting that it "cannot find
version number". This is caused by the invalid assumption that all nodes being upgraded have
the images in the same location. Work around this by manually ensuring that the upgrade
images are in the same location on all nodes that need to be upgraded.

Fix Summary

Found in Version 4.0.0

Impact Level 2 - Medium

Issue Number 31765CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Corrupt boot partition aborts upgrade from 3.1.0 to 3.1.2

Symptom The upgrade from 3.1.0 to 3.1.2 aborts; the upgrade log shows a failed FPgrub-install.

Fix Summary Re-image the boot partitions with the old boot image and restart the upgrade.

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 30375CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem When adding multiple nodes, not all nodes may be upgraded automatically

Symptom When adding multiple nodes, a node may sometimes not be upgraded automatically. As a result
the node can come online with its original software version.

Fix Summary Restarting CentraStar on the non-upgraded node will upgrade it when it tries to come online
again

Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 30069CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem No Health Reports being sent after an upgrade to CentraStar version 3.1

Symptom When upgrading from CentraStar version 3.0 or lower to CentraStar version 3.1, the From
address for ConnectEMC may not be set. As a result, no Health Reports will be sent if the
notification settings are updated after the upgrade without setting the From address. Make sure
that you set the From address after the upgrade using the CLI command set notification.

Fix Summary Set the From address manually after the upgrade using the CLI command set notification

Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 28791CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem The mirrored copies of a C-Clip can be written to the same mirror group during a non-disruptive
upgrade from CentraStar 2.3.3

Symptom The mirrored copies of a C-Clip can be written to the same mirror group during a non-disruptive
upgrade from CentraStar version 2.3.3 to 2.4 and above. Self-healing will eventually correct the
situation.

Fix Summary Integrity Check TT25648 Script is available to EMC Service to immediately resolve any existing
occurrence.

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 1 - Critical

Issue Number 28142CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem If a node reboots while it is upgrading it may become unbootable

Symptom During upgrades, there can be a small window during a reboot where a node may not complete
the reboot. In most cases the screen will then display GRUB.

Fix Summary

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 26976CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Clusters upgraded to a 3.1.0 release cannot upgrade to a 3.1.2 release or higher without first
being upgraded to an incremental release

Symptom Clusters running CentraStar versions 3.1.0 or 3.1.0 patch 1 cannot be upgraded to CentraStar
3.1.2 and must first be upgraded to an incremental release. The incremental upgrade version
depends on the 3.1.0 version that the cluster is currently running. Refer to the Centera
Procedure Generator for specific instructions.

Fix Summary An incremental upgrade procedure has been defined to enable a previously upgraded 3.1.0
release to upgrade to 3.1.2.

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 1 - Critical

Issue Number 18693CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Cluster unreachable when node goes down or is unavailable

Symptom When a node with the access role goes down or becomes unavailable, in some circumstances
the cluster may become unreachable. This can happen, for example, during an upgrade.

Fix Summary As a workaround, the application has to be restarted.

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 18010CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Read errors during upgrade

Symptom Upgrading may cause read errors at the moment that one of the nodes with the access role is
upgraded. The read errors will disappear after the upgrade.

Fix Summary

Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 1 - Critical

Issue Number 7653CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Client errors when upgrading the cluster to CentraStar 2.4

Symptom Upgrading your cluster to version 2.4 SP1 may cause client errors on the application that runs
against the cluster. The following circumstances increase this risk: 1) The cluster is heavily
loaded. 2) The application is deleting or purging C-Clips or blobs. 3) The application runs on a
version 1.2 SDK, especially when the number of retries (FP_OPTION_RETRYCOUNT) or the
time between retries (FP_OPTION_RETRYSLEEP) is set to the default value or less. With a
retry count of 3, set the retry sleep to at least 20 seconds.
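
As a rough illustration of the retry settings mentioned above, the sketch below raises the retry spacing from the client before the upgrade window. It assumes the Centera SDK C API (FPAPI.h, FPPool_SetIntOption) and that FP_OPTION_RETRYSLEEP is expressed in milliseconds; verify both assumptions against the SDK API reference for your SDK version.

    /* Sketch only: widen the client retry window before upgrading the cluster.
       Assumes the Centera SDK C API and millisecond units for the retry sleep. */
    #include "FPAPI.h"

    static void widen_retry_window(FPPoolRef pool)
    {
        FPPool_SetIntOption(pool, FP_OPTION_RETRYCOUNT, 3);     /* 3 retries          */
        FPPool_SetIntOption(pool, FP_OPTION_RETRYSLEEP, 20000); /* >= 20 s in between */
    }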

Fix Summary

Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Data Integrity

Issue Number 33945CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem EDM can fail with segmentation fault due to invalid volume information in configuration files

Symptom The EDM process can fail with a Segmentation Fault due to missing volume information in the
nodesetup and edm.conf file. The missing volume information in these files is considered invalid.
Follow the service procedures for correcting the node setup file.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 1 - Critical

Issue Number 33343CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Filepool may continuously attempt to start on a node with a bad disk

Symptom If a disk is corrupt and the Linux OS flags it as read only, Filepool may continually attempt to start
on the node. As a resolution, replace the disk and ensure that after the procedure the disk is
mounted read/write.

Fix Summary

Found in Version 4.0.0

Impact Level 1 - Critical

Issue Number 29151CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem It is unclear when it is safe to reset cluster integrity, and resetting it may have serious
consequences

Symptom It is not clear when it is safe to reset cluster integrity. Resetting the cluster integrity may be
required when, for example, a cluster has two disk failures, A and B, with stuck regenerations
(due to a circular dependency between fragments on the two disks). However, resetting cluster
integrity in unsafe situations may have serious consequences. Resetting cluster integrity must be
done with great care.

Fix Summary

Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
4.0.0

Impact Level 1 - Critical

Issue Number 17497CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem On nearly full clusters, it is not possible to delete a C-Clip because there is no room to write the
reflection.

Symptom When many embedded blobs are written to the same C-Clip, CentraStar may have an issue
parsing the C-Clip, which in an extreme case could cause the node to reboot.

Fix Summary Contact EMC Customer Support for assistance.

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Centera SDK

Issue Number 33933CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem During upgrade from CentraStar version 3.0.* to 4.0 the application may receive read errors

Symptom During an upgrade from CentraStar version 3.0.* to 4.0, the application may receive read errors
from the cluster. Retrying the read operation will normally succeed.

Fix Summary

Found in Version 4.0.0

Impact Level 1 - Critical

Self Healing

Issue Number 33668CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem No-write functionality may not always succeed in protecting the node from repetitive reboots

Symptom In rare and unique cases the disk no-write functionality may not succeed in protecting the node
from a repetitive DBinit and reboot cycle. Examples of such rare and unique cases are: an
upgrade from a version without the no-write functionality (versions before 2.4.3, 3.0.3, and 3.1.2)
and service actions that accidentally put large files in the blob partitions, suddenly causing a
very low space situation.

Fix Summary

Found in Version 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2,
3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 1 - Critical

Issue Number 33423CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Primary Fragment Migration statistic shows RUNNING while the task is paused

Symptom When the FeedOrganicSingleStep task (Primary Fragment Migration) is paused from the Task
List in Centera Viewer, the statistic OrganicManager.FeedOrganicSingleStep.running_status
shows RUNNING instead of PAUSED.

Fix Summary

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 31453CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem It looks as if EDM killed a disk while in fact it was FPhealthcheck.

Symptom When EDM takes more than 23.5 hours to complete, FPhealthcheck may kill EDM and report
this in the status or /var/log/platform.log file. It looks, however, as if EDM killed the disk.

Fix Summary Look in the status or /var/log/platform.log file to find out what killed the disk.

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 31300CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem EDM fails to skip a disk that dies while the system is running and will loop forever

Symptom EDM will fail to skip a disk that dies while the system is running and it is not detected or marked
dead. This will cause EDM to loop indefinitely.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 29983CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem EDM may not recognize a failed disk if smartctl information cannot be read

Symptom If a disk failure occurs and the smartctl data cannot be read, EDM will not realize that the disk is
bad and will not attempt a repair. This could lead to underprotected data and potential data
unavailability. There is a very small chance that this will happen.

Fix Summary

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 1 - Critical

Issue Number 29865CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem DB init and regenerations may get stuck and log files are filled with retry logging

Symptom CentraStar may switch the hard disk I/O mode from DMA to PIO when the disk experiences
transient or persistent errors. This results in a decreased hard disk performance and can impact
the overall performance of a busy cluster as much as 50%.

Fix Summary

Found in Version 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3,
3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 29000CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Organic Regeneration and BFR may not always detect a stealth corruption of CPP blobs in
certain situations

Symptom Organic Regeneration and Blobs For Review (BFR) may not always detect a stealth corruption of
CPP blobs in certain situations. Other self-healing functions will ultimately deal with them.

Fix Summary

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 1 - Critical

Issue Number 24368CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Continuous EDM repair activity slows down Garbage Collection

Symptom Due to subsequent EDM requests, a node may become inaccessible, preventing Garbage
Collection from progressing. This can be an issue for clusters that are nearly full and on which the
application is consistently issuing writes and deletes.

Fix Summary

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 1 - Critical

Issue Number 23602CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem A regeneration task on a node that restarts will wait until the regeneration timeout expires

Symptom If a node regeneration task is running on a node that is restarted, the task will wait until the
regeneration timeout expires before it restarts. This means that the node will wait longer to
regenerate than necessary.

Fix Summary

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 15801CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Regeneration buffer might be too small on heavily loaded clusters

Symptom The regeneration buffer is by default set to 2 disks per cube or 1 disk per mirrorgroup per cube. If
the cluster is heavily loaded, this could cause the cluster to run out of space when a node goes
down. As a workaround, set the regeneration buffer to 2 disks per mirrorgroup per cube. Use the
CLI command: set capacity regenerationbuffer to set the limit to 2 disks.

Fix Summary Set the regeneration buffer to 2 disks per mirrorgroup per cube.

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Server

Issue Number 33613CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Garbage Collection overall percentage complete shows less than 100%

Symptom A node that is added after Garbage Collection is started will not be part of the current Garbage
Collection run. It should report 100% complete for that added node but instead reports 0%
complete. This causes the overall percentage to not reach 100% complete. The next run of
Garbage Collection will correct the completion display.

Fix Summary

Found in Version 4.0.0

Impact Level 3 - Low

Issue Number 33311CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Reason for aborted GC run reports wrong scheduling mode

Symptom When a manually started GC run is aborted due to a non-uniform cluster version and the
auto-scheduling mode is enabled, the reason for the aborted run will report the auto-scheduling
mode instead of the manual mode.

Fix Summary

Found in Version 4.0.0

Impact Level 3 - Low

Issue Number 32983CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Disabling NTP may cause time stamp inconsistencies

Symptom Disabling NTP on a node to perform a service procedure may cause time stamp inconsistencies
between the C-Clip and the metadata. When a service procedure requires you to disable NTP,
make sure that Filepool is no longer running. When restarting Filepool after the service
procedure, make sure that NTP is also started.

Fix Summary

Found in Version 4.0.0

Impact Level 1 - Critical

Issue Number 32320CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Bouncing nodes because of database corruption

Symptom Nodes may go online and offline because of database corruption. To investigate this, refer to
Primus case emc165864.

Fix Summary

Found in Version 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 1 - Critical

Issue Number 31530CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem A delete of a C-Clip fails if its mirror copy is located on an offline node

Symptom An SDK delete fails when the mirror copy of a C-Clip resides on an offline node. The client will
receive error code -10156 (FP_TRANSACTION_FAILED_ERR) in this case.
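
A hedged sketch of one way to handle this on the application side: treat FP_TRANSACTION_FAILED_ERR as a retryable condition and retry the delete once the offline node is expected to be back. The numeric value comes from the symptom above; the delete call itself is supplied by the application, so no particular SDK signature is assumed.

    /* Sketch: retry a C-Clip delete that failed because a mirror copy was on an
       offline node. do_delete is whatever the application already uses to delete
       one C-Clip and must return the SDK error code (0 on success). */
    #define FP_TRANSACTION_FAILED_ERR (-10156)

    static int delete_with_retry(int (*do_delete)(const char *clipId),
                                 const char *clipId, int maxAttempts)
    {
        int err = FP_TRANSACTION_FAILED_ERR;
        int attempt;

        for (attempt = 0; attempt < maxAttempts; attempt++) {
            err = do_delete(clipId);
            if (err != FP_TRANSACTION_FAILED_ERR)
                break;      /* success, or an unrelated error: stop retrying */
            /* Mirror copy sits on an offline node: retry after it returns. */
        }
        return err;
    }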

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 31310CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Problems with the blob partitions can cause Filepool to hang at startup

Symptom In exceptional cases it can happen that during startup Filepool cannot open the blob index
databases and platform cannot restart Filepool. This results in a node on which Filepool does
not run.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 30520CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Garbage Collection aborts on fragments with incorrect name

Symptom Garbage Collection (GC) aborts when it encounters a fragment that has an incorrectly formatted
name. As long as the fragment remains on the system, GC cannot run. As a consequence, the
number of aborted runs will increase and no extra space will be reclaimed. EMC Service has to
investigate the failed runs and fix the problem. To view the GC status, use Centera Viewer >
Garbage Collection.

Fix Summary

Found in Version 4.0.0

Impact Level 2 - Medium

Issue Number 30027CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem A node may continuously reboot due to a bad disk which is not healed by EDM

Symptom A node may continuously reboot due to a bad disk which is not healed by EDM.

Fix Summary Primus procedure: either force EDM to run and/or bring the bad disk down for regeneration

Found in Version 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2,
3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 1 - Critical

Issue Number 26748CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem During Centera self-healing actions, suboptimal load balancing can cause degraded SDK
performance

Symptom In rare circumstances, Centera's load balancing mechanisms fail to spread data traffic evenly
and can cause temporary performance degradations. This may happen during periods of heavy
I/O load to Centera while data self-healing and internal database repair operations are taking
place.

Fix Summary

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 24931CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Possible false error message when removing access role and immediately adding it again

Symptom When removing the access role from a node and then immediately adding it back, in very rare
cases an error message may be displayed even though the role change was performed
successfully.

Fix Summary Verify the access role to be sure the change was performed. Normally the error message can be
ignored.

Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
4.0.0

Impact Level 3 - Low

Issue Number 24115CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Range tasks do not progress due to database corruption

Symptom In very exceptional cases a corrupted database may not be noticed and could cause range tasks
such as init, copy, or cleanup to loop without making any progress.

Fix Summary

Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
4.0.0

Impact Level 2 - Medium

Issue Number 23792CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Incorrect error code when C-Clip is unavailable due to corrupted CPP blob

Symptom The SDK may return an incorrect error code (-10036, FP_BLOBIDMISMATCH_ERR) when a
CPP blob is unavailable due to a corrupted fragment and a disk with another fragment that is
offline. The correct error code is -10014, FP_FILE_NOT_STORED_ERR.
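
A minimal sketch of how an application might compensate: when a read fails with -10036 in this scenario, treat it the same way as -10014 (content not currently retrievable) rather than assuming a genuine blob ID mismatch. The numeric values are those quoted in the symptom; real code would use the constants from the Centera SDK headers.

    /* Sketch: map both codes to the same application-level status for reads.
       Values come from the symptom text; the SDK headers define the constants. */
    #define FP_BLOBIDMISMATCH_ERR   (-10036)
    #define FP_FILE_NOT_STORED_ERR  (-10014)

    enum read_status { READ_OK, READ_CONTENT_UNAVAILABLE, READ_OTHER_ERROR };

    static enum read_status classify_read_error(int sdkError)
    {
        switch (sdkError) {
        case 0:
            return READ_OK;
        case FP_FILE_NOT_STORED_ERR:
        case FP_BLOBIDMISMATCH_ERR:  /* may be returned instead of -10014 here */
            return READ_CONTENT_UNAVAILABLE;
        default:
            return READ_OTHER_ERROR;
        }
    }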

Fix Summary

Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
4.0.0

Impact Level 3 - Low

Other

Issue Number 32984CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem No clear message when Filepool cannot start because of NTP problems

Symptom There is not enough logging to indicate that Filepool may have failed to start because of an NTP
problem. The best indicator for this problem is to look in the /var/log/fp-status file and check if
there is a message indicating that Filepool has started (such as "Restarted filepool agent") after
NTP sync logs (such as "Node is not time synced yet. Waiting").

Fix Summary

Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 31615CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem In a small CPP cluster (4 to 8 nodes), large file read performance may degrade under highly
threaded workloads

Symptom In a small CPP cluster (4 to 8 nodes), large file read performance may degrade under highly
threaded workloads.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 31446CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem There may be inconsistencies between parameters and node roles

Symptom There can be inconsistencies between nodeparams, localnodemanager.cml, and the parameters
in memory with respect to node roles. As a consequence, storage nodes, for example, may not
have a storage role and will not be selected to store data.

Fix Summary

Found in Version 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0,
3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 31431CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem In certain cases a node will fail to come up completely and instead hang in NetDetect

Symptom In certain cases a node will fail to come up completely and instead hang in NetDetect.

Fix Summary A reboot is required to fix it.

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 31394CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem The reverse lookup portion of EDM repairs on system partitions can cause random characters to
be written to the console.

Symptom The reverse lookup portion of EDM repairs on system partitions can cause random characters to
be written to the console.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 4 - Enhancement

Issue Number 31279CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Disk errors may cause Filepool to restart

Symptom If Filepool encounters a bad sector on a disk it will retry reading the disk. This may take some
time before Filepool gives up. This delay may cause lockups and/or timeouts, which in turn can
result in Filepool restarts or other undesired behavior.

Fix Summary

Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 28293CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Write performance degradation may occur for small files under very high loads

Symptom Write performance of small files (<200 KB) at extremely high thread counts (50 threads per
Access Node) can be up to 3% lower than using CentraStar v3.1.0 and 10% lower than using
CentraStar v3.1.1.

Fix Summary

Found in Version 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 22044CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem It is possible, due to capacity constraints, that the BLC overflows on a node.

Symptom It is possible, due to capacity constraints, that the BLC overflows on a node. This may result in
the node being unable to restart FilePool. To resolve the problem, capacity needs to be made
available. Please escalate to L3 for proper intervention.

Fix Summary

Found in Version 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 21204CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Blob deletion can be delayed and performance of Incremental Garbage Collection might be
reduced by up to a factor of 5 on systems with a very low object count (less than 5000 objects
per node).

Symptom Blob deletion can be delayed and performance of Incremental Garbage Collection will be
reduced on systems with very low object count (less than 5000 objects per node). This is due to
the interaction of Garbage Collection with OrganicOn.

Fix Summary

Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 19440CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Imbalanced capacity load across mirror groups may lead to unused free space

Symptom In multi-rack environments, it may happen that the used capacity on both mirror groups is
substantially different. If one of the mirror groups gets full, the free capacity left on the other
mirror group can no longer be used for writing new C-Clips/blobs. This problem is greater on
CPM clusters. It is caused by nodes failing on a full cluster of the multi-rack and being
regenerated on the second cluster of the rack (also known as Cross-HRG regeneration). If this is
a potential problem for your customer, add more nodes or call L3.

Fix Summary

Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 17790CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem When powering down a Gen3 node, the command may display the
SMBUS_INVALID_RESPONSE error

Symptom When powering down a Gen3 node, the command may display the
SMBUS_INVALID_RESPONSE error. The node does in fact power down to standby mode
despite the error message.

Fix Summary

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 17745CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Before adding any nodes to a cluster that has been downgraded, ensure that all nodes have in
fact been downgraded.

Symptom Before adding any nodes to a cluster that has been downgraded, ensure that all nodes have in
fact been downgraded.

Fix Summary

Found in Version 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2,
3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 12489CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem When a node fails during a sensor definition update, the sensor manager is inconsistent in its
reporting.

Symptom When a node fails during a sensor definition update, the sensor manager is inconsistent in its
reporting. It reports that the update failed, even when the update was successful. If you get a
message that the sensor definition update failed, perform a list sensors command to verify
whether the update was accepted. Retry if the update was not accepted. Also check that there
are no failed nodes on the cluster.

Fix Summary

Found in Version 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3,
3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Configuration

Issue Number 32980CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Importing a pool definition may fail

Symptom Importing a pool definition to a cluster in Basic mode that was exported from a cluster in GE or
CE+ mode will fail. Both clusters must run the same configuration to import a pool definition.
Furthermore, importing a pool definition to a cluster without the Advanced Retention
Management (ARM) feature that was exported from a cluster with the ARM feature will fail. Both
clusters must have the ARM feature.

Fix Summary

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Monitoring

Issue Number 32701CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Duplicate alerts with symptom code 4.1.1.1.02.01

Symptom Nodes going on- and offline may fire duplicate alerts with symptom code 4.1.1.1.02.01. These
are all instances of the same problem, which EMC Service has to follow up on.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 31644CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Removed startlog file still uses space in /var partition

Symptom Even if the startlog file has been removed from the /var partition, it still consumes 730 MB of
space. As a workaround, restart Filepool to complete the deletion of the file and reclaim its
space.

Fix Summary

Found in Version 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3,
3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 31294CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem No alerts are sent

Symptom In some cases all sensor definitions can be lost on the cluster. This means no alerts will be sent.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 28204CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem The audit log may have entries not related to an event

Symptom The audit log may contain entries such as 'Command {COMMAND} was executed ({result})'.
These messages do not relate to an actual event and can be ignored.

Fix Summary

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 27105CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem ConnectEMC email home messages are refused by a number of SMTP servers

Symptom ConnectEMC email home messages may be refused by SMTP servers due to lines longer than
1000 characters (RFC 2822). The long lines are caused by incompatible new lines (LF instead of
CRLF) in messages sent by ConnectEMC.

Fix Summary Change SMTP server configuration to accept email with lines longer than 1000 characters
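
For background, RFC 2822 requires each line to end in CRLF and stay under 1000 characters, so a message body that uses bare LF is seen by the SMTP server as one very long line. The sketch below is purely illustrative (it is not applied on the cluster itself) and shows the kind of line-ending rewrite a mail relay in front of such an SMTP server would perform.

    /* Illustration: rewrite bare LF line endings as CRLF while copying a
       message body, so each line is terminated the way RFC 2822 expects. */
    #include <stdio.h>

    static void write_with_crlf(FILE *out, const char *src)
    {
        size_t i;
        for (i = 0; src[i] != '\0'; i++) {
            if (src[i] == '\n' && (i == 0 || src[i - 1] != '\r'))
                fputs("\r\n", out);
            else
                fputc(src[i], out);
        }
    }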

Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
4.0.0

Impact Level 3 - Low

Issue Number 25955CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Domains disappear or show unexpected list of clusters

Symptom Centera domain names are case sensitive. Management and presentation of domain names
may cause confusion since CV, CLI, and Console are not consistently case sensitive.

Fix Summary When managing domains, always enter the domain names in the same case

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 4 - Enhancement

Issue Number 25259CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem CLI command set notification needed when setting the cluster domain

Symptom A cluster domain entered with the CLI command set cluster notification on CentraStar version
3.0.2 and below, is not saved when using Centera Viewer 3.1 or higher. Use the CLI command
set notification instead when setting the cluster domain.

Fix Summary

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 24332CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Recipients do not receive Health Reports and Alerts

Symptom If, in Health Reports and Alerts, more than one recipient is specified with spaces between the
email addresses, none of the recipients may receive Health Reports or Alerts.

Fix Summary Remove any spaces from the list of recipients and use only commas to separate them
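
As an illustration of the expected format, the small C helper below strips whitespace from a recipient list so that only commas separate the addresses; the addresses in the comment are hypothetical.

    /* Sketch: strip spaces from a recipient list so only commas separate entries,
       e.g. "a@example.com, b@example.com" becomes "a@example.com,b@example.com". */
    #include <ctype.h>

    static void normalize_recipients(char *list)
    {
        char *src = list;
        char *dst = list;

        while (*src != '\0') {
            if (!isspace((unsigned char)*src))
                *dst++ = *src;
            src++;
        }
        *dst = '\0';
    }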

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 22842CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Configuring a "reply to" address for ConnectEMC has no effect

Symptom Although a "reply to" address for ConnectEMC can be configured by EMC Service in CentraStar
2.4 and 3.0, this address is currently not set in the email message header. Since CentraStar 3.1
it is possible to change the "from" address which will be used as the reply address for emails
sent by ConnectEMC.

Fix Summary

Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
4.0.0

Impact Level 3 - Low

Issue Number 19224CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Nodes in Maintenance Mode will activate the node fault light

Symptom Nodes in Maintenance Mode will activate the node fault light. However, there is no fault and no
action is necessary.

Fix Summary Check whether the node is in Maintenance Mode by using the node list in Centera Viewer.

Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
4.0.0

Impact Level 3 - Low

Issue Number 15690CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem When an alert is sent about the CPU temperature, the principal access node responsible for
sending the alert identifies itself as the node with the problem.

Symptom When an alert is sent about the CPU temperature, the principal access node responsible for
sending the alert identifies itself as the node with the problem.

Fix Summary

Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 13717CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem When the eth2 fails on the pool service principal, no alert is sent.

Symptom When the eth2 fails on the pool service principal, no alert is sent.

Fix Summary

Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 12915CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem When the principal role of a node with the access role is taken over by another node, SNMP
might miss initial events

Symptom When the principal role of a node with the access role is taken over by another node, SNMP
might miss initial events. This means that certain alert traps might be missed and the health trap
severity level is possibly incorrect.

Fix Summary

Found in Version 1.2.0, 1.2.1, 1.2.2, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1,
2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0,
3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 12766CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Only update the sensor definitions on a stable cluster.

Symptom All cluster nodes keep a copy of the current sensor definitions. The cluster principal is
responsible for distributing any changes to a definition across the cluster. If the principal is
unavailable when updating a sensor's definition, distribution of the update may fail. The failure
may not always be correctly detected, resulting in an OK response to the update command
when it should not be OK. Only update the sensor definitions on a stable cluster.

Fix Summary

Found in Version 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1,
3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3,
3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 5072CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem There is no alert mechanism in place for uplink failures

Symptom There is no alert mechanism in place for uplink failures.

Fix Summary The CLI command show network detail shows uplink information, and the Health Report
contains a <port> entry such as: <portIdentification portNumber="49" type="uplink"
speedSetting="" status="Up"/>

Found in Version 1.2.0, 1.2.1, 1.2.2, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1,
2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0,
3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Support

Issue Number 31885CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem FRU replacement disk cannot be added to node

Symptom Occasionally, a FRU replacement disk cannot be added to the node because the disk has not
been formatted automatically. To resolve this issue, manually format the disk and re-insert it into
the node following the procedures.

Fix Summary

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 31577CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem The fpshell command fails with multiple NO RES errors

Symptom When the /var partition is full, the fpshell command will fail with multiple 666 [NO RES] errors
because it uses this partition for temporary storage. Investigate the /var partition usage and
clean out unnecessary files.

Fix Summary

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 23520CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Prior to replacing a disk in a node, the status of regenerations needs to be verified

Symptom Prior to replacing a disk in a node, the following needs to be verified: 1) Are regenerations
running for that disk? 2) Does the node on which a disk needs to be replaced have failed
regenerations? 3) Do other nodes have failed regenerations for the node on which a disk needs
to be replaced?

Fix Summary

Found in Version 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1,
3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2,
4.0.0

Impact Level 2 - Medium

Issue Number 18303CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem When a disk has been replaced incorrectly, you cannot insert the original disk back in the cluster.

Symptom When a disk has been replaced incorrectly (due to miswiring or another mistake), you cannot
insert the original disk back in the cluster. The spare that replaces the original disk has to be
removed as quickly as possible. This is because: a) as long as the spare is in place, the data on
pre-Gen4 hardware of the original disk will not be regenerated and, b) if the intent is to replace
the spare with the original disk again, new data written to the spare will need to be regenerated a
second time when the spare is removed. The sooner the spare is removed, the less data has to
be processed afterwards. Take out the spare and contact L3 for inserting the data from the spare
properly in the system and for replacing the original disk.

Fix Summary

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Pools

Issue Number 31292CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Query performance issues and inability to create new pools after adding new nodes

Symptom Adding new Gen4 nodes to a cluster that has pools enabled may cause the pool migration
status to no longer show finished, may cause queries to run slower, and may prevent the user
from creating new pools.

Fix Summary Re-run pool migration to reset the pool migration status

Found in Version 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 24093CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Not all C-Clips are mapped to the pool specified after pool migration

Symptom Pool migration does not take into account regeneration self-healing activity. In limited cases it
may happen that a C-Clip is not mapped to the appropriate pool when a regeneration
self-healing task runs during the pool migration. The C-Clip then remains in the default pool.

Fix Summary

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Protection

Issue Number 31006CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem A corrupt fragment might not get cleaned up, causing possible data unavailability (DU)

Symptom A corrupt fragment might not get cleaned up when it would be permissible to clean it up. This
may result in DU as long as the corrupt fragment persists. This will only happen if all of the
following conditions are true: the corrupt fragment has a redundant non-corrupt copy; another
fragment in the protection chain is not available (disk or node offline); BLC is gone; LocalPLQ or
PF is disabled; and ShallowMLQ is disabled or fails.

Fix Summary

Found in Version 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 21762CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem The primary aware migration task, which migrates old style CPP fragments to primary
fragments, will automatically be stopped when any BlobIndex volume has less than 100 MB of
free space.

Symptom The primary aware migration task, which migrates old style CPP fragments to primary
fragments, will automatically be stopped when any BlobIndex volume has less than 100 MB of
free space. In order to reclaim space, the blobindex database can be deleted because a dbinit is
expected to result in a smaller blobindex due to size optimizations in CentraStar 3.1.

Fix Summary

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Security

Issue Number 30762CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem When upgrading from CentraStar 3.1 to CentraStar 3.1.2 or higher, the anonymous profile may
be enabled again

Symptom When upgrading from a newly installed cluster running CentraStar 3.1 with anonymous disabled
to CentraStar 3.1.2 or higher, the anonymous profile may have been enabled during the
upgrade. This happens if the profile was never updated.

Fix Summary

Found in Version 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2,
3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium

Issue Number 23993CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Insufficient capabilities when set/unset litigation hold

Symptom With only the hold capability enabled, setting or unsetting a litigation hold on a C-Clip fails with
an insufficient capabilities error. Add the write capability to work around this issue.

Fix Summary

Found in Version 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1,
3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Compatibility

Issue Number 22751CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Profile C-Clips cannot be written to CentraStar version 3.1

Symptom Profile C-Clips cannot be written to CentraStar version 3.1 or higher with an SDK version older
than 3.1 if the maximum retention period for the cluster is set to anything other than 'infinite'.

Fix Summary Upgrade to 3.1 SDK

Found in Version 3.1.0, 3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Query

Issue Number 13501CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Read and query of deleted C-Clips may temporarily succeed

Symptom The delete of a C-Clip will result in the deletion of both copies of the CDF, and the creation of a
pair of reflections. After that, Incremental GC will delete the underlying blobs. If one or more
copies of the C-Clip's CDF are located on offline nodes at the time of the delete, Full GC will
remove these copies when the nodes come back online. During the Full GC process, read and
query of the deleted C-Clip may temporarily succeed. During the Incremental GC process, read
of the underlying blobs may also succeed.

Fix Summary Full GC will eventually remove these copies

Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Issue Number 7676CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Subsequent queries may return duplicate ClipIDs with different time stamps.

Symptom When writing duplicate C-Clips to the cluster (using backup/restore), a subsequent query may
return duplicate ClipIDs with different time stamps. The duplicates will eventually be cleaned up
by an organic background process; a sketch of a simple client-side workaround follows this entry.

Fix Summary

Found in Version 1.2.0, 1.2.1, 1.2.2, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1,
2.4.1, 2.4.1p2, 2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0,
3.1.0p1, 3.1.1, 3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low
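
The following is a minimal, hypothetical sketch of the client-side workaround mentioned above: it
collapses duplicate ClipIDs in a query result, keeping the most recent time stamp for each. The
(clip_id, timestamp) pairs are assumed to come from your own query-handling code; they are not
an SDK data structure.

    # Illustrative sketch only: collapse temporary duplicate ClipIDs from a
    # query result, keeping the most recent time stamp for each ClipID.
    # `results` is assumed to be an iterable of (clip_id, timestamp) pairs
    # produced by your own query-handling code.
    from typing import Dict, Iterable, Tuple

    def deduplicate_clips(results: Iterable[Tuple[str, float]]) -> Dict[str, float]:
        latest: Dict[str, float] = {}
        for clip_id, timestamp in results:
            if clip_id not in latest or timestamp > latest[clip_id]:
                latest[clip_id] = timestamp
        return latest

    # Example: the same ClipID reported twice with different time stamps.
    print(deduplicate_clips([("ABC123", 100.0), ("ABC123", 250.0), ("DEF456", 90.0)]))
    # -> {'ABC123': 250.0, 'DEF456': 90.0}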

Documentation

Issue Number 8184CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Access nodes may reboot when many connections are established at the same time

Symptom The rate at which new SDK clients connect to a cluster is limited to 5 per minute. Care should be
taken when multiple clients boot up and connect simultaneously. If an excessive number of
connections are established at the same time, the node with the access role may reboot.

Fix Summary Stagger your client start-up procedure so that clients do not establish too many connections
simultaneously (see the sketch after this entry)

Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 2 - Medium
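
The sketch below illustrates one way to implement the staggered start-up suggested above,
spacing new connections so that the 5-per-minute limit is respected. The connect_to_centera()
callable is a placeholder for whatever connection routine your application or SDK wrapper
provides; it is not part of the Centera SDK.

    # Hypothetical sketch: stagger client connections so that no more than
    # about 5 new connections per minute reach the cluster.
    # `connect_to_centera` is a placeholder for your own connection routine.
    import random
    import time

    MAX_NEW_CONNECTIONS_PER_MINUTE = 5

    def staggered_connect(clients, connect_to_centera):
        """Open a connection for each client while respecting the rate limit."""
        interval = 60.0 / MAX_NEW_CONNECTIONS_PER_MINUTE  # 12 seconds apart
        for client in clients:
            connect_to_centera(client)
            # Jitter keeps clients restarted at the same time from re-aligning.
            time.sleep(interval + random.uniform(0.0, 3.0))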

Hardware

Issue Number 6545CEN

Fix Number n.a.

Host OS Any OS

Host Type Any Host

Problem Disk failures reported through the CLI or Centera Viewer might not be shown on the front panel.

Symptom Disk failures reported through the CLI or Centera Viewer might not be shown on the front panel.

Fix Summary

Found in Version 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.2, 2.3.3, 2.4.0, 2.4.0p1, 2.4.1, 2.4.1p2,
2.4.2, 2.4.2p1, 2.4.3, 3.0.0, 3.0.0p1, 3.0.1, 3.0.1p1, 3.0.2, 3.0.2p1, 3.0.3, 3.1.0, 3.1.0p1, 3.1.1,
3.1.1p1, 3.1.2, 3.1.2p1, 3.1.2p2, 3.1.2p3, 3.1.3, 3.1.3p1, 3.1.3p2, 4.0.0

Impact Level 3 - Low

Environment and system requirements


Refer to the EMC Centera Quick Start Guide for a full listing of the
environment and system requirements.

Technical notes
The following table contains details of the currently shipping EMC
Centera Gen4 and Gen4LP hardware. Although other hardware
generations are supported, they are no longer shipped and so are not
presented here. For a list of all compatible EMC Centera hardware for
this release, go to E-Lab Navigator™ on the EMC Powerlink®
website.

Table 1 EMC Centera Gen 4/Gen 4LP hardware details

Storage space:
  500 GB drives (Gen4)
  • Raw Capacity: 8 to 32 TB per cube
  • CPM Usable: 3.7 to 14.8 TB per cube
  • CPP Usable: 12.7 to 25.4 TB per cube
  750 GB drives (Gen4LP)
  • Raw Capacity: 12 to 48 TB per cube
  • CPM Usable: 5.7 to 22.8 TB per cube
  • CPP Usable: 19.5 to 39.1 TB per cube
  1 TB drives (Gen4LP)
  • Raw Capacity: 16 to 64 TB per cube
  • CPM Usable: 7.7 to 30.8 TB per cube
  • CPP Usable: 26.4 to 52.8 TB per cube

Number of cubes: 1 to 8 per cluster

Number of nodes: 4 to 16 per cube (maximum of 128 per cluster)

Number of nodes with the access role: configurable, 2 to 16 per cluster (a)

Number of nodes with the management role: configurable, 2 to 16 per cluster (a)

Number of nodes with the replication role: configurable, 2 to 16 per cluster (a)
Number of disks per node: 4

Size of disk in node: 500 GB (Gen4), 750 GB (Gen4LP), or 1 TB (Gen4LP)

External modems (worldwide): MT5634ZBA-V92-EMC

(a) May not exceed half the number of nodes in a rack.
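
As a simple aid to reading the table above, the sketch below recomputes the raw-capacity ranges
per cube from the node count, disks per node, and drive size given in Table 1. The CPM and CPP
usable figures depend on protection overhead and are taken from the table rather than derived
here.

    # Raw capacity per cube = nodes per cube x disks per node x drive size.
    # Arithmetic aid only; usable (CPM/CPP) capacity is not derived here.
    DISKS_PER_NODE = 4
    DRIVE_SIZES_TB = {
        "Gen4 (500 GB)": 0.5,
        "Gen4LP (750 GB)": 0.75,
        "Gen4LP (1 TB)": 1.0,
    }

    def raw_capacity_tb(nodes_per_cube: int, drive_size_tb: float) -> float:
        """Raw capacity of one cube in TB."""
        return nodes_per_cube * DISKS_PER_NODE * drive_size_tb

    for label, size_tb in DRIVE_SIZES_TB.items():
        low = raw_capacity_tb(4, size_tb)    # minimum cube: 4 nodes
        high = raw_capacity_tb(16, size_tb)  # maximum cube: 16 nodes
        print(f"{label}: {low:g} to {high:g} TB raw per cube")
    # Output matches the table: 8 to 32 TB, 12 to 48 TB, and 16 to 64 TB.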

Documentation
Documentation for the EMC Centera, which can be downloaded from
the GS Web site, includes the following:
◆ Technical Manuals (EMC Centera Hardware, PointSystem Media
Converter, Utilities, EMC Centera API)
◆ Software Release Notices (CentraStar, Linux, IBM/zOS, Solaris,
Windows, HP-UX, AIX, IRIX)
◆ Customer Service Procedures

Software media, organization, and files


There is no information in this section.

Installation
For instructions on installing and setting up Centera tools, refer to the
Procedure Generator following the path: CS Procedures > Centera >
Information Sheet > Management and Control.

Note: You need administrator rights to install EMC Centera software on your
machine.

Troubleshooting and getting help


◆ Before contacting EMC Centera Technical Support, check the
connection between the application server and the EMC Centera
cluster by using CenteraVerify or CenteraPing (a basic reachability
sketch follows the contact information below).
◆ If a problem persists, contact EMC Centera Technical Support at:
United States: (800) 782-4362 (SVC-4EMC)
Canada: (800) 543-4782 (543-4SVC)
Worldwide: +1 (508) 497-7901

Follow the voice menu prompts to open a service call, then select
Centera Product Support.
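
If neither utility is available on the application server, a minimal sketch such as the one below
can serve as a first reachability check. It assumes the cluster accepts application traffic on TCP
port 3218 (the port commonly used for Centera SDK connections) and uses placeholder access
node addresses; verify both for your environment. This is only a coarse connectivity hint and is
not a substitute for CenteraVerify or CenteraPing.

    # Rough reachability check only -- not a replacement for CenteraVerify
    # or CenteraPing. Assumes SDK traffic uses TCP port 3218; verify for
    # your site before relying on the result.
    import socket

    def can_reach(access_node_ip: str, port: int = 3218, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to the access node succeeds."""
        try:
            with socket.create_connection((access_node_ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Replace with the IP addresses of your access nodes.
    for ip in ("10.0.0.1", "10.0.0.2"):
        print(ip, "reachable" if can_reach(ip) else "NOT reachable")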

Copyright © 2008 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its
publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC
CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF
ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this
publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation
Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Third-Party License Agreements


EMC Centera Software Development Kit

The EMC Software Development Kit (SDK) contains the intellectual property
of EMC Corporation or is licensed to EMC Corporation from third parties. Use
of this SDK and the intellectual property contained therein is expressly limited
to the terms and conditions of the License Agreement.

Use of Open Source Components

The EMC version of Linux, used as the operating system on the EMC Centera
server, uses open source components. The licenses for those components are
found in the Open Source Licenses text file, a copy of which can be found on
the EMC Centera Customer CD.

SKINLF

This product includes software developed by L2FProd.com
(http://www.L2FProd.com/).

Bouncy Castle

The Bouncy Castle Crypto package is Copyright © 2000 of The Legion Of The
Bouncy Castle (http://www.bouncycastle.org).

RSA Data Security

Copyright © 1991-2, RSA Data Security, Inc. Created 1991. All rights reserved.

License to copy and use this software is granted provided that it is identified
as the "RSA Data Security, Inc. MD5 Message-Digest Algorithm" in all
material mentioning or referencing this software or this function. RSA Data
Security, Inc. makes no representations concerning either the merchantability
of this software or the suitability of this software for any particular purpose.
It is provided "as is" without express or implied warranty of any kind.

These notices must be retained in any copies of any part of this documentation
and/or software.

ICU License (IBM International Components for Unicode library)

Copyright (c) 1995-2002 International Business Machines Corporation and
others. All rights reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, and/or sell copies of the
Software, and to permit persons to whom the Software is furnished to do so,
provided that the above copyright notice(s) and this permission notice appear
in all copies of the Software and that both the above copyright notice(s) and
this permission notice appear in supporting documentation.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO
EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN
THIS NOTICE BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT
OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER
RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,
ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.

ReiserFS

ReiserFS is hereby licensed under the GNU General Public License version 2.

Source code files that contain the phrase "licensing governed by
reiserfs/README" are "governed files" throughout this file. Governed files
are licensed under the GPL. The portions of them owned by Hans Reiser, or
authorized to be licensed by him, have been in the past, and likely will be in
the future, licensed to other parties under other licenses. If you add your code
to governed files, and don't want it to be owned by Hans Reiser, put your
copyright label on that code so the poor blight and his customers can keep
things straight. All portions of governed files not labeled otherwise are owned
by Hans Reiser, and by adding your code to it, widely distributing it to others
or sending us a patch, and leaving the sentence in stating that licensing is
governed by the statement in this file, you accept this. It will be a kindness if
you identify whether Hans Reiser is allowed to license code labeled as owned
by you on your behalf other than under the GPL, because he wants to know if
it is okay to do so and put a check in the mail to you (for non-trivial
improvements) when he makes his next sale. He makes no guarantees as to
the amount if any, though he feels motivated to motivate contributors, and
you can surely discuss this with him before or after contributing. You have the
right to decline to allow him to license your code contribution other than
under the GPL.

Further licensing options are available for commercial and/or other interests
directly from Hans Reiser: hans@reiser.to. If you interpret the GPL as not
allowing those additional licensing options, you read it wrongly, and Richard
Stallman agrees with me, when carefully read you can see that those
restrictions on additional terms do not apply to the owner of the copyright,
and my interpretation of this shall govern for this license.

Finally, nothing in this license shall be interpreted to allow you to fail to fairly
credit me, or to remove my credits, without my permission, unless you are an
end user not redistributing to others. If you have doubts about how to
properly do that, or about what is fair, ask. (Last I spoke with him Richard was
contemplating how best to address the fair crediting issue in the next GPL
version.)

MIT XML Parser

MIT XML Parser software is included. This software includes Copyright (c)
2002,2003, Stefan Haustein, Oberhausen, Rhld., Germany

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions: The above copyright notice and this
permission notice shall be included in all copies or substantial portions of the
Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
