Performance Tuning Documentum Web Based Applications
Created by: Subir Rastogi (subir.rastogi@wipro.com), EMC Proven Associate
Created on: 9-Oct-2008
Introduction
I have been performance testing a custom application built on WDK/Webtop (Documentum
5.3). I used JMeter to simulate load (virtual users) on the application and observe how it behaves
under load in the staging environment. Below is the high-level approach I followed:
1. Load test the application with 1 user, 50 users, and 100 users.
2. Analyze the components which are performing slowly.
3. Suggest new approaches or changes in those components to improve performance,
such as DQL tuning, simplifying custom logic, etc.
4. Analyze the application server, database and Content Server settings and suggest the
appropriate settings that could increase the performance of the application.
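JMeter drove the actual load tests, but the core idea of step 1 — N concurrent virtual users each issuing requests while response times are recorded — can be sketched in plain Java. The request body here is a stand-in (a short sleep), not a real HTTP call against Webtop:

```java
import java.util.*;
import java.util.concurrent.*;

public class LoadDriver {
    // Time one "user request" in milliseconds. In the real test, JMeter
    // issues an HTTP request against the Webtop application instead.
    static long timedRequest(Callable<Void> request) throws Exception {
        long start = System.nanoTime();
        request.call();
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Run `users` concurrent virtual users, each issuing `iterations`
    // requests, and return every observed response time in ms.
    static List<Long> runLoad(int users, int iterations, Callable<Void> request)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<List<Long>>> futures = new ArrayList<>();
        for (int u = 0; u < users; u++) {
            futures.add(pool.submit(() -> {
                List<Long> times = new ArrayList<>();
                for (int i = 0; i < iterations; i++) {
                    times.add(timedRequest(request));
                }
                return times;
            }));
        }
        List<Long> all = new ArrayList<>();
        for (Future<List<Long>> f : futures) {
            all.addAll(f.get());
        }
        pool.shutdown();
        return all;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in workload: sleep 5 ms instead of a real request.
        List<Long> times = runLoad(10, 5, () -> { Thread.sleep(5); return null; });
        System.out.println("samples=" + times.size());
    }
}
```

From the collected samples one can then compute averages and percentiles per user level, which is what the analysis in steps 2–4 is based on.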
Below are the details of the environment on which I was performance testing:

Application Server: WebLogic 8.1 SP6 with Sun JVM 1.4.2, on a Solaris box
Documentum Products: Webtop, DFC, BPM, Application Builder and Content Server 5.3 SP5
Database: Oracle 9.2
Problem
Once the problematic areas were identified, I started consulting experienced WebLogic
administrators, Documentum administrators and database administrators to decide the
appropriate values of the settings needed to resolve the problems. After a few discussions, it was
clear that there are no fixed rules for most of these settings, such as thread count, heap size and
the database process count; they are very specific to the application. I did receive the settings
recommended by Documentum, with a warning that they might not be ideal for a custom
application. So, to decide the ideal settings, I adopted the following approach: change one
setting, load test, and repeat until we reach the best response time and the optimal value.
This turned out to be a time-consuming process, as I had to load test multiple times for just one
setting. For example, shown below is the graph used to find the optimal value for the thread count
on the application server. To find the ideal setting, I had to perform at least six tests.
[Graph: Response Time vs. Thread Count, for thread counts from 0 to 120]
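The sweep behind that graph can be expressed as a simple search: for each candidate thread count, run a full load test, record the mean response time, and keep the candidate with the minimum. In this sketch the load test is replaced by a synthetic response-time function (an assumption for illustration; the real measurement is a JMeter run per candidate):

```java
import java.util.function.IntToDoubleFunction;

public class ThreadCountSweep {
    // For each candidate thread count, run one "load test" (modelled here
    // by `meanResponseMs`) and return the candidate with the lowest mean
    // response time. In practice each call is a full JMeter run.
    static int optimalThreadCount(int[] candidates, IntToDoubleFunction meanResponseMs) {
        int best = candidates[0];
        double bestTime = Double.MAX_VALUE;
        for (int c : candidates) {
            double t = meanResponseMs.applyAsDouble(c);
            System.out.printf("threads=%d meanResponse=%.1f ms%n", c, t);
            if (t < bestTime) {
                bestTime = t;
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Synthetic U-shaped curve: too few threads queue requests, too many
        // thrash. The minimum at 60 is a made-up value, not measured data.
        int[] candidates = {20, 40, 60, 80, 100, 120};
        int best = optimalThreadCount(candidates, c -> Math.abs(c - 60) * 0.2 + 10.0);
        System.out.println("optimal=" + best);
    }
}
```

Six candidates means six full load tests, which is exactly why tuning each setting this way is so expensive.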
Even though this was time consuming, we were sure of the output: it would give us the ideal
settings for the problem areas.
But while doing all this, I had one more doubt. I was doing this exercise on the staging
environment, and there is a significant difference in hardware capabilities between the staging
and production environments: production has more CPUs and more memory. So these are the
ideal settings for the staging environment, not for production, and production might behave
differently with them. Since the production environment cannot be load tested, it was decided to
apply these changes to production and have the system administrators and database
administrators monitor it.
Even though we did not receive any complaints from business users, I was very curious to know
how the production environment was behaving with these settings under actual load.
Have these settings actually increased the performance of the application in production, or is
there still room to tune the application further?
How is the system behaving under peak load?
It is possible to get this data by contacting the respective administrators, but that is a
time-consuming and error-prone process, as it is done manually.
Solution
We need a system which monitors itself automatically, captures all the relevant statistical data
about significant events happening within the system, and helps us decide, validate and analyze
the problems encountered.
As described in the Oracle 10g documentation: "The Automatic Database Diagnostic Monitor
(ADDM) analyzes the data saved in the Automatic Workload Repository (AWR, described
later) on a regular basis, then locates the root causes of performance problems, provides
recommendations for correcting any problems, and identifies non-problem areas of the system.
Because AWR is a repository of historical performance data, ADDM can be used to analyze
performance issues after the event, often saving time and resources reproducing a problem."
ADDM locates problems using a metric called "DB Time". Areas which consume a significant
portion of DB Time are reported as problem areas. Oracle defines DB Time as the cumulative
time spent by the database server in processing user requests; it includes the wait time and CPU
time of all non-idle user sessions.
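The DB Time definition above can be illustrated numerically: sum CPU time plus wait time over all sessions, excluding sessions whose only activity is an idle wait. The `Session` type here is a hypothetical stand-in for the per-session statistics Oracle maintains:

```java
import java.util.*;

public class DbTime {
    // Hypothetical per-session totals; stands in for Oracle's session stats.
    record Session(double cpuSec, double waitSec, boolean idle) {}

    // DB Time = cumulative CPU time + wait time over all non-idle user
    // sessions, per the Oracle definition quoted above.
    static double dbTimeSeconds(List<Session> sessions) {
        return sessions.stream()
                .filter(s -> !s.idle())
                .mapToDouble(s -> s.cpuSec() + s.waitSec())
                .sum();
    }

    public static void main(String[] args) {
        List<Session> sessions = List.of(
                new Session(2.0, 3.0, false),  // active: contributes 5.0 s
                new Session(1.0, 1.5, false),  // active: contributes 2.5 s
                new Session(0.0, 9.0, true));  // idle wait: excluded
        System.out.println("DB Time = " + dbTimeSeconds(sessions) + " s");
    }
}
```

A component that accounts for a large share of this total is exactly what ADDM flags as a problem area.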
In particular, ADDM detects and reports problem areas such as:
CPU bottlenecks - Is the system CPU bound by Oracle or some other application?
Undersized Memory Structures - Are the Oracle memory structures, such as the SGA,
PGA, and buffer cache, adequately sized?
I/O capacity issues - Is the I/O subsystem performing as expected?
High load SQL statements - Are there any SQL statements which are consuming
excessive system resources?
High load PL/SQL execution and compilation, as well as high load Java usage
RAC specific issues - What are the global cache hot blocks and objects; are there any
interconnect latency issues?
Sub-optimal use of Oracle by the application - Are there problems with poor connection
management, excessive parsing, or application level lock contention?
Database configuration issues - Is there evidence of incorrect sizing of log files, archiving
issues, excessive checkpoints, or sub-optimal parameter settings?
Concurrency issues - Are there buffer busy problems?
Hot objects and top SQL for various problem areas
Short-lived performance problems
Degradation of database performance over time
1. CPU Utilization
3. Disk I/O
4. Instance Throughput
1. Running the SQL Tuning Advisor on high-load SQL statements, or running the Segment
Advisor on hot objects. Documentum provides a description of how to use it (please refer
to the Resources section).
2. Automatic Workload Repository (AWR)
Database statistics provide information about the type of load on the database and the internal
and external resources used by the database. To accurately diagnose performance problems
with the database using ADDM, statistics must be available. Oracle Database generates many
types of cumulative statistics for the system, sessions, and individual SQL statements. Oracle
Database also tracks cumulative statistics about segments and services. The Automatic
Workload Repository (AWR) automates database statistics gathering by collecting, processing,
and maintaining performance statistics for database problem detection and self-tuning purposes.
The database statistics collected and processed by AWR include:
Time model statistics are statistics that measure the time spent in the database by operation type.
Wait events are statistics that are incremented by a session to indicate that it had to wait for an
event to complete before being able to continue processing. Wait event data reveals various
symptoms of problems that might be impacting performance, such as latch contention, buffer
contention, and I/O contention.
A large number of cumulative database statistics are available on a system and session level.
The Active Session History (ASH) statistics are samples of session activity in the database.
Active sessions are sampled every second, and are stored in a circular buffer in the system
global area (SGA). Any session that is connected to the database and using CPU, or is waiting
for an event that does not belong to the idle wait class, is considered an active session. By
capturing only active sessions, a manageable set of data is represented with the size being
directly related to the work being performed, rather than the number of sessions allowed on the
system.
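The ASH mechanism described above — once-per-second samples of active sessions kept in a fixed-size circular buffer — can be sketched as a small data structure. The `Sample` type is a hypothetical simplification of what Oracle actually stores in the SGA:

```java
import java.util.*;

public class AshBuffer {
    // One sample: the ids of sessions that were active (on CPU or in a
    // non-idle wait) at the sampling instant. A stand-in for ASH rows.
    static class Sample {
        final long timestamp;
        final List<Integer> activeSessionIds;
        Sample(long ts, List<Integer> ids) { timestamp = ts; activeSessionIds = ids; }
    }

    private final Sample[] ring;
    private int next = 0;
    private int count = 0;

    AshBuffer(int capacity) { ring = new Sample[capacity]; }

    // Store a sample, overwriting the oldest once the buffer is full --
    // the "circular buffer in the SGA" behaviour described above.
    void record(Sample s) {
        ring[next] = s;
        next = (next + 1) % ring.length;
        count = Math.min(count + 1, ring.length);
    }

    int size() { return count; }

    public static void main(String[] args) {
        AshBuffer buf = new AshBuffer(3);
        for (int t = 0; t < 5; t++) {          // five 1-second "ticks"
            buf.record(new Sample(t, List.of(1, 2)));
        }
        System.out.println("retained=" + buf.size()); // capped at capacity
    }
}
```

Because only active sessions are sampled, the buffer grows with the work actually being done, not with the number of connected sessions, which is what keeps the data volume manageable.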
SQL statements that are consuming the most resources produce the highest load on the system,
based on criteria such as elapsed time and CPU time.
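Surfacing the highest-load statements from such statistics amounts to aggregating a load metric per statement and sorting in descending order. A minimal sketch, with hypothetical statement texts and elapsed-time figures:

```java
import java.util.*;
import java.util.stream.*;

public class TopSql {
    // Return SQL texts ordered by cumulative elapsed time, highest first --
    // the same criterion AWR/ADDM use to surface "high load SQL".
    static List<String> topByElapsed(Map<String, Double> elapsedSecBySql, int n) {
        return elapsedSecBySql.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical statements and elapsed times, for illustration only.
        Map<String, Double> stats = Map.of(
                "SELECT ... FROM dm_sysobject_s ...", 120.0,
                "SELECT ... FROM dm_acl_s ...", 45.0,
                "UPDATE dm_audittrail_s ...", 300.0);
        System.out.println(topByElapsed(stats, 2));
    }
}
```

The statements at the top of this ranking are the natural candidates for the SQL Tuning Advisor mentioned earlier.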
The JMX specification defines the architecture, design patterns, APIs, and services for
application and network management and monitoring in the Java programming language.
JMX technology provides flexible means to instrument Java code, create smart Java agents,
implement distributed management middleware and managers, and smoothly integrate these
solutions into existing management and monitoring systems.
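Instrumenting application code with JMX typically means exposing a standard MBean. A minimal sketch (the `RequestStats` bean and its `example` domain name are hypothetical, not part of any Documentum or WebLogic API):

```java
import java.lang.management.ManagementFactory;
import javax.management.*;

public class JmxExample {
    // Standard MBean contract: the interface must be named <Impl>MBean.
    public interface RequestStatsMBean {
        long getRequestCount();
        void reset();
    }

    public static class RequestStats implements RequestStatsMBean {
        private long count;
        public synchronized void increment() { count++; }
        public synchronized long getRequestCount() { return count; }
        public synchronized void reset() { count = 0; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("example:type=RequestStats"); // hypothetical domain
        RequestStats stats = new RequestStats();
        server.registerMBean(stats, name);

        stats.increment();
        stats.increment();
        // A JMX console such as JConsole reads the same attribute remotely.
        Object value = server.getAttribute(name, "RequestCount");
        System.out.println("RequestCount=" + value);
    }
}
```

Once registered, the bean's attributes show up in any JMX-compliant management console, which is what "smoothly integrate into existing management systems" means in practice.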
J2SE 5.0 implements version 1.2 of the JMX specification and includes significant
monitoring and management features, including:
1. JVM instrumentation: The JVM is instrumented for monitoring and management providing
built-in, out-of-the-box management capabilities for local and remote access.
3. Management tools such as JConsole, which is a JMX-compliant monitoring tool that comes
with J2SE 5.0. It uses JMX instrumentation of the JVM to provide information on performance
and resource consumption of applications running on the Java platform.
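The JVM instrumentation JConsole displays can also be read in-process through the `java.lang.management` API, for example heap usage, live thread count and uptime:

```java
import java.lang.management.*;

public class JvmStats {
    public static void main(String[] args) {
        // The same built-in JVM instrumentation JConsole attaches to,
        // accessed directly via the platform MXBeans.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();

        long heapUsed = mem.getHeapMemoryUsage().getUsed();
        int liveThreads = threads.getThreadCount();
        long uptimeMs = runtime.getUptime();

        System.out.println("heapUsedBytes=" + heapUsed);
        System.out.println("liveThreads=" + liveThreads);
        System.out.println("uptimeMs=" + uptimeMs);
    }
}
```

Polling these values periodically gives the application-server side the same kind of self-monitoring that AWR/ADDM provide on the database side.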
Conclusion
Performance tuning Documentum 5.3 applications lacked statistical data on how the changed
settings were working in the production environment. With the new Documentum 6.0, which
requires Oracle 10g and J2SE 5.0, we will have plenty of statistical data to compare and to
conclude whether the changed settings have really increased the performance of the system,
and the system itself will provide recommendations if it can be tuned further.
Resources
1. A nice article on using JMeter with WDK applications:
http://ecmarchitect.com/wp-images/Load_testing_Documentum_WDK/Load_testing_Documentum_WDK.pdf
2. Oracle Database 10g Release 2 documentation:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/toc.htm
3. FAQ_Perf_Tuning_With_Oracle_Tuning_Advisor.pdf, a document which gives you details
on using the SQL Tuning Advisor.
4. http://java.sun.com/developer/technicalArticles/J2SE/jmx.html