Detailed Performance Test Plan Example
Revision History
Date Modified | Version | Author | Modifications/Revisions
If there are any questions or issues regarding this document, please contact:
1. Introduction
2. Objectives & Scope
Objectives
The following points summarise the objectives of the performance testing process.
In Scope
1. A subset of Online functionality including automation of Account View functionality.
2. Re-evaluation of volumetrics.
3. As the Americas instance has limited data, a data build is required.
4. Simulation of users at several remote locations (Network latency testing).
5. An online performance test during the replication process of Extract, Transform & Load (ETL).
Out of Scope
1. Overnight batch and house-keeping except where ETL is involved.
2. Disaster Recovery Testing.
3. Application network profiling, and Database Analytics.
4. While single sign-on will be utilised, performance of this component is out of scope.
5. The European instance of IA and Cereas and simulation of the European workload.
6. Simulation of users using the Windows client.
7. Opportunity Locks, as management of users would be too complicated given the short timescales.
3. Approach
Due to the short timescales available for performance testing, it is proposed that, where possible, the following steps be undertaken to maximise the time available to the performance test team.
1. A functional tester carries out a smoke test on the performance test environment as
soon as implementation of release 5 has completed.
2. Support from the development team is available to investigate any issues as soon as they are identified, whether during the build or execution phase of performance testing.
3. A brief summary of issues is delivered by Friday 18th March, with a full report delivered by Wednesday 23rd March.
CRM consists of InterAction, which is an integration solution to interface with other EY systems. This is hosted on a pair of web servers and application servers.
USS & USS are virtual machines with the same specification as production. Each
server has 4 cores & 8GB of memory. These servers support only the Americas
instance of the application.
There are several SQL server databases and database servers within the application.
There are 17 existing automated test scripts from previous rounds of performance testing. Due to the very short timescales, not all of the automation will be included in release 2.2 performance testing: 11 existing scripts will be used and 1 new script will be built. The following scripts have been removed from Release 2.2 performance testing. Please note that although the project requested BP07 be included, it has been left out of scope due to a functionality issue.
BP10: Opportunity to three GFIS Engagements, no DUNS number
Some automation exists to simulate keying activities of Windows client users (as opposed to web client users). Windows client users represent around 5% of all users. Due to short timescales, this automation will not be used during release 2.2 performance testing.
A summary of the automation to be included is:
BP01
BP03
BP04
BP05
BP06
BP09
BP11
BP12
BP13
BP16
BP17
Volumetric Information
For release 2.1, detailed volumetric information was gathered and is assumed to be valid for
release 2.2. This has been reviewed and is summarised below.
Stats for the busiest day, Monday 18th January 2010, showed 1,923 visits resulting in 116,000 page displays. This works out at an average of 60.3 page displays per visit. On this basis, 3,800 visitors in the peak hour would view 229,140 page displays.
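As a sanity check, the volumetric figures above can be reproduced with a short calculation:

```python
# Volumetric check: figures are those quoted in the release 2.1 stats above.
busiest_day_visits = 1_923
busiest_day_page_displays = 116_000

pages_per_visit = busiest_day_page_displays / busiest_day_visits
print(round(pages_per_visit, 1))  # 60.3 page displays per visit

# Projected peak-hour load for release 2.2.
peak_hour_visitors = 3_800
peak_hour_page_displays = peak_hour_visitors * 60.3
print(round(peak_hour_page_displays))  # 229140 page displays in the peak hour
```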
All performance tests will include some users with the following injected network latency to
simulate global locations. Locations will be updated based on Client requirements:
0 ms Near Datacenter (Frankfurt/Secaucus)
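Only the 0 ms row of the latency table survives above, so the locations and round-trip times in the sketch below are illustrative assumptions, not the plan's actual table. It shows why latency injection matters: under a simple model where each page costs a handful of HTTP round trips, a fixed server time is inflated noticeably for remote users.

```python
# Illustrative model of injected network latency. The location names and
# RTT values are assumptions; only the 0 ms row appears in the plan.
injected_latency_ms = {
    "Near datacenter": 0,
    "US West Coast": 70,
    "Europe": 90,
    "Asia Pacific": 250,
}

def page_response_ms(server_time_ms: float, rtt_ms: float, round_trips: int = 5) -> float:
    """Simple model: total response = server time + one RTT per HTTP round trip."""
    return server_time_ms + round_trips * rtt_ms

# A page that takes 1 second at the server grows with injected latency.
for location, rtt in injected_latency_ms.items():
    print(f"{location}: {page_response_ms(1000, rtt):.0f} ms")
```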
Data
The development team have previously provided 2,000 user accounts for Release 2.1 performance testing. It is anticipated that these users are still available and have the correct level of access to the Americas instance of the application. These user accounts are required to be mapped to specific users to assure access to the proper data elements.
Before test execution begins, a data build will need to take place. This is required for users
that will be used with Account View functionality. Before an account can be seen, the user
must add an opportunity for an existing account. Once ETL has taken place, the user in
Account View will be enabled to see not only the Account and the associated Opportunity,
but also all Opportunities associated with that existing account from the European data set.
This will include running the following automation:
It is understood that the Americas database instance has limited data. The data build will go some way to increasing the small number of records available at present. Database backups and restores will not be used between test execution cycles, as it is beneficial to generate as much data as possible.
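The visibility rule behind this data build (a user adds an opportunity for an existing account, and only once ETL has run can that user see the account together with all of its opportunities, including the European ones) can be sketched as a small in-memory model. The class and data names are hypothetical and are not the CRM application's API:

```python
# Illustrative model of the Account View visibility rule described above.
# Class and data names are hypothetical, not the CRM application's API.
class AccountViewModel:
    def __init__(self):
        self.opportunities = {}   # account -> list of opportunity names
        self.user_links = {}      # user -> accounts they added an opportunity to
        self.etl_complete = False

    def add_opportunity(self, user, account, opportunity):
        self.opportunities.setdefault(account, []).append(opportunity)
        self.user_links.setdefault(user, set()).add(account)

    def run_etl(self):
        # The ETL replication stage makes cross-instance data visible.
        self.etl_complete = True

    def visible_opportunities(self, user, account):
        # Before ETL, nothing is visible; afterwards the user sees every
        # opportunity on any account they added an opportunity to.
        if not self.etl_complete or account not in self.user_links.get(user, set()):
            return []
        return list(self.opportunities.get(account, []))

model = AccountViewModel()
model.opportunities["ACME"] = ["EU-Opp-1"]            # existing European opportunity
model.add_opportunity("user001", "ACME", "US-Opp-1")  # the data build step
print(model.visible_opportunities("user001", "ACME"))  # [] until ETL runs
model.run_etl()
print(model.visible_opportunities("user001", "ACME"))  # ['EU-Opp-1', 'US-Opp-1']
```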
The reporting automation BP17 needs to be updated. Currently it uses a pre-determined set
of filters to generate reports. This set of filters has been increased.
A data extract is required to generate a list of valid data for the Americas. Many of the
automated functions use search functionality near the start of the automation. The values
searched on are currently available for European data, but not for data from the Americas.
Obtaining this list is a pre-requisite for performance test preparations to begin.
ETL (Extract, Transform & Load)
This is a background process that occurs at the end of the online processing day for an IA Cereas instance. Any updates to accounts and opportunities are collected on the staging server and processed in advance of being loaded into the reporting databases of the other instances. Up to now, only Europe has been in operation, so this process has always occurred outside of normal business hours. With the Americas coming online with release 2.2, the replication stage of the ETL process will occur during the Americas online day. The timing of replication is not directly controllable and can vary from day to day depending on the amount of data ETL has to process.
Test Execution
1. Full Peak Load. All users will be ramped up over a period of time until peak workload
is achieved. Full peak workload will be maintained for at least 30 minutes before the
test is ramped down.
2. Endurance test with ETL running data replication. This is an attempt to measure the
impact of replication on online users as well as measure any performance
degradation over time.
3. Full Peak Load Stress test to find the workload at which response times degrade to an unacceptable level.
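The ramp-up and hold pattern in scenario 1 can be sketched as a simple schedule calculation. The 1,250 users figure comes from the deliverables section; the ramp rate of 50 users per minute is an assumption for illustration, as the plan does not fix one:

```python
# Illustrative ramp-up schedule for the Full Peak Load scenario.
# 1,250 users is from the deliverables; the 50 users/min ramp rate is assumed.
def ramp_schedule(peak_users: int, users_per_minute: int, hold_minutes: int = 30):
    """Return (minute, active_users) pairs: ramp up, then hold at peak."""
    ramp_minutes = -(-peak_users // users_per_minute)  # ceiling division
    schedule = []
    for minute in range(ramp_minutes):
        schedule.append((minute, min((minute + 1) * users_per_minute, peak_users)))
    for minute in range(ramp_minutes, ramp_minutes + hold_minutes):
        schedule.append((minute, peak_users))  # hold full peak for >= 30 minutes
    return schedule

sched = ramp_schedule(peak_users=1250, users_per_minute=50)
print(len(sched))            # 25 ramp minutes + 30 hold minutes = 55 entries
print(sched[0], sched[-1])   # (0, 50) (54, 1250)
```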
Monitoring
Due to short timescales, BEST/1 will be used to monitor the servers and the database. Access to the servers will also be requested so that PERFMON counters can be collected with the performance test tool, Rational Performance Tester.
Analysis of the database will take place to determine whether there are any inefficient queries and, where possible, to recommend tuning changes to those queries.
Reporting
Due to short timescales, a brief summary of progress will be issued at the end of each day. A brief report will be issued at close of business on Friday 18th March, summarising known problems to date whose resolutions will be built into build 6.
A full report will be issued on Wednesday 23rd March, focusing on the key run of each of the three performance test scenarios executed.
Batch
4. Project Roles
The following roles are defined for this project.
5. Deliverables
The development team will deliver the following:
1. Build 5 delivered into the performance test environment that resolves the functional issues identified in build 4 as blocking the redevelopment of performance test scripts.
2. Extraction of data from the Americas instance that can be used with the performance test automation; this should include account information.
3. At least 1,250 users to work against IA and Cereas, and 50 registered users that will be used for the Account View automation. It is understood that 2,000 users are currently available for performance testing.
4. Assistance with generation of valid search criteria for use with the 15 new filters for the report functionality.
5. Development team support with functional and performance issues detected during performance test build and execution.
The performance test team will deliver the following:
1. 10 updated and 1 new performance test automation scripts, and execution of those scripts.
2. Working with the functional test team to understand valid criteria for the 15 new report filters to be used with the report functionality as required.
3. Test results including test tool statistics, such as response times, as well as system statistics such as CPU, memory and disk I/O utilisation.
6. Project Timelines & Estimate
Key Dates
Hours
Project Management = 28
- Strategy document
- Planning Meetings
Data Build = 12
- Populate Account View users with account information
- Increase the amount of data present in the environment
Total = 192
The above estimate includes:
Project management and Strategy Development
Meetings and Conferences
Scripting and Execution
Network Analysis
Infrastructure and Application Capacity Analysis
Documentation
7. PROJECT RISKS
Report functionality is heavily dependent on the search criteria used. If the criteria are too generic, measured performance could be worse than reality; if the criteria are too specific, measured performance could be better than reality.
New report filters need to be understood with regard to the typical range of values that
would be used by users generating reports.
There are a number of functional issues present in the performance test environment after
build 4 was deployed. If these are not resolved with the deployment of build 5, some
automated test scripts may have to be de-scoped.
The SQL query used in previous performance test cycles has gone missing. This query needs
to be found or rebuilt from scratch before performance testing activities begin.
The two test harnesses are not currently working and developer time is required to fix them. There is a risk of delay if this is not dealt with promptly.
Administrator access to the database and the database servers is required before database
analysis can begin. While this has been requested, it may take some time to be actioned.
8. SIGN-OFF SHEET
We, the undersigned, agree to the scope of this Estimate for the CRM Performance testing.
We authorise PT to proceed with this project.
Approved:
_________________
Customer Name Date
Appendix A Architecture Diagram
The architecture diagram was received from James Zabinsky on the 7th March.
C:\Users\Derek\Desktop\Ernst & Young\CRM\Global CRM BuePrint Prod_PerfStress_Americas Deployment Architecture v4_5.pdf
Appendix B
The use cases for release 2.2 performance testing are based on release 2.1 keying steps
except for BP18_Account_View which is new automation for this release.
C:\Users\Derek\Desktop\Ernst & Young\CRM\CRM 2.2 Use Cases.xls
Account View Keying Steps. Users must have accounts associated with them. This will be set by running a data build. Volumetrics are currently not known.
Appendix C
The new report filters are as follows. This list was received from Nikila Cherian on the 8th
March.
The following are the additional filters.
Please note - All these are drop downs (list filter type) and not free form text.
There are additional filters that accept data as free form text; I am just confirming with business if those could be excluded from performance testing. Will keep you posted.
Opportunity Roles: Cannot find this field.
Legal Entity: Use All.
Service Line Program: Already in use.
Source of the Opportunity: Other.
Opportunity Sub-Area: Already in use.
Country (Mexico, USA): Use all.
Business Unit: Use all.
Global Service Code: Use all.
Management Unit: Already in use.
Operating Unit
Sub-Management Unit
Status
GFIS Engagement Status
GTAC Status
On Hold
Outcome Reason
Engagement or Opportunity
Is this opportunity associated with Cleantech
Delays
Most of Wednesday 9th was lost due to teething issues with the environment.
Two scripts (BP07 & BP16) have been added.
The data extract has not yet produced workable data. The data build is dependent on a valid data extract being produced. Development of BP17 & BP18 is dependent on the data build.
1. Full Peak Load. All users will be ramped up over a period of time until peak workload is achieved. Full peak workload will be maintained for at least 30 minutes before the test is ramped down. We have two days scheduled for this and should be finished by end of Thursday 17th.
2. Endurance test with ETL running data replication. This is an attempt to measure the impact of replication on online users as well as measure any performance degradation over time. We have two days scheduled for this and should be finished by end of Monday 21st.
3. Full Peak Load Stress test to find the workload at which response times degrade to an unacceptable level. We have one day scheduled for this and should be finished by Tuesday 22nd.