

Revision History

Date Modified   Version   Author   Modifications/Revisions
03/07/2011      0.1                Document created
03/08/2011      0.2                Amendments based on internal review
03/09/2011      0.3                Minor updates to contacts
03/11/2011      0.4                Several changes based on meetings and new information coming to light

How to Request Changes to this Document

If there are any questions or issues regarding this document, please contact:

Name   Performance Team Lead   phone
Name   Capacity Team Lead      phone

IT Services – Performance Testing Project X

1. Introduction

2. Objectives & Scope


Objectives
The following points summarise the objectives of the performance testing process.

1. Determine if the application can support the anticipated peak workloads.


2. Identify bottlenecks within the application and, where possible, work with the project to resolve those issues.
3. Find the breaking point of the application in terms of workload.

In Scope
1. A subset of Online functionality, including automation of Account View functionality.
2. Re-evaluation of volumetrics.
3. The Americas instance has limited data, so a data build is required.
4. Simulation of users at several remote locations (network latency testing).
5. An online performance test during the replication stage of the Extract, Transform & Load (ETL) process.

Out of Scope
1. Overnight batch and house-keeping except where ETL is involved.
2. Disaster Recovery Testing.
3. Application network profiling and database analytics.
4. While single sign-on will be utilised, performance of this component is out of scope.
5. The European instance of IA and Cereas and simulation of the European workload.
6. Simulation of users using the Windows client.
7. Opportunity Locks, as management of users would be too complicated given the short timescales.

3. Approach
Due to the short timescales available for performance testing, it is proposed that, where possible, the following steps are undertaken to maximise the amount of time available to the performance test team.

1. A functional tester carries out a smoke test on the performance test environment as soon as implementation of build 5 has completed.

2. Support from the development team is available to investigate any issues as soon as they are identified, whether during the build or execution phase of performance testing.

3. A brief summary of issues is delivered by Friday 18th March with a full report delivered by Wednesday 23rd March.

Performance Test Environment


CRM consists of InterAction, which is an integration solution to interface with other EY systems such as …. It is hosted on a pair of web servers and application servers.

USS & USS are virtual machines with the same specification as production. Each server has 4 cores & 8GB of
memory. These servers support only the Americas instance of the application.

There are several SQL Server databases and database servers within the application.

Automated Test Scripts


There are 17 existing automated test scripts from previous rounds of performance testing. Due to the very short timescales, not all of the automation will be included in release 2.2 performance testing: 11 existing scripts will be used and 1 new script will be built. The following scripts have been removed from release 2.2 performance testing. Please note that although the project requested BP07 be included, we have left this automation out of scope due to a functionality issue.

Process   Approximate number of users   Iterations per hour   Total Page Displays per hour

BP02   Contacts: add new contact, new company, remove
BP07   Marketing: add contact, remove contact
BP08   Marketing: maintain contacts on Marketing Lists (Event Lists)
BP10   Opportunity to three GFIS Engagement, no DUNS number
BP14   Search Windows client for "activity"; reports then exported
BP15   Search Windows client for "Job titles & Company"; reports then exported

Some automation exists to simulate keying activities of Windows client users (as opposed to web client users). Windows client users represent around 5% of all users. Due to short timescales, this automation will not be used during release 2.2 performance testing.

A summary of the automation to be included is:

Process   Approximate number of users   Iterations per hour   Total Page Displays per hour
users per hour

BP01

BP03

BP04

BP05

BP06

BP09

BP11

BP12

BP13

BP16

BP17

BP18   Account View – the new script to view accounts and opportunities

Volumetric Information
For release 2.1, detailed volumetric information was gathered and is assumed to be valid for release 2.2. This has been
reviewed and is summarised below.

Stats for the busiest day, Monday 18th January 2010, showed 1,923 visits resulting in 116,000 page displays. This works out at an average of 60.3 page displays per visit. On this basis, 3,800 visitors in the peak hour would generate 229,140 page displays.
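As a quick cross-check of the volumetric arithmetic above, a minimal Python sketch using the figures from this section:

    # Release 2.1 volumetric statistics (busiest day: Monday 18th January 2010).
    visits = 1923
    page_displays = 116000
    pages_per_visit = page_displays / visits          # ~60.3
    print(f"Average page displays per visit: {pages_per_visit:.1f}")

    # Projection for the peak hour.
    peak_hour_visitors = 3800
    print(f"Peak-hour page displays: {peak_hour_visitors * 60.3:,.0f}")  # 229,140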

Simulation of Users at Remote Locations


All performance tests will include some users with the following injected network latency to simulate global locations; a sketch of how these tiers might be held as configuration follows the list. Locations will be updated based on client requirements:

0 ms – Near Datacenter (Frankfurt/Secaucus)

100 ms – Regional Users (Europe/North America)

200 ms – Distant Users (Middle East/Central America)

300 ms – Far Users (South Africa/South America)

400 ms – To be confirmed whether this tier is required.
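A minimal sketch of how the latency tiers above might be represented when configuring the load injectors; the helper and the even spread of virtual users across tiers are illustrative assumptions, not confirmed requirements:

    # Latency tiers (milliseconds) and the locations they simulate.
    LATENCY_TIERS_MS = {
        0: "Near datacenter (Frankfurt/Secaucus)",
        100: "Regional users (Europe/North America)",
        200: "Distant users (Middle East/Central America)",
        300: "Far users (South Africa/South America)",
        # 400 ms tier: to be confirmed.
    }

    def latency_for_user(user_index: int) -> int:
        """Hypothetical helper: spread virtual users evenly across the tiers."""
        tiers = sorted(LATENCY_TIERS_MS)
        return tiers[user_index % len(tiers)]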

Data
The development team has previously provided 2000 user accounts for release 2.1 performance testing. It is anticipated that these users are still available and have the correct level of access to the Americas instance of the application. These user accounts need to be mapped to specific users to ensure access to the proper data elements.

Before test execution begins, a data build will need to take place. This is required for the users that will exercise Account View functionality. Before an account can be seen, the user must add an opportunity for an existing account. Once ETL has taken place, the Account View user will be able to see not only the account and the associated opportunity, but also all opportunities associated with that existing account from the European data set. This will include running the following automation: …
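While the exact automation is still to be confirmed (per the ellipsis above), a minimal sketch of what the data build loop could look like; the user naming and the add_opportunity helper are hypothetical placeholders for the real accounts and recorded automation:

    # 50 registered Account View users are expected per the Deliverables section.
    ACCOUNT_VIEW_USERS = [f"perfuser{n:03d}" for n in range(1, 51)]

    def add_opportunity(user: str, account: str) -> None:
        """Placeholder for the automation step that adds an opportunity."""
        print(f"{user}: opportunity added for {account}")

    def run_data_build(existing_accounts: list[str]) -> None:
        # Each user adds an opportunity against an existing account so that,
        # once ETL has run, the account becomes visible in Account View.
        for user, account in zip(ACCOUNT_VIEW_USERS, existing_accounts):
            add_opportunity(user, account)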

It is understood that the Americas database instance has limited data. The data build will go some way towards increasing the small number of records available at present. Database backups and restores will not be used between test execution cycles, as it is beneficial to generate as much data as possible.

The reporting automation BP17 needs to be updated. Currently it uses a pre-determined set of filters to generate reports.
This set of filters has been increased.

A data extract is required to generate a list of valid data for the Americas. Many of the automated functions use search
functionality near the start of the automation. The values searched on are currently available for European data, but not
for data from the Americas. Obtaining this list is a pre-requisite for performance test preparations to begin.

ETL – Extract, Transform & Load
This is a background process that occurs at the end of the online processing day for an IA – Cereas instance. Any updates to accounts and opportunities are collected in the staging server (to be confirmed) and processed in advance of being loaded into the reporting database of the other instances. Until now, only Europe has been in operation, so this process has always occurred outside of normal business hours. With the Americas coming online in release 2.2, the replication stage of the ETL process will occur during the Americas online day. The timing of replication is not directly controllable and can vary from day to day depending on the amount of data ETL has to process.

Test Execution
Three scenarios will be executed.

1. Full Peak Load. All users will be ramped up over a period of time until peak workload is achieved. Full peak workload will be maintained for at least 30 minutes before the test is ramped down (a sketch of the ramp-up arithmetic follows this list).

2. Endurance test with ETL running data replication. This is an attempt to measure the impact of replication on online users as well as measure any performance degradation over time.

3. Full Peak Load Stress test to find the workload at which response times degrade to an unacceptable level.
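A minimal sketch of the ramp-up arithmetic for scenario 1; the 1250-user figure comes from the Deliverables section, while the 30-minute ramp duration is an assumption for illustration (the strategy commits only to at least 30 minutes at full peak):

    total_users = 1250      # minimum users available (see Deliverables)
    ramp_minutes = 30       # assumed ramp duration, not confirmed
    steady_minutes = 30     # at least 30 minutes at full peak load

    users_per_minute = total_users / ramp_minutes
    print(f"Ramp up ~{users_per_minute:.0f} users/minute for {ramp_minutes} minutes,")
    print(f"then hold {total_users} users for at least {steady_minutes} minutes.")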

Monitoring
Due to short timescales, BEST/1 will be used to monitor servers and the database. Access to servers will be requested so
that PERFMON counters can be collected with the performance test tool, Rational Performance Tester.
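Should any counters need to be captured outside Rational Performance Tester, a minimal sketch using the standard Windows typeperf utility via Python; the counter list and sampling interval are assumptions:

    import subprocess

    # Assumed PERFMON counters of interest on each server.
    counters = [
        r"\Processor(_Total)\% Processor Time",
        r"\Memory\Available MBytes",
        r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    ]

    # Sample every 15 seconds, 240 samples (one hour), written to CSV.
    subprocess.run(
        ["typeperf", *counters, "-si", "15", "-sc", "240", "-o", "perf.csv"],
        check=True,
    )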

Access to the following servers has now been provided.

The database will be monitored with the use of BEST/1. Analysis will take place to determine if there are any inefficient queries and, where possible, to recommend tuning changes to those queries.

Access to the following databases has now been provided.

Reporting
Due to short timescales, a brief summary of progress will be issued at the end of each day. A brief report will be issued at close of business on Friday 18th March. This will summarise known problems to date whose fixes will need to be built into build 6.

A full report will be issued on Wednesday 23rd March which will focus on the key run of each of the three performance test
scenarios executed.

Batch
While performance testing is not specifically measuring or monitoring batch processes, a number of regularly occurring batch jobs will be running in the background. These may have an impact on resource utilisation and response times of the online workload.

4. Project Roles
The following roles are defined for this project.

5. Deliverables
The development team will deliver the following;

1. Build 5 delivered into the performance test environment that resolves the functional issues identified in build 4 as
blocking the redevelopment of performance test scripts.

2. Extraction of data from the Americas instance that can be used with the performance test automation; this should include account information.

3. At least 1250 users to work against IA and Cereas and 50 registered users that will be used for the Account View
automation. It is understood that currently 2000 users are available for performance testing.

4. Provide working versions of the GTAC & GFIS test harnesses.

5. Assist with generation of valid search criteria for use with the 15 new filters for report functionality.

6. If possible, user IDs and passwords for servers in the performance test environment to enable PERFMON counters to be collected for performance analysis.

7. Support with the ETL process.

8. Development team support with functional and performance issues detected during performance test build and
execution.

9. Updates to project delivery dates

10. Zero to many tuning changes.

The performance test team will deliver the following;

1. 10 updated and 1 new performance test automation scripts, and execution of those scripts.

2. A build of data against the Americas instance of CRM.

3. Work with the functional test team to understand valid criteria for the 15 new report filters to be used with the
report functionality as required.

4. Test results including test tool statistics such as response times as well as system statistics such as CPU, memory,
disk I/O utilisation.

5. Daily update of progress.

6. A brief overview of performance results by Friday 18th March.

7. Final performance testing results delivered by Wednesday 23rd March.

8. Zero to many tuning changes.

6. Project Timelines & Estimate


Key Dates
March 8th – Build 5 delivered into the performance testing environment.
March 9th – Performance test preparations begin
March 18th – Performance test execution completes and a brief report is issued.
March 23rd – Final report delivered.

Hours
Project Management = 28
Strategy document
Planning Meetings

Performance Test Build = 68


8 performance test scripts re-recorded and verified
Report script (BP17) re-recorded
Account view
Data preparation, report filter definition, workload creation and scenario build

Data Build = 12
Populate Account View users with account information
Increase the amount of data present in the environment

Performance Test execution = 48


Shakedown testing
Peak Load scenario
Endurance test with replication
Stress test – 8 hours

Performance Test Analysis and Report = 36 hours


Analysis Peak Load test & Report – 8 hours
Analyse Endurance test & Report – 8 hours
Analyse Stress test & Report – 8 hours
Database Analysis – 12 hours

Total = 192
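As a quick arithmetic cross-check of the phase estimates above, a minimal Python sketch (figures taken directly from this section):

    # Phase estimates from the hours breakdown above.
    hours = {
        "Project Management": 28,
        "Performance Test Build": 68,
        "Data Build": 12,
        "Performance Test Execution": 48,
        "Performance Test Analysis and Report": 36,
    }
    assert sum(hours.values()) == 192  # matches the stated total
    print(f"Total estimate: {sum(hours.values())} hours")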
The above estimate includes:
Project management and Strategy Development
Meetings and Conferences
Scripting and Execution
Network Analysis
Infrastructure and Application Capacity Analysis
Documentation

7. Project Risks
Report functionality is heavily dependent on the search criteria used. If these criteria are too generic, measured performance could be worse than reality; if they are too specific, measured performance could be better than reality.

New report filters need to be understood with regard to the typical range of values that would be used by users
generating reports.

There are a number of functional issues present in the performance test environment after build 4 was deployed. If these
are not resolved with the deployment of build 5, some automated test scripts may have to be de-scoped.

The SQL query used in previous performance test cycles has gone missing. This query needs to be found or rebuilt from
scratch before performance testing activities begin.

The two test harnesses are not currently working and developer time is required to fix them. There is a risk of delay if this is not dealt with promptly.

Administrator access to the database and the database servers is required before database analysis can begin. While this
has been requested, it may take some time to be actioned.

8. Sign-Off Sheet

We, the undersigned, agree to the scope of this Estimate for the CRM Performance testing. We authorise PT to proceed
with this project.

Approved:

_________________
Customer Name Date

Appendix A Architecture Diagram
The architecture diagram was received from James Zabinsky on the 7th March.

C:\Users\Derek\Desktop\Ernst & Young\CRM\Global CRM BuePrint Prod_PerfStress_Americas Deployment Architecture v4_5.pdf

Appendix B
The use cases for release 2.2 performance testing are based on release 2.1 keying steps except for BP18_Account_View
which is new automation for this release.

C:\Users\Derek\Desktop\Ernst & Young\CRM\CRM 2.2 Use Cases.xls

Account View – Keying Steps. Users must have accounts associated with them; this will be set by running a data build. Volumetrics are currently not known. A sketch of the keying loop follows the steps below.

1. Navigate to the InterAction home page.
2. Select My Accounts.
3. Randomly select an account from the list of accounts displayed.
4. Randomly select an opportunity stage (Identify, Qualify etc).
5. Randomly select an opportunity stage and a pipeline summary value.
6. Repeat steps 4 & 5 three times.
7. Select an opportunity for the account.
8. Open an interaction (opens in a new window).
9. Close the interaction window.
10. Return to accounts.
11. Return to the InterAction home page.
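A minimal sketch of how these keying steps might translate into a scripted loop; the page() helper and the stage list are hypothetical stand-ins for the recorded page requests in the actual automation:

    import random

    def page(action: str) -> None:
        # Hypothetical stand-in for a recorded page request / page display.
        print(f"[page display] {action}")

    STAGES = ["Identify", "Qualify"]  # "etc" in the source; full list to be confirmed

    def account_view_iteration(accounts: list[str]) -> None:
        page("Navigate to the InterAction home page")          # step 1
        page("Select My Accounts")                             # step 2
        account = random.choice(accounts)                      # step 3
        page(f"Open account {account}")
        for _ in range(3):                                     # steps 4-6
            page(f"Select opportunity stage {random.choice(STAGES)}")
            page("Select stage and pipeline summary value")    # step 5
        page("Select an opportunity for the account")          # step 7
        page("Open an interaction (new window)")               # step 8
        page("Close interaction window")                       # step 9
        page("Return to accounts")                             # step 10
        page("Return to InterAction home page")                # step 11

    account_view_iteration(["Example Account A", "Example Account B"])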

Appendix C
The new report filters are as follows; this list was received from Nikila Cherian on the 8th March. Please note that all of these are drop-downs (list filter type) and not free-form text. There are additional filters that accept data as free-form text; it is being confirmed with the business whether those can be excluded from performance testing. A sketch mapping these filters to script selections follows the list.

Opportunity Roles – cannot find this field.
Legal Entity – use all.
Service Line Program – already in use.
Source of the Opportunity – Other.
Opportunity Sub-Area – already in use.
Country (Mexico, USA) – use all.
Business Unit – use all.
Global Service Code – use all.
Management Unit – already in use.

No selection has been noted yet for the remaining filters: Operating Unit, Sub-Management Unit, Status, GFIS Engagement Status, GTAC Status, On Hold, Outcome Reason, Engagement or Opportunity, and "Is this opportunity associated with Cleantech".
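A minimal sketch of how the confirmed selections above might be held as configuration for the updated BP17 report script; the structure and strategy labels are assumptions:

    # Assumed mapping of new report filters to the drop-down selection strategy
    # ("all" = any value may be selected from the drop-down).
    NEW_REPORT_FILTERS = {
        "Legal Entity": "all",
        "Service Line Program": "already in use",
        "Source of the Opportunity": "Other",
        "Opportunity Sub-Area": "already in use",
        "Country": "all",  # includes Mexico and USA
        "Business Unit": "all",
        "Global Service Code": "all",
        "Management Unit": "already in use",
        # Remaining filters: selections to be confirmed.
    }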

Release 2.1 Filters used during Performance testing


Managerial Countries – Management Unit
Sub Areas – Opportunity sub area
Service Lines
Sub Service Lines
Industry sectors
Stages
Outcomes
Channels
Account types

Delays

Most of Wednesday 9th was lost due to teething issues with the environment.
Two scripts (BP07 & BP16) have been added.
The data extract has not yet produced workable data. The data build is dependent on a valid data extract being produced, and development of BP17 & BP18 is dependent on the data build.

Possible plan moving forward:

1. Begin the data build on Monday morning, completing by midday on Tuesday. Organise a database backup.
2. Once the data build has started, we should be able to begin scripting of BP17 & BP18, to be complete by midday Tuesday.
3. Begin test execution on Wednesday. We have 3 tests to complete.

1. Full Peak Load. All users will be ramped up over a period of time until peak workload is achieved. Full peak workload will be maintained for at least 30 minutes before the test is ramped down. We have two days scheduled for this and should be finished by end of Thursday 17th.

2. Endurance test with ETL running data replication. This is an attempt to measure the impact of replication on online users as well as measure any performance degradation over time. We have two days scheduled for this and should be finished by end of Monday 21st.

3. Full Peak Load Stress test to find the workload at which response times degrade to an unacceptable level. We have one day scheduled for this and should be finished by Tuesday 22nd.

