LoadRunner Basics


Contents

Session 1  Introduction to performance testing

Session 2
  Components of LoadRunner
  Types of performance testing
  Common performance problems

Session 3
  Performance testing process
    1 Non-functional requirements analysis
    2 Test plan and design
    3 Test development
    4 Test execution
    5 Test results analysis
    6 Test reporting
  Performance parameters monitored

Session 4
  Risks addressed through performance testing
  Summary matrix of risks addressed by performance testing types
  Speed-related risks
  Speed-related risk-mitigation strategies
  Scalability-related risks
  Scalability-related risk-mitigation strategies

Session 5
  Think time, rendezvous points and comments
  Comments

Session 6
  Run logic

Session 7  Parameterization
Session 8  Functions in LoadRunner
Session 9  Correlation
Session 10  Controller
Session 11  Analysis
Session 12  Sample script and scripting process
Session 13
  Case Study 1 - TESCO
Session 1

What is performance testing?

✔ A process by which software is tested to determine the current system performance
✔ The process of simulating and analyzing the effect of many users on an application
✔ Analyzes the effect of the real-world user environment on an application
✔ A way of measuring the performance of your website/application
✔ Emulates user activity
✔ The most effective way to gauge a Web site’s capacity

POPULAR PERFORMANCE TEST TOOLS


Commercial tools
✔ HP’s LoadRunner / Performance Center
✔ Micro Focus (Segue/Borland’s) Silk Performer
✔ IBM Rational Performance Tester
✔ Microsoft’s Visual Studio Team System
Open Source Tools
✔ Cyrano’s OpenSTA
✔ Apache JMeter

Why Performance Testing

1. Speed
Does the application respond quickly enough for the intended users?
2. Scalability / Capacity
Will the application handle the expected user load and beyond?
3. Stability / Robustness
Is the application stable under expected and unexpected user loads?
4. Confidence
Are you sure the users will have a positive experience on go-live day?

Performance testing is done to provide stakeholders with information about their
application regarding speed, stability and scalability. More importantly, performance
testing uncovers what needs to be improved before the product goes to market. Without
performance testing, software is likely to suffer from issues such as running slowly while
several users use it simultaneously, inconsistencies across different operating systems, and
poor usability. Performance testing determines whether the software meets speed,
scalability and stability requirements under expected workloads. Applications sent
to market with poor performance metrics due to non-existent or poor performance testing
are likely to gain a bad reputation and fail to meet expected sales goals.

When Performance Testing


The performance testing process has to be
✔ Performed from the development stage through the pre-production stage
✔ Performed after the application has undergone functional testing, to avoid functional
errors during load testing
ADVANTAGES
✔ Helps to detect hidden bottlenecks or performance problems
✔ Helps to predict how the site will function in the real world
✔ Measures the end user response time for each transaction and user load level
✔ Increases customer satisfaction and retention
✔ Avoids project failures by predicting site behavior under larger user loads

TERMINOLOGY
Virtual Users:
Software process that simulates real user’s interactions with the Application Under Test
(AUT)
Navigation Flow
A user function within the Application Under Test
Scenario
A set of navigation flows defined for a set of virtual users to execute
Think Time
Time taken by the user between page clicks
Ramp-up
Gradual increase of Vusers during Controller execution
Throughput
It is the amount of work that a computer can do in a given time period
Transaction
A subsection of the measured workflow
Bottleneck
A load point at which the System Under Test (SUT / AUT) suffers significant
degradation
Breakpoint
A load point at which the SUT / AUT suffers degradation to the point of malfunction

Scalability
The relative ability of the AUT / SUT to produce consistent measurements regardless of
the size of the workload
Response Time
The time elapsed between when a request is made and when that request is fulfilled

Session 2

HP LOAD RUNNER
HP LoadRunner is a software testing tool from Hewlett-Packard. It is used to
test applications, measuring system behavior and performance under load. HP acquired
LoadRunner as part of its acquisition of Mercury Interactive in November 2006.
HP LoadRunner can simulate thousands of users concurrently using application software,
recording and later analyzing the performance of key components of the application.
LoadRunner simulates user activity by generating messages between application
components or by simulating interactions with the user interface, such as keypresses or
mouse movements. The messages/interactions to be generated are stored in scripts.
LoadRunner can generate the scripts by recording them, for example by logging the HTTP
requests between a client web browser and an application's web server.
The key components of HP LoadRunner are:

● VuGen (Virtual User Generator): the first component you interact with when getting
started with performance testing in HP LoadRunner. Its purpose is to generate and edit
the Vuser scripts that simulate a real-like virtual user.
● Controller: the program that “controls” the overall load test. It runs your performance
test design using the VuGen scripts that have already been created, lets you override
run-time settings, enable or disable think time and rendezvous points, add load
generators and control the number of users each generator can simulate. It controls,
launches and sequences instances of the Load Generator, specifying which script to use,
for how long, etc., automatically creates a dump of execution results and gives you a live
view of the “current state” of the running load test. During runs the Controller receives
real-time monitoring data and displays status.
● Load Generator: generates the load against the application by running the scripts.
● Agent process: manages the connection between the Controller and the Load Generator
instances.
● Analysis: assembles logs from the various load generators and formats reports for
visualization of run-result data and monitoring data. It is the program used to perform
detailed analysis of the performance tests that have been carried out.

The diagram below shows the LoadRunner architecture.


Types of performance testing
●  Load testing - checks the application's ability to perform under anticipated user loads.
The objective is to identify performance bottlenecks before the software application
goes live.
● Stress testing - involves testing an application under extreme workloads to see how it
handles high traffic or data processing. The objective is to identify the breaking point of
an application.
● Endurance testing - is done to make sure the software can handle the expected load
over a long period of time.
● Spike testing - tests the software's reaction to sudden large spikes in the load generated
by users.
● Volume testing - Under volume testing, a large amount of data is populated in the
database and the overall software system's behavior is monitored. The objective is to
check the software application's performance under varying database volumes.
● Scalability testing - The objective of scalability testing is to determine the software
application's effectiveness in "scaling up" to support an increase in user load. It helps
plan capacity addition to your software system.

Common Performance Problems


Most performance problems revolve around speed, response time, load time and poor
scalability. Speed is often one of the most important attributes of an application. A slow
running application will lose potential users. Performance testing is done to make sure an
app runs fast enough to keep a user's attention and interest. Take a look at the following
list of common performance problems and notice how speed is a common factor in many of
them:

● Long Load time - Load time is normally the initial time it takes an application to start.
This should generally be kept to a minimum. While some applications are impossible to
make load in under a minute, Load time should be kept under a few seconds if possible.
● Poor response time - Response time is the time it takes from when a user inputs data
into the application until the application outputs a response to that input. Generally this
should be very quick. Again if a user has to wait too long, they lose interest.
● Poor scalability - A software product suffers from poor scalability when it cannot
handle the expected number of users or when it does not accommodate a wide enough
range of users. Load testing should be done to be certain the application can handle the
anticipated number of users.
● Bottlenecking - Bottlenecks are obstructions in a system which degrade overall system
performance. Bottlenecking is when either coding errors or hardware issues cause a
decrease of throughput under certain loads. Bottlenecking is often caused by one faulty
section of code. The key to fixing a bottlenecking issue is to find the section of code that
is causing the slowdown and try to fix it there. Bottlenecking is generally fixed either by
fixing poorly running processes or by adding additional hardware. Some common
performance bottlenecks are
o CPU utilization
o Memory utilization
o Network utilization
o Operating System limitations
o Disk usage

Session 3
Performance Testing Process
The methodology adopted for performance testing can vary widely but the objective for
performance tests remain the same. It can help demonstrate that your software system
meets certain pre-defined performance criteria. Or it can help compare performance of two
software systems. It can also help identify parts of your software system which degrade its
performance.

Below is a generic performance testing process


PERFORMANCE TESTING PROCESS IN LOADRUNNER

1. Planning the Test


2. Creating Vuser Scripts
3. Creating the Scenario
4. Running the Scenario
5. Monitoring the Scenario
6. Analysis of the Test Results & Reporting
1 Non-functional requirements analysis
This is the initial phase of performance testing. It involves gathering information and
analysis:
• What is the main purpose of the load test on the application?
1. Migrating to newer version or new software/hardware, so need to check
performance after migration on pre-production environment.
2. Measure the responsiveness of the application (response times)
3. Tune the application to get the expected responsiveness.
• Understand the high-level architecture of the application.
• Identify the business flow to be tested and their percentage load distribution.
• Know the expected concurrent and peak load on the application.
• Know the navigation flows of the identified use cases to conduct load test.

2 Test plan and design:


Performance test plan and design includes following activities
• Application overview
• Performance test goals and objectives
• Performance test approach (which tests will be conducted)
• Identify in scope and out of scope
• Testing procedure (entry and exit criteria, transaction traversal details, test
data requirements, schedule and work load criteria)
• Test environment (tool, test environment setup)

3 Test development
Before performance testing can begin, the test environment software and hardware
should be ready and the scripts should be developed. In this phase, scripts are developed
as per the business processes of the application, and the scripts are enhanced to recreate
a real user environment through user actions.

4 Test execution:
This is the phase where test executions occur as specified in the plan for meeting the
SLA. Here is the sequence of steps that we follow.
• Decide the test (Load/stress/endurance) and number of users.
• Ramp up duration and ramp down pattern.
• Work load profile, upload scripts and run time settings.
• Apply the details and set monitors to measure the metrics.
• Start the test and perform online monitoring.

5 Test results analysis:

• During this phase, test results and logs are analysed to identify bottlenecks
and performance issues. Results are compared with the SLAs for all types of
metrics to verify whether the results meet the SLAs or not.
• If the results do not meet the SLAs, raise performance defects, analyse what is
causing the problem, and report it.
• A preliminary test report is prepared and shared across the team and higher
management. For the reported bottlenecks/issues, the engineering team acts,
tunes the application and re-engineers it wherever required.

6 Test reporting
For the analysed results, an executive summary report is prepared with the
following metrics and sent to the respective stakeholders.
• Client metrics (response times, throughput, hits/sec, pages/sec etc.)
statistics and graphs.
• Server and resource metrics statistics and graphs (RAM, disk, processor,
network) and application, Web and DB server metrics.
• Performed tests
• Test results and reports.
• Suggestions and Recommendations.
Performance Parameters Monitored
The basic parameters monitored during performance testing include:

● Processor Usage - amount of time processor spends executing non-idle threads.


● Memory use - amount of physical memory available to processes on a computer.
● Disk time - amount of time disk is busy executing a read or write request.
● Bandwidth - shows the bits per second used by a network interface.
● Committed memory - amount of virtual memory used.
● Memory pages/second - number of pages written to or read from the disk in order to
resolve hard page faults. Hard page faults are when code not from the current working
set is called up from elsewhere and retrieved from a disk.
● Page faults/second - the overall rate at which page faults are processed by the
processor. This again occurs when a process requires code from outside its working set.
● CPU interrupts per second - is the avg. number of hardware interrupts a processor is
receiving and processing each second.
● Disk queue length - is the avg. no. of read and write requests queued for the selected
disk during a sample interval.
● Network output queue length - length of the output packet queue in packets. Anything
more than two means a delay, and the bottlenecking needs to be addressed.
● Network bytes total per second - rate which bytes are sent and received on the
interface including framing characters.
● Response time - time from when a user enters a request until the first character of the
response is received.
● Throughput - rate a computer or network receives requests per second.
● Amount of connection pooling - the number of user requests that are met by pooled
connections. The more requests met by connections in the pool, the better the
performance will be.
● Maximum active sessions - the maximum number of sessions that can be active at
once.
● Hit ratios - the number of SQL statements that are handled by cached data instead of
expensive I/O operations. This is a good place to start when solving bottlenecking issues.
● Hits per second - the no. of hits on a web server during each second of a load test.
● Database locks - locking of tables and databases needs to be monitored and carefully
tuned.
● Top waits - are monitored to determine what wait times can be cut down when dealing
with how fast data is retrieved from memory.
● Thread counts - an application's health can be measured by the number of threads that
are running and currently active.
● Garbage collection - has to do with returning unused memory back to the system.
Garbage collection needs to be monitored for efficiency.
Volumetric Analysis:

Calculations:

If we get the data for 6 months, below is the calculation for TPS:

Volumes for 6 months / 6 = 1 month volume
1 month volume / 22 (business days) = 1 day volume
1 day volume / 8 (business hours) = 1 hour volume
1 hour volume / 3600 (seconds) = volume for 1 second (TPS)

Virtual user distribution = script (or scenario) TPS * Vusers / sum of total TPS
(equivalently: individual TPS * total Vusers / sum of total TPS)

Per-user transactions = number of transactions in 1 hour / number of users

Pacing = time in seconds / per-user transactions

Session 4

Risks Addressed Through Performance Testing


Summary Matrix of Risks Addressed by Performance Testing Types

Load
● How many users can the application handle before undesirable behavior occurs when
the application is subjected to a particular workload?
● How much data can my database/file server handle?
● Are the network components adequate?

Spike
● What happens if the production load exceeds the anticipated peak load?
● What kinds of failures should we plan for?
● What indicators should we look for?

Stress
● What happens if the production load exceeds the anticipated load?
● What kinds of failures should we plan for?
● What indicators should we look for in order to intervene prior to failure?

Smoke
● Is this build/configuration ready for additional performance testing?
● What type of performance testing should I conduct next?
● Does this build exhibit better or worse performance than the last one?

Endurance
● Will performance be consistent over time?
● Are there slowly growing problems that have not yet been detected?

Capacity
● Is system capacity meeting business volume under both normal and peak load
conditions?

Performance test types

Risks                     Capacity  Endurance  Load  Smoke  Spike  Stress

Speed-related risks
User satisfaction                       X        X                   X
Response time trend                     X        X      X
Configuration                           X        X      X            X
Consistency                             X        X

Scalability-related risks
Capacity                      X         X        X
Optimization                  X
Efficiency                    X
Future growth                 X                  X
Resource consumption          X         X        X      X      X     X
Hardware / environment        X         X        X             X     X

Stability-related risks
Reliability                             X        X             X     X
Robustness                              X        X             X     X
Hardware / environment                  X        X             X     X
Failure mode                            X        X             X     X
Recovery                                                       X     X
 
Speed-Related Risks:

Speed-related risks are not confined to end-user satisfaction, although that is what most
people think of first. Speed is also a factor in certain business and data related risks. Some
of the most common speed-related risks that performance testing can address include:

● Is the application fast enough to satisfy end users?


● Is the business able to process and utilize data collected by the application before
that data becomes outdated? (For example, end-of-month reports are due within 24
hours of the close of business on the last day of the month, but it takes the
application 48 hours to process the data.)
● Is the application capable of presenting the most current information (e.g., stock
quotes) to its users?
● Is the application responding within the maximum expected response time before
an error is thrown?

Speed-Related Risk-Mitigation Strategies:

The following strategies are valuable in mitigating speed-related risks:

● Ensure that your performance requirements and goals represent the needs and
desires of your users, not someone else’s.
● Compare your speed measurements against previous versions and competing
applications.
● Design load tests that replicate actual workload at both normal and anticipated peak
times.
● Conduct performance testing with data types, distributions, and volumes similar to
those used in business operations during actual production (e.g., number of
products, orders in pending status, size of user base). You can allow data to
accumulate in databases and file servers, or additionally create the data volume,
before load test execution.
● Use performance test results to help stakeholders make informed architecture and
business decisions.
● Solicit representative feedback about users’ satisfaction with the system while it is
under peak expected load.
● Include time-critical transactions in your performance tests.
● Ensure that at least some of your performance tests are conducted while periodic
system processes are executing (e.g., downloading virus-definition updates, or
during weekly backups).
● Measure speed under various conditions, load levels, and scenario mixes. (Users
value consistent speed.)
● Validate that all of the correct data was displayed and saved during your
performance test. (For example, a user updates information, but the confirmation
screen still displays the old information because the transaction has not completed
writing to the database.)

Scalability-Related Risks
Scalability risks concern not only the number of users an application can support, but
also the volume of data the application can contain and process, as well as the ability to
identify when an application is approaching capacity. Common scalability risks that can be
addressed via performance testing include:

● Can the application provide consistent and acceptable response times for the entire
user base?
● Can the application store all of the data that will be collected over the life of the
application?
● Are there warning signs to indicate that the application is approaching peak
capacity?
● Will the application still be secure under heavy usage?
● Will functionality be compromised under heavy usage?
● Can the application withstand unanticipated peak loads?

Scalability-Related Risk-Mitigation Strategies:


The following strategies are valuable in mitigating scalability-related risks:

● Compare measured speeds under various loads. (Keep in mind that the end user
does not know or care how many other people are using the application at the same
time that he/she is.)
● Design load tests that replicate actual workload at both normal and anticipated peak
times.
● Conduct performance testing with data types, distributions, and volumes similar to
those used in business operations during actual production (e.g., number of
products, orders in pending status, size of user base). You can allow data to
accumulate in databases and file servers, or additionally create the data volume,
before load test execution.
● Use performance test results to help stakeholders make informed architecture and
business decisions.
● Work with more meaningful performance tests that map to the real-world
requirements.
● When you find a scalability limit, incrementally reduce the load and retest to help
you identify a metric that can serve as a reliable indicator that the application is
approaching that limit in enough time for you to apply countermeasures.
● Validate the functional accuracy of the application under various loads by checking
database entries created or validating content returned in response to particular
user requests.
● Conduct performance tests beyond expected peak loads and observe behavior by
having representative users and stakeholders access the application manually
during and after the performance test.
Session5

VUgen (virtual user generator)

A recorded script can simulate a virtual user; however, a mere recording may not be enough
to replicate “real user behavior”.

When a script is recorded, it covers a single, straight flow of the subject application,
whereas a real user may perform multiple iterations of any process before logging out. The
delay between clicking buttons (think time) varies from person to person. Chances are
that some real users access your application over DSL while others access it over dial-up. So,
in order to get the real feel of the end user, we need to enhance our scripts to match exactly,
or at least come very close to, the behavior of real users.

The script development process in VUGen:

The above script development process is the most significant consideration when
conducting performance testing, but there is more to a Vuser script.

● How will you gauge the precise amount of time taken by a Vuser while the SUL (System
Under Load) is undergoing a performance test?
● How would you know whether the Vuser has passed or failed at a certain point?
● What is the cause behind a failure: did some backend process fail, or were the server
resources limited?

We need to enhance our script to help answer all the above questions.

● Using Transactions
● Understanding Think Time, Rendezvous Points and Comments
● Inserting Functions through menu
● Parameterization and its configuration
● Run Time Settings and their impact on VU simulation
o Run Logic
o Pacing
o Log
● Think Times
● Speed Simulation
● Browser Emulation
● Proxy

Steps for Creating Scripts:

VuGen enables you to record a variety of Vuser types, each suited to a particular load
testing environment or topology. When you open a new test, VuGen displays a complete
list of the supported protocols.

Using Transactions
Transactions are the mechanism used to measure server response time for any operation.
In simple words, using a “transaction” helps measure the time taken by the system for a
particular request. It can be as small as the click of a button.

Applying transactions is straightforward. Just write one line of code before the request is
made to the server and close the transaction when the request ends. LoadRunner requires
only a string as the transaction name.

You can insert a comment to describe an activity or to provide information about a specific
operation

To open a transaction, use this line of code:

lr_start_transaction("Transaction Name");

To close the transaction, use this line of code:

lr_end_transaction("Transaction Name", <status>);

The <status> tells LoadRunner whether this particular transaction was successful or
unsuccessful. The possible parameters are:

● LR_AUTO
● LR_PASS
● LR_FAIL
Example:

lr_end_transaction("My_Login", LR_AUTO);
lr_end_transaction("001_Opening_Dashboard", LR_PASS);
lr_end_transaction("Business_Workflow_Transaction", LR_FAIL);

Points to note:

● Don’t forget, you are working with “C” and that is a case-sensitive language.
● Period (.) character is not allowed in transaction name, although you can use spaces and
underscore.
● If you’ve branched your code well and added checkpoints to verify the response from
the server, you can use custom error handling, such as, LR_PASS or LR_FAIL. Otherwise,
you can use LR_AUTO and LoadRunner will automatically handle server error (HTTP
500, 400 etc.)
● When applying transactions, ensure there is no think time statement sandwiched in
between, or otherwise your transaction will always include that period.
● Since LoadRunner requires a constant string as the transaction name, a common problem
when applying transactions is a string mismatch. If you give different names when
opening and closing a transaction, you will get at least two errors: the transaction you
opened was never closed, so LoadRunner yields an error, and the transaction you are
trying to close was never opened, resulting in another error.
●  Since Load Runner automatically takes care of synchronization of requests and
response, you will not have to worry about response when applying transactions.
Think Time, Rendezvous Points and Comments
Rendezvous Points
Rendezvous points mean “meeting points”. A rendezvous is just one line of code that tells
LoadRunner to introduce concurrency. You insert rendezvous points into Vuser scripts to
emulate heavy user load on the server.

Rendezvous points instruct Vusers to wait during test execution for multiple Vusers to
arrive at a certain point, so that they may concurrently perform a task. For example, to
emulate peak load on a bank server, you can insert a rendezvous point instructing 100
Vusers to deposit cash into their accounts at the same time. This can be achieved easily
using a rendezvous.

If the rendezvous points are not placed correctly, the Vusers will be accessing different parts
of the application, even for the same script. This is because every Vuser gets a different
response time, and hence some users lag behind.

Syntax: lr_rendezvous("Logical Name");

Best Practices:

● Remove any immediate think time statements


● Applying rendezvous points in a script view (after recording)
Comments
Add comments to describe an activity, a piece of code or a line of code. Comments help
make the code understandable for anyone referring to it in the future. They provide
information about specific operation and separate two sections for distinction.

You can add comments

● While recording (using tool)


● After recording (directly writing in code)
Best Practice: Mark any comments on the top of each script file.

Inserting Functions through menu


While you can directly write simple lines of code, you may need a clue to recall a function.
You can also use Steps Toolbox (known as Insert Function prior to version 12) to find and
insert any function directly into your script.

You can find the Steps Toolbox under View > Steps Toolbox.


This will open a side window, look at the snapshot:
Parameterization and its configuration
A parameter in VuGen is a container that holds a recorded value, which is replaced with
varying values for different users and iterations.

During the execution of the script (in VUGen or Controller), the value from an external
source (like .txt, XML or database) substitutes the previous value of the parameter.

Parameterization is useful in sending dynamic (or unique) values to the server, for
example; a business process is desired to run 10 iterations but picking unique user name
every time.

It also helps in simulating real-like behavior on the subject system. Have a look at the
examples below:

Problem examples:

Business process works only for the current date which comes from the server, hence can’t
be passed as a hardcoded request.

Sometimes, the client application passes a Unique ID to the server (for example session_id)
for the process to continue (even for a single user) – In such a case, parameterization helps.

Often, the client application maintains a cache of data being sent to and from the server. As
a result, the server does not receive real user behavior (in case the server runs a different
algorithm depending upon the search criteria). While the Vuser script will execute
successfully, the performance statistics drawn will not be meaningful. Using different data
through parameterization helps emulate server-side activity (procedures etc.) and
exercises the system.

A date that is hard-coded in the Vuser during recording may no longer be valid when that
date has passed. Parameterizing the date allows Vuser execution to succeed by replacing
the hard-coded date. Such fields or requests are the right candidates for parameterization.

Session6

Run Time Settings and their impact on VU simulation


Run Time Settings are as significant as your VuGen script. With varying
configurations, you can obtain different test designs. This is why you may end up with non-
repeatable results if Run Time Settings are not consistent. Let’s discuss each attribute one
by one.

Run Logic
Run Logic defines the number of times all actions will be executed, except vuser_init and
vuser_end.

This probably makes it clearer why LoadRunner suggests keeping all the login code in
vuser_init and the logout part in vuser_end, both exclusively.

Suppose we create multiple actions: say, sign in, open screen, calculate rental, submit
funds, check balance and log out, with the Run Logic set to 10 iterations. Then the
following scenario takes place for each Vuser:

The user logs in, then executes open screen, calculate rental, submit funds and check
balance, then again open screen, calculate rental... and so on, repeating this 10 times, and
only then logs out.

This acts like a real user: a real user does not log in and log out every time, but performs
the actions multiple times.

Pacing

Pacing is as important as think time, but the two differ: pacing is the time gap between
one iteration and the next, whereas think time is the user's wait time within an iteration.

If we are conducting an aggressive load test, we need to select the option "As soon as the
previous iteration ends".
Logs:

A log (as generally understood) is a bookkeeping of all events that occur while you run
LoadRunner. You can enable the log to know what's happening between your application and
your server.

LoadRunner provides a powerful logging mechanism which is robust and scalable on its own.
It allows you to keep only a "Standard Log", keep a detailed, configurable "Extended Log",
or disable logging altogether.

A standard log is informative and easily understandable. It contains just the right amount
of information you will generally require to troubleshoot your Vuser scripts.

The Extended Log contains all the Standard Log information as a subset. Additionally, you
can enable parameter substitution. This tells LoadRunner to include complete information
for all the parameters (from parameterization), including requests as well as response
data.

If you include "Data Returned by Server", your log will grow in length. This will include
all the HTML, tags, resources and non-resources information right within the log. The
option is good only if you need serious troubleshooting. Usually, this makes the log file
very big and not easily comprehensible.
As you might have guessed by now, if you opt for "Advanced trace", your log file will be
massive. You should give it a try. You will notice that the amount of time taken by VuGen
also increases significantly, although this has no impact on the transaction response
times reported by VuGen. However, this is very advanced information and may be useful only
if you understand the subject application, the client-to-server communication between your
application and hardware, as well as protocol-level details. Usually, this information
goes unused, since it requires extreme effort to understand and troubleshoot.

Tips:

• No matter how much time VuGen takes when logging is enabled, it has no impact on the
transaction response times. HP refers to this as "state of the art technology."

• Disable log if it is not required.

• Disable logging when you are finished with your scripts. Including scripts with logging
enabled will cause the Controller to run slower and report nagging messages.

• Disabling logging will increase the maximum number of Vusers you can simulate from
LoadRunner.

• Consider using “Send message only when error occurs” – this will mute unnecessary
information messages and report only error related messages.

Think Times

Think Time is simply the delay between two steps.

Think Time helps replicate user behavior, since no real user can use an application
like a machine (VUGen). VuGen generates think time automatically. You still have
complete control to remove, multiply or fluctuate the duration of think time.
To understand this better: a user may open a screen (that is, a request followed
by a response) and then type his username and password before hitting Enter. The
next interaction of the application with the server happens when he clicks "Sign in".
The time the user took to type his username and password is Think Time in LoadRunner.

If you are looking to simulate aggressive load on the application, consider disabling
think time completely.

However, to simulate real-life behavior, you can use "Random Think Time" and set
the percentages as desired.

Consider using "Limit think time" to cap it at a legitimate period. Usually, 30
seconds is fairly good enough.
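
As a minimal hedged sketch of how think time appears in a script (the step names, URL and parameter names are hypothetical), VuGen records the pause between opening the login page and submitting the credentials as an lr_think_time call:

    web_url("login_page",
        "URL=http://example.com/login",              // hypothetical URL
        LAST);

    lr_think_time(12);   // user spent ~12 s typing username and password

    web_submit_data("sign_in",                       // hypothetical step name
        "Action=http://example.com/login",
        "Method=POST",
        ITEMDATA,
        "Name=username", "Value={userid}", ENDITEM,
        "Name=password", "Value={passwd}", ENDITEM,
        LAST);

With "Ignore think time" selected in the run-time settings the lr_think_time call is skipped during replay; with "Random think time" it is varied within the configured percentage range.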

Speed Simulation

Speed simulation simply refers to the bandwidth capacity of each client machine.

Since we are simulating thousands of Vusers through LoadRunner, it is amazing how
simple LoadRunner has made it to control the bandwidth/network speed simulation.
If your customers access your application over 128 Kbps, you can control that from
here. You get to simulate "real-life behavior", which should help in getting the right
performance statistics.

The best recommendation is to set "Use maximum bandwidth". This helps you disregard
any network-related performance bottlenecks and focus on potential issues in the
application first. You can always run the test multiple times to see varying behavior
under different circumstances.

Browser Emulation

Generally, performance measurement does not depend upon the browser an end user is
using, and browser choice is beyond the scope of performance measures. However, you
can choose which browser you wish to emulate.
Can you answer for yourself when exactly it will really matter to select the right
browser in this configuration?

You will use this configuration if your subject application is a web application that
returns different responses for different browsers. For example, you get to see
different images and contents for IE versus Firefox.

Another important setting is "Simulate browser cache". If you want to gauge the response
time with the cache enabled, check this box. If you are looking for the worst-case
situation, this is obviously not a consideration.

"Download non-HTML resources" lets LoadRunner download any CSS, JS and other
rich media. This should remain checked. However, if you wish to eliminate this
from your performance test design, you can uncheck it.

Session7
Parameterization

Parameters are like script variables. They are used to vary the input sent to the server
and to emulate real users: a different set of data is sent to the server each time the
script is run. This better simulates the usage model for more accurate testing; from the
Controller, one script can emulate many different users on the system.
Parameterization can be done directly using the available options in the parameter
window, or by browsing to the path of an external text file which holds the parameter
values separated by a specified delimiter.

Parameterization provides the ability to use different values in scripts and thus helps
create data driven test scenarios.

a) Sequential, each iteration: parameter values are taken from the start of the data
list, one value per iteration, for each Vuser. The second Vuser also takes the same
values, in the same sequential order, for its iterations.
b) Random, each iteration: random values are selected and allocated to the Vusers for
the specified iterations.
Used values can be re-used any number of times, by the same Vuser and even within the
same iteration.
c) Unique, each iteration: unique values are selected and allocated to the Vusers for
the specified iterations. Used values cannot be reused; a unique value is passed every
time.
When out of values:
Abort Vuser: this option aborts the Vuser when the required data is not available. Vusers
and iterations run only as long as unique data is available.
Continue in a cyclic manner: this option lets the Vuser continue execution by reusing
its allocated data in a cyclic manner, and stops the next Vuser from executing when
the data runs out.
Continue with last value: this option continues the Vuser's execution for the specified
iterations using the last unique value, and aborts the next Vuser when the data runs
out.
d) Sequential, each occurrence: the input data substituted for the parameter changes
for each and every occurrence in the script and is taken sequentially from the
parameter list; for example, if the same parameter is referenced in more than one
place in the script, the substituted value keeps changing for each and every
occurrence. This option is recommended only in cases where the data should change
for every occurrence.
e) Random, each occurrence: the input data substituted for each and every occurrence
in the script is taken randomly from the parameter list.
f) Unique, each occurrence: the input data substituted for the parameter changes for
each and every occurrence in the script, considering only unique values from the
parameter list.
g) Sequential, once: the input data substituted for the parameter stays the same
across all iterations for a given user; the next user sequentially takes the next
available value from the parameter list, again for all of its iterations.
h) Random, once: the input data substituted for the parameter is taken randomly from
the parameter list and stays the same across all iterations. Another random value is
taken for the next user and again remains constant across all its iterations, and
so on.
i) Unique, once: a unique input value is substituted for the parameter per user and
remains the same across all iterations. The next available unique value is picked for
the next user and remains the same for all its iterations.
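
As a hedged illustration of how these options tie back to the script (the file name, column names and values below are hypothetical), a comma-delimited data file feeds the {userid} and {passwd} parameters, and the allocation option chosen above decides which row each Vuser and iteration receives:

    users.dat:
        userid,passwd
        user001,secret001
        user002,secret002
        user003,secret003

    web_submit_data("login",                         // hypothetical step name
        "Action=http://example.com/login",           // hypothetical URL
        "Method=POST",
        ITEMDATA,
        "Name=username", "Value={userid}", ENDITEM,  // substituted per the allocation option
        "Name=password", "Value={passwd}", ENDITEM,
        LAST);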

Block allocation:
Automatically allocate block size: this option automatically allocates a block of values
to each Vuser based on the iteration count when the "unique, each iteration" option is
selected. Each block reserves a set of input values for one Vuser.

Simulate parameter: this option is used to simulate the parameter substitution, to find
out how values would be substituted in the real environment under the specified load
and iterations.

HTTPWATCH Professional

What is the use of this utility and how does it work?

HTTPWatch is a truly powerful and reliable traffic sniffer and HTTP request viewer that
provides all the information you could ever need to thoroughly analyze the loading and
performance parameters of your website.

This reliable tool helps you get a clear insight into how a website loads and performs.
It can decrypt, monitor and analyze even encrypted HTTPS traffic.

Another important advantage provided by this tool is the fact that it logs and analyzes a
lot of traffic-related data. It can display the values of headers, cookies, query strings
and a lot more.

The thorough HTTP traffic reports can also be saved to compact files which can easily be
examined at a later time. A neat and handy standalone log file viewer is also available.

It is also easy to install, simple to use and flexible. It doesn't have any complicated
requirements, nor does it hinder monitoring with intricate settings.

Recording issues
1. Error -27279: internal error - report initialization failed, error code = -2147467259

To resolve this issue, run any LoadRunner script once as an administrator. Once you've
done this, you'll have "cured" VuGen of this fault.

Steps to follow:

● right-click the VuGen shortcut and choose the option to "run as administrator"
● run the script once; the error should not appear
● close VuGen
● right click🡪compatibility🡪enable administrator

2. VuGen not recording - Internet Explorer (iexplore.exe) "hangs" when launched during
recording

When recording a Web (HTTP/HTML) script, VuGen launches iexplore.exe but the Internet
Explorer window is not displayed.

This may occur due to COM object permissions.

Change the following recording options setting:

"recording options"🡪general🡪"script": UN-CHECK "track processes as COM local servers"

Then record the Web (HTTP/HTML) script.

3. No events are being recorded

Here are the most common reasons why no events show up in the script:

The events counter keeps increasing, while the generated script is empty

This may be because the VuGen recording mechanism fails to identify HTTP data. To fix
this, ensure that your application indeed uses HTTP web traffic. If the application uses
SSL connections, make sure you choose the correct SSL version (SSL2, SSL3, TLS) through
the Port Mapping dialog, available by clicking the 'Options...' button on the Record >
Recording Options... dialog; the advanced Port Mapping settings dialog opens when you
click 'Options...'.

The events counter shows fewer than five events, while the application keeps getting
data from the server

In this case, VuGen's recording mechanism fails to capture any network activity at all.
Ensure that your application really does generate some network traffic, i.e. it sends
and receives data through the IP network.
If an antivirus program is running, turn it off during recording.

Check the recording log for any clues about the recording failure. Messages such as
"connection failure" or "connection not trapped" can be a sign of wrongly configured
port mapping settings.

In addition, if you are recording on a Chrome or Firefox browser, make sure that all
instances of the browser are closed prior to recording.

4. Unable to record the LR script, events not capturing

Possible solution:

Windows has a feature called Data Execution Prevention (DEP) which is capable of
causing such problems while recording.

Disable DEP for LoadRunner and the application under test (AUT) by navigating to
Control Panel > System > Advanced > Performance Settings > Data Execution Prevention.

Now select the second radio button, add LoadRunner and the application under test, and
press 'Apply'.

Note: if your application is web based then remove DEP for IE as well.

Overall possible solutions:

● Disable all non-essential IE toolbars and extensions. This can be done via the
Manage Add-ons option in the Tools menu.
● Disable VuGen thumbnails: in regedit, navigate to
HKEY_LOCAL_MACHINE\SOFTWARE\Mercuryinteractive\loadrunner\vugen\thumbnails
and set the value for "generate thumbs" to 0. Note that thumbnail generation commonly
causes problems even when IE behaves itself. It's good to disable it, as thumbnails
are rarely needed.
● Clear your TEMP directories: LoadRunner creates a lot of temporary files. Make
sure you save your scripts and exit all your applications first.
● Disable Data Execution Prevention (DEP): Windows has a feature called Data
Execution Prevention (DEP) which is capable of causing such problems while recording.
Disable DEP for LoadRunner and the application under test (AUT) by navigating to
Control Panel > System > Advanced > Performance Settings > Data Execution Prevention.
Now select the second radio button, add LoadRunner and the application under test,
and press 'Apply'.

Note: if your application is web based then remove DEP for IE as well.

● Disable your anti-virus: anti-virus software is rarely a good idea on a LoadRunner
PC, and it can be difficult to disable. Remember to exercise a security mitigation
plan when disabling anti-virus components.
● Try using a different administrative user account: sometimes a user profile may
become corrupted and not operate correctly.
● Try reinstalling LoadRunner and all respective patches.

Session8

Error handling: some basic functions need to be maintained in every script

1. web_reg_find(): this is the function we usually use to know whether a particular
page loaded or not, by searching for some text in the page. It should be placed before
the function that requests the URL.

The code below searches for the word "google" on the Google home page; if the count is
> 0 (i.e., the word exists), the transaction is passed, otherwise it gets a fail status:

web_reg_find("Text=google", "SaveCount=cnt", LAST);
lr_start_transaction("google_homepage");
web_url("google", "URL=http://www.google.com", LAST);  // request whose response is searched
if (atoi(lr_eval_string("{cnt}")) > 0)
{
    lr_end_transaction("google_homepage", LR_PASS);
}
else
{
    lr_end_transaction("google_homepage", LR_FAIL);
}
lr_think_time(5);

The above code checks for the text "google" on the home page; if it is not found, the
transaction fails. In a real-time scenario, suppose we run the script with 1000 users and
the transaction fails for around 105 users: all these 105 users will be dropped and the
script continues with the rest of the users (1000 - 105 = 895 users). If we want the
dropped users to resume, we need script handlers for that; the script below does this.

web_reg_find("Text=google", "SaveCount=cnt", LAST);
lr_start_transaction("google_T001_homepage");
web_url("google", "URL=http://www.google.com", LAST);
if (atoi(lr_eval_string("{cnt}")) > 0)
{
    lr_end_transaction("google_T001_homepage", LR_PASS);
}
else
{
    lr_end_transaction("google_T001_homepage", LR_FAIL);
    lr_error_message("login transaction failed for the user %s", lr_eval_string("{userid}"));
    lr_exit(LR_EXIT_ITERATION_AND_CONTINUE, LR_FAIL);  // this statement makes the Vuser
                                                       // continue even if the transaction fails
}

2. lr_eval_string():

The lr_eval_string function returns the input string after evaluating any embedded
parameters. If the string argument contains only a parameter, the function returns the
current value of the parameter. Embedded parameters must be in curly brackets.

lr_eval_string is used to find out the value stored in a parameter (for example, the
SaveCount parameter above) or to extract a value from the parameter file.
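
A small hedged example (the parameter name and value are hypothetical):

    lr_save_string("john", "user");               // store a value in the {user} parameter
    lr_output_message("Logged in as %s",
                      lr_eval_string("{user}"));  // evaluates {user} to "john"

lr_save_string is the companion function that stores a C string into a parameter.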

3. atoi():
Converts a string to an integer value.

4. lr_error_message():

Sends an error message with location details to the output windows, log files, and other
test report summaries. The lr_error_message function sends an error message to the product
output windows (such as the LoadRunner output window), log files (such as the VuGen and
Application Management agent log files), and other test report summaries.

VuGen displays the message text of the lr_error_message function in the Execution Log in
red, using error code 17999. Note that this function sends a message to the output even
when logging is disabled in the run-time settings.

5. lr_output_message("the user is running iteration %s", lr_eval_string("{iteration}"));

Sends a message with the script section and line number to output windows (such as the
LoadRunner output window) and log files (such as the VuGen log file and agent log file).

When a script is run in VuGen, the output file is output.txt.

To send a message to the output file, you must enable logging in the run-time settings and
select "Always send messages". If you select "Send messages only when an error occurs",
there is no output from this function.

6. lr_log_message("%s is running the application", lr_eval_string("{user}"));

Sends a message to the Vuser or agent log file (depending on the application). You can use
this function for debugging, by sending error or other informational messages to the log
file.

We can avoid overloading the network by using this function to send messages to the
log file rather than sending messages to the output window

(with lr_message or lr_output_message).

In a standalone program such as VuGen, lr_log_message sends the message to the output file
output.txt. To send it to the output file, you must enable logging in the run-time
settings, and select "Always send messages".

In the Vuser execution log, this function does not list the location and line number from
where the message was issued.

7. web_set_sockets_option():
web_set_sockets_option("SSL_VERSION", "TLS");

The web_set_sockets_option function configures options for sockets on the client machine.

This function supports socket-level replay. In the Web (HTTP/HTML) protocol, it supports
both HTML-based scripts and URL-based scripts. web_set_sockets_option has no effect on
replay using WinInet.

ERRORS:

● Error -27776: SSL protocol error when attempting to connect with host
"load2Amex.com"
● ssl_handle_status encountered error: SSL_ERROR_SSL, error message:
● failed to connect to the server "load2-Amex.com:443" - connection timed out

8. lr_exit():

It allows you to exit from the script run during execution.

lr_exit(LR_EXIT_ITERATION_AND_CONTINUE, LR_FAIL): this call makes the Vuser continue
with the next iteration even if a transaction fails, and keeps continuing with the next
iterations for the rest of the test execution.

9. lr_abort():

Aborts the execution of a script. It stops the execution of the Actions section, executes
the vuser_end section, and ends the execution.

10. web_set_user(): web_set_user("username", "password", "www.myhost.com:8080");

The web_set_user function is a service function that specifies a login string and
password for a web server or proxy server. It can be called more than once if several
proxy servers require authentication. web_set_user overrides the run-time proxy
authentication settings for user name and password.

When you log onto a server that requires user and password validation, VuGen records a
web_set_user statement containing the login details. However, there are some more
stringent authentication methods for which VuGen is unable to record it; in such cases
you must insert web_set_user into your script manually.

When you run the script, the user authorization is automatically submitted along with
every subsequent request to that server. At the end of the script, the authorization is
reset.

11. web_add_auto_filter():
web_add_auto_filter("Action=Exclude",
"URL=https://www.example.com/resourses/form-validations.js", LAST);

12. web_set_max_html_param_len("99999");

Sets the maximum length of HTML data that correlation functions such as
web_reg_save_param can save to a parameter (the default is 256 bytes).

Session9

CORRELATION

Correlation is used to capture data that is unique for each run of the script and is
generated dynamically by the server (for example, by nested queries). Correlation provides
the captured value at run time, avoiding errors arising out of duplicated or stale values,
and also optimizes the code (to avoid nested queries). Automatic correlation is where we
set some rules for correlation per application; the value we want to correlate is scanned
and a correlation function is created automatically.

Correlation can be done in two ways.

1) Manual correlation

2) Automated correlation
Manual correlation: dynamic values returned by the server should be correlated by
inserting a correlation function before the step where the value is generated. The
easiest way to identify the dynamic values is to compare the Vuser script with a similar
Vuser script using Tools🡪Compare with Script and browsing to the required script. The
simplest way to identify the step where the dynamic value is generated is to verify the
server response in the Tree view. Place the correlation function above the step where
the value is generated by the server:

a) Ignore the dynamic values returned by the server for values captured and stored in
cookies in the browser. They are not required to be correlated

b) Data which differs in the EXTRARES part (extra URLs) can be ignored and is not
required to be correlated

c) View state and event validation values should be correlated in .NET applications

d) Session IDs should be correlated in Java applications

e) No tool will handle the case where the dynamic value generated by the server is
passed in the same step where it is generated. There should be some separation between
where the dynamic value is generated and where it is passed

f) If the dynamic value is generated in the URL and is passed in the same step, convert
the web_url into a web_link; the recording should be in HTML recording mode only.

Steps to be followed for inserting the correlation function:

1) record the required steps from the application using VuGen
2) record a similar script with the same steps using VuGen
3) save both scripts into the scripts folder
4) open one script, go to Tools🡪Compare with Script, and browse to the script with
which it is to be compared
5) the differences will be highlighted and pointed out by the WinDiff tool
6) ignore the dynamic values returned under the EXTRARES section and in cookies
7) identify the dynamic values other than those mentioned in step 6
8) verify, from the server response of a page in Tree view (by selecting a step in the
left pane), the location where the dynamic value is generated by the server
9) insert the correlation function on top of the step where the dynamic value is
generated
10) replace the dynamic value with the given parameter name from the correlation
function for all references in the script
CORRELATION FUNCTION

web_reg_save_param("<param name>", "LB=", "RB=", LAST);

web_reg_save_param("user session", "LB=input type=hidden name=user session value=",
"RB=>", "Ord=1", LAST);

Param name: the parameter name

LB = left boundary to identify the dynamic value

RB = right boundary to identify the dynamic value

Ord = ordinal value, i.e., which occurrence of a value matching the specified boundaries
within the page should be saved. It can be verified from Tree view🡪server response🡪select
the step in the left pane and search for the defined left boundary

LAST: this is the part of the syntax that ends the statement

Ord=ALL: all the references found in the page that match the boundaries will be saved
in the parameter in an array structure. We have to pick the exact match from the array
and replace the dynamic value with that reference in the script. We can identify the
array structure and its values by running the script with parameter logging enabled,
then identify the array index and replace the parameter with the right reference.

NOTE: the correlation function should be placed, with the identified boundaries, in the
script before running the script.
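
As a hedged end-to-end sketch (the boundaries, step names and URL below are hypothetical): the function is placed before the step whose response contains the value, and every later reference to the recorded value is replaced by the parameter:

    // capture the dynamic value from the next server response
    web_reg_save_param("user_session",
        "LB=name=userSession value=",   // hypothetical left boundary
        "RB=>",
        "Ord=1",
        LAST);

    web_url("login_page", "URL=http://example.com/login", LAST);

    // the recorded hard-coded session value is replaced with {user_session}
    web_submit_data("login",
        "Action=http://example.com/login",
        "Method=POST",
        ITEMDATA,
        "Name=userSession", "Value={user_session}", ENDITEM,
        LAST);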

AUTO CORRELATION

It can be done in two ways: scanning for correlations, and setting a rule.

a) scan script for correlations

The correlation engine scans for any dynamic values in the generated response from the
server by comparing the replay and recorded logs.

To correlate with the auto-correlation engine, use Scan for correlations (Ctrl+F8).

b) setting a rule

A rule can be set under a new application by providing the left and right boundaries and
the action value. Once the necessary details are provided, the rule is created and is
applied while the script recording is going on.
When auto correlation is performed, the function is created automatically and the
parameter is substituted in the script:

web_reg_save_param("user session", "LB=input type=hidden name=user session value=",
"RB=>", LAST);

Session10
Controller

The Controller is the licensed component of LoadRunner which is used to simulate multiple
users. By using the Controller we can:
● Design the scenario
● Initiate a scenario run
● Manage Vusers during a run
● View Vuser execution status
● Collate the results of the run
● Monitor the system and other resources
● Launch the Analysis tool
Load runner controller – designing a Scenario
Step 1: launch LoadRunner and its Controller module.
Open the HP LoadRunner launcher window by clicking "Start" > "Programs" > "HP Software" >
"LoadRunner" > "Controller".
The New Scenario dialog box will open; here we can select scripts for our own scenario.
Step 2: schedule the scenario in the Schedule pane of the Controller.
We can schedule a scenario in 2 ways:
● Manual
● Goal oriented
Manual:
● Used for normal load tests
● Provides manual control of the Vusers
● Can add, start and stop Vusers in a run
Goal oriented:
● The Controller controls the Vusers
● Used when trying to achieve a particular goal as specified in the Controller goals.

Select a goal-oriented scenario.

Defining the scenario attributes for the script:

Select the script > either right-click / double-click on "Edit scenario goal"
(highlighted in the image below) to view group details > make the changes you want.
Start the scenario; the page below shows running Vusers and graphs.

Defining the group attributes for the script:

Select the script > either right-click / double-click on the script / select the
highlighted image below to view group details > make the changes you want.
Schedule by group options:

Scenario schedule:
In a real-world scenario, actual users do not log on to and off the system at exactly
the same time. Here we instruct the LoadRunner Vusers to gradually log on to and off
the system by scheduling the scenario.

For example, say we have 3 scripts to design a scenario; let's see how the scenario is
designed based on the business requirement.

Requirements for schedule by scenario:

1. Start all groups/scripts simultaneously.
2. All groups should run for a fixed duration and stop afterwards.

For such requirements we usually go for schedule by scenario.

Requirements for schedule by group:

Example: we have three scripts, and the requirements to design the scenario are as
follows:

1. Start script 1 and run it for 1 hour.
2. Start script 2, 30 minutes after script 1 starts, and run it for only 100 iterations.
3. Start script 3 after script 1 finishes and run it for 15 minutes.

In such cases we go for schedule by group.

Adding a load generator

To create load on the application, click on the highlighted button below.

Testing the load generator connection

This involves instructing the Controller to attempt to connect to the load generator
machine.

● Select the host name > click Connect

When the connection gets established, the status changes from Down to Ready; then click
Close.

Reasons for load generator connection failure:

1. Check if the server is up and running.
2. Check if the LoadRunner Agent (service) is running; for this, go to Administrative
Tools -> Services -> LoadRunner Agent to verify.
3. If the LoadRunner Agent is running as a process, check if magentproc is running; if
not, go to the c:\Loadrunner\launch_service\bin folder and start it from there.
Configuring the run time settings

Right-click on the script and select Run Time Settings (or Alt+T); click OK after making
the appropriate settings.

● Run logic: to run the script a given number of times
● Pacing: to define the time to wait before repeating an action
● Log: to define the type of information to be captured during the test
● Think time: to define the time a user stops to think between steps
● Speed simulation: to define the network connection as modem, DSL or cable
● Browser emulation: to define different browsers
● Content check: for automatically detecting user-defined errors

If we override (or modify) the run time settings here, the changes are effective only for
the current scenario; if you remove the script and reload it, or hit refresh (in Details
view), you will lose any changes made, as this will reload the run time settings from the
script.

Save results:

Click Results > select Result Settings > the Set Results pop-up appears; browse to a
directory and name the results.

Scenario setup – Run Tab:

In the run tab the following can be done

● Assign load generators to each script


● Add windows resources
● View graphs during the execution
● Add/remove graphs for viewing during the execution
● Collate results after test execution

Primary graphs we need to see during execution:

1) Running Vusers
2) Transaction response time
3) Hits per second
4) Throughput

Check list before we start a test

⮚ Check the previous report and prepare a checklist accordingly
⮚ Validate the application environment
⮚ Validate all scripts
⮚ Make sure parameterization data is sufficient for the test
⮚ Complete the scenario design
⮚ Check run time settings
⮚ Save results to the results path
⮚ Validate that the LGs are up and running
⮚ Make sure all application server counters are added (perfmon counters for Windows)

Performance Monitor (Perfmon) is a monitoring tool used to monitor the performance of a computer.

Using it, we can see how the computer manages its resources.

To start performance monitor,

⮚ Click Start, click in the Start search box, type perfmon, and press Enter.
⮚ In the navigation tree, expand Monitoring Tools, and then click Performance
Monitor.
⮚ By default this graph measures processor time, which is the amount of time that the
processor is busy running active programs (shown as a percentage).

Session 11
LoadRunner Analysis

Client side metrics:

Hits/sec: the number of hits on a web server during each second of a load test, i.e., the
number of requests sent to the server at a point in time.

When the load is increased, hits/sec increases.

Potential bottleneck: when there is no change in hits/sec as the load is increased, it
can be considered a bottleneck, as either the network bandwidth is not available to handle
the load or the application is not scalable enough to handle the user load.

A drastic downfall in the hits-per-second graph indicates that the application is not scalable
for the defined user load.

A slight downfall in the hits/sec graph can be ignored, because the graph indicates a high
rate during the ramp-up and stabilizes at a certain point during the steady state.

Throughput: the amount of data transferred over the network (data transferred from the
client to the server and from the server to the client).

When hits/sec increases, throughput also increases, as the data transfer rate rises in
terms of requests from the client and responses from the server.

Hits/sec and throughput are directly proportional to each other.


Throughput is measured in bytes.
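As a rough illustration (the function name and sample numbers are mine, not from LoadRunner), the average throughput over a test window is simply the bytes transferred divided by the elapsed time:

```c
/* Illustrative helper (not a LoadRunner API): average throughput in
 * bytes/sec over a test window = total bytes transferred / duration. */
static double avg_throughput_bps(double total_bytes, double duration_sec)
{
    return duration_sec > 0.0 ? total_bytes / duration_sec : 0.0;
}
```

For example, 6,000,000 bytes transferred over a 60-second window gives an average throughput of 100,000 bytes/sec.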

Potential bottlenecks: a drastic downfall in hits/sec and throughput could indicate a
possible issue with application scalability.

A slight downfall in the hits/sec and throughput graphs can be ignored, because the graphs
indicate a high rate during the ramp-up and stabilize at a certain point during the
steady state.

Average Response Time: the amount of time taken to process a client request. It is the
time measured between the client request and the server response, in seconds.

Also called round-trip time or turnaround time.

Potential bottleneck: high response times (greater than the SLAs) could be considered a
potential bottleneck.

Response times could be higher during the initial stage of the test and stabilize at a
later stage.

● Min, Max, Average, 90%, standard deviation.

Transactions are defined as the time taken to process a specific set of requests between the
start and end transaction points.

Min: the minimum time taken to execute a particular transaction.

Max: the maximum time taken to execute a particular transaction.

Average: the average time taken to execute a particular transaction.

Standard deviation: the square root of the sum of squared deviations from the average,
divided by the total number of transactions.

90% value

● 90% of the users experience a response time less than this value.
● The value at the 90th position when all passed transaction times are sorted in
ascending order (the 90th percentile).
● In LoadRunner 12.01 a 95% value is also available (the percentile can be customized).

Running Vusers

Total passed transactions

Total failed transactions


Total transactions

Server side metrics:

Memory:

%Committed bytes in use: committed memory is the physical memory in use for which
space has been reserved in the paging file, should it need to be written to disk.

Available bytes: the amount of physical memory, in bytes, immediately available for
allocation to a process or for system use. It is equal to the sum of memory assigned to
the standby, free, and zero page lists.

Potential bottleneck: memory utilization should be less than 70-80% of the total memory
(or the defined SLA), which is an industry best practice. Lack of memory leads to high
response times and impacts user ramp-up.

Page faults: Page Faults/sec is the average number of pages faulted per second. It is
measured in pages faulted per second because only one page is faulted in each fault
operation, hence this is also equal to the number of page fault operations. This counter
includes both hard faults (those that require disk access) and soft faults (where the
faulted page is found elsewhere in physical memory). Most systems can handle large numbers
of soft faults without significant consequence. However, hard faults, which require disk
access, can cause significant delays.

Potential bottleneck: a drastic increase in page faults can result in performance
issues in terms of high response times.

Page reads/sec (hard page faults): Page Reads/sec is the rate at which the disk is
read to resolve hard page faults. It shows the number of read operations, without
regard to the number of pages retrieved in each operation. Hard page faults occur when a
process references a page in virtual memory that is not in its working set or elsewhere in
physical memory, and must be retrieved from disk.

Page Writes/sec: the rate at which pages are written to disk during the
execution process.

Potential bottleneck: any significant increase or growth in these counters indicates a
bottleneck, as a large number of pages are being read to resolve page faults. This can
impact performance in terms of response time.

Hard page fault: the referenced page is not available in primary (physical) memory at all
and must be fetched from secondary storage (the paging file, i.e., virtual memory on disk).
Soft page fault: the referenced page is found elsewhere in primary memory while secondary
memory is being allocated, so no disk access is needed.

Physical Disk:

% Disk Time: the percentage of elapsed time that the selected disk drive was
busy servicing read or write requests.

Avg. Disk Read Queue Length: the average number of read requests that were queued
for the selected disk during the sample interval.

Note: the threshold value for the queue length is <= 2.

Avg. Disk Write Queue Length: the average number of write requests that were queued
for the selected disk during the sample interval.

Note: the threshold value for the queue length is <= 2.

Potential bottleneck: if the disk read or write queue length is high, it indicates that
more requests are queued, which impacts application performance.

Processor (CPU):

% Processor Time: It shows the amount of CPU utilization during the load test execution.

Potential bottleneck: the industry best-practice threshold for CPU utilization is <= 80%;
whenever CPU utilization is observed beyond this point, an observation should be sent
to all the stakeholders involved in the test execution.

There is a possibility of a server crash if CPU utilization reaches 100% and stays constant
at that rate over a period of time. A few spikes beyond the threshold can be ignored, but
continuous spikes beyond the threshold are not acceptable.

% Privileged Time: the percentage of elapsed time that the processor threads spent
executing code in privileged mode. When a Windows system service is called, the service
will often run in privileged mode to gain access to system-private data. Such data is
protected from access by threads executing in user mode.

Potential bottleneck: whenever high utilization is observed for this counter, it should
be raised as an observation, as most of the threads are being used to execute
operating-system processes.

Server:
Bytes received/second: the number of bytes the server has received from the network.
Indicates how busy the server is.

Bytes transmitted/sec: the number of bytes the server has sent on the network. It
indicates how busy the server is.

Server sessions: the number of sessions currently active on the server. Indicates current
server activity. Jisi-ws-cache-stats.log (WebSphere)

Sessions timed out:

The number of sessions timed out at the server level, i.e., sessions that were closed
because their idle time exceeded the server's AutoDisconnect parameter. Shows whether the
AutoDisconnect setting is helping to conserve resources.

E.g., a step download timeout error.

System

Context switches: Context Switches/sec is the combined rate at which all the processors
on the computer are switched from one thread to another. Context switches occur when a
running thread voluntarily relinquishes the processor, is preempted by a higher-priority
ready thread, or switches between user mode and privileged (kernel) mode to use an
executive or subsystem service.

Potential Bottleneck: A high context-switch rate often indicates that there are too many
threads competing for the processor on the system. The rate of context switches can also
affect performance of multiprocessor computers.

Web server metrics:

.NET IIS Web Server Metrics:

● ASP.NET Requests queued


● ASP.NET Application Requests/Sec
● ASP.NET Requests execution time.
● Processor % Time
● .NET CLR memory (% Time in GC)

DB Metrics: SQL Server, Oracle, DB2, MySQL

● Connection pools: to establish the connections between app server to DB server


● Logical Reads per second (should be high)
● Physical Reads per second (should be less than Logical reads)
● Avg read time
● Buffer Hit ratio (should be close to 100%)
● I/O read time
● Total app commits per second.
● Memory utilization
● SQL statements per second
● Rows Read per second
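The buffer hit ratio listed above is commonly derived from logical vs. physical reads. A minimal sketch of the usual approximation (the exact formula varies by database vendor, and the function name is mine):

```c
/* Buffer (cache) hit ratio: the fraction of reads served from the
 * buffer cache instead of disk. Common approximation:
 *   hit ratio = (logical reads - physical reads) / logical reads
 * A healthy value is close to 1.0 (i.e., close to 100%). */
static double buffer_hit_ratio(double logical_reads, double physical_reads)
{
    if (logical_reads <= 0.0)
        return 0.0;
    return (logical_reads - physical_reads) / logical_reads;
}
```

This is why logical reads should be high relative to physical reads: the more reads resolved in memory, the closer the ratio is to 100%.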

Starting Analysis

● You can open Analysis as an independent application or directly from the Controller.
To open Analysis as an independent application, choose:
● Start > All Programs > HP Software > HP LoadRunner > Analysis

Once you start the session, a Summary report is displayed.

Summary Report:

The Summary report provides general information about load test scenario execution. This
report is always available from the Session Explorer or as a tab in the Analysis window. The
Summary report lists statistics about the scenario run and provides links to the following
graphs: Running Vusers, Throughput, Hits Per Second, HTTP Responses per Second,
Transaction Summary, and Average Transaction Response Time. The appearance of the
Summary report and the information displayed will vary depending on whether an SLA
(Service Level Agreement) was defined. An SLA defines goals for the scenario. LoadRunner
measures these goals during the scenario run and analyzes them in the Summary report.

This Summary Report will have the following details:

This section displays a table containing the load test scenario’s diagnostics data. Included
in this data is a percentile column (x Percent). This column indicates the maximum
response time for that percentage of transactions performed during the run. For example,
in the table below, the value in the 88 Percent column for browse special books is 8.072.
This means that the response time for 88% of the browse special books transactions was
less than 8.072 seconds.
Scenario Behaviour over Time:

This section displays the average errors per second received by the application under test
per time interval. For example, 0 means that on average there were zero errors received
per second for that time interval, 0+ means that on average there were slightly more than
zero errors received, and so on.
HTTP Responses Summary

This section shows the number of HTTP status codes returned from the Web server during
the load test scenario, grouped by status code.

The Following are the other Graphs that are Displayed

Transaction Response Time under Load


Displays average transaction response times relative to the number of Vusers running at
any given point during the load test. This graph helps you view the general impact of Vuser
load on performance time and is most useful when analysing a load test which is run with a
gradual load.
Hits per Second Graph

Displays the number of hits made on the Web server by Vusers during each second of the
load test. This graph helps you evaluate the amount of load Vusers generate, in terms of the
number of hits.

Running Vusers Graph

Displays the number of Vusers that executed Vuser scripts, and their status, during each
second of a load test. This graph is useful for determining the Vuser load on your server at
any given moment.
Average Transaction Response Time
Displays the average time taken to perform transactions during each second of the load
test. This graph helps you determine whether the performance of the server is within
acceptable minimum and maximum transaction performance time ranges defined for your
system.

Throughput

Displays the amount of throughput (in bytes) on the Web server during the load test.
Throughput represents the amount of data that the Vusers received from the server at any
given second. This graph helps you to evaluate the amount of load Vusers generate, in
terms of server throughput.

Windows Resources

Displays a summary of the System Resources usage for each Windows based host.

Connections Per Second


Displays the number of connections per second.

Example of a single-user script for a business process: log in to the Delta Tours
application, select Flights, select the arrival city, departure city, and date, confirm
the seat and flight selection, then make the payment for the booked ticket and generate
an invoice.
Session 12
Sample script and scripting process

RECORD & REPLAY --- DELTA TOURS APPLICATION WORKFLOW STEPS

● Launch DELTA Tours Application using URL http://126.0.0.1:8080/WebTours


● Login to the application
● Click on Flights button
● Select departure city and arrival city and click continue
● Select flight time and click continue
● Enter Payment Details and click continue
● Click Signoff

STEPS

● Inserting Transaction Points


● Inserting Think Time
● Inserting Verification Checks
● Parameterization
● Correlation

TRANSACTION POINTS

● Define transactions to measure the performance of the server


● Each transaction measures the time taken for the server to respond to a specific Vuser request
● Functions used for transaction points:
lr_start_transaction("Transaction name");
lr_end_transaction("Transaction name", LR_AUTO);

THINK TIME

● Time taken by the user between page clicks:
o Going through or reading the page
o Filling in information on the page
lr_think_time(time_in_seconds);
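Putting the two together, here is a minimal sketch of the pattern. The lr_* calls are stubbed with printf so the snippet compiles outside VuGen; in a real Vuser script they come from the LoadRunner runtime and must not be redefined:

```c
#include <stdio.h>

/* Stubs standing in for the LoadRunner API, for illustration only. */
#define LR_AUTO 2
static void lr_start_transaction(const char *name) { printf("start %s\n", name); }
static void lr_end_transaction(const char *name, int status) { printf("end %s (%d)\n", name, status); }
static void lr_think_time(double seconds) { printf("think %.0f s\n", seconds); }

/* The usual shape of an instrumented step in an Action section:
 * think time sits outside the transaction, so the user's pause is
 * not counted in the measured server response time. */
int Action(void)
{
    lr_think_time(10);                       /* user reads the page */
    lr_start_transaction("Login");           /* timer starts        */
    /* web_submit_data("login.pl", ...) would go here */
    lr_end_transaction("Login", LR_AUTO);    /* timer stops         */
    return 0;
}
```

Keeping think time outside the start/end pair is the reason the sample script below calls lr_think_time before lr_start_transaction's measured request.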

Sample script

Action()
{
     char *Var1,*Var2,*Var3;
    char *Var4,*Var5,*Var6;
    char *Var7;
    
    int i,x,y;
    
    // Deltatours Application Launch    
    
    lr_start_transaction("Deltatours_FlightReservation_T01_ApplicationLaunch");    
    
    
    web_reg_find("Text=Deltatours","SaveCount=Count1", 
        LAST);
    

    // <input type="hidden" name="userSession"


value="117909.276630215zAztQfiptVzzzzzHDzDiQpHfAf"/>
    
    web_reg_save_param("userSession",
                       "LB=<input type=\"hidden\" name=\"userSession\" value=\"",
                       "RB=\"/>",
                       LAST);
         
    web_url("index.htm", 
        "URL=http://126.1.0.1:1090/Deltatours/index.htm", 
        "TargetFrame=", 
        "Resource=0", 
        "RecContentType=text/html", 
        "Referer=", 
        "Snapshot=t1.inf", 
        "Mode=HTML", 
        LAST);    
    
    if(atoi(lr_eval_string("{Count1}"))>0)
       {
           lr_output_message("**********HomePage Loaded Properly*****************");
           lr_end_transaction("Deltatours_FlightReservation_T01_ApplicationLaunch",LR_PASS);
       }
       else
       {
           lr_error_message("**********Home Page Failed to Load***********");
           lr_end_transaction("Deltatours_FlightReservation_T01_ApplicationLaunch",LR_FAIL);
           lr_exit(LR_EXIT_MAIN_ITERATION_AND_CONTINUE,LR_FAIL);
       }
    
    /* Deltatours Application LogIn */

    lr_start_transaction("Deltatours_FlightReservation_T02_LogIn");

    web_reg_find("Text=Deltatours","SaveCount=Count2",
        LAST);

    lr_think_time(30);

    web_submit_data("login.pl", 
        "Action=http://126.1.0.1:1090/cgi-bin/login.pl", 
        "Method=POST", 
        "TargetFrame=body", 
        "RecContentType=text/html", 
        "Referer=http://126.1.0.1:1090/cgi-bin/nav.pl?in=home", 
        "Snapshot=t2.inf", 
        "Mode=HTML", 
        ITEMDATA, 
        "Name=userSession", "Value={userSession}", ENDITEM, 
        "Name=username", "Value={Uname}", ENDITEM, 
        "Name=password", "Value={Pwd}", ENDITEM, 
        "Name=JSFormSubmit", "Value=off", ENDITEM, 
        "Name=login.x", "Value=91", ENDITEM, 
        "Name=login.y", "Value=10", ENDITEM, 
        LAST);
    
    if(atoi(lr_eval_string("{Count2}"))>0)
       {
           lr_output_message("**********LoginPage Loaded Properly*****************");
           lr_end_transaction("Deltatours_FlightReservation_T02_LogIn",LR_PASS);
       }
       else
       {
           lr_error_message("**********Login Page Failed to Load***********");
           lr_end_transaction("Deltatours_FlightReservation_T02_LogIn",LR_FAIL);
           lr_exit(LR_EXIT_MAIN_ITERATION_AND_CONTINUE,LR_FAIL);
       }

    /* Flights Button Selection */

    lr_start_transaction("Deltatours_FlightReservation_T03_FlightsButtonSelection");
    
    web_reg_find("Text=Deltatours", "SaveCount=Count3",
        LAST);

    // <input type="radio" name="seatType" value="Business"


    
    web_reg_save_param("seatType",
                       "LB=<input type=\"radio\" name=\"seatType\" value=\"",
                       "RB=\"",
                       "ORD=all",
                       LAST);
    
    // <input type="radio" name="seatPref" value="Window"
    
    web_reg_save_param("seatPref",
                       "LB=<input type=\"radio\" name=\"seatPref\" value=\"",
                       "RB=\"",
                       "ORD=all",
                       LAST);

    //<option value="Los angeles">


   
    web_reg_save_param("DeptCity",
                       "LB=<option value=\"",
                       "RB=\">",
                       "ORD=all",
                       LAST);
    
    // <option selected="selected" value="Denver">
    
    web_reg_save_param("DefaultDeptCity",
                       "LB=<option selected=\"selected\" value=\"",
                       "RB=\">",
                       "ORD=all",
                       LAST);
    
    lr_think_time(30);

    web_url("Search Flights Button", 


        "URL=http://126.1.0.1:1090/cgi-bin/welcome.pl?page=search", 
        "TargetFrame=body", 
        "Resource=0", 
        "RecContentType=text/html", 
        "Referer=http://126.1.0.1:1090/cgi-bin/nav.pl?page=menu&in=home", 
        "Snapshot=t3.inf", 
        "Mode=HTML", 
        LAST);
    
    if(atoi(lr_eval_string("{Count3}"))>0)
       {
           lr_output_message("*****Successfully navigated to flight selection page*********");
           lr_end_transaction("Deltatours_FlightReservation_T03_FlightsButtonSelection",LR_PASS);
       }
       else
       {
           lr_error_message("**********Failed to navigate to flight selection page***********");
           lr_end_transaction("Deltatours_FlightReservation_T03_FlightsButtonSelection",LR_FAIL);
           lr_exit(LR_EXIT_MAIN_ITERATION_AND_CONTINUE,LR_FAIL);
       }
    
    Var1=lr_paramarr_random("seatType");
    lr_save_string(Var1,"RandseatType");
    
    Var2=lr_paramarr_random("seatPref");
    lr_save_string(Var2,"RandseatPref");
    
    x=lr_paramarr_len("DeptCity");
    lr_output_message("********************The length of DeptCity is %d*********",x);
    
    
    y=lr_paramarr_len("DefaultDeptCity");
    lr_output_message("********************The length of DefaultDeptCity is %d*********",y);
    
    for(i=1;i<=y;i++)
    {
        Var6=lr_paramarr_idx("DefaultDeptCity",i);
        lr_save_int(x+i,"DeptCityNewIndx");
        lr_save_string(Var6,lr_eval_string("DeptCity_{DeptCityNewIndx}"));
    }

    // Update the array count so lr_paramarr_random can pick the appended values
    lr_save_int(x+y,"DeptCity_count");

    Var7=lr_paramarr_random("DeptCity");
    lr_save_string(Var7,"RandDeptCity");

  
    /* Flight Booking */

    lr_start_transaction("Deltatours_FlightReservation_T04_FlightBooking");
    
    web_reg_find("Text=Flight Selections", "SaveCount=Count4",
        LAST);
    
    // <input type="radio" name="outboundFlight" value="281;1861;02/17/2016">
    
    web_reg_save_param("outboundFlight",
                       "LB=<input type=\"radio\" name=\"outboundFlight\" value=\"",
                       "RB=\">",
                       "ORD=all",
                       LAST);
    lr_think_time(31);

    web_submit_data("reservations.pl", 
        "Action=http://126.1.0.1:1090/cgi-bin/reservations.pl", 
        "Method=POST", 
        "TargetFrame=", 
        "RecContentType=text/html", 
        "Referer=http://126.1.0.1:1090/cgi-bin/reservations.pl?page=welcome", 
        "Snapshot=t4.inf", 
        "Mode=HTML", 
        ITEMDATA, 
        "Name=advanceDiscount", "Value=0", ENDITEM, 
        "Name=depart", "Value={RandDeptCity}", ENDITEM, 
        "Name=departDate", "Value=02/17/2016", ENDITEM, 
        "Name=arrive", "Value=Sydney", ENDITEM, 
        "Name=returnDate", "Value=02/18/2016", ENDITEM, 
        "Name=numPassengers", "Value=1", ENDITEM, 
        "Name=seatPref", "Value={RandseatPref}", ENDITEM, 
        "Name=seatType", "Value={RandseatType}", ENDITEM, 
        "Name=.cgifields", "Value=roundtrip", ENDITEM, 
        "Name=.cgifields", "Value=seatType", ENDITEM, 
        "Name=.cgifields", "Value=seatPref", ENDITEM, 
        "Name=findFlights.x", "Value=56", ENDITEM, 
        "Name=findFlights.y", "Value=13", ENDITEM, 
        LAST);
    
    if(atoi(lr_eval_string("{Count4}"))>0)
       {
           lr_output_message("*********Successfully navigated to the Flight Selection page*****************");
           lr_end_transaction("Deltatours_FlightReservation_T04_FlightBooking",LR_PASS);
       }
       else
       {
           lr_error_message("**********Failed to load Flight Selection page***********");
           lr_end_transaction("Deltatours_FlightReservation_T04_FlightBooking",LR_FAIL);
           lr_exit(LR_EXIT_MAIN_ITERATION_AND_CONTINUE,LR_FAIL);
       }

    Var4=lr_paramarr_idx("outboundFlight",1);
    Var5=(char *)strtok(Var4," ");
    lr_save_string(Var5,lr_eval_string("outboundFlight_1"));
    
    Var3=lr_paramarr_random("outboundFlight");
    lr_save_string(Var3,"RandoutboundFlight");

    /* Flight Time Selection */

    lr_start_transaction("Deltatours_FlightReservation_T05_FlightTimeSelection");
    
    web_reg_find("Text=Flight Reservation","SaveCount=Count5",
        LAST);
    

    lr_think_time(29);
    web_submit_data("reservations.pl_2", 
        "Action=http://126.1.0.1:1090/cgi-bin/reservations.pl", 
        "Method=POST", 
        "TargetFrame=", 
        "RecContentType=text/html", 
        "Referer=http://126.1.0.1:1090/cgi-bin/reservations.pl", 
        "Snapshot=t5.inf", 
        "Mode=HTML", 
        ITEMDATA, 
        "Name=outboundFlight", "Value={RandoutboundFlight}", ENDITEM, 
        "Name=numPassengers", "Value=1", ENDITEM, 
        "Name=advanceDiscount", "Value=0", ENDITEM, 
        "Name=seatType", "Value={RandseatType}", ENDITEM, 
        "Name=seatPref", "Value={RandseatPref}", ENDITEM, 
        "Name=reserveFlights.x", "Value=28", ENDITEM, 
        "Name=reserveFlights.y", "Value=7", ENDITEM, 
        LAST);
    
    if(atoi(lr_eval_string("{Count5}"))>0)
    {
        lr_output_message("*********Successfully navigated to the time selection page**********");
        lr_end_transaction("Deltatours_FlightReservation_T05_FlightTimeSelection",LR_PASS);
    }
    else
    {
        lr_error_message("*********Failed to navigate to time selection page**********");
        lr_end_transaction("Deltatours_FlightReservation_T05_FlightTimeSelection",LR_FAIL);
        lr_exit(LR_EXIT_MAIN_ITERATION_AND_CONTINUE,LR_FAIL);
    }

    /* Making Payment */

    lr_start_transaction("Deltatours_FlightReservation_T06_Payment");
    
    web_reg_find("Text=Reservation Made!","SaveCount=Count6", 
        LAST);
    
    lr_think_time(33);
    web_submit_data("reservations.pl_3", 
        "Action=http://126.1.0.1:1090/cgi-bin/reservations.pl", 
        "Method=POST", 
        "TargetFrame=", 
        "RecContentType=text/html", 
        "Referer=http://126.1.0.1:1090/cgi-bin/reservations.pl", 
        "Snapshot=t6.inf", 
        "Mode=HTML", 
        ITEMDATA, 
        "Name=firstName", "Value={Fname}", ENDITEM, 
        "Name=lastName", "Value={Lname}", ENDITEM, 
        "Name=address1", "Value={Add1}", ENDITEM, 
        "Name=address2", "Value={Add2}", ENDITEM, 
        "Name=pass1", "Value={Passengers}", ENDITEM, 
        "Name=creditCard", "Value={CCnum}", ENDITEM, 
        "Name=expDate", "Value=12/21", ENDITEM, 
        "Name=oldCCOption", "Value=", ENDITEM, 
        "Name=numPassengers", "Value=1", ENDITEM, 
        "Name=seatType", "Value={RandseatType}", ENDITEM, 
        "Name=seatPref", "Value={RandseatPref}", ENDITEM, 
        "Name=outboundFlight", "Value={RandoutboundFlight}", ENDITEM, 
        "Name=advanceDiscount", "Value=0", ENDITEM, 
        "Name=returnFlight", "Value=", ENDITEM, 
        "Name=JSFormSubmit", "Value=off", ENDITEM, 
        "Name=.cgifields", "Value=saveCC", ENDITEM, 
        "Name=buyFlights.x", "Value=23", ENDITEM, 
        "Name=buyFlights.y", "Value=13", ENDITEM, 
        LAST);
    
    if(atoi(lr_eval_string("{Count6}"))>0)
    {
        lr_output_message("*********Invoice generated successfully**********");
        lr_end_transaction("Deltatours_FlightReservation_T06_Payment",LR_PASS);
    }
    else
    {
        lr_error_message("*********Invoice generation failed**********");
        lr_end_transaction("Deltatours_FlightReservation_T06_Payment",LR_FAIL);
        lr_exit(LR_EXIT_MAIN_ITERATION_AND_CONTINUE,LR_FAIL);
    }
    /* Deltatours Application SignOff */

    lr_start_transaction("Deltatours_FlightReservation_T07_SignOff");
    
    web_reg_find("Text=Deltatours", "SaveCount=Count7",
        LAST);    
    lr_think_time(31);

    web_url("SignOff Button", 
        "URL=http://126.1.0.1:1090/cgi-bin/welcome.pl?signOff=1", 
        "TargetFrame=body", 
        "Resource=0", 
        "RecContentType=text/html", 
        "Referer=http://126.1.0.1:1090/cgi-bin/nav.pl?page=menu&in=flights", 
        "Snapshot=t7.inf", 
        "Mode=HTML", 
        LAST);

    if(atoi(lr_eval_string("{Count7}"))>0)
    {
        lr_output_message("*********Signed out from the application successfully**********");
        lr_end_transaction("Deltatours_FlightReservation_T07_SignOff",LR_PASS);
    }
    else
    {
        lr_error_message("*********Failed to log out from the application**********");
        lr_end_transaction("Deltatours_FlightReservation_T07_SignOff",LR_FAIL);
        lr_exit(LR_EXIT_MAIN_ITERATION_AND_CONTINUE,LR_FAIL);
    }

    return 0;
}
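The strtok call used with outboundFlight above can be illustrated standalone. The sample value comes from the recorded comment in the script (flight number, price, and departure date separated by ';'); the helper name is mine, for illustration only:

```c
#include <string.h>

/* Split a captured outboundFlight value such as "281;1861;02/17/2016"
 * into its three fields. Like the script above, this relies on strtok,
 * which modifies the buffer in place, so pass a writable copy. */
static void split_flight(char *value, char **flight, char **price, char **date)
{
    *flight = strtok(value, ";");
    *price  = strtok(NULL, ";");
    *date   = strtok(NULL, ";");
}
```

Because strtok writes '\0' terminators into its argument, the script's approach of tokenizing the value returned by lr_paramarr_idx works, but the original parameter buffer is altered in the process.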
Session 13

Case Study 1- TESCO
