
UNIT-5
Regression testing: Progressive vs Regressive testing, Regression Testability, Objectives of Regression
Testing, when is Regression Testing done? Regression Testing Types, Regression Testing Techniques.
Automation and Testing Tools: Need for Automation, Categorization of Testing Tools, Selection of Testing
tools, Cost Incurred in Testing Tools, Guidelines for Automated Testing, Overview of Some Commercial
Testing Tools.

Progressive vs. Regressive Testing:


All the test case design methods and testing techniques discussed so far are referred to as progressive
testing or development testing. From verification to validation, the testing process progresses towards the release
of the product. However, bug fixing may be required at any stage of development in order to maintain the
software, so there is a need to check the software again to validate that there has been no adverse effect on the
already working software. A system under test (SUT) is said to regress if:
• A modified component fails, or
• A new component when used with unchanged components causes failures in the unchanged components
by generating side-effects or feature interactions.
Now the following versions will be there in the system:
Baseline Version: The version of a component that has passed a test suite.
Delta Version: A changed version that has not passed a regression test.
Delta Build: An executable configuration of the SUT that contains all the delta and baseline components.
Regression testing is the selective retesting of a system or component to verify that modifications have not
caused unintended effects and that the system or component still complies with its specified requirements.
Regression testing can be defined as the software maintenance task performed on a modified program to instill
confidence that changes are correct and have not adversely affected the unchanged portions of the program.

Regression Testability:
Regression testability refers to the property of a program, modification or test suite that lets it be effectively
regression-tested. Under this definition, regression testability is a function of both the design of the program and
the test suite. To assess regression testability, a regression number is computed: the average number of test
cases in the test suite that are affected by a modification to a single instruction. This number is computed
using information about the test suite's coverage of the program.
If regression testability is considered at an early stage of development, it can provide significant savings in the
cost of development and maintenance of the software.
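As an illustrative sketch (the data and names here are invented), the regression number can be computed from a coverage map that records which test cases execute each instruction:

```python
def regression_number(coverage):
    """Average number of test cases affected by a change to any single instruction.
    coverage: dict mapping an instruction (e.g. a line id) to the set of test
    cases that execute it."""
    if not coverage:
        return 0.0
    return sum(len(tests) for tests in coverage.values()) / len(coverage)

# Hypothetical coverage data: a change to line_1 forces 3 retests, etc.
coverage = {
    "line_1": {"t1", "t2", "t3"},
    "line_2": {"t1"},
    "line_3": {"t2", "t3"},
}
print(regression_number(coverage))  # 2.0
```

A lower regression number suggests the program and test suite are easier to regression-test, since an average modification disturbs fewer test cases.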

Objectives of Regression Testing:


It tests to check that the bug has been addressed: The first objective is to check whether the bug fix has
worked or not. So, you run exactly the same test that was executed when the problem was first found. If the
program fails on this test, it means the bug has not been fixed correctly and there is no need to do any
further regression testing.

It finds other related bugs: It may be possible that the developer has fixed only the symptoms of the reported
bug without fixing the underlying fault. So, regression tests are necessary to validate that the system does not
have any related bugs.
It tests to check the effect on other parts of the program: It may be possible that bug fixing has unwanted
consequences on other parts of a program. So, regression tests are necessary to check the influence of changes
in one part on other parts of the program.

When is Regression Testing Done?


Software Maintenance:
Corrective Maintenance: Changes made to correct a system after a failure has been observed.
Adaptive Maintenance: Changes made to achieve continuing compatibility with the target environment or other
systems.
Perfective Maintenance: Changes designed to improve or add capabilities.
Preventive Maintenance: Changes made to increase robustness, maintainability, portability and other features.
Rapid Iterative Development:
The extreme programming approach requires that a test be developed for each class and that this test be rerun
every time the class changes.
First Step of Integration:
Rerunning accumulated test suites as new components are added to successive test configurations builds the
regression suite incrementally and reveals regression bugs.
Compatibility Assessment and Benchmarking:
Some test suites are designed to be run on a wide range of platforms and applications to establish conformance
with a standard or to evaluate time and space performance. These test suites are a form of regression testing
but are not intended to reveal regression bugs.

Regression Testing Types:


Bug-Fix Regression:
This testing is performed after a bug has been reported and fixed. Its goal is to repeat the test cases that
exposed the problem in the first place.
Side-Effect Regression/Stability Regression:
It involves retesting a substantial part of the product. The goal is to prove that the change has no detrimental
effect on something that previously worked. It tests the overall integrity of the program, not the success of the
software fixes.

Regression Testing Techniques:


There are three different techniques for regression testing. They are discussed below:
Regression test selection technique: This technique attempts to reduce the time required to retest a modified
program by selecting some subset of the existing test suite.
Test case prioritization technique: Regression test prioritization attempts to reorder a regression test suite so
that those tests with the highest priority according to some established criteria are executed earlier in the
regression testing process rather than those with lower priority. There are two types of prioritization:

(a) General Test Case Prioritization: For a given program P and test suite T we prioritize the test cases in
T that will be useful over a succession of subsequent modified versions of P without any knowledge of
the modified version.
(b) Version-Specific Test Case Prioritization: We prioritize the test cases in T when P is modified to P`,
with the knowledge of the changes made in P.
Test suite reduction technique: It reduces testing costs by permanently eliminating redundant test cases from
test suites, in terms of the code or functionality they exercise.
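The reduction idea can be sketched as a greedy set-cover heuristic; this is an assumption for illustration, since real reduction tools use more elaborate algorithms:

```python
def reduce_suite(coverage):
    """Keep a subset of tests that still covers every requirement.
    coverage: dict mapping test name -> set of requirements (or lines) it exercises."""
    uncovered = set().union(*coverage.values())
    reduced = []
    while uncovered:
        # Greedily pick the test covering the most still-uncovered requirements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        reduced.append(best)
        uncovered -= coverage[best]
    return reduced

# t2 is redundant: everything it exercises is already exercised by t1.
suite = {"t1": {"r1", "r2"}, "t2": {"r2"}, "t3": {"r3"}}
print(reduce_suite(suite))  # ['t1', 't3']
```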

Selective Retest Technique:


Selective retest techniques attempt to reduce the cost of testing by identifying the portions of P` that must be
exercised by the regression test suite. Selective retesting is distinctly different from a retest-all approach that
always executes every test in an existing regression test suite (RTS). So the objective of selective retest
technique is cost reduction. Following are the characteristic features of the selective retest technique:
• It minimizes the resources required to regression test a new version.
• It is achieved by minimizing the no. of test cases applied to the new version.
• It is needed because a regression test suite grows with each version resulting in broken, obsolete,
uncontrollable, redundant test cases.
• It analyses the relationship between the test cases and the software elements they cover. It uses the
information about changes to select test cases.

The typical steps taken by this technique are in the following manner:
1. Select T`, a subset of T, as the set of test cases to execute on P`.
2. Test P` with T`, establishing the correctness of P` with respect to T`.
3. If necessary, create T``, a set of new functional or structural test cases for P`.
4. Test P` with T``, establishing the correctness of P` with respect to T``.
5. Create T```, a new test suite and test execution profile for P`, from T, T` and T``.

Each of these steps involves the following important problems:


• Regression test selection problem: Step 1
• Coverage identification problem: Step 3
• Test suite execution problem: Steps 2 and 4
• Test suite maintenance problem: Step 5
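The five steps above can be sketched as follows; run_test and generate_new_tests are stand-ins (assumptions) for real test-execution and test-generation machinery:

```python
def selective_retest(T, changed_lines, coverage, run_test, generate_new_tests):
    # Step 1: select T', the subset of T whose coverage touches the change.
    T_prime = [t for t in T if coverage[t] & changed_lines]
    # Step 2: test P' with T'.
    results = {t: run_test(t) for t in T_prime}
    # Step 3: if necessary, create T'', new tests for the modified code.
    T_double = generate_new_tests(changed_lines)
    # Step 4: test P' with T''.
    results.update({t: run_test(t) for t in T_double})
    # Step 5: create T''', the new suite and profile for P', from T, T' and T''.
    T_triple = list(dict.fromkeys(T + T_double))
    return T_prime, T_double, T_triple, results

coverage = {"t1": {1, 2}, "t2": {3}, "t3": {2, 5}}
T_prime, T_double, T_triple, results = selective_retest(
    ["t1", "t2", "t3"], {2}, coverage,
    run_test=lambda t: "pass",
    generate_new_tests=lambda changed: ["t_new"],
)
print(T_prime, T_triple)  # ['t1', 't3'] ['t1', 't2', 't3', 't_new']
```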

Selection Criteria Based on Code:


The motivation for code-based testing is that potential failures can only be detected if the parts of code that can
cause faults are executed.
Fault-revealing test cases: A test case t detects a fault in P` if it causes P` to fail. So t is fault-revealing for P`.
Modification-revealing test cases: A test case t is modification-revealing for P and P` if and only if it causes
the outputs of P and P` to differ.
Modification-traversing test cases: A test case t is modification-traversing if and only if it executes new or
modified code in P`.
Regression Test Selection Techniques:
A variety of regression test selection techniques have been described in the research literature. We consider the
following selective retest techniques:
Minimization techniques: Minimization-based regression test selection techniques attempt to select minimal
sets of test cases from T that yield coverage of modified or affected portions of P.
Dataflow techniques: Dataflow coverage-based regression test selection techniques select test cases that
exercise data interactions that have been affected by modifications.
Safe techniques: Most regression test selection techniques, minimization and dataflow techniques among them
are not designed to be safe. Techniques that are not safe can fail to select a test case that would have revealed a
fault in the modified program. In contrast when an explicit set of safety conditions can be satisfied safe
regression test selection techniques guarantee that the selected subset T` contains all test cases in the original
test suite T that can reveal faults in P`.
Ad hoc/Random techniques: When time constraints prohibit the use of the retest-all approach but no test
selection tool is available, developers often select test cases based on intuition or loose associations of test cases
with functionality. Another simple approach is to randomly select a predetermined no. of test cases from T.
Retest-all technique: The retest-all technique simply reuses all existing test cases. To test P`, the technique
effectively selects all test cases in T.
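As an illustration, the modification-traversing criterion defined earlier can be coded in a few lines, assuming per-test coverage sets recorded from an instrumented baseline run:

```python
def modification_traversing(test_coverage, modified_lines):
    """A test is modification-traversing iff it executes any new or modified line."""
    return bool(test_coverage & modified_lines)

# Hypothetical coverage: line numbers each test executed on the baseline P.
suite_coverage = {
    "t1": {10, 11, 12},
    "t2": {20, 21},
    "t3": {11, 30},
}
modified = {11, 30}  # lines changed in P'
selected = [t for t, cov in suite_coverage.items()
            if modification_traversing(cov, modified)]
print(selected)  # ['t1', 't3']
```

Note that modification-traversing selection is only an approximation of modification-revealing selection, which is itself an approximation of fault-revealing selection.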

Regression Test Prioritization:


The regression test prioritization approach is different as compared to selective retest techniques. The
prioritization methods order the execution of RTS with the intention that a fault is detected earlier. By
prioritizing the execution of a regression test suite these methods reveal important defects in a software system
earlier in the regression testing process.
This approach can be used along with the previous selective retest technique. The steps for this approach are
given below:
1. Select T` from T, a set of test cases to execute on P`.
2. Produce T`p, a permutation of T`, such that T`p will have a better rate of fault detection than T`.
3. Test P` with T`p in order to establish the correctness of P` with respect to T`p.
4. If necessary, create T``, a set of new functional or structural tests for P`.
5. Test P` with T`` in order to establish the correctness of P` with respect to T``.
6. Create T```, a new test suite for P`, from T, T`p and T``.
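Step 2, producing T`p, can be sketched with a greedy "additional coverage" ordering, one common heuristic for improving the rate of fault detection (an assumption; criteria such as fault history are equally valid):

```python
def prioritize(coverage):
    """Order tests so each next test adds the most not-yet-covered statements."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        # Pick the test contributing the most statements not yet covered.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

# t2 covers the most new statements, then t3 adds statement 5, then t1 adds nothing new.
cov = {"t1": {1, 2}, "t2": {1, 2, 3, 4}, "t3": {5}}
print(prioritize(cov))  # ['t2', 't3', 't1']
```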

Automation and Testing Tools
Need for Automation:
If an organization needs to choose a testing tool, the following benefits of automation must
be considered:

--Reduction of testing Effort


--Reduces the testers’ involvement in executing tests
--Facilitates Regression Testing
--Avoids Human mistakes
--Reduces overall cost of the software
--Simulated testing
--Internal Testing
--Test case Design

Categorization of Testing Tools:


Static Testing Tools
Dynamic Testing Tools
Testing Activity Tools

Static Testing Tools:


For static testing, there are static program analysers which scan the source program and detect
possible faults and anomalies. These static tools parse the program text, recognize the various
sentences, and perform the following:
--Check that statements are well formed.
--Make inferences about the control flow of the program.
--Compute the set of all possible values for program data.

Static tools perform the following types of static analysis:


--Control Flow Analysis
--Data use Analysis
--Interface Analysis
--Path Analysis
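As a toy illustration of data-use analysis, the following sketch uses Python's ast module to flag variables that are assigned but never read; it is a deliberately simplified stand-in for a commercial static analyser:

```python
import ast

def unused_assignments(source):
    """Return names that are assigned (Store context) but never read (Load context)."""
    tree = ast.parse(source)
    stored, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                stored.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)
    return stored - loaded

code = "x = 1\ny = 2\nprint(x)"
print(unused_assignments(code))  # {'y'}
```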

Dynamic Testing Tools:


Automated test tools enable the test team to capture the state of events during the execution of a
program by preserving a snapshot of the conditions. These tools are sometimes called Program
Monitors. These monitors perform the following functions:
--List the number of times a component is called or a line of code is executed. This information tells
testers about the statement or path coverage of their test cases.
--Report on whether a decision point has branched in all directions, thereby providing information about
branch coverage.
--Report summary statistics providing a high-level view of the percentage of statements, paths, and
branches that have been covered by the collective set of test cases run. This information is important
when test objectives are stated in terms of coverage.
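A minimal program monitor in this spirit can be sketched with Python's sys.settrace, counting how often each line of a traced function executes (a toy, not a production coverage tool):

```python
import sys
from collections import Counter

def monitor(func, *args):
    """Run func(*args) under a trace function that counts line executions."""
    counts = Counter()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            counts[frame.f_lineno] += 1  # record one execution of this line
        return tracer
    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always restore normal execution
    return result, counts

def demo(n):
    total = 0
    for i in range(n):
        total += i
    return total

result, counts = monitor(demo, 3)
print(result, len(counts) > 0)  # 3 True
```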

Testing Activity Tools: These tools are based on the testing activities or tasks in a particular phase of
the SDLC. Testing activities can be categorized as:
--Tools for Review and Inspections
--Tools for Test Planning
--Tools for Test Design and Development
--Test Execution and Evaluation Tools

Tools for Review and Inspections: Since these tools perform static analysis on many items, some tools
are designed to work with specifications, but far more of the available tools work exclusively
with code. In this category, the following types of tools are required:
Complexity Analysis Tools: It is important for testers that complexity is analysed so that testing time
and resources can be estimated.
Code Comprehension: These tools help in understanding dependencies, tracing program logic, and viewing
graphical representations of the program.

Tools for Test Planning: The types of tools required for test planning are:
-- Templates for test plan documentation.
-- Test schedule and staffing estimates.
-- Complexity analyzer.

Tools for Test Design and Development:


Test data generator: It automates the generation of test data based on a user-defined format.
Test case generator: It automates the procedure of generating test cases. It works with a requirements
management tool which is meant to capture requirements information.
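A test data generator of this kind can be sketched as follows; the field:type format string is an assumed mini-syntax for illustration, not any particular tool's notation:

```python
import random
import string

def generate_record(fmt, seed=None):
    """Generate one record of test data from a format like "id:int,name:str"."""
    rng = random.Random(seed)  # seedable for reproducible test runs
    record = {}
    for field in fmt.split(","):
        name, kind = field.split(":")
        if kind == "int":
            record[name] = rng.randint(0, 999)
        elif kind == "str":
            record[name] = "".join(rng.choices(string.ascii_lowercase, k=6))
        else:
            raise ValueError(f"unknown field type: {kind}")
    return record

print(generate_record("id:int,name:str", seed=1))
```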

Test Execution and Evaluation Tools:


Capture/Playback Tools: These tools record events (keystrokes, mouse activity, display output) at the
time of running the system and place the information into a script.

Coverage Analysis Tools: These tools automate the process of thoroughly testing the software and
provide a quantitative measure of the coverage of the system being tested.

Memory Testing Tools: These tools verify that an application is properly using its memory resources.

Test Management Tools: Some test management tools, such as Rational's TestStudio, are integrated with
requirement and configuration management and defect tracking tools, in order to simplify the entire
testing life cycle.

Network-testing Tools: These tools monitor, measure, test and diagnose performance across an entire
network.
Performance testing Tools: These tools help in measuring the response time and load capabilities of a
system.

Selection of Testing Tools:


--Match the tool to its appropriate use
--Select the tool to its appropriate SDLC phase
--Select the tool to the skill of the tester
--Select a tool which is affordable
--Determine how many tools are required for testing the system
--Select the tool after preparing the schedule of testing
Cost incurred in Testing Tools:

Automated Script Development: Automated test tools do not create test scripts by themselves. Therefore, a
significant amount of time is needed to program the tests.

Training is required: Testers require training regarding the tool, otherwise the tools may end up on
the shelf or implemented inefficiently.

Configuration Management: It is necessary to track large number of files and test related artifacts.

Learning Curve for the Tools: Test scripts generated by the tool during recording must be modified
manually, requiring tool-scripting knowledge in order to make the scripts robust, reusable and
maintainable.

Testing Tools can be Intrusive: Some tools require that special code be inserted in the system in order
to work correctly and to be integrated with the testing tools. Such tools are called intrusive tools.

Multiple Tools are required: It may be possible that your requirement is not satisfied with just one
tool for automation.

Guidelines for Automated Testing:


Consider building a tool instead of buying one if possible: If the requirement is small and sufficient
resources allow, then build the tool instead of buying one.

Test the tool on an application prototype: While purchasing the tool it is important to verify that it
works properly with the system being developed.

Not all the tests should be automated: Automated testing is an enhancement of manual testing, but it
cannot be expected that all test on a project can be automated.

Select the tools according to Organization needs: Focus on the needs of the Organization and know
the resources (budget, schedule) before choosing the automation tool.

Use proven test-script development techniques: Automation can be effective if proven techniques
are used to produce efficient, maintainable and reusable test scripts.

Automate the regression tests whenever feasible: Regression testing consumes a lot of time. If tools
are used for this testing, the testing time can be greatly reduced.

Overview of Some Commercial Testing Tools:


Mercury Interactive’s WinRunner:
WinRunner is Mercury Interactive's enterprise functional testing tool. It is used to quickly
create and run sophisticated automated tests on your application. WinRunner helps you automate the
testing process, from test development to execution. You create adaptable and reusable test scripts that
challenge the functionality of your application. Prior to a software release, you can run these tests in a
single overnight run, enabling you to detect defects and ensure superior software quality.

Main Features of WinRunner are:


--Developed by Mercury Interactive
--Functional testing tool
--Supports client/server and web technologies such as VB, VC++, D2K, Java, HTML, PowerBuilder,
Delphi, Siebel (ERP)
--To support .NET, XML, SAP, PeopleSoft, Oracle applications and multimedia, QTP can be used.
--WinRunner runs on Windows only.
--The tool was developed in C in a VC++ environment.

Segue Software’s SilkTest:


SilkTest provides robust and portable test automation for web, native, and enterprise software applications.
SilkTest's portability enables users to test applications more effectively with lower complexity and cost
in comparison to other functional testing tools on the market. SilkTest's role-based testing enables
business stakeholders, QA engineers, and developers to contribute to the whole automation testing
process, which drives collaboration and increases the effectiveness of software testing.

IBM rational SQA Robot:


Rational Robot is an automated functional and regression testing tool for automating Windows, Java, IE
and ERP applications on the Windows platform. Rational Robot provides test cases for common objects
such as menus, lists and bitmaps, and specialized test cases for objects specific to the development
environment.

Mercury Interactive’s LoadRunner:


Mercury LoadRunner is an automated performance and load testing tool from Hewlett-Packard (HP).
An industry standard, Mercury LoadRunner is used to predict an application's behavior and
performance prior to live release. It is an enterprise-class solution for analyzing system behavior and
performance.

Apache's JMeter
The Apache JMeter application is open-source software, a 100% pure Java application designed to
load test functional behavior and measure performance. It was originally designed for testing web
applications but has since expanded to other test functions.

Mercury Interactive’s Test Director


TestDirector is a test management tool which includes:
--Requirements Management – Helps in the recording and linking of software requirements in
TestDirector to ensure traceability of requirements through to test cases and to defects.
--Test Plan – Helps in the documentation of test plans (IEEE 829 test procedures and test cases) using
TestDirector. In the case of manual tests, test strategy, test planning and test design can be realised
through the creation of TestDirector test cases.
--Test Lab – Helps in managing the execution of tests including the reporting of progress of testing
using Mercury Test Lab.
