Selenium


• What is Selenium?

Selenium is a suite of tools for automating web application testing across many platforms.

• Have you read any good books on Selenium?

There are several books covering the Selenium automation tool:

o An Introduction to Testing Web Applications with twill and Selenium by C. Titus Brown,
Gheorghe Gheorghiu, Jason Huggins
o Java Power Tools by John Ferguson Smart

• What tests can Selenium do?

Selenium can be used for functional, regression, and load testing of web-based applications. It can also be used for post-release validation with continuous integration tools such as Hudson or CruiseControl.

• What is the price of a Selenium license per server?

Selenium is open source software, released under the Apache 2.0 license and can be downloaded
and used without charge.

• How much does a Selenium license cost per client machine?

Selenium is open source software, released under the Apache 2.0 license and can be downloaded
and used without charge.

• Where to download Selenium?

Selenium can be downloaded and installed for free from seleniumhq.org

• What is the latest version of Selenium?

The latest versions of Selenium Core, Selenium IDE, Selenium RC, and Selenium Grid can be found on the Selenium download page.

• What is Selenium IDE?

Selenium IDE is a Firefox add-on that records clicks, typing, and other actions to make a test,
which you can play back in the browser

• What does SIDE stand for?

SIDE stands for Selenium IDE. It is a trick interview question.



• What is Selenium Remote Control (RC) tool?

Selenium Remote Control (RC) runs your tests in multiple browsers and platforms. Tweak your
tests in your preferred language.

• What is Selenium Grid?

Selenium Grid extends Selenium RC to distribute your tests across multiple servers, saving you
time by running tests in parallel.

• How many browsers are supported by Selenium IDE?

A test engineer can record and play back tests with Selenium IDE in Firefox only.

• How many browsers are supported by Selenium Remote Control?

A QA engineer can use the Firefox, IE 7, Safari, and Opera browsers to run actual tests in Selenium RC.

• How many programming languages can you use in Selenium RC?

Several programming languages are supported by Selenium Remote Control: C#, Java, Perl, PHP, Python, and Ruby.

• What are the advantages of using Selenium as a testing tool?

If a QA engineer compares Selenium with HP QTP or Micro Focus SilkTest, the tremendous cost savings of Selenium are easy to see. In contrast to an expensive SilkTest or QTP license, the Selenium automation tool is completely free. Selenium allows writing and executing test cases in various programming languages, including C#, Java, Perl, Python, PHP, and even HTML. Selenium allows simple and powerful DOM-level testing, and at the same time it can be used for testing in traditional waterfall or modern Agile environments. Selenium is also definitely a great fit for continuous integration.

• What are the disadvantages of using Selenium as a testing tool?

Selenium's weak points are its tricky setup, tedious error diagnosis, and the fact that it can test only web applications.

• How do you develop Selenium test cases?

Using Selenium IDE, a QA tester can record a test to learn the syntax of Selenium IDE commands, or to check the basic syntax for a specific type of user interface. Keep in mind that the Selenium IDE recorder is not as clever as QA testers want it to be. The quality assurance team should never treat Selenium IDE as a "record, save, and run it" tool; always expect to rework recorded test cases to make them maintainable in the future.

• Describe some problems that you have had with the Selenium tool.

As with other test automation tools such as SilkTest, HP QTP, Watir, and Canoo WebTest, Selenium allows testers to record, edit, and debug test cases. However, there are several problems that seriously affect the maintainability of recorded test cases.

The most obvious problem is complex IDs for HTML elements. If IDs are auto-generated, recorded test cases may fail during playback. The workaround is to use XPath to find the required HTML element.
Selenium supports AJAX, but the QA tester should be aware that Selenium does not know when an AJAX action has completed, so clickAndWait will not work. Instead, the QA tester could use pause, but the snowballing effect of several pause commands really slows down the total testing time of the test cases.
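
One common workaround, sketched below with the Selenium RC client for Python, is to replace a fixed pause with waitForCondition, which polls a JavaScript expression until it becomes true. The server address, page, and locators here are assumptions for illustration, not part of the original discussion.

from selenium import selenium

# Assumes a Selenium RC server on localhost:4444; URL and locators are hypothetical.
sel = selenium("localhost", 4444, "*firefox", "http://example.com/")
sel.start()
sel.open("/")
sel.click("id=load-results")  # triggers an AJAX request, so clickAndWait would not help

# Poll (up to 30 s) until the AJAX response has inserted the results element,
# instead of sleeping for a fixed time with pause. The expression is evaluated
# in the Selenium RC test window, hence the selenium.browserbot call.
sel.wait_for_condition(
    "selenium.browserbot.getCurrentWindow()"
    ".document.getElementById('results') != null",
    "30000")
sel.stop()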

• Is there training available for Selenium?

• Do you know any alternative test automation tools for Selenium?

Selenium appears to be the mainstream open source tool for browser-side testing, but there are many alternatives. Canoo WebTest is a great Selenium alternative and is probably the fastest automation tool. Another Selenium alternative is Watir, but in order to use Watir the QA tester has to learn Ruby. One more alternative to Selenium is Sahi, but it has a confusing interface and a small developer community.

• Compare HP QTP vs Selenium?

Read Selenium vs QTP comparison

• Compare Borland Silktest vs Selenium?

Check Selenium vs SilkTest comparison

Q1. What is Selenium?

Ans. Selenium is a set of tools that supports rapid development of test automation scripts for web-based applications. Selenium testing tools provide a rich set of testing functions specifically designed to fulfil the needs of testing web-based applications.

Q2. What are the main components of Selenium testing tools?


Ans. Selenium IDE, Selenium RC and Selenium Grid

Q3. What is Selenium IDE?


Ans. Selenium IDE is for building Selenium test cases. It operates as a Mozilla Firefox add-on and provides an easy-to-use interface for developing and running individual test cases or entire test suites. Selenium IDE has a recording feature, which keeps track of user actions as they are performed and stores them as a reusable script to play back.



Q4. What is the use of context menu in Selenium IDE?


Ans. It allows the user to pick from a list of assertions and verifications for the selected location.

Q5. Can tests recorded using Selenium IDE be run in other browsers?
Ans. Yes. Although Selenium IDE is a Firefox add-on, tests created in it can also be run in other browsers by using Selenium RC (Selenium Remote Control) and specifying the name of the test suite on the command line.

Q6. What are the advantages and features of Selenium IDE?

Ans. 1. Intelligent field selection uses IDs, names, or XPath as needed
2. It is a record-and-playback tool, and scripts can be exported in various languages including C#, Java, Perl, Python, PHP, and HTML
3. Autocomplete for all common Selenium commands
4. Debugging and breakpoints
5. Option to automatically assert the title of every page
6. Support for the Selenium user-extensions.js file

Q7. What are the disadvantages of the Selenium IDE tool?


Ans. 1. The Selenium IDE tool can only be used in the Mozilla Firefox browser.
2. It does not handle multiple windows correctly when recording.

Q8. What is Selenium RC (Remote Control)?


Ans. Selenium RC allows the test automation expert to use a programming language for maximum flexibility and extensibility in developing test logic. For example, if the application under test returns a result set and the automated test program needs to run tests on each element in the result set, the iteration/loop support of the programming language can be used to step through the result set, calling Selenium commands to run tests on each item.

Selenium RC provides an API and library for each of its supported languages. The ability to use Selenium RC with a high-level programming language to develop test cases also allows the automated testing to be integrated with the project's automated build environment.
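
The iteration example above can be sketched as follows with the Selenium RC client for Python; the server address, URL, and table structure are hypothetical, for illustration only.

from selenium import selenium

# Assumes a Selenium RC server on localhost:4444; the results table is hypothetical.
sel = selenium("localhost", 4444, "*firefox", "http://example.com/")
sel.start()
sel.open("/search?q=widgets")

# Loop through the result set, calling Selenium commands on each item.
row_count = int(sel.get_xpath_count("//table[@id='results']/tbody/tr"))
for i in range(1, row_count + 1):
    price = sel.get_text("//table[@id='results']/tbody/tr[%d]/td[3]" % i)
    assert price.startswith("$"), "row %d has no price: %r" % (i, price)

sel.stop()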

Q9. What is Selenium Grid?

Ans. Selenium Grid allows the Selenium RC solution to scale for test suites that must be run in multiple environments. Selenium Grid can be used to run multiple instances of Selenium RC on various operating system and browser configurations.

Q10. How does Selenium Grid work?

Ans. Tests are sent to the Selenium Grid hub. The hub then redirects each test to an available Selenium RC, which launches the browser and runs the test. This allows the entire test suite to run in parallel.
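
In client code this means the test script stays the same; only the connection target changes. A hedged sketch in Python (the hub host name is hypothetical; port 4444 is the conventional default):

from selenium import selenium

# Point the client at the Grid hub instead of an individual RC server.
# The hub forwards the session to an available Selenium RC, which
# launches the browser and runs the commands.
sel = selenium("grid-hub.example.com", 4444, "*firefox", "http://example.com/")
sel.start()   # the hub allocates a remote control
sel.open("/")
print(sel.get_title())
sel.stop()    # releases the remote control back to the grid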

Q 11. What can you say about the flexibility of the Selenium test suite?
Ans. The Selenium testing suite is highly flexible. There are multiple ways to add functionality to the Selenium framework to customize test automation; compared with other test automation tools, this is Selenium's strongest characteristic. Selenium Remote Control's support for multiple programming and scripting languages allows the test automation engineer to build whatever logic they need into their automated testing, using a preferred programming or scripting language.

Also, the Selenium testing suite is an open source project, where code can be modified and enhancements can be submitted for contribution.

Q12. What tests can Selenium do?


Ans. Selenium is basically used for the functional testing of web-based applications. It can be used for testing in a continuous integration environment. It is also useful for agile testing.

Q13. What is the cost of Selenium test suite?


Ans. The Selenium test suite is a set of open source software tools; it is free of cost.

Q14. What browsers are supported by Selenium Remote Control?


Ans. The test automation expert can use the Firefox, IE 7/8, Safari, and Opera browsers to run tests in Selenium Remote Control.

Q15. What programming languages can you use in Selenium RC?


Ans. C#, Java, Perl, PHP, Python, Ruby

Q16. What are the advantages and disadvantages of using Selenium as testing tool?
Ans. Advantages: free; simple and powerful DOM (Document Object Model) level testing; can be used for continuous integration; a great fit for Agile projects.

Disadvantages: tricky setup; tedious error diagnosis; cannot test client-server applications.

Q17. What is the difference between QTP and Selenium?


Ans. Only web applications can be tested using the Selenium testing suite, whereas QTP can also be used for testing client-server applications. Selenium supports the following web browsers: Internet Explorer, Firefox, Safari, Opera, and Konqueror on Windows, Mac OS X, and Linux. QTP, however, is limited to Internet Explorer on Windows.

QTP uses a scripting language implemented on top of VBScript, whereas the Selenium test suite has the flexibility to use many languages such as Java, .NET, Perl, PHP, Python, and Ruby.

Q18. What is the difference between Borland SilkTest and Selenium?


Ans. Selenium is a completely free test automation tool, while SilkTest is not. Only web applications can be tested using the Selenium testing suite, whereas SilkTest can also be used for testing client-server applications. Selenium supports the following web browsers: Internet Explorer, Firefox, Safari, Opera, and Konqueror on Windows, Mac OS X, and Linux. SilkTest, however, is limited to Internet Explorer and Firefox.

SilkTest uses the 4Test scripting language, whereas the Selenium test suite has the flexibility to use many languages such as Java, .NET, Perl, PHP, Python, and Ruby.

Selenium tool
First of all, it is better to use Selenium RC to automate multiple test cases, using the language of your choice.

You have to follow the steps below when you automate multiple test cases:
1. Choose your preferred language, e.g. Ruby, JavaScript, Perl, etc.
2. Create an automation framework in your selected language. While creating the automation framework, take care of things like logical data independence, reporting, error-log handling, a centralized library, etc.
3. Record scripts through Selenium IDE and convert them to your selected language via the IDE.
4. Create a function for each test script, and put shared code in the centralized library file.
5. Create a driver file that defines the execution sequence of the test scripts (a sketch follows).
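
A minimal sketch of the driver file from step 5, using the Selenium RC client for Python. The test functions stand in for scripts converted in steps 3 and 4; all names, URLs, and locators are hypothetical.

from selenium import selenium

def test_search(sel):
    # A converted test script: one function per recorded-and-reworked case.
    sel.open("/")
    sel.type("name=q", "selenium")
    sel.click("id=go")
    sel.wait_for_page_to_load("30000")
    assert sel.is_text_present("Results")

def test_report(sel):
    sel.open("/reports")
    sel.click("link=Monthly report")
    sel.wait_for_page_to_load("30000")
    assert "Report" in sel.get_title()

# The execution sequence the driver runs, in order.
EXECUTION_SEQUENCE = [test_search, test_report]

if __name__ == "__main__":
    sel = selenium("localhost", 4444, "*firefox", "http://example.com/")
    sel.start()
    for test in EXECUTION_SEQUENCE:
        try:
            test(sel)
            print("PASS: %s" % test.__name__)
        except AssertionError as e:   # simple centralized error logging
            print("FAIL: %s (%s)" % (test.__name__, e))
    sel.stop()
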
1) Understand the basics right:
- Basics of web testing
- How Selenium works - http://seleniumhq.org/about/how.html
- Selenium API - http://seleniumhq.org/documentation/core/reference.html#actions
2) How does Selenium identify a web element on the page?
- By id, by name, or using an XPath (see the sketch after this list)
3) Different flavours of Selenium
- Selenium Core
- Selenium RC
- Selenium Grid
4) Selenium - how to use the IDE?
5) Is the IDE good enough to automate your tests?
6) What are the practical issues while using Selenium?
7) Selenium vs other tools (like QTP)
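
For item 2, a minimal sketch of the three locator styles with the Selenium RC client for Python; the page and element names are assumptions for illustration.

from selenium import selenium

sel = selenium("localhost", 4444, "*firefox", "http://example.com/")
sel.start()
sel.open("/login")
sel.type("id=email", "user@example.com")                  # locate by id
sel.type("name=password", "secret")                       # locate by name
sel.click("//form[@id='login']//input[@type='submit']")   # locate by XPath
sel.stop()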

Jason Huggins on Selenium’s Challenges

Jason Huggins, one of the founders of Selenium, spoke at the Agile Developer user group meeting recently.
Matt Raible of DZone gives a good summary of Jason’s comments.

Among them:

• Selenium started at ThoughtWorks. They were challenged to fix an Ajax bug in their expense reporting system. JWebUnit, HtmlUnit, JsUnit, Driftwood, and FIT did not meet their needs. They invented Selenese as a notation for tests first.
• Selenium Core, a JavaScript embedded test-playing robot, came next. Then Selenium RC and Grid.
• Selenium test playback is slow. Parallelization can solve some of the slowness problems.
• The JavaScript sandbox, Flash, applets, Silverlight, and HTML 5's Canvas all present problems for Selenium.

Read the article here.

Here are my thoughts:

Thanks for the good write-up. Too bad about your battery. I would like to hear more.

PushToTest integrated Selenium into TestMaker a couple of years ago. Selenium works very well for
testing Ajax applications. And, TestMaker runs Selenium tests as functional tests, load and performance
tests, and business service monitors. TestMaker runs these tests in your QA lab, in the Cloud, or both. See
http://www.pushtotest.com/products/cloudtesting

• What does SIDE stand for?
• What is the difference between an assert and a verify with Selenium commands?
• What Selenese commands can be used to help debug a regexp?
• What is one big difference between SilkTest and Selenium, excluding the price?
• Which browsers can Selenium IDE be run in?
• If a Selenium function requires a script argument, what would that argument look like in general terms?
• If a Selenium function requires a pattern argument, what five prefixes might that argument have?
• What is the regular expression sequence that loosely translates to "anything or nothing?"
• What is the globbing sequence that loosely translates to "anything or nothing?"
• What does a character class for all alphabetic characters and digits look like in regular expressions?
• What does a character class for all alphabetic characters and digits look like in globbing?
• What must one set within SIDE in order to run a test from the beginning to a certain point within the test?
• What does a right-pointing green triangle at the beginning of a command in SIDE indicate?
• How does one get rid of the right-pointing green triangle?
• How can one add vertical white space between sections of a single test?
• What Selenium functionality uses wildcards?
• Which wildcards does SIDE support?
• What are the four types of regular expression quantifiers which we've studied?
• What regular expression special character(s) means "any character?"
• What distinguishes between an absolute and a relative URL in SIDE?
• How would one access a Selenium variable named "count" from within a JavaScript snippet?
• What Selenese command can be used to display the value of a variable in the log file, which can be very valuable for debugging?
• If one wanted to display the value of a variable named answer in the log file, what would the first argument to the previous command look like?
• Where did the name "Selenium" come from?
• Which Selenium command(s) simulates selecting a link?
• Which two commands can be used to check that an alert with a particular message popped up?
• What does a comment look like in Column view?
• What does a comment look like in Source view?
• What are Selenium tests normally named (as displayed at the top of each test when viewed from within a browser)?
• What command simulates selecting the browser's Back button?
• If the Test Case frame contains several test cases, how can one execute just the selected one of those test cases?
• What globbing functionality is NOT supported by SIDE?
• What is wrong with this character class range? [A-z]
• What are four ways of specifying an uppercase or lowercase M in a Selenese pattern?
• What does this regular expression match?

regexp:[1-9][0-9],[0-9]{3},[0-9]{3}

• What are two ways to match an asterisk within a Selenese regexp?
• What is the generic name for an argument (to a Selenese command) which starts with //?
• What Selenese command is used to choose an item from a list?
• How many matches exist for this pattern?

regexp:[13579][02468]

• What is the oddity associated with testing an alert?
• How can one get SIDE to always record an absolute URL for the open command's argument?
• What Selenese command and argument can be used to transfer the value of a JavaScript variable into a SIDE variable?
• How would one access the value of a SIDE variable named name from within a JavaScript snippet used as the argument to a Selenese command?
• What is the name of the type of JavaScript entity represented by the last answer?
• What string(s) does this regular expression match?

regexp:August|April 5, 1908

• What Selenium regular expression pattern can be used instead of the glob below to produce the same results?

verifyTextPresent | glob:9512?

• What Selenium globbing pattern can be used instead of the regexp below to produce the same results?

Discover the automating power of Selenium

Functional, black-box testing is a methodology used to test an application's behaviour from the viewpoint of its functions. It validates various aspects, ranging from the aesthetics of the front end; navigation within and between pages and forms; compliance with the technical specifications for fields, buttons, bars, and other page elements; entry permissions and access to queries and modifications; parameter management; and the management of the modules that constitute the system; to the other conditions that make up the various "features" the system is expected to provide so that the end user can operate it normally and correctly.

To meet this objective, the tester must choose a set of inputs under certain pre-defined conditions within a certain context, and check whether the outputs are correct or incorrect according to expected results defined in advance between the customer and supplier.

This form of testing examines an application "from the outside", which is why it is called "black box testing": the tests do not follow the internal paths of the program's procedures.

Although there are many tools for this kind of testing, the one covered here, for reasons that will be discussed in future articles, is Selenium.

Selenium works directly in the web browser. Its installation is simple, and its handling is intuitive enough to let you quickly define and configure a test case: record a journey through a page, save the sequence of steps as a test script, and then play it back whenever you want.

Selenium is an open-source tool that not only allows system testing but also facilitates acceptance testing of web applications.

It integrates with Firefox and includes the ability to write tests directly in Java, C#, Python, and Ruby.

This solution has three basic tools: one to record a sequence of steps within a website, one to simulate the test in different browsers, and one for automated, distributed test execution.

Selenium IDE is a plug-in for Firefox which allows you to record and execute scripts directly from your browser.

Selenium RC is a library and server, written in Java, that allows you to run scripts locally or remotely through commands.

Selenium Grid allows multiple Selenium servers to be coordinated in order to run scripts on multiple platforms and devices at the same time.
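
As a taste of what a saved journey looks like when replayed through Selenium RC, here is a minimal sketch in Python; the URL, link, and expected title are hypothetical.

from selenium import selenium

sel = selenium("localhost", 4444, "*firefox", "http://example.com/")
sel.start()
sel.open("/")                          # the recorded journey starts at the home page
sel.click("link=Products")
sel.wait_for_page_to_load("30000")
assert "Products" in sel.get_title()   # the journey ends with a check
sel.stop()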

Software Testing Metrics - Test Case Review Effectiveness

Metrics are the means by which software quality can be measured; they give you confidence in the product. You may consider these product management indicators, which can be either quantitative or qualitative. They are typically the providers of the visibility you need.

The goal is to choose metrics that will help you understand the state of your product.

Metrics for Test Case Review Effectiveness:

1. Major Defects Per Test Case Review


2. Minor Defects Per Test Case Review
3. Total Defects Per Test Case Review
4. Ratio of Major to Minor Defects Per Test Case Review
5. Total Defects Per Test Case Review Hour
6. Major Defects Per Test Case Review Hour
7. Ratio of Major to Minor Defects Per Test Case Review Hour
8. Number of Open Defects Per Test Case Review
9. Number of Closed Defects Per Test Case Review
10. Ratio of Closed to Open Defects Per Test Case Review
11. Number of Major Open Defects Per Test Case Review
12. Number of Major Closed Defects Per Test Case Review
13. Ratio of Major Closed to Open Defects Per Test Case Review
14. Number of Minor Open Defects Per Test Case Review
15. Number of Minor Closed Defects Per Test Case Review
16. Ratio of Minor Closed to Open Defects Per Test Case Review
17. Percent of Total Defects Captured Per Test Case Review
18. Percent of Major Defects Captured Per Test Case Review
19. Percent of Minor Defects Captured Per Test Case Review
20. Ratio of Percent Major to Minor Defects Captured Per Test Case Review
21. Percent of Total Defects Captured Per Test Case Review Hour
22. Percent of Major Defects Captured Per Test Case Review Hour
23. Percent of Minor Defects Captured Per Test Case Review Hour
24. Ratio of Percent Major to Minor Defects Captured Per Test Case Review Hour
25. Percent of Total Defect Residual Per Test Case Review
26. Percent of Major Defect Residual Per Test Case Review
27. Percent of Minor Defect Residual Per Test Case Review
28. Ratio of Percent Major to Minor Defect Residual Per Test Case Review
29. Percent of Total Defect Residual Per Test Case Review Hour
30. Percent of Major Defect Residual Per Test Case Review Hour
31. Percent of Minor Defect Residual Per Test Case Review Hour
32. Ratio of Percent Major to Minor Defect Residual Per Test Case Review Hour
33. Number of Planned Test Case Reviews
34. Number of Held Test Case Reviews
35. Ratio of Planned to Held Test Case Reviews
36. Number of Reviewed Test Cases
37. Number of Unreviewed Test Cases
38. Ratio of Reviewed to Unreviewed Test Cases
39. Number of Compliant Test Case Reviews
40. Number of Non-Compliant Test Case Reviews
41. Ratio of Compliant to Non-Compliant Test Case Reviews
42. Compliance of Test Case Reviews
43. Non-Compliance of Test Case Reviews
44. Ratio of Compliance to Non-Compliance of Test Case Reviews
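
As an illustration, several of the metrics above reduce to simple arithmetic over per-review counts of defects and review hours; the sketch below uses hypothetical numbers.

# Hypothetical review data: major and minor defect counts plus review hours.
reviews = [
    {"major": 3, "minor": 7, "hours": 2.0},
    {"major": 1, "minor": 4, "hours": 1.5},
]

for i, r in enumerate(reviews, 1):
    total = r["major"] + r["minor"]         # Total Defects Per Test Case Review
    ratio = float(r["major"]) / r["minor"]  # Ratio of Major to Minor Defects
    per_hour = total / r["hours"]           # Total Defects Per Test Case Review Hour
    print("review %d: total=%d, major/minor=%.2f, per hour=%.2f"
          % (i, total, ratio, per_hour))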

Risk Analysis in Software Testing



A risk is the potential for loss or damage to an organization from materialized threats. Risk analysis attempts to identify all the risks and then quantify their severity. A threat, as we have seen, is a possible damaging event; if it occurs, it exploits a vulnerability in the security of a computer-based system.

Risk Identification:

1. Software Risks: Knowledge of the most common risks associated with Software development, and the
platform you are working on.

2. Business Risks: Knowledge of the most common risks associated with the business using the software.

3. Testing Risks: Knowledge of the most common risks associated with Software Testing for the platform
you are working on, tools being used, and test methods being applied.

4. Premature Release Risk: The ability to determine the risk associated with releasing unsatisfactory or untested software products.

5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products, and processes; assessing their likelihood; and initiating strategies to test those risks.

Traceability means that you would like to be able to trace back and forth how and where any work product fulfills the directions of the preceding (source) product. The matrix deals with the where; the how you have to work out yourself, once you know the where.

Take, for example, the requirement of User Friendliness (UF). Since UF is a complex concept, it is not solved by just one design solution or one line of code. Many partial design solutions may contribute to this requirement, and many groups of lines of code may contribute to it.

A Requirements-Design Traceability Matrix puts on one side (e.g. the left) the sub-requirements that together are supposed to satisfy the UF requirement, along with the other (sub-)requirements. On the other side (e.g. the top) you list all design solutions. Now you can mark, at the crosspoints of the matrix, which design solutions satisfy (more or less) each requirement. If a design solution does not satisfy any requirement, it should be deleted, as it is of no value.

Having this matrix, you can check whether every requirement has at least one design solution, and by checking the solution(s) you may see whether the requirement is sufficiently satisfied by this (or the set of) connected design(s).

If you have to change any requirement, you can see which designs are affected. And if you change any design, you can check which requirements may be affected and see what the impact is.

In a Design-Code Traceability Matrix you can do the same to keep trace of how and which code solves a
particular design and how changes in design or code affect each other.

A traceability matrix also:

• Demonstrates that the implemented system meets the user requirements
• Serves as a single source for tracking purposes
• Identifies gaps in the design and testing
• Prevents delays in the project timeline, which can be brought about by having to backtrack to fill the gaps

Risk analysis is appropriate to most software development projects.

Use risk analysis to determine where testing should be focused:

1. Which functionality is most important to the project?

2. Which functionality is most visible to the user?

3. Which functionality has the largest impact on users?

4. Which aspects of the application are most important to the customer?

5. Which aspects of the application can be tested early in the development cycle?

These are some of the risk analysis factors to test an application.

The two most important characteristics of a risk are:

a. its adverse effect on the success of the project
b. its uncertainty.

Typical parameters used in the analysis of risks are:

1. Probability of occurrence
2. Impact of the risk - this is sometimes the product of the severity of the impact and the period in which the risk might occur.

The product of the above two parameters gives you the risk exposure factor, based on which the priority of the risks to be mitigated or acted upon can be decided.
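
For illustration, a sketch of ranking risks by exposure, where exposure is the product of probability of occurrence and impact; all risk names and numbers below are hypothetical.

# (description, probability of occurrence, impact on a 1-10 scale)
risks = [
    ("Login fails under load",  0.4, 8),
    ("Report totals incorrect", 0.7, 9),
    ("Help page typo",          0.9, 1),
]

# Sort by risk exposure (probability * impact), highest first, to decide
# which risks to mitigate or act upon first.
for name, probability, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print("%-25s exposure = %.1f" % (name, probability * impact))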

Risks are always assessed against the business requirements for which the software has been created, so it is very important to understand the risks that can affect the client's business and their impact. Testers need input from the client as well as from developers to build this understanding. What you may consider a risk from the testing point of view may not be seen as one by the client. It is therefore necessary to associate a risk level, and also a priority, with each requirement. This will be very beneficial when suggesting go/no-go for production.

Software Testing Requirements



Continuing the Basics of Software Testing series for freshers:

Software testing is not an activity to take up when the product is ready. Effective testing begins with a proper plan from the user requirements stage itself. Software testability is the ease with which a computer program can be tested. Metrics can be used to measure the testability of a product. The requirements for effective testing are given in the following sub-sections.

Operability:

The better the software works, the more efficiently it can be tested.

•The system has few bugs (bugs add analysis and reporting overhead to the test process)
•No bugs block the execution of tests

•The product evolves in functional stages (allows simultaneous development & testing)

Observability:

What is seen is what is tested

•Distinct output is generated for each input


•System states and variables are visible or queryable during execution
•Past system states and variables are visible or queryable (e.g., transaction logs)
•All factors affecting the output are visible
•Incorrect output is easily identified
•Incorrect input is easily identified
•Internal errors are automatically detected through self-testing mechanisms
•Internal errors are automatically reported
•Source code is accessible

Controllability:

The better the software is controlled, the more the testing can be automated and optimized.

•All possible outputs can be generated through some combination of input


•All code is executable through some combination of input
•Software and hardware states can be controlled directly by testing
•Input and output formats are consistent and structured
•Tests can be conveniently specified, automated, and reproduced.

Decomposability:

By controlling the scope of testing, problems can be isolated quickly, and smarter testing can be
performed.

•The software system is built from independent modules


•Software modules can be tested independently

Simplicity:

The less there is to test, the more quickly it can be tested


•Functional simplicity
•Structural simplicity
•Code simplicity

Stability:

The fewer the changes, the fewer the disruptions to testing


•Changes to the software are infrequent
•Changes to the software are controlled
•Changes to the software do not invalidate existing tests
•The software recovers well from failures

Understandability:

The more information we have, the smarter we will test


•The design is well understood
•Dependencies between internal, external, and shared components are well understood.
•Changes to the design are communicated.
•Technical documentation is instantly accessible
•Technical documentation is well organized
•Technical documentation is specific and detailed
•Technical documentation is accurate

Software Testing Effort Estimating



Test Estimation Challenges

1. Successful test estimation is a challenge for most organizations because:

a. There are no standard formulae/methods for test estimation
b. Test effort estimates also include the debugging effort
c. It is difficult to attempt testing estimates without first having detailed information about a project
d. The software testing myth that testing can be performed at the end
e. It is difficult to attempt testing estimates without an understanding of what should be included in a 'testing' estimate for a project (functional testing? unit testing? reviews? inspections? load testing? security testing?)

Traditional Practices

1. Current Test Planning Methods include


a. Using a Percentage of Development Effort
i. Depends on the accuracy of the development effort estimate
ii. Does not account for revisiting the development effort
b. Using a Tester-to-Developer Ratio
i. May not be the same for all types of projects
ii. Does not consider the size of the project
c. Using KLOC
i. Does not consider the complexity, criticality, and priority of the project.
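
For illustration only, the three traditional methods above reduce to simple arithmetic; the numbers below are hypothetical project figures, not recommended values.

# Percentage-of-development-effort method.
dev_effort_hours = 1200.0
test_percentage = 0.35    # assumed share of development effort spent on testing
print("percentage of dev effort: %.0f hours" % (dev_effort_hours * test_percentage))

# Tester-to-developer ratio method.
developers, ratio = 6, 3.0    # assumed 1 tester per 3 developers
print("tester/developer ratio:   %.1f testers" % (developers / ratio))

# KLOC-based method.
kloc, hours_per_kloc = 40.0, 25.0    # assumed size and effort factor
print("KLOC-based:               %.0f hours" % (kloc * hours_per_kloc))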

What’s the Best Approach to Test Estimation?

There is no simple answer for this. The 'best approach' is highly dependent on the particular organization
and project and the experience of the personnel involved.

For example, given two software projects of similar complexity and size, the appropriate test effort for one
project might be very large if it was for life-critical medical equipment software, but might be much smaller
for the other project if it was for a low-cost computer game. A test estimation approach that only
considered size and complexity might be appropriate for one project but not for the other.

Approaches to Test Estimation

1. Implicit Risk Context Approach
2. Metrics-Based Approach
3. Test Work Breakdown Approach
4. Iterative Approach
5. Percentage-of-Development Approach

Test Estimation Process – A Practical Approach

1. A combination of all the approaches
2. Considers risk and complexity factors
3. Based on previous history, i.e. organization or project metrics
4. Based on a work breakdown structure
5. Based on an iterative model

Top Five Pitfalls of Manual Software Testing



Manual software testing is a necessity, and an unavoidable part of the software product development process. How much testing you do manually, as compared to using test automation, can make the difference between a project's success and failure. We will discuss test automation in more detail in a later chapter, but the top five pitfalls of manual software testing illuminate areas where improvements can be made. The pitfalls are listed and described below.

1. Manual testing is slow and costly. Because it is very labor-intensive, it takes a long time to
complete tests. To try to accelerate testing, you may increase the headcount of the test organization. This
increases the labor as well as the communication costs.

2. Manual tests don’t scale well. As the complexity of the software increases, the complexity of the
testing problem grows exponentially. If tests are detailed and must be performed manually, performing
them can take quite a bit of time and effort. This leads to an increase in the total time devoted to testing
as well as the total cost of testing. Even with these increases in the time and cost, the test coverage goes
down as the complexity goes up because of the exponential growth rate.

3. Manual testing is not consistent or repeatable. Variations in how the tests are performed are
inevitable, for various reasons. One tester may approach and perform a certain test differently from
another, resulting in different results on the same test, because the tests are not being performed
identically. As another example, if there are differences in the location a mouse is pointed when its button
is clicked, or how fast operations are performed, these could potentially produce different results.

4. Lack of training is a common problem, although not unique to manual software testing. The staff
should be well-trained in the different phases of software testing:
– Test design
– Test execution
– Test result evaluation

5. Testing is difficult to manage. There are more unknowns and greater uncertainty in testing than in
code development. Modern software development practices are well-structured, but if you don’t have
sufficient structure in testing, it will be difficult to manage. Consider a case in which the development
phase of a project schedule slips. Since manual software testing takes more time, more resources, and is
costly, that schedule slip can be difficult to manage. A delay in getting the software to the test team on
schedule can result in significant wasted resources. Manual testing, as well as badly designed automated testing, is also not agile. Therefore, changes in test focus or product requirements make these efforts even more difficult to manage.

What is Traceability Matrix from Software Testing perspective?



The concept of a traceability matrix is very important from the testing perspective. It is a document which maps requirements to test cases. By preparing a traceability matrix, we can ensure that we have covered all the required functionalities of the application in our test cases.

Some of the features of the traceability matrix:

• It is a method for tracing each requirement from its point of origin, through each development
phase and work product, to the delivered product

• Can indicate, through identifiers, where each requirement is originated, specified, created, tested, and delivered
• Will indicate for each work product the requirement(s) that the work product satisfies
• Facilitates communication, helping customer relationship management and commitment negotiation

The traceability matrix answers the following basic questions for any software project:

• How is it possible to ensure, for each phase of the lifecycle, that I have correctly accounted for all the customer's needs?
• How can I ensure that the final software product will meet the customer's needs? For example, suppose there is a requirement that if an invalid password is entered in the password field, the application throws the error message "Invalid password". We can only make sure this requirement is captured in a test case by means of the traceability matrix.

Some more challenges we can overcome with a traceability matrix:

• Demonstrating to the customer that the requested content has been developed
• Ensuring that all requirements are correct and included in the test plan and the test cases
• Ensuring that developers are not creating features that no one has requested
• The system that is built may not have the necessary functionality to meet the customers' and users' needs and expectations. How do we identify the missing parts?
• If there are modifications in the design specifications, there is otherwise no means of tracking the changes
• If there is no mapping of test cases to the requirements, a major defect in the system may be missed
• The completed system may have "extra" functionality that was not specified in the design specification, resulting in wasted manpower, time, and effort
• If the code components that implement the customer's high-priority requirements are not known, the areas that need to be worked on first may not be known, thereby decreasing the chances of shipping a useful product on schedule
• A seemingly simple request might involve changes to several parts of the system, and if a proper traceability process is not followed, the work needed to satisfy the request may not be correctly evaluated

Step-by-step process of creating a traceability matrix from requirements:

Step 1: Identify all the testable requirements, at a granular level, from the various requirement specification documents. These documents vary from project to project. Typical requirements you need to capture are as follows:
Use cases (with all the flows captured)
Error messages
Business rules
Functional rules
SRS
FRS
And so on.

Example requirements: login functionality, report generation, updating a record, etc.

Step 2: In every project you will be creating test cases to test the functionality defined by the requirements. In this case you want to extend the traceability to those test cases; in the example table below, the test cases are identified with a TC_ prefix. Put all the requirements in the top row of a spreadsheet, and list down one side all the test cases you have written for each particular requirement. In most cases you will have multiple test cases for one requirement. See the sample spreadsheet below:
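
The original sample spreadsheet is not reproduced here; the plain-text reconstruction below illustrates the layout, using the names from step 3 (rows and crosses other than those for REQ1 UC1.1 are hypothetical):

              REQ1 UC1.1    REQ1 UC1.2    REQ2 UC2.1
TC1.1.1           X
TC1.1.3           X
TC1.1.5           X
TC1.2.1                         X
TC2.1.1                                       X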

Step 3: Put a cross against a test case for each requirement that the test case checks, partially or completely. In the table above you can see that REQ1 UC1.1 is checked by three test cases (TC1.1.1, TC1.1.3, TC1.1.5).

Another example of a traceability matrix maps requirement documents (use cases) back to the test cases.

Change management through traceability matrix:


It will be a lot easier for you to track changes if you have a good traceability matrix in place. For example, if REQ1 UC1.1 changes, we know upfront from the traceability matrix which test cases need to be modified to incorporate the change. In the case above, we need to modify only TC1.1.1, TC1.1.3, and TC1.1.5.

Software Testing Strategy and Methodology



Testing strategy (methodology)

A test strategy describes the approach, objectives, and direction of the test effort. The purpose of a testing strategy or method is to minimize risk and ultimately deliver the best software for the client. The testing strategy of choice for a particular application can vary depending on the software, the amount of use, and its objectives. For example, the testing strategy for a transactional system like Oracle will be very different from the strategy developed to test an analytical tool such as a data warehouse. Likewise, a campus-wide purchasing system and a tool housed with a limited number of users require very different test strategies. Because some of these examples have higher exposure, they also have a higher risk.

Below is a common example of a testing strategy.



STAGES OF THE TEST LIFE CYCLE

Software projects use several variations of one or more test methods. For a system such as "IMSS", typical stages include preparation, conference room pilot (CRP), unit, integration, system testing, and user acceptance.

These steps are also called stages or levels. The project manager should review the steps below and consider using the same terminology and sequence. If it makes sense, certain phases and tasks may be deleted. In other cases, tasks and phases may be added. Some tasks may be performed in parallel, and some steps can be combined. In most cases, each phase must be completed before another can begin.

The duration of the tasks varies depending on the timing, and on the risk the project manager is ready to absorb.

Test Preparation Phase (before testing begins)
Task - Develop test strategy
Task - Develop high-level test plan
Task - Identify the test cases
Task - Develop scenarios and test scripts
Task - Identify and share test data
Task - Identify the processes, procedures, standards, and documentation requirements
Task - Identify and create the test environment
Task - Identify test team(s)
Task - Train testers

Unit test phase - The purpose of this testing is to verify and validate that the modules function correctly. This is completed by developers and must be finished before later phases can begin. The testing manager is not normally involved in this phase.

CRP phase (Conference Room Pilot - optional) - The purpose of this phase is to verify proof of concept. A CRP is generally necessary for new, large projects.
Assumption - Test instance is ready
Assumption - Metadata has been inserted into the test instance
Assumption - Unit tests and simulations have been completed
Assumption - Test scenarios have been identified (scripted or ad hoc)
Task - Identify CRP participants
Task - Determine and establish CRP logistics
Task - Define expectations
Task - Start the CRP
Task - Collect and document feedback
Task - End the CRP
Task - Obtain phase approval / sign-off
Task - Collect / share / integrate lessons learned; incorporate the necessary changes
Task - Tune / revise and approve the new test plan

Integration testing phase - The purpose of this testing is to verify and validate that all the modules interface and work together.
Assumption - Requirements are frozen and the design is determined
Assumption - Application is ready for integration tests
Assumption - Metadata has been populated in the test tables
Assumption - Unit testing is complete
Task - Test the system and document results using the test scripts
Task - Test interfaces
Task - Identify and report bugs
Task - Retest fixed bugs / regression test
Task - Test security
Task - Test browsers / platforms / operating systems
Task - Obtain phase approval / sign-off
Task - Collect / share / integrate lessons learned
Task - Tune / revise and approve the new test plan
System testing phase - The purpose of this testing is to verify and validate that the system works as if it were in production.

Assumption - Metadata has been populated in the test environment
Assumption - Application is ready and has successfully completed integration testing
Task - Test the system and document results using the test scripts
Task - Identify and report bugs
Task - Retest fixed bugs / regression test
Task - Test business processes and reports
Task - Stress test
Task - Test performance (e.g., screen refreshes)
Task - Test connection security, responsibilities, and resistance to hacking
Task - Obtain phase approval / sign-off
Task - Collect / share / integrate lessons learned
Task - Tune / revise and approve the new test plan
User acceptance phase - The objective of this testing is to verify and validate the system for end users as if it were in production.
Assumption - Show-stoppers and the highest-severity bugs have been fixed, and workarounds have been identified and approved
Assumption - All other phases have been signed off
Assumption - Application is ready for acceptance testing by the users
Assumption - Metadata has been populated in the test tables
Task - Train user testers
Task - Populate and approve test scripts
Task - Test the system and document results using the test scripts
Task - Obtain phase approval / sign-off
Task - Collect / share / integrate lessons learned

Guidelines for automated testing of web applications



One of the key reasons for doing automated testing is to ensure that time is not spent on repetitive tasks which can be completed by tools without human intervention. Automation could be one of the most effective tools in your toolbox, but it is not a silver bullet that will solve all problems and improve quality. Automation tools are obedient servants, and as testers we need to become their masters and use them properly to realize their full potential. It is very important to understand that automation tools are only as good as the way we use them. Converting test cases from manual to automated is not the best use of automation tools; they can be used in much more effective ways.

Creating a robust and useful test automation framework is a very difficult task. In the web world, this task becomes even more difficult because things might change overnight. If we follow so-called best practices of automation taken from stable desktop applications, they will not be suitable in a web environment and will probably have a negative impact on the project's quality.

Many problems in the web world are identical to one another. For example, irrespective of the web application, we always need to validate things such as the presence of a title on all pages. Depending on your context, it may be the presence of metadata on every page, the presence of tracking code, the presence of ad code, the size and number of advertising units, and so on.

The solution presented in this article can be used to validate all, or any, of the rules mentioned above, across all the pages of any domain / website. We were given a mandate to ensure that specific tracking code was present on all the pages of a big website. In true agile fashion, once this problem was solved the solution was extended and re-factored to incorporate many rules on all the pages.

This solution was developed using Selenium Remote Control, with Python as the scripting language. One of the main reasons for using tools such as Selenium RC is their ability to let us code in any language, and this allows us to utilize the full power of a standard language. For this solution, a Python library called Beautiful Soup was used to parse HTML pages. The solution was later ported to another tool called Twill to make it faster; since the initial code was developed in Python, converting it to Twill was a piece of cake.

Essentially this solution / script is a small web crawler, which visits all the pages of a website and validates certain rules. As mentioned earlier, the problem statement is very simple: "Validate certain rules on every webpage of any given website". To achieve this, the following steps were followed (a sketch follows the list):

1. Get the page
2. Get all the links
3. Get the first link; if the link is not external and the crawler has not visited it, open it
4. Get the page source
5. Validate all the rules you want to validate on this page
6. Repeat steps 1 to 5 for all the pages
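
A condensed sketch of these steps, using the Selenium RC client for Python and the classic (pre-bs4) Beautiful Soup API mentioned above; the start URL is hypothetical, and the rule checked here is rule 1 from the list below.

from urlparse import urljoin, urlparse
from BeautifulSoup import BeautifulSoup
from selenium import selenium

START = "http://example.com/"
sel = selenium("localhost", 4444, "*firefox", START)
sel.start()

visited, queue = set(), [START]
while queue:
    url = queue.pop(0)
    if url in visited:
        continue                                    # step 3: skip visited links
    visited.add(url)
    sel.open(url)                                   # step 1: get the page
    soup = BeautifulSoup(sel.get_html_source())     # step 4: get the page source
    # Step 5: validate a rule on this page (title present and not generic).
    title = soup.find("title")
    assert title and title.string and title.string.strip() not in ("", "Untitled"), url
    # Step 2: collect the links; keep only internal ones (step 3).
    for a in soup.findAll("a", href=True):
        link = urljoin(url, a["href"])
        if urlparse(link).netloc == urlparse(START).netloc:
            queue.append(link)                      # step 6: repeat for all pages

sel.stop()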

It is worth mentioning here that the rules that can be validated using this framework are those which can be validated by looking at the source code of the page. Some of the rules that can be validated using this script are:

1. Make sure that a title is present on all pages and is not generic
2. Check for the presence of meta tags such as keywords and description on all pages
3. Ensure that instrumentation code is present on all pages
4. Ensure that every image has alternative text associated with it
5. Ensure that ad code comes from the right server and has all the relevant information we need
6. Ensure that the size of the banners and skyscrapers used for advertisements is correct
7. Ensure that every page contains at least two advertisements, and that no page except the home page has more than four advertisements
8. Ensure that the master CSS is applied on all pages of a given domain
9. Make sure that all styles come from the CSS files and that no inline styles are present on any element of a web page

The list above might give you some idea of what can be achieved using this approach. The list can be extended very easily; it is limited only by your imagination :)

In the next article, we will look at the code snippets and explain how easily these rules can be customized
and validated across all the pages on any given domain.
Impact Analysis Checklist for Requirements Changes

Implications of the Proposed Change

❏ Identify any existing requirements in the baseline that conflict with the proposed change.

❏ Identify any other pending requirement changes that conflict with the proposed change.

❏ What are the consequences of not making the change?

❏ What are possible adverse side effects or other risks of making the proposed change?

❏ Will the proposed change adversely affect performance requirements or other quality attributes?

❏ Will the change affect any system component that affects critical properties such as safety and
security, or involve a product change that triggers recertification of any kind?

❏ Is the proposed change feasible within known technical constraints and current staff skills?

❏ Will the proposed change place unacceptable demands on any computer resources required for the
development, test, or operating environments?

❏ Must any tools be acquired to implement and test the change?



❏ How will the proposed change affect the sequence, dependencies, effort, or duration of any tasks
currently in the project plan?

❏ Will prototyping or other user input be required to verify the proposed change?

❏ How much effort that has already been invested in the project will be lost if this change is accepted?

❏ Will the proposed change cause an increase in product unit cost, such as by increasing third-party
product licensing fees?

❏ Will the change affect any marketing, manufacturing, training, or customer support plans?

System Elements Affected by the Proposed Change

❏ Identify any user interface changes, additions, or deletions required.

❏ Identify any changes, additions, or deletions required in reports, databases, or data files.

❏ Identify the design components that must be created, modified, or deleted.

❏ Identify hardware components that must be added, altered, or deleted.

❏ Identify the source code files that must be created, modified, or deleted.

❏ Identify any changes required in build files.

❏ Identify existing unit, integration, system, and acceptance test cases that must be modified or
deleted.

❏ Estimate the number of new unit, integration, system, and acceptance test cases that will be required.

❏ Identify any help screens, user manuals, training materials, or other documentation that must be
created or modified.

❏ Identify any other systems, applications, libraries, or hardware components affected by the change.

❏ Identify any third party software that must be purchased.

❏ Identify any impact the proposed change will have on the project’s software project management
plan, software quality assurance plan, software configuration management plan, or other plans.

❏ Quantify any effects the proposed change will have on budgets of scarce resources, such as memory,
processing power, network bandwidth, real-time schedule.

❏ Identify any impact the proposed change will have on fielded systems if the affected component is not
perfectly backward compatible.

Effort Estimation for a Requirements Change

Effort (Labor Hours) / Task

Update the SRS or requirements database with the new requirement


Develop and evaluate prototype
Create new design components
Modify existing design components
Develop new user interface components
Modify existing user interface components
Develop new user publications and help screens
Modify existing user publications and help screens
Develop new source code
Modify existing source code
Purchase and integrate third party software
Identify, purchase, and integrate hardware components; qualify vendor
Modify build files
Develop new unit and integration tests
Modify existing unit and integration tests
Perform unit and integration testing after implementation
Write new system and acceptance test cases
Modify existing system and acceptance test cases
Modify automated test drivers
Perform regression testing at unit, integration, and system levels
Develop new reports
Modify existing reports
Develop new database elements
Modify existing database elements
Develop new data files
Modify existing data files

Modify various project plans


Update other documentation
Update requirements traceability matrix
Review modified work products
Perform rework following reviews and testing
Recertify product as being safe, secure, and compliant with standards.
Other additional tasks
TOTAL ESTIMATED EFFORT

Procedure:

1. Identify the subset of the above tasks that will have to be done.
2. Allocate resources to tasks.
3. Estimate effort required for pertinent tasks listed above, based on assigned resources.
4. Total the effort estimates.
5. Sequence tasks and identify predecessors.
6. Determine whether change is on the project’s critical path.

7. Estimate schedule and cost impact.



Impact Analysis Report Template

Change Request ID: ______________


Title: ______________________________________________________
Description: ______________________________________________________
______________________________________________________
Analyst: __________________________
Date Prepared: __________________________

Prioritization Estimates:
Relative Benefit: (1-9)
Relative Penalty: (1-9)
Relative Cost: (1-9)
Relative Risk: (1-9)
Calculated Priority: (relative to other pending requirements)

Estimated total effort: ___________ labor hours


Estimated lost effort: ___________ labor hours (from discarded work)
Estimated schedule impact: ___________ days
Additional cost impact: ___________ dollars
Quality impact: _______________________________________________
_______________________________________________

Other requirements affected: ____________________________________________________


____________________________________________________
Other tasks affected: ____________________________________________________
____________________________________________________
Integration issues: ____________________________________________________
Life cycle cost issues: ____________________________________________________
Other components to examine ____________________________________________________
for possible changes: ____________________________________________________

Peer Test Reviews



It has always been a source of bafflement (technical term) that, within the testing domain, walkthroughs and peer reviews are not more widely practiced. I recall being 'invited' to a code review where a developer went through their code line by line in Dolby Pro Logic monotone. It was almost as painful as the many walkthroughs of test plans I've subjected folks to.

What I took away from the meeting was how incredibly useful and interesting it 'could' have been. Here
was the opportunity to have code explained line by line, to be provided a live narration of the thinking,
logic and reasoning behind what had been created. What's more, our own views and opinions could be
incorporated into what had been made.

If this was some contemporary artist or author giving a narration of one of their works, we'd be fascinated
and consider ourselves fortunate that we might influence the work. But it's just some bloke we work with
and it's code, so we fall asleep.

The problem is it can be hard to get excited about code, even harder to get excited about someone talking
about it! The reality is most Walkthroughs and Code Reviews are brain numbingly boring.

You've probably heard of and been subjected to Walkthroughs and Code Reviews of various types: the idea
that an author of something (code, plans, schedules, etc.) will sit down with an interested group and walk
them through, line by line, clause by clause, page by page, explaining what was written and why,
occasionally asking for input ("All OK?", "Seem to make sense?"), so that after, say, 30 minutes or maybe an
hour everyone has found they're not interested anymore and are half asleep. Actually, probably asleep. It
makes me feel tired just describing it!

Peer Code Reviews



Peer Code Reviews on the other hand are meant to be snappier, more active and energetic. Think in terms
of having produced a usable piece of code, say a set of core functions or an interface onto those APIs your
buddy wrote. Then, with this small chunk in hand, get it reviewed. The review is, say, 10 minutes long;
you're doing a Walkthrough, but it's a lively narrative.

Suggested Best Practices


- Small, manageable review items: not weeks of coding
- 5 or 10 minutes review time: not 1 or 2 hour brain numbing marathons
- Grab reviewers: don't book out meetings, keep it proactive (but don't break folks' concentration if they're
'in the zone'!)
- Actively seek review from different colleagues: cross mentoring and knowledge sharing
- Keep responsive to new perspectives: review isn't just bug finding, it's learning tricks and tips too.

Peer Test Reviews


The interesting twist for us in the test domain is that we can apply this approach to our Test Artefacts.
When we're in an agile environment it serves us well to be working in the same spirit as our developer
colleagues.

Why not get Peer Test Review of those 20 or so Test Cases you just put together for that module?
Incrementally delivering Ready for Execution Test Cases is a great way to help the likes of Project
Managers feel relaxed that we're making progress on our planning and prep.

Doing the same with Test Plans, Designs, Breakdowns or whatever other artefacts you produce is also a
win. This lightweight approach achieves our objectives but stops us getting bogged down in heavyweight
process.

Follow the above Best Practices and keep the event lively. If you really must book out a meeting to
coordinate a review with several people at the same time, that's OK; just go a little overboard with your
presentation. Print copies in colour if you can, or let folks access it on their laptops to save trees. Use the
projector to make viewing easier, and create slides that are already noted up or can be 'written on',
whatever keeps the energy levels up.

Regression Testing: "What" to test and "When"

Regression testing is often seen as an area in which companies hesitate to allocate resources. We often
hear statements such as: "The developer said the defect is fixed. Do we need to test it again?" And the
answer should be: "Well, the developer probably said the product had no defects to begin with." The truth
of the matter is, in today's world of extremely complex devices and software applications, the quantity and
quality of regression testing performed on a product are directly proportional to the commitment vendors
have to their customer base. This does not mean that the more regression testing, the better. It simply
means that we must make sure that regression testing is done in the right amount and with the right
approach.

The two main challenges presented by regression testing are:

1. What do we test?
2. When do we test it?

The purpose of this article is to outline a few techniques that will help us answer these questions. The first
issue we should consider is the fact that it is not necessary to execute our regression at the end of our
testing cycle. Much of the regression effort can be accomplished simultaneously with all other testing
activities. The supporting assumption for this approach is:
"We do not wait until all testing is done to fix our defects."

Therefore, much of the regression effort can be accomplished long before the end of the project, if the
project is of reasonable length. If our testing effort will only last one week, the following techniques may
have to be modified. However, it is not usual for a product to be tested in such a short period of time.
Furthermore, as you study the techniques outlined below, you will see that as the project's length
increases, the benefits offered by these techniques also increase.

To answer the questions of what we should test and when, we will begin with a simple suite of ten tests. In
the real world, this suite would obviously be much larger, and not necessarily static, meaning that the
number of tests can increase or decrease as the need arises. After our first test run with the first beta
(which we will call "Code Drop 1") of our hypothetical software product, we record a matrix that
cross-references the defects we found with the tests that caused them. In that matrix, defect number 1 was
caused by test 2, but it also occurred on test 3. The remaining failures each caused unique defects.

As we prepare to execute our second test run (Code Drop 2), we must decide what tests will be executed.
The rules we will use only apply to our regression effort. There are rules we can apply to the subset of
tests that have passed, in order to find out which ones we should re-execute. However, that will be the
topic of another article.

The fundamental question we must now ask is: "Have any of the defects found been fixed?" Let us
suppose that defects 1, 2, and 3 have, in fact, been reported as fixed by our developers. Let us also
suppose that three more tests have been added to our test suite. After "Code Drop 2", we update the
matrix accordingly.

A few key points to notice are:

Of the tests that previously failed, only the tests that were associated with defects that were supposedly
fixed were executed. Test number 9, which caused defect number 4, was not executed on Code Drop 2,
because defect number 4 is not fixed.

Defect number 1 is fixed, because tests 2 and 3 have finally passed.

Test number 7 still fails. Therefore, the defect remains.

Test number 13 is a new test, and it caused a new defect.

We chose not to execute tests that had passed on Code Drop 1. This may often not be the case, since
turmoil in our code or the area's importance (such as a new feature, an improvement to an old feature, or
a feature as a key selling point of the product) may prompt us to re-execute these tests.
This simple but efficient approach ensures that our matrix never degenerates into one where nearly every
test is executed on every code drop. To show the problem more clearly, imagine such a matrix with the
Defect # column omitted after each code drop, and consider Code Drop 5 to be our final regression pass.

We will address tests 2, 7, and 9 later, but here are a few key points to notice about such a matrix:

Why were tests 1, 4, 5, 6, 10, 11, and 12 executed up to five times? They passed every single time.
Why were tests 3 and 8 executed up to five times? They first failed and were fixed. Did they need to be
executed on every code drop after the failure?

If test 13 failed, was the testing team erroneously told it had been fixed on each code drop? If not, why
was it executed four times with the same result? We can also ask the question: "Why isn't it fixed?" But we
will not concern ourselves with that issue, since we are only addressing the topic of regression.
In conclusion, we will list some general rules we can apply to our testing effort that will ensure
our regression efforts are justified and accurate (a minimal selection sketch follows the list). These rules are:

1. A test that has passed twice should be considered as regressed, unless turmoil in the code (or other
reasons previously stated, such as a feature's importance) indicates otherwise. By this we mean that the
only time a test should be executed more than twice is if changes to the code in the area the test
exercises (or the importance of the particular feature) justify sufficient concerns about the test's state or
the feature's condition.

2. A test that has failed once should not be re-executed unless the developer informs the test team that
the defect has been fixed. This is the case for tests 7 and 9. They should not have been re-executed until
Code Drops 4 and 5 respectively.

3. We must implement accurate algorithms to find out what tests that have already passed once should be
re-executed, in order to be aware of situations such as the one of test number 2. This test passed twice
after its initial failure and it failed again on Code Drop 4. Just as an additional note of caution: "When in
doubt, execute."

4. For tests that have already passed once, the second execution should be reserved for the final
regression pass, unless turmoil in the code indicates otherwise, or unless we do not have enough tests to
execute. However, we must be careful. Although it is true that this allows us to get some of the regression
effort out of the way earlier in the project, it may limit our ability to find defects introduced later in the
project.

5. The final regression pass should not consist of more than 30% to 40% of the total number of tests in our
suite. This subset should be allocated using the following priorities:
a. All tests that have failed more than once. By this we mean the tests that failed, the developer reported
them as fixed, and yet they failed again either immediately after they were fixed or some time during the
remainder of the testing effort.
b. All tests that failed once and then passed, once they were reported as fixed.
c. All, or a carefully chosen subset of the tests that have passed only once.

d. If there is still room to execute more tests, execute any other tests that do not fit the criteria above but
you feel should nevertheless be executed.
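
Rules 1 and 2 lend themselves to a simple selection routine. A minimal sketch in Python, where the
TestRecord structure and its field names are illustrative assumptions rather than anything prescribed by
the rules themselves:

# Decide which tests to run on the next code drop, per rules 1 and 2.
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    pass_count: int              # how many times the test has passed so far
    open_defect: bool            # the test failed and its defect is still open
    defect_reported_fixed: bool  # developers report the defect as fixed
    code_turmoil: bool           # recent churn in the area this test exercises

def should_run(test: TestRecord) -> bool:
    if test.open_defect:
        # Rule 2: re-run a failed test only once its defect is reported fixed.
        return test.defect_reported_fixed
    # Rule 1: a test that has passed twice is considered regressed, unless
    # turmoil in the code (or the feature's importance) indicates otherwise.
    return test.pass_count < 2 or test.code_turmoil

suite = [
    TestRecord("test_7", 0, True, False, False),   # still failing, no fix: skip
    TestRecord("test_2", 2, False, False, True),   # passed twice, but churn: run
    TestRecord("test_1", 2, False, False, False),  # passed twice, stable: skip
]
print([t.name for t in suite if should_run(t)])    # -> ['test_2']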

These common sense rules will ensure that regression testing is done smartly and in the right amount. In
an ideal world, we would have the time and the resources to test our product completely. Nevertheless,
today's world is a world of tight deadlines and even tighter budgets. Wise resource expenditure today will
ensure our ability to continue to develop reliable products tomorrow.

Exception handling in Software testing



Exception handling is a programming language construct or computer hardware mechanism designed to
handle the occurrence of some condition that changes the normal flow of execution. For signaling
conditions that are part of the normal flow of execution, see the concepts of signal and event handler.

Exception or error handling refers to the anticipation, detection, and resolution of programming,
application, and communications errors. Specialized programs, called error handlers, are available for
some applications. The best programs of this type forestall errors if possible, recover from them when they
occur without terminating the application, or (if all else fails) gracefully terminate an affected application
and save the error information to a log file.
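
The "recover where possible, otherwise terminate gracefully and log" pattern looks roughly like the
following minimal Python sketch; the log file name and the failing operation are hypothetical:

# Graceful error handling: recover if possible, otherwise log and exit cleanly.
import logging
import sys

logging.basicConfig(filename="app_errors.log", level=logging.ERROR)

def risky_operation():
    return 1 / 0  # stand-in for any operation that may fail at run time

try:
    risky_operation()
except ZeroDivisionError:
    # Recoverable: record the error and continue without terminating.
    logging.exception("Recovered from arithmetic error; using a default value")
except Exception:
    # Unrecoverable: save the error information to the log, then exit cleanly.
    logging.exception("Fatal error; terminating gracefully")
    sys.exit(1)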

In programming, a development error is one that can be prevented. Such an error can occur in syntax or
logic. Syntax errors, which are typographical mistakes or improper use of special characters, are handled
by rigorous proofreading. Logic errors, also called bugs, occur when executed code does not produce the
expected or desired result. Logic errors are best handled by meticulous program debugging.
This can be an ongoing process that involves, in addition to the traditional debugging routine, beta testing
prior to official release and customer feedback after official release.

A run-time error takes place during the execution of a program, and usually happens because of adverse
system parameters or invalid input data. An example is the lack of sufficient memory to run an application
or a memory conflict with another program. On the Internet, run-time errors can result from electrical
noise, various forms of malware or an exceptionally heavy demand on a server. Run-time errors can be
resolved, or their impact minimized, by the use of error handler programs, by vigilance on the part of
network and server administrators, and by reasonable security countermeasures on the part of Internet
users. In runtime engine environments such as Java or .NET there exist tools that attach to the runtime
engine and every time that an exception of interest occurs they record debugging information that existed
in memory at the time the exception was thrown (call stack and heap values). These tools are called
Automated Exception Handling or Error Interception tools and they provide 'root-cause' information for
exceptions.

Usage

- It determines the ability of the application system to process incorrect transactions properly.

- Errors encompass all unexpected conditions.

- In some systems, approximately 50% of the programming effort is devoted to handling error conditions.

Objective

- Determine that the application system recognizes all expected error conditions.

- Determine that accountability for processing errors has been assigned and that procedures provide a high
probability that errors will be properly corrected.

- Determine that reasonable control is maintained over errors during the correction process.

How to Use:

- A group of knowledgeable people is required to anticipate what can go wrong in the application system.

- All the people who know the application need to assemble to integrate their knowledge of the user area,
auditing, and error tracking.

- Logical test error conditions should then be created based on this assimilated information.

When to Use:

- Throughout the SDLC.

- The impact of errors should be identified and corrected to reduce errors to an acceptable level.

- Used to assist in the error management process of system development and maintenance.

Example:

- Create a set of erroneous transactions and enter them into the application system, then find out whether
the system is able to identify the problems.

- Using iterative testing, enter transactions and trap errors. Correct them. Then enter transactions with
errors which were not present in the system earlier.

Verification of Exception Handling

The point of exception handling routines is to ensure that the code can handle error conditions. In order to
establish that exception handling routines are sufficiently robust, it is necessary to present the code with a
wide spectrum of invalid or unexpected inputs, such as can be created via software fault injection and
fuzz testing (mutation testing is a related but distinct technique). One of the most difficult types of
software for which to write exception handling routines is protocol software, since a robust protocol
implementation must be prepared to receive input that does not comply with the relevant specification(s).

In order to ensure that meaningful regression analysis can be conducted throughout a software
development lifecycle process, any exception handling verification should be highly automated, and the
test cases must be generated in a scientific, repeatable fashion. Several commercially available systems
exist that perform such testing.
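
A fuzz-style check of this kind can be automated in a repeatable fashion along the following lines; this is a
minimal Python sketch, and parse_message is a hypothetical stand-in for the code under test:

# Feed a spectrum of random invalid inputs and require controlled failures.
import random

def parse_message(data: bytes) -> dict:
    # Stand-in for a real parser; it must never crash on malformed input.
    if not data.startswith(b"MSG:"):
        raise ValueError("malformed header")
    return {"payload": data[4:]}

random.seed(42)  # a fixed seed keeps the generated test cases repeatable
for _ in range(1000):
    junk = bytes(random.randrange(256) for _ in range(random.randrange(64)))
    try:
        parse_message(junk)
    except ValueError:
        pass  # a controlled rejection is the expected, robust behavior
    # Any other exception escaping here would signal a robustness defect.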

Compatibility Testing

Compatibility testing ensures that an application or Web site works correctly across different browsers,
operating systems, and hardware platforms. Different versions, configurations, display resolutions, and
Internet connection speeds can all affect the behavior of the product and introduce costly and
embarrassing bugs. We test for compatibility using real test environments; that is, we test how well the
system performs in a particular software, hardware, or network environment. Compatibility testing can be
performed manually or can be driven by an automated functional or regression test suite.

The purpose of compatibility testing is to reveal issues related to the product's interaction with other
software as well as hardware. The product compatibility is evaluated by first identifying the
hardware/software/browser components that the product is designed to support. Then a
hardware/software/browser matrix is designed that indicates the configurations on which the product will
be tested. Then, with input from the client, a testing script is designed that will be sufficient to evaluate
compatibility between the product and the hardware/software/browser matrix. Finally, the script is
executed against the matrix, and any anomalies are investigated to determine exactly where the
incompatibility lies.
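
Building the hardware/software/browser matrix is mechanical once the supported components are listed.
A minimal Python sketch, where the component lists are illustrative assumptions:

# Enumerate candidate test configurations from supported component lists.
from itertools import product

browsers = ["Firefox", "IE 7", "Safari", "Opera"]
operating_systems = ["Windows XP", "Mac OS X", "Linux"]
resolutions = ["1024x768", "1280x1024"]

# In practice, impossible pairs (e.g. IE 7 on Linux) would be pruned.
matrix = list(product(browsers, operating_systems, resolutions))
print(f"{len(matrix)} candidate configurations")
for browser, os_name, resolution in matrix[:3]:
    print(browser, os_name, resolution)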

Some typical compatibility tests include testing your application:

1. On various client hardware configurations
2. Using different memory sizes and hard drive space
3. On various operating systems
4. In different network environments
5. With different printers and peripherals (i.e., zip drives, USB devices, etc.)

System Test Coverage Strategies



Here are three approaches:

1. Major features first. Create tests which will exercise all the principal features first, to give maximum
coverage. This will probably be the same as the regression test. Then exercise each feature in some depth.

2. Major use cases first. As major features. Requires that you know both the user profile and the use
profile. The test must be end-to-end such that some real-world user objective is reached.

3. Major inputs and outputs. If the application is I/O dominated, then identify the most common kinds of
inputs and outputs and create tests to exercise them.

These can be followed by:

1. All GUI features
2. All functions with variations
3. All input/output combinations

Note that various test management tools will give you coverage metrics showing the proportion of
requirements “covered” by a test, test cases run, etc. These are of course purely arbitrary; just because a
tester has associated a test case to a requirement doesn’t mean that the requirement has been
adequately covered. That’s one of the things you as test manager must check.
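
The underlying arithmetic of such a coverage metric is trivial, which is exactly why it needs checking. A
minimal Python sketch with hypothetical traceability data; the percentage reflects only the claimed
associations, not the adequacy of the tests behind them:

# Compute requirement coverage from a requirements-to-tests mapping.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no associated test case at all
}

uncovered = [req for req, tests in traceability.items() if not tests]
coverage = 1 - len(uncovered) / len(traceability)
print(f"Requirement coverage: {coverage:.0%}; uncovered: {uncovered}")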

Agile manifesto

The Agile Manifesto is a statement of the principles that underpin agile software development:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

The QA team may want to add one more principle to the Agile Manifesto

Craftsmanship over Execution: This is meant to focus software developers on creating good code rather
than simply writing code that barely works. Both craftsmanship and execution are good things, but taking
care to create good code is viewed as more important from the testers' and customers' point of view.

Principles Behind the Agile Manifesto:

1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable
software.

2. Welcome changing requirements, even late in development. Agile processes harness change for the
customer's competitive advantage.

3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference
to the shorter timescale.

4. Business people and developers must work together daily throughout the project.

5. Build projects around motivated individuals. Give them the environment and support
they need, and trust them to get the job done.

6. The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.

7. Working software is the primary measure of progress.

8. Agile processes promote sustainable development. The sponsors, developers, and users should be able
to maintain a constant pace indefinitely.

9. Continuous attention to technical excellence and good design enhances agility.

10. Simplicity -- the art of maximizing the amount of work not done -- is essential.

11. The best architectures, requirements, and designs emerge from self-organizing teams.

12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its
behavior accordingly.

The One-Hour Regression Test


(From Better Software Magazine - Oct 2008 Issue)

by Steven Woody

Software regression testing is an essential and challenging task for software test groups. By definition,
regression testing is the process of verifying that none of the existing system features have been
accidentally broken by any new features or recent bug fixes. The main challenges for regression testing
are performing as many tests as possible in as short a time as possible and finding any serious regression
defects as early as possible in the regression test cycle.

The classical regression test cycle is:

1. A final software version with all of the new features completed and with all of the major bugs
fixed becomes available from the software development team.
2. Functional tests (for new features) and regression tests (for existing features) are executed.
3. Performance, integration, interoperability, and stress tests are executed.
4. If any serious defects are found, or significant feature changes are needed, steps 1 through 3 are
repeated.
5. When all needed features are operational and no major defects remain, the software is released
from test, ready for customer use.

Problems Inherent in Regression Testing


Although it is easy to follow this sequence, there are several problems with this classical approach to
regression testing.

Lather-rinse-repeat
The goal of regression testing is to find problems in previously working software features. Although project
management may choose to defer fixes or, in some cases, not to fix problems at all, there will be some
defects found during regression testing that must be fixed before release. Then the fix-regression test
cycle repeats itself, perhaps many more times than the schedule allows.

Any code change can break anything and everything


Project managers often believe that any defects introduced by a code change will be isolated to that
particular area of the code and, thus, an abbreviated regression test focused on that particular feature
area is sufficient. Eventually you learn that any code change--no matter how isolated--can have indirect,
inadvertent, and catastrophic effects on seemingly unrelated areas of the code. Sometimes a bug lies
latent in the code for years until some other change is made, which at last reveals the bug.

The best tests are often saved for last


Most regression test plans are arranged in a logical order that starts with some simple tests, such as "ping
the server"; proceeds through the multiple functionalities of the product; and ends with complex
performance tests, such as "measure the routing convergence time with one million routes in the table."
Although logical from a tester's perspective, it is completely backwards from the regression test's goal of
finding the most serious defects first. The tests that are the most complex, the most demanding, and the
most critical for product and customer success should be executed first.

Getting meaningful results is slow


Both automated and manual approaches to regression testing tend to take a day or more before we know
if the latest software version is acceptable or if it has serious problems. More complex performance,
interoperability, and stress tests can take weeks to perform. But managers may need to know within hours
if the latest version is good enough for a customer demonstration of a new feature or if it fixes a critical
issue being experienced at a customer site without introducing any new problems.

Regression testing is repetitious, dull, and tedious


Regression testing tends to be the least favorite activity for software testers due to its repetitive nature.
After a few iterations, the same manual tests become mind numbing, especially if few regression bugs are
found. Automated test scripts are usually developed to speed up regression testing and reduce the burden
on human testers. But, multiply a simple automated regression test by the thousands of features that exist
in most products and the result is tens of thousands of lines of regression test scripts to maintain--another
dull task.

Regression testing is a major resource drain


Most test groups dedicate the majority of their resources to regression testing, either by writing and
maintaining thousands of automated tests or by having a large number of testers spend weeks performing
the regression testing manually. As problems are found in the software, the test team becomes trapped in
an endless repetition of the regression test cycle, never able to escape in order to spend adequate time
working on new feature tests or better tests.

Attempts to Deal with These Problems


Many test groups have tried to deal with the problems of classical regression testing by using one of these
four approaches:

The complete regression test


Some test groups seek to avoid regression test problems by starting regression testing on one software
version and continuing for days or weeks on that same version until the regression tests are completed,
even though newer software versions are becoming available during the regression test cycle. These
versions remain untested until the current regression test cycle is completed, but then regression testing
starts again on the latest version.

The continuous regression test


Other test groups seek to avoid the problems by accepting new software versions on a daily or weekly
basis during the regression test cycle. When a new version is available, it is loaded onto the test systems
and testing continues without restarting. At the end of testing, the regression tests have been spread
across multiple software versions. No version has been completely tested, but the regression test has been
run only once.

The regression smoke test


Other test groups scale back the scope of regression testing so that a superficial, usually automated,
"smoke test" can be completed within a few hours on each new software version. Full regression testing is
done only on the "final" version.

The restartable regression test


Other groups restart regression testing as each new version of software becomes available, particularly if
it is toward the end of the release schedule and there is a belief that this is the final (or nearly final)
version that will need to be tested. Of course, any serious problems that are found require regression
testing to restart (again). And rarely does the schedule clock restart to allow enough time to perform a full
regression test. Sometimes the first test cases will get executed over and over for each software version,
while test cases toward the end of the test plan may get executed only once.

Still, the Problems Remain


Each of these approaches to regression testing results in little meaningful testing being performed on the
last software version. This lack of meaningful testing fails the purpose of regression testing, which is to
give, in a timely manner, high confidence that all critical features are still working.

A common approach to mitigate the risk of performing very little regression testing is to perform a more
comprehensive test on the "final" version. The problem with this is that it is impossible to know before
regression testing starts that the "final" testing won't find any major problems and require yet another
final version to be built and tested comprehensively, followed by another final version, and so on.

The One-Hour Regression Test Strategy


These problems can be minimized by the adoption of a "one-hour regression test" strategy. This strategy:

• Attempts to find the most serious bugs as quickly as possible, at the cost of finding the lesser
issues later in the regression test cycle
• Adapts to the various styles of regression testing: the complete regression test, the continuous
regression test, the regression smoke test, and the restartable regression test
• Does not require additional people, test equipment, or weekend work

Ask Yourself, "What if I only had an Hour?"


What if a customer asked you to demonstrate to him, within an hour, that your newest software is ready
for use? What tests would you run? Now, think carefully--are these the same tests that you are now
performing in your first hour of regression testing?

The importance of the first hour


"Wait a minute," you say. "I have six weeks to execute my regression test plan. Is the first hour really that
important?"

In a word, yes. Finding the biggest problems as early as possible in the regression test cycle gives the
development team the maximum time to develop a comprehensive fix--not just a quick fix.

Immediate feedback on the software quality keeps the entire project team focused on the goal at hand,
which is to produce high-quality software as quickly as possible. In addition, immediate feedback prevents
other projects from diverting (stealing) your development resources.

Immediate feedback gives project management the maximum amount of time to react to problems.
Sometimes it is necessary to reset (lower) customer expectations. Knowing this as soon as possible makes
a difficult job easier.

Immediate feedback gives the test team greater awareness of where the problem areas are in the current
software version. This guides testers to avoid further testing of that feature until it is fixed and then to
perform additional testing of that feature after it is fixed.

Declaring the latest version "dead on arrival" can take pressure off the test team. Instead of spending
several days performing low-value tests that will have to be repeated anyway, you can spend the bulk of
your time performing, automating, and improving interesting, high-value regression tests.

How to pick the right tests for the first hour


The philosophy of one-hour regression testing is very different from the philosophy of classical regression
testing. Instead of methodically validating that each and every feature meets the requirements of the
product specification, the first-hour test approach tries to find the major problems with the version as
quickly as possible.

The right regression tests to perform in the first hour will vary depending on the type of product, the
maturity and stability of the software, and the test resources available. The following questions will help
you select the right first-hour tests for your product.

Customer-Focused Tests
First, consider the typical use of the product from beginning to end and make sure that use is tested. Think
of global, macroscopic tests of your product. What tests would be the most meaningful to your customers?
How does the product fit into the customer's overall situation and expectation? What is the name of your
product? The first tests should make sure that the product lives up to its name. Which features would have
the greatest impact on the customer if they were broken? If stored data were lost? If security were
compromised? If service were unavailable? These critical features must be tested in the first hour.

Complex and Difficult Tests


Which one usage scenario would tell you that the greatest number of important features is working? Make
sure the scenario is a realistic, customer-based configuration. If there is no single scenario, then test your
top-three customer usage scenarios that have a lot of "moving parts." Consider which top-level tests would
reveal any problems with numerous underlying unit or feature tests. Which features can be tested
together? Instead of regression testing features in isolation, use a real-world mix of compatible features,
validating all simultaneously. Finally, what features are the most complex? Which uses are the most
dynamic or the most challenging? What tests are the most difficult to pass or have the least margin to
spare?

Big Picture Tests


Now step back and look at the big picture. Make sure that you test the overall system performance in
large-scale use. What are the important top-level performance metrics? What performance tests can be
used as benchmarks of major features? Which tests will quickly show any memory, CPU, or other resource-
depletion issues? What tests can indicate overall system health with a busy system?

Expected Failure Tests


What features have historically been a problem for your product? Where have prior defects been found
and fixed? These are areas that should be regression tested early in the cycle. Which regression tests are
most likely to fail based on the changes that have been made? Use your testing experience and intuition to
select the regression tests that have the highest likelihood of failing. What are the newest features in the
regression test cycle? These often have problems that slipped past the new feature testing.

The overall goal is to test the software version as deeply and quickly as possible. These tests may take
more than an hour, but try to prioritize and select the tests that will provide the most essential information
about the quality of the software version within a matter of a few hours.
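
One way to make this prioritization concrete is to score each candidate test on the factors just discussed
(customer impact, complexity, historical failure likelihood) and run the highest scorers first. A minimal
Python sketch, where the weights and the test data are illustrative assumptions:

# Rank candidate first-hour tests by a weighted score of the selection factors.
tests = [
    {"name": "end_to_end_order_flow",     "impact": 9, "complexity": 8, "fail_rate": 0.3},
    {"name": "ping_server",               "impact": 2, "complexity": 1, "fail_rate": 0.0},
    {"name": "million_route_convergence", "impact": 8, "complexity": 9, "fail_rate": 0.5},
]

def first_hour_score(t):
    # The weights are a judgment call; tune them to your product and history.
    return 0.4 * t["impact"] + 0.3 * t["complexity"] + 3.0 * t["fail_rate"]

for t in sorted(tests, key=first_hour_score, reverse=True):
    print(f"{first_hour_score(t):5.2f}  {t['name']}")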

Putting It All Together


Before starting any regression testing, it is critical to test the bug fixes, new features, or other substantial
modifications to existing features that are in this latest version. These changes are the whole reason for
building the version; if they aren't right, you know there will be another build.

Insert the one-hour regression test before your full regression testing but after testing the new features, as
follows:

Test the reasons for the build.

• Test the bug fixes and test for any likely problems introduced by the fixes.
• Test any new features using a "one-hour new-feature test plan."

Perform your one-hour regression test.

• Select performance, load, and capacity tests.


• Select interoperability and compatibility tests.
• Select database backup and restore tests.
• Select redundancy tests, security tests, and data integrity tests.
• Select feature and functional tests.

Continue regression testing.

• Run any smoke or sanity tests.


• Perform load, capacity, interoperability, and performance testing.
• Perform stress, fault insertion, endurance, and longevity testing.
• Perform detailed tests of each feature, per classical regression testing.

Continuously optimize your first-hour regression testing as bugs are found, new features are added, and
testing tools are improved.

Evaluate your current regression test strategy


Now that you have a select list of candidate first-hour tests, compare it with the tests that you are
currently performing in the first hour of regression testing. Is your existing regression test plan written in a
test-centric or customer- centric sequence? Your first tests should be focused on the complex, large-scale,
customer-usage scenarios.

Is your existing regression test plan aging or up to date? Is it constantly updated to integrate the new
feature tests into the regression test plan? Or are new feature tests simply tacked on to the end of the test
plan, waiting until the end to be executed?

After focusing on the first hour, ask yourself what you want to accomplish in the first four hours, the first
eight hours, and the first week. Make sure that you prioritize and rearrange your test cases so that if you
are asked to release a version a few days before all of the regression test cases can be completed, all of
the critical tests will already have been performed.

Smoke and sanity tests


This one-hour regression test has similarities with two other types of regression testing, smoke testing and
sanity testing, but they are very different.

Smoke testing usually refers to superficial tests of the software, such as:

• Does the software install and run?


• Are all of the screens and menus accessible?
• Can all of the objects be accessed?

The one-hour regression test approach is more focused and does not replace the broader smoke test. If
possible, the smoke test should be run in parallel with the one-hour regression test. If this is not possible,
then run the smoke test immediately after the one-hour regression test.

Sanity testing usually refers to a small set of simple tests that verify basic functionality of the software,
such as:

• Simple calculations provide correct results.


• Basic functions work correctly.
• Individual, isolated features work correctly.

There is typically no capacity, scale, or performance testing involved with sanity testing. Sanity testing
provides the starting point for in-depth regression testing of each feature. Sanity testing should be
performed only after the one-hour regression test is complete. Ideally, sanity tests should be re-examined
to provide greater benefit to the regression testing, once the one-hour regression tests are in place.

The one-hour regression test differs from smoke and sanity testing in another important way. The goal of
one-hour regression testing is to find the major problems as quickly as possible by testing complex
features deeply.

New features
If there are any new features introduced during the regression test cycle, hit them hard in the first hour,
shaking out any major design problems. After the new features survive this testing, methodically test the
details of the new features using the full new-feature test plan.

Summary
The one-hour regression testing approach performs a small number of the most important and most
complex tests during the first hour after each new software version is received, with the goal of finding the
big problems early in the regression test cycle.

The various approaches to software testing can all benefit from one-hour regression testing, whether your
test group uses a traditional, agile, context-driven, or exploratory test strategy.

The one-hour regression testing approach requires a slight change of thinking for the first hour, but it can
save days or weeks of repetitious regression testing, while increasing the overall software quality of your
product.

Difference between Regression Testing and Retesting



Regression testing is also a form of retesting, but the objective is different.


Regression testing is testing done on every build change: we retest already-tested functionality to catch
any new bug introduced by the change.

Retesting is testing the same functionality again, whether because of a bug fix or because of a change in
implementation technology.

Selecting Test cases for Regression Testing



... Continuing the Test Management Series

A recent survey (of small and large software testing companies) found that a good portion of the defects
reported by clients were due to last-minute bug fixes creating side effects. Hence, selecting the test cases
for regression testing is not easy. It is an art, and one a QA Lead or QA Manager should master.

To select test cases for regression testing, the QA Lead or QA Manager should know the following:

1. The bug fixes, and how they affect the application.
2. In which areas/functionality of the application defects occur most frequently.
3. Which areas/functionality have undergone many or recent code changes.
4. Which features of the product/application are mandatory requirements of the client.
5. Which features of the product/application are important to the customer.
6. In which functionality bugs were fixed in a rush.

Selecting test cases for regression testing depends more on the criticality of the bug fixes than on the
criticality of the defect itself. The fix of a minor bug can result in a major side effect, while a bug fix for an
extreme or high-severity defect can have no side effect or just a minor one. So the test lead, engineer, or
test manager needs to balance these aspects when selecting the test cases and test scenarios for
regression testing.

Solution: Build good relations with the development team lead or technical manager; they can easily help
the QA team identify the above. A proper impact analysis should be done.

While selecting test cases and test scenarios for regression testing, we should not select only those test
cases which failed during regular test cycles, because those test cases and scenarios may have little or no
relevance to the bug fixes. We need to select more positive test cases than negative test cases for the final
regression test cycle. It is also recommended, and a software testing best practice, that the regular test
cycles (which are conducted before regression testing) have the right mix of both positive and negative
test scenarios. Negative test scenarios are those test cases which are introduced with the intention of
breaking the application.

From a recent survey, it was found that several companies have a "constant test case set" for regression
testing, executed irrespective of the number and type of bug fixes. This approach may not find all the bugs
(side effects) caused by the bug fixes. It has also been observed that the effort spent on executing
regression test cases can be minimized if an analysis is done to find out which test cases are relevant for
regression testing and which are not. This can be done through impact analysis.

A good approach is to plan regression testing from the beginning of the project, before the test cycles,
rather than planning it only after the regular test cycles are complete. Best practice is:

Classify the test cases and test scenarios into various Priorities based on importance and customer usage.
Here it is suggested the test cases be classified into three classes:

Priority 0 – Test cases in this category check basic functionality; they are executed for pre-system
acceptance and whenever the product goes through a major change. They essentially check whether the
application is stable enough for further testing. These are the sanity test cases, which deliver high project
value to the client and to the entire development, testing, and quality assurance team.

Priority 1 – In this category we add test cases for functionality that is very important to the customer:
areas in which we are getting major or critical bugs, and critical functionality whose bug fixes were made
in a rush.

Priority 2 – These test cases deliver moderate project value. They are executed as part of the regular
software testing cycle and selected for regression testing on a need basis.

There are various sound approaches to regression testing, to be decided on a case-by-case basis, and we
can prioritize the test cases accordingly:

• Case 1: If the criticality and impact of the bug fixes are LOW, then it is enough for a software tester
to select a few test cases from the Test Case Database (TCDB) and execute them. These test cases
can fall under any priority (Priority 0, Priority 1, or Priority 2).

• Case 2: If the criticality and impact of the bug fixes are MEDIUM, then we need to execute all
Priority 0 and Priority 1 test cases. If the bug fixes call for some additional test cases from Priority 2,
then those can also be selected and executed for regression testing. Selecting Priority 2 test cases in
this case is desirable but optional.

• Case 3: If the criticality and impact of the bug fixes are HIGH, then the testing team needs to execute
all Priority 0 and Priority 1 test cases plus carefully selected Priority 2 test cases. Priority 2 test cases
cannot be skipped in this case, so be careful while choosing them.

• Case 4: The QA Lead or QA Manager can also go through the complete log of changes caused by the
bug fixes (obtainable from the Configuration Management team) and select the test cases for
regression testing. This is a detailed and sometimes complex process but can give very good results.
Also, don't forget the one-hour regression test strategy described earlier. A minimal sketch of the
selection logic for Cases 1 through 3 follows.
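
The mapping from bug-fix criticality to priority classes in Cases 1 through 3 can be expressed directly in
code. A minimal Python sketch; the test case database contents, and the sampling rule used for the LOW
case, are illustrative assumptions:

# Map bug-fix criticality to the regression priority classes to execute.
import random

test_db = {
    0: ["sanity_login", "sanity_checkout"],           # Priority 0
    1: ["critical_payment_flow", "rush_fixed_area"],  # Priority 1
    2: ["report_export", "legacy_import"],            # Priority 2
}

def select_for_regression(criticality: str) -> list:
    if criticality == "low":
        # Case 1: a few test cases from any priority are enough.
        pool = [t for tests in test_db.values() for t in tests]
        return random.sample(pool, k=2)
    if criticality == "medium":
        # Case 2: all Priority 0 and 1; Priority 2 only as needed (omitted).
        return test_db[0] + test_db[1]
    # Case 3 (high): all Priority 0 and 1, plus carefully chosen Priority 2.
    return test_db[0] + test_db[1] + test_db[2]

print(select_for_regression("medium"))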


Five Keys to Choosing the Most Effective Regression Test Cases



By definition, regression testing is a new round of software validation after a build release containing bug
fixes. According to Microsoft's statistics, based on their experience, most developers will introduce one
fresh defect for every three to four defects they fix. So we need to do regression testing to find those
newly introduced bugs.

In general, the higher the coverage of the regression suite, the lower the risk, but the more time it will
take, and vice versa. So, if time allows, one should cover all test cases as part of the regression test suite,
but generally there is not that much time. This requires us to strike a balance between the effort it takes
and the coverage of the test cases used for regression testing.

When planning regression testing, the first thing to determine is the ratio of test cases to re-run. This
should be based on the time available; 100% is best, but because of time constraints this ratio is generally
around 60%. Then we have to determine the regression test cases' priority.

Let's check the five most common regression test selection methods:
1. First, check the newly modified features (if there are any) in the new build release.
2. Then find the impact areas: which closely coupled areas could be affected by the newly introduced
features? All those related modules need to be retested as part of regression testing.
3. Include the main flows or highly used areas of the program. You can easily get the frequency of use of a
particular module, and if it is very high, that is an area you need to retest.
4. Furthermore, cover the most vulnerable parts of the program, for instance security risks, data leakage,
encryption, and registration.
5. If all of the above is done and there is still time, it is best to include some of the alternative-flow test
cases found in the use cases. Alternative flows are not happy-path testing but other ways to use the
program.
This is the regression test case selection priority. In most organizations, people use automation tools to
automate the regression test cases; this usually yields a good ROI.


What is Ramp Testing? - Continuously raising an input signal until the system breaks down.

What is Depth Testing? - A test that exercises a feature of a product in full detail.

What is Quality Policy? - The overall intentions and direction of an organization as regards quality as
formally expressed by top management.

What is Race Condition? - A cause of concurrency problems. Multiple accesses to a shared resource, at
least one of which is a write, with no mechanism used by either to moderate simultaneous access.

What is Emulator? - A device, computer program, or system that accepts the same inputs and produces
the same outputs as a given system.

What is Dependency Testing? - Examines an application's requirements for pre-existing software, initial
states and configuration in order to maintain proper functionality.

What is Documentation testing? - The aim of this testing is to help in the preparation of the accompanying
documentation (User guide, Installation guide, etc.) in as simple, precise, and accurate a way as possible.

What is Code style testing? - This type of testing involves checking the code for accordance with
development standards: the rules for code comments; variable, class, and function naming; maximum line
length; the ordering of separator symbols; the placement of terms on a new line; etc. There are special
tools for automating code style testing.

What is scripted testing? - Scripted testing means that test cases are to be developed before tests
execution and some results (and/or system reaction) are expected to be shown. These test cases can be
designed by one (usually more experienced) specialist and performed by another tester.

Random Software Testing Terms and Definitions:

• Formal Testing: Performed by test engineers.
• Informal Testing: Performed by the developers.
• Manual Testing: That part of software testing that requires human input, analysis, or evaluation.
• Automated Testing: Software testing that utilizes a variety of tools to automate the testing process.
Automated testing still requires a skilled quality assurance professional with knowledge of the automation
tools and the software being tested to set up the test cases.
• Black box Testing: Testing software without any knowledge of the back-end of the system, structure or
language of the module being tested. Black box test cases are written from a definitive source document,
such as a specification or requirements document.
• White box Testing: Testing in which the software tester has knowledge of the back-end, structure and
language of the software, or at least its purpose.
• Unit Testing: Unit testing is the process of testing a particular compiled program, i.e., a window, a
report, an interface, etc., independently, as a stand-alone component/program. The types and degrees of
unit tests can vary among modified and newly created programs. Unit testing is mostly performed by the
programmers, who are also responsible for the creation of the necessary unit test data.
• Incremental Testing: Incremental testing is partial testing of an incomplete product. The goal of
incremental testing is to provide an early feedback to software developers.
• System Testing: System testing is a form of black box testing. The purpose of system testing is to
validate an application's accuracy and completeness in performing the functions as designed.
• Integration Testing: Testing two or more modules or functions together with the intent of finding
interface defects between the modules/functions.
• System Integration Testing: Testing of software components that have been distributed across
multiple platforms (e.g., client, web server, application server, and database server) to produce failures
caused by system integration defects (i.e. defects involving distribution and back-office integration).
• Functional Testing: Verifying that a module functions as stated in the specification and establishing
confidence that a program does what it is supposed to do.
• Parallel/Audit Testing: Testing where the user reconciles the output of the new system to the output
of the current system to verify the new system performs the operations correctly.
• Usability Testing: Usability testing is testing for 'user-friendliness'. A way to evaluate and measure
how users interact with a software product or site. Tasks are given to users and observations are made.
• End-to-end Testing: Similar to system testing - testing a complete application in a situation that
mimics real world use, such as interacting with a database, using network communication, or interacting
with other hardware, application, or system.
• Security Testing: Testing of database and network software in order to keep company data and
resources secure from mistaken/accidental users, hackers, and other malevolent attackers.
• Sanity Testing: Sanity testing is performed whenever cursory testing is sufficient to prove the
application is functioning according to specifications. This level of testing is a subset of regression testing.
It normally includes testing basic GUI functionality to demonstrate connectivity to the database,
application servers, printers, etc.
• Regression Testing: Testing with the intent of determining if bug fixes have been successful and have
not created any new problems.
• Acceptance Testing: Testing the system with the intent of confirming readiness of the product and
customer acceptance. Also known as User Acceptance Testing.
• Installation Testing: Testing with the intent of determining if the product is compatible with a variety
of platforms and how easily it installs.
• Recovery/Error Testing: Testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
• Adhoc Testing: Testing without a formal test plan or outside of a test plan. With some projects this type
of testing is carried out as an addition to formal testing. Sometimes, if testing occurs very late in the
development cycle, this will be the only kind of testing that can be performed – usually done by skilled
testers. Sometimes ad hoc testing is referred to as exploratory testing.
• Configuration Testing: Testing to determine how well the product works with a broad range of
hardware/peripheral equipment configurations as well as on different operating systems and software.
• Load Testing: Testing with the intent of determining how well the product handles competition for
system resources. The competition may come in the form of network traffic, CPU utilization or memory
allocation.
• Penetration Testing: Penetration testing is testing how well the system is protected against
unauthorized internal or external access, or willful damage. This type of testing usually requires
sophisticated testing techniques.
• Stress Testing: Testing done to evaluate the behavior when the system is pushed beyond the breaking
point. The goal is to expose the weak links and to determine if the system manages to recover gracefully.
• Smoke Testing: A shallow test of a build's major functions, conducted on delivery of the build and before complete testing begins.
• Pilot Testing: Testing that involves the users just before actual release to ensure that users become
familiar with the release contents and ultimately accept it. Typically involves many users, is conducted
over a short period of time and is tightly controlled. (See beta testing)
• Performance Testing: Testing with the intent of determining how efficiently a product handles a
variety of events. Automated test tools geared specifically to test and fine-tune performance are used
most often for this type of testing.
• Exploratory Testing: Any testing in which the tester dynamically changes what they're doing for test
execution, based on information they learn as they're executing their tests.
• Beta Testing: Testing after the product is code complete. Betas are often widely distributed or even
distributed to the public at large.
• Gamma Testing: Gamma testing is testing of software that has all the required features, but it did not
go through all the in-house quality checks.
• Mutation Testing: A method to determine test thoroughness by measuring the extent to which the
test cases can discriminate the program from slight variants of the program.
• Glass Box/Open Box Testing: Glass box testing is the same as white box testing. It is a testing
approach that examines the application's program structure, and derives test cases from the application's
program logic.
• Compatibility Testing: Testing used to determine whether other system software components such as
browsers, utilities, and competing software will conflict with the software being tested.
• Comparison Testing: Testing that compares software weaknesses and strengths to those of
competitors' products.
• Alpha Testing: Testing after code is mostly complete or contains most of the functionality and prior to
reaching customers. Sometimes a selected group of users are involved. More often this testing will be
performed in-house or by an outside testing firm in close cooperation with the software engineering
department.
• Independent Verification and Validation (IV&V): The process of exercising software with the intent
of ensuring that the software system meets its requirements and user expectations and doesn't fail in an
unacceptable manner. The individual or group doing this work is not part of the group or organization that
developed the software.
• Closed Box Testing: Closed box testing is the same as black box testing: a type of testing that considers
only the functionality of the application.
• Bottom-up Testing: Bottom-up testing is a technique for integration testing. Because low-level
components are tested first, a test engineer creates and uses test drivers to stand in for the higher-level
components that have not yet been developed. The objective of bottom-up testing is to call low-level
components first, for testing purposes.

• Bug: A software bug may be defined as a coding error that causes an unexpected defect, fault or flaw. In
other words, if a program does not perform as intended, it is most likely a bug.
• Error: A mismatch between the program and its specification is an error in the program.
• Defect: Defect is the variance from a desired product attribute (it can be wrong, missing or extra
data). It can be of two types – a defect in the product or a variance from customer/user expectations. It is
a flaw in the software system and has no impact until it affects the user/customer and the operational
system. It is often claimed that the large majority of defects can be traced to process problems.
• Failure: A defect that causes an error in operation or negatively impacts a user/ customer.
• Quality Assurance: Is oriented towards preventing defects. Quality Assurance ensures all parties
concerned with the project adhere to the process and procedures, standards and templates and test
readiness reviews.
• Quality Control: Quality control or quality engineering is a set of measures taken to ensure that
defective products or services are not produced, and that the design meets performance requirements.
• Verification: Verification ensures the product is designed to deliver all functionality to the customer; it
typically involves reviews and meetings to evaluate documents, plans, code, requirements and
specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
• Validation: Validation ensures that functionality, as defined in requirements, is the intended behavior of
the product; validation typically involves actual testing and takes place after verifications are completed.

Testing Levels and Types

There are basically three levels of testing, i.e. Unit Testing, Integration Testing and System Testing.

Various types of testing come under these levels.


Unit Testing: To verify a single program or a section of a single program.

Integration Testing: To verify interaction between system components

Prerequisite: unit testing completed on all components that compose a system

System Testing: To verify and validate behaviors of the entire system against the original system
objectives
Software testing is a process that identifies the correctness, completeness, and quality of software.

Branch Testing | Condition Testing | Data Definition-Use Testing



Branch Testing

In branch testing, test cases are designed to exercise control flow branches or decision points in a unit.
This is usually aimed at achieving a target level of decision coverage. For branch coverage, both the IF and
the ELSE branches need to be tested. All branches and compound conditions (e.g. loops and array handling)
within the unit should be exercised at least once.

Branch coverage (sometimes called decision coverage) measures which possible branches in flow control
structures are followed. Clover does this by recording whether the Boolean expression in the control
structure evaluated to both true and false during execution.
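
A minimal sketch in Java (the class, values and discount rule are invented for illustration): a unit with a
single IF/ELSE decision needs one test per branch outcome to achieve full branch (decision) coverage.

public class Discount {
    // Hypothetical rule: 10% off for orders of 100.0 or more.
    static double apply(double price) {
        if (price >= 100.0) {
            return price * 0.9; // true branch
        } else {
            return price;       // false branch
        }
    }

    public static void main(String[] args) {
        // One test per branch gives 100% decision coverage of apply():
        System.out.println(Math.abs(apply(100.0) - 90.0) < 1e-9); // exercises the true branch
        System.out.println(Math.abs(apply(50.0) - 50.0) < 1e-9);  // exercises the false branch
    }
}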

Does branch testing come under white box testing or black box testing?
Branch testing is done as part of white box testing, where the focus is on the code. There are many other
white box techniques, such as loop testing.

Further reading: Branch-coverage testability transformation for unstructured programs:

http://sites.google.com/a/softwaretestingtimes.com/osst/osst/Branch-Coverage.pdf?attredirects=0&d=1

Condition Testing

The object of condition testing is to design test cases to show that the individual components of logical
conditions and combinations of the individual components are correct. Test cases are designed to test the
individual elements of logical expressions, both within branch conditions and within other expressions in a
unit.

Condition testing is a test case design approach that exercises the logical conditions contained in a
program module. A simple condition is a Boolean variable or a relational expression, possibly with one
NOT operator. A relational expression takes the form:

E1 <relational-operator> E2

where E1 and E2 are arithmetic expressions and the relational operator is one of the following: <, ≤, =,
≠ (nonequality), >, or ≥. A compound condition is made up of two or more simple conditions, Boolean
operators, and parentheses. We assume that the Boolean operators allowed in a compound condition
include OR, AND and NOT.

The condition testing method concentrates on testing each condition in a program. The purpose of
condition testing is to detect not only errors in the conditions of a program but also other errors in the
program. A number of condition testing approaches have been identified. Branch testing is the most
basic: for a compound condition C, the true and false branches of C and each simple condition in C must
be executed at least once.
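
As a small invented Java example of that rule, for the compound condition C = (a > 1 AND b == 0) the
three inputs below drive C and each of its simple conditions to both truth values at least once:

public class ConditionDemo {
    // Compound condition C = (a > 1 AND b == 0).
    static boolean c(int a, int b) {
        return a > 1 && b == 0;
    }

    public static void main(String[] args) {
        System.out.println(c(2, 0)); // a>1 true,  b==0 true  -> C true
        System.out.println(c(0, 0)); // a>1 false (b==0 short-circuited) -> C false
        System.out.println(c(2, 5)); // a>1 true,  b==0 false -> C false
    }
}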

Domain testing requires three or four tests to be produced for a relational expression. For a relational
expression of the form:

E1 <relational-operator> E2

three tests are required: they make the value of E1 greater than, equal to, and less than the value of E2,
respectively.
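
For instance (hypothetical method and values), for the relational expression x < limit the three domain
tests are:

public class DomainDemo {
    // Relational expression under test: x < limit.
    static boolean underLimit(int x, int limit) {
        return x < limit;
    }

    public static void main(String[] args) {
        System.out.println(underLimit(9, 10));  // E1 < E2  -> true
        System.out.println(underLimit(10, 10)); // E1 == E2 -> false (classic off-by-one spot)
        System.out.println(underLimit(11, 10)); // E1 > E2  -> false
    }
}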

Data Definition – Use Testing

Data definition-use testing designs test cases to test pairs of data definitions and uses. A data definition is
anywhere that the value of a data item is set; a data use is anywhere that a data item is read or used. The
objective is to create test cases that will drive execution through paths between specific definitions and
uses.
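
A minimal sketch with invented names: in the unit below, 'total' is defined once and then used along two
different paths, so two test cases are needed to cover both definition-use paths:

public class DefUseDemo {
    static double invoice(double price, int qty, boolean taxed) {
        double total = price * qty; // d1: definition of 'total'
        if (taxed) {
            total = total * 1.2;    // use of d1's value, then d2: redefinition
        }
        return total;               // use reached from d1 (untaxed) or d2 (taxed)
    }

    public static void main(String[] args) {
        // Two cases drive both definition-use paths through invoice():
        System.out.println(invoice(10.0, 2, false)); // d1 -> use at return
        System.out.println(invoice(10.0, 2, true));  // d1 -> d2 -> use at return
    }
}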

Extended Random Regression Testing (ERRT)



ERRT is a variation of regression testing which consists of running standard tests from the test library in
random order until the software under test fails. An important point to remember: the software under test
has already passed those tests successfully in this build! That means that those tests add no more
coverage as standard regression tests. ERRT is useful for system-level tests or some very specific unit
tests. Typical defects found with this method include: timing problems, memory corruption, stack
corruption, and memory leaks.

ERRT exposes problems that can't be found with conventional test techniques. Troubleshooting such
defects can be extremely difficult and very expensive.
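
A minimal ERRT driver might look like the following Java sketch (all names are invented; a real harness
would draw from the actual regression library rather than toy checks):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.BooleanSupplier;

public class ErrtDriver {
    // Runs already-passing tests in random order until one fails or the pass budget runs out.
    static void run(List<BooleanSupplier> library, int maxPasses) {
        List<BooleanSupplier> order = new ArrayList<>(library);
        for (int pass = 1; pass <= maxPasses; pass++) {
            Collections.shuffle(order); // a fresh random order each pass
            for (int i = 0; i < order.size(); i++) {
                if (!order.get(i).getAsBoolean()) {
                    // Long random sequences expose timing and leak defects that a
                    // single ordered run of the same tests would miss.
                    System.out.println("Failed on pass " + pass + ", test #" + i);
                    return;
                }
            }
        }
        System.out.println("No failure within " + maxPasses + " passes");
    }

    public static void main(String[] args) {
        // Toy library: two always-passing checks standing in for real tests.
        run(List.of(() -> 1 + 1 == 2, () -> "ab".length() == 2), 100);
    }
}

A real driver would also record the random seed so that a failing sequence can be replayed.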

Long Sequence Testing

Repeating test cases and critical operations over and over again during long sequence testing is one way
to uncover intermittent failures. Typically, automatically generated test cases are randomly selected
from the test repository databank and executed over a very long time.

To test network-centric applications, high-volume long sequence testing (LST) is an efficient technique.
McGee and Kaner explored it using what they call extended random regression (ERR) testing. A more
promising method to test complex network-centric systems is using genetic algorithms coupled with high
volume testing.
Genetic algorithms, in particular, provide a powerful search technique that is effective in very large search
spaces, as represented by system environment attributes and input parameters in the testing arena.

Test Case Development



(Topic for Beginners)


Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A test
case will consist of information such as the requirement being tested, test steps, verification steps,
prerequisites, outputs, test environment, etc.

A Test Case is:


- A set of inputs, execution preconditions, and expected outcomes developed for a particular objective,
such as to exercise a particular program path or to verify compliance with a specific requirement.

- A detailed procedure that fully tests a feature or an aspect of a feature. Whereas the test plan describes
what to test, a test case describes how to perform a particular test. You need to develop a test case for
each test listed in the test plan.


Test cases should be written by a team member who understands the function or technology being tested,
and each test case should be submitted for peer review.

Organizations take a variety of approaches to documenting test cases; these range from developing
detailed, recipe-like steps to writing general descriptions. In detailed test cases, the steps describe exactly
how to perform the test. In descriptive test cases, the tester decides at the time of the test how to perform
the test and what data to use.

Detailed test cases are recommended for testing software because determining pass or fail criteria is
usually easier with this type of case. In addition, detailed test cases are reproducible and are easier to
automate than descriptive test cases. This is particularly important if you plan to compare the results of
tests over time, such as when you are optimizing configurations. Detailed test cases are, however, more
time-consuming to develop and maintain; on the other hand, test cases that are open to interpretation are
not repeatable and can require debugging, consuming time that would be better spent on testing.

When planning your tests, remember that it is not feasible to test everything. Instead of trying to test
every combination, prioritize your testing so that you perform the most important tests — those that focus
on areas that present the greatest risk or have the greatest probability of occurring — first.

Once the Test Lead has prepared the Test Plan, the role of the individual testers starts with the preparation
of test cases for each level of software testing (Unit Testing, Integration Testing, System Testing and
User Acceptance Testing) and for each module.

General Guidelines to Prepare Test Cases

As a tester, the best way to determine the compliance of the software to requirements is by designing
effective test cases that provide a thorough test of a unit. Various test case design techniques enable
testers to develop effective test cases. Besides implementing the design techniques, every tester needs
to keep in mind general guidelines that will aid in test case design:
a. The purpose of each test case is to run the test in the simplest way possible. [Suitable techniques -
Specification derived tests, Equivalence partitioning]
b. Concentrate initially on positive testing, i.e. the test case should show that the software does what it is
intended to do. [Suitable techniques - Specification derived tests, Equivalence partitioning, State-transition
testing]
c. Existing test cases should be enhanced and further test cases should be designed to show that the
software does not do anything that it is not specified to do, i.e. negative testing; see the boundary-value
sketch after this list. [Suitable techniques - Error guessing, Boundary value analysis, Internal boundary
value testing, State transition testing]
d. Where appropriate, test cases should be designed to address issues such as performance, safety
requirements and security requirements. [Suitable techniques - Specification derived tests]
e. Further test cases can then be added to the unit test specification to achieve specific test coverage
objectives. Once coverage tests have been designed, the test procedure can be developed and the tests
executed. [Suitable techniques - Branch testing, Condition testing, Data definition-use testing, State-
transition testing]
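
The boundary value analysis sketch referenced in guideline (c), in Java; the 1..100 input range and the
method name are invented for illustration. Tests sit on and just outside each edge of the valid partition:

public class BoundaryTests {
    // The valid equivalence partition is 1..100; everything outside is invalid.
    static boolean accept(int value) {
        return value >= 1 && value <= 100;
    }

    public static void main(String[] args) {
        int[] inputs   = {0, 1, 100, 101};           // just outside/on each boundary
        boolean[] want = {false, true, true, false};
        for (int i = 0; i < inputs.length; i++) {
            if (accept(inputs[i]) != want[i]) {
                throw new AssertionError("boundary case failed at input " + inputs[i]);
            }
        }
        System.out.println("all boundary cases passed");
    }
}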

Test Case Template


To prepare these test cases each organization uses its own standard template; an ideal
template is provided below.

Fig 1: Common columns in test cases that are present in all test case formats

Fig 2: A very detailed low-level test case format

The name of this test case document itself follows a naming convention like the one below, so that by
seeing the name we can identify the project name, version number and date of release.

DTC_Functionality Name_Project Name_Ver No

DTC – Detailed Test Case

Functionality Name: the functionality for which the test cases are developed
Project Name: the name of the project
Ver No: the version number of the software
(You can add the Release Date as well)

The placeholder words should be replaced with the actual functionality name, project name, version
number and release date. For example: Bugzilla Test Cases 1.2.0.3 01_12_04.

In the top-left corner we have the company emblem, and we fill in details such as Project ID, Project Name,
Author of Test Cases, Version Number, Date of Creation and Date of Release in this template.

We maintain the fields Test Case ID, Requirement Number, Version Number, Type of Test Case,
Test Case Name, Action, Expected Result, Cycle#1, Cycle#2, Cycle#3 and Cycle#4 for each test case.
Each cycle is in turn divided into Actual Result, Status, Bug ID and Remarks.

Test Case ID:


To design the Test Case ID we also follow a standard: if a test case belongs to the application and is not
specifically related to a particular module, we start it as TC001; if we are expecting more than one
expected result for the same test case, we name it TC001.1. If a test case is related to a module, we name
it M01TC001, and if a module has a sub-module, we name it M01SM01TC001, so that we can easily identify
which module and which sub-module it belongs to. One more advantage of this convention is that we can
easily add new test cases without changing all the test case numbers, since the numbering is limited to
that module only.

Requirement Number:
It gives the reference of the requirement number in the SRS/FRD for the test case. For each test case we
specify which requirement it belongs to. The advantage of maintaining this in the test case document is
that if a requirement changes in the future, we can easily estimate how many test cases will be affected.

Version Number:

Under this column we specify the version number in which that particular test case was introduced, so
that we can identify how many test cases there are for each version.

Type of Test Case:

It provides the list of the different types of test cases, such as GUI, Functionality, Regression, Security,
System, User Acceptance, Load and Performance, which are included in the test plan. While designing test
cases we select one of these options. The main objective of this column is that we can predict in total how
many GUI or functionality test cases there are in each module, and based on this we can estimate the
resources.

Test Case Name:

This gives a more specific name, such as the particular button or text box that the test case belongs to;
that is, we specify the object it refers to. For example: OK button, Login form.

Action (Input):

This is a very important part of the test case because it gives a clear picture of what you are doing to the
specific object; it can be thought of as the navigation for the test case. Based on the steps written here,
we perform the operations on the actual application.

Expected Result:

This is the result of the above action. It specifies what the specification or user expects from that particular
action. It should be clear, and for each expectation we sub-divide the test case, so that we can specify
pass or fail criteria for each expectation.

Up to this point we prepare the test case document before seeing the actual application, based on the
System Requirement Specification/Functional Requirement Document and the use cases. We then send this
document to the concerned Test Lead for approval; the Test Lead reviews it for coverage of all user
requirements in the test cases and then approves the document.

Now we are ready for testing with this document, and we wait for the actual application. At that point we
use the Cycle #1 columns.

Under each cycle we have Actual, Status, Bug ID and Remarks.

The number of cycles depends on the organization: some organizations document three cycles, while
others maintain the information for four. Only one cycle is provided in this template; add more cycles
based on your requirements.

Actual:

We test the actual application against each test case; if the behavior matches the expected result, we
record it as "As Expected", otherwise we write down what actually happened after performing those actions.

Status:
It simply indicates the Pass or Fail status of that particular test case: if the actual and expected results
mismatch, the status is Fail, otherwise it is Pass. For passed test cases the Bug ID should be null; for
failed test cases the Bug ID should be the corresponding bug number in the bug report.

Bug ID:

This gives the reference of the bug number in the bug report, so that the developer/tester can easily
identify the bug associated with that test case.

Remarks:

This part is optional; it is used for any extra information.

Test Script Defects – Test Case Defects and Test Case Review

1. What is a test script defect?

During a testing phase it is not uncommon at all for testers to face issues other than application failures. It
is believed that during the first cycle of testing about half of the reported defects are related to test
scripts; i.e. test script defects account for about 50% of all reported defects, while the other 50% are
due to software failures and incorrect environment setup.

A test script defect is a discrepancy in the test cases prepared by the software testers.

2. What causes test script defects?

The root cause of the defects found in test scripts/test cases can be attributed to the following:

• Not fully understanding the requirements, design or other source documents that the test
script is derived from and based on.
• Designing test cases requires a thorough understanding of the application under test. It is therefore
imperative that the test designers have a clear understanding of the requirements and
design flow documents, so they can write correct test cases.
• Not working with the latest version of the base documents.
• Proper change control and configuration management activities are absolutely necessary
to prevent the pitfalls of working with old or wrong versions of documents.
• Not properly translating requirements and design flows into test cases and breaking them down
into test steps.
• In the same way that a programmer needs to translate a software requirement or software design
into code, a tester must be able to analyze a requirement and derive test cases from it. This in
turn requires understandable and testable requirements.
• Not realizing that the person executing the test script could be someone from outside without
knowledge of the application under test.
• A lot of the time, when testers design test cases, they assume that the only people who will
execute their scripts are their team mates or peers, who are familiar with the applications. The
steps are therefore condensed or merged, which can appear vague to someone less experienced
with the applications, who is thus unable to follow the script to execute the tests.
• Working from improper or incomplete use cases and specification documents.

3. Severities of Test Script Defects

Issues found in the test script can be categorized into three levels of severity:
Level 1: Issues in the test script that stop the tester from carrying out the execution.
This is a serious issue with a high priority, since the software cannot be tested if the test script is
severely defective, e.g. the workflow and steps of the test case do not match up with what is
written in the requirements or design specifications.
An example could be that the workflow and behavior of an application depend on a set of test
data and values that the tester should set before the execution, but the script does not contain or
list the required test data, and thus the tester cannot verify the workflow.

A defect should be raised and logged, the changes and corrections to the test script must be made
immediately during the execution phase, and the test should be carried out with the new version of the
test script.

Level 2: Issue in the test script with a workaround, i.e. the tester can identify the issue and is able to
continue testing with the workaround.
This is a moderate issue with a medium priority. Of course, if too many workarounds have to be made
during the test execution phase, the priority for fixing the test script defects becomes high.

For example, an application requires a username, a password and a randomly generated key to verify user
credentials, but the script only asks for the username and password to be entered while the application
expects the random number as well. The tester can enter the random number as well as the username and
password and carry on with the rest of the script.

Level 3: Test script with cosmetic errors / suggestions.

Spelling mistakes, missing step numbers or missing information in a section of the document, e.g.
references to source documents, are all minor issues and usually of a low priority.

4. How to prevent Test Script/Test Case Defects?

Before the testing phase begins, all test script documents (amongst other documents) should be subjected
to formal reviews to prevent the above issues appearing during the formal testing phase. If at all possible,
there should be a dry run of the scripts before the formal test execution begins. This gives the
testers a chance to raise any uncertainties or doubts about the nature of the scripts and to minimize the
number of issues listed above.

Also, testers writing the test scripts must have a thorough understanding of the applications and workflows
in order to write effective test cases and to maximize the exposure of defects.

Reviewing Test Cases



The main reason for reviewing is to increase test case quality and therefore product quality.

As we know, testers are involved in the Requirements Specification review process to bring SQA
knowledge to the written requirements. As testers are involved in the process they become experts on the
area and on the application functionality, and many times their knowledge helps avoid introducing future
defects into the functionality that the developer will later code (this is the phase that we call defect
prevention).

Once the requirements are approved and baselined, testers start designing test cases, whether in their
minds or in writing (test drafts). Once all the ideas or test case drafts are understood and prepared, the
SQA tester starts developing test cases. When this happens, each test case written is based on a
specific requirement, so we start assuring traceability between requirements and
test cases. This helps the SQA team manage the requirements coverage of what is going to be tested.

Once the test cases are developed, the SQA tester should share, distribute and discuss them with the
same team that reviewed the requirements (SRS writer, developers, SQA tester,
implementation team, etc.). However, sometimes this is not possible: perhaps when the
requirements are baselined, the person in charge of the SRS starts on another project and no longer
has time to dedicate to reviewing a set of test cases. The same happens with the implementation
team, as they are perhaps installing the product at a customer site. There are cases where the SQA tester
and developer start more or less at the same time with their work based on the requirements: the developer
starts developing code and the tester developing test cases. At other times the SQA tester starts
thinking about or drafting test cases even before the developer starts coding. That means that
developing code and developing test cases are, and should be, separate processes.

Of course, having Requirements/Usability people review test cases has a lot of value, as does having
the implementation team do the same. The problem has been that this often does not happen due to the
lack of resources, so the test case review proceeds only with the developer involved in the same
project and functionality. In any case, the developer's review of test cases should always go in the direction
of adding details, parameters or circumstances not included in the tester's written test cases, or even
adding new test cases, but never modifying the sense of the test cases written by the tester.

This is the approach, and it is how the test cases defined by testers need to be reviewed by the developer.
We should also notice that sometimes, when the test case writer is a beginner rather than a senior tester, or
does not have much knowledge about the functionality, someone from the SQA team with more
experience should check the test cases before sharing them with the developer for review.

Benefits of having test case reviews for the SQA-written test cases, including the developers in them:

• Defect prevention during SRS review: the SQA tester can raise possible issues during SRS reviews
before any code is written.

• Conceptual and technical coverage: Requirements/Usability people ensure the coverage from the
conceptual point of view and the developer ensures the coverage from the technical point of view. The
traceability coverage tracking is handled by traceability tools (e.g. Quality Center).

• Defect prevention during test case review: if the developer has the opportunity to check the test
cases while implementing code, this may help him realize which parts of the code could be the cause of a
defect, and thus avoid coding potential defects.

• Developer knowledge added to test cases: the developer also has a very good understanding of the
requirements (SRS), both explicit and implicit, and has done a deep analysis of them in order to implement
them. He can bring experience in understanding finer details, or cases not yet being considered.

After having the test cases reviewed, the SQA team receives all the feedback and decides, based on its
experience and knowledge of SQA and of the functionality, whether the feedback is applied or not. When it
is not applied, the reason should be explained and discussed with the developer, since there should be a
final agreement on the test cases written.

How to write test cases for use cases?



What is a Use Case?


A use case describes the system’s behavior under various conditions as it responds to a request from one
of the users. The user initiates an interaction with the system to accomplish some goal. Different
sequences of behavior, or scenarios, can unfold, depending on the particular requests made and
conditions surrounding the requests. The use case collects together those different scenarios.

Use cases are popular largely because they tell coherent stories about how the system will behave in use.
The users of the system get to see just what this new system will be and get to react early.

In software engineering:

1. A use case is a technique for capturing the potential requirements of a new system or software change.
2. Each use case provides one or more scenarios that convey how the system should interact with the end
user or another system to achieve a specific business goal.
3. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain
expert.
4. Use cases are often co-authored by software developers and end users.

By the definition of use cases, we simply follow the requirements document, so we concentrate on testing
such as functional testing, acceptance testing, alpha testing, etc.

How many User Acceptance Test Cases need to be prepared for an application?

For testing projects, if we know the development effort in function points (FP), there is a rule
called the Capers Jones rule: number of test cases = (function points)^1.2.

This gives the number of user acceptance test cases that can be prepared.
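
As a worked example (the function point counts are invented for illustration): an application estimated at
100 function points gives 100^1.2 ≈ 251 user acceptance test cases, while 500 function points gives
500^1.2 ≈ 1,733. Treat these as rough planning figures rather than exact counts.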

Software Testing Metrics - Test Case Review Effectiveness



Metrics are the means by which the software quality can be measured; they give you
confidence in the product. You may consider these product management indicators, which can be either
quantitative or qualitative. They are typically the providers of the visibility you need.

The goal is to choose metrics that will help you understand the state of your product.

Metrics for Test Case Review Effectiveness:

1. Major Defects Per Test Case Review
2. Minor Defects Per Test Case Review
3. Total Defects Per Test Case Review
4. Ratio of Major to Minor Defects Per Test Case Review
5. Total Defects Per Test Case Review Hour
6. Major Defects Per Test Case Review Hour
7. Ratio of Major to Minor Defects Per Test Case Review Hour
8. Number of Open Defects Per Test Review
9. Number of Closed Defects Per Test Case Review
10. Ratio of Closed to Open Defects Per Test Case Review
11. Number of Major Open Defects Per Test Case Review
12. Number of Major Closed Defects Per Test Case Review
13. Ratio of Major Closed to Open Defects Per Test Case Review
14. Number of Minor Open Defects Per Test Case Review
15. Number of Minor Closed Defects Per Test Case Review
16. Ratio of Minor Closed to Open Defects Per Test Case Review
17. Percent of Total Defects Captured Per Test Case Review
18. Percent of Major Defects Captured Per Test Case Review
19. Percent of Minor Defects Captured Per Test Case Review
20. Ratio of Percent Major to Minor Defects Captured Per Test Case Review
21. Percent of Total Defects Captured Per Test Case Review Hour
22. Percent of Major Defects Captured Per Test Case Review Hour
23. Percent of Minor Defects Captured Per Test Case Review Hour
24. Ratio of Percent Major to Minor Defects Captured Per Test Case Review Hour
25. Percent of Total Defect Residual Per Test Case Review
26. Percent of Major Defect Residual Per Test Case Review
27. Percent of Minor Defect Residual Per Test Case Review
28. Ratio of Percent Major to Minor Defect Residual Per Test Case Review
29. Percent of Total Defect Residual Per Test Case Review Hour
30. Percent of Major Defect Residual Per Test Case Review Hour
31. Percent of Minor Defect Residual Per Test Case Review Hour
32. Ratio of Percent Major to Minor Defect Residual Per Test Case Review Hour
33. Number of Planned Test Case Reviews
34. Number of Held Test Case Reviews
35. Ratio of Planned to Held Test Case Reviews
36. Number of Reviewed Test Cases
37. Number of Unreviewed Test Cases
38. Ratio of Reviewed to Unreviewed Test Cases
39. Number of Compliant Test Case Reviews
40. Number of Non-Compliant Test Case Reviews
41. Ratio of Compliant to Non-Compliant Test Case Reviews
42. Compliance of Test Case Reviews
43. Non-Compliance of Test Case Reviews
44. Ratio of Compliance to Non-Compliance of Test Case Reviews
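
A small worked illustration with invented numbers: if a one-hour review of a set of test cases surfaces
3 major and 5 minor defects, then Total Defects Per Test Case Review = 3 + 5 = 8, the Ratio of Major to
Minor Defects = 3/5 = 0.6, and Total Defects Per Test Case Review Hour = 8 / 1 = 8.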

Test Case for an Elevator



Some of the use cases would be:


1) The elevator is capable of moving up and down.
2) It stops at each floor.
3) It moves to exactly that floor when the corresponding floor number is pressed.
4) It moves up when called from above and down when called from below.
5) It waits until the 'close' button is pressed.
6) If anyone steps in between the doors at the time of closing, the doors should open.
7) No break points exist.
8) More use cases for the load that the elevator can carry (if required).

ADDITIONAL:
1) When I push the call button, does it come to the floor and open the door after stopping?
2) Do the doors stay open for at least 5 seconds?
3) When closing, do the doors reverse if someone is standing in their way?
4) Does the elevator wait for someone to push a floor button before moving?
5) Does the elevator ignore the floor button of the current floor?
6) Does the floor button light up when pressed?
7) Does the Open Door button work when the elevator is moving?
8) Does the elevator travel in a smooth fashion?
9) Is there an up button on the top floor or a down button on the bottom floor?

Four Test Cases on ATM, Cell Phone, Traffic Signal, Elevator – Frequently Discussed in
Interviews on Software Testing

Test Case on ATM:


TC 1: successful card insertion.
TC 2: unsuccessful operation due to wrong-angle card insertion.
TC 3: unsuccessful operation due to invalid account card.
TC 4: successful entry of PIN number.
TC 5: unsuccessful operation due to wrong PIN number entered 3 times.
TC 6: successful selection of language.
TC 7: successful selection of account type.
TC 9: successful selection of withdrawal option.
TC 10: successful selection of amount.
TC 11: unsuccessful operation due to wrong denominations.
TC 12: successful withdrawal operation.
TC 13: unsuccessful withdrawal operation due to amount greater than available balance.
TC 14: unsuccessful due to lack of cash in the ATM.
TC 15: unsuccessful due to amount greater than the day limit.
TC 16: unsuccessful due to server down.
TC 17: unsuccessful due to clicking cancel after inserting the card.
TC 18: unsuccessful due to clicking cancel after inserting the card and entering the PIN.
TC 19: unsuccessful due to clicking cancel after language selection, account type selection, withdrawal
selection and entering the amount.

Difference between CMM and CMMI



Capability Maturity Model (CMM): A five level staged framework that describes the key
elements of an effective software process. The Capability Maturity Model covers practices for planning,
engineering and managing software development and maintenance.

Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an
effective product development and maintenance process. The Capability Maturity Model Integration covers
practices for planning, engineering and managing product development and maintenance. CMMI is the
designated successor of the CMM.

Test Case for a Cell Phone



1. Check whether the battery is inserted into the mobile properly.
2. Check switching the mobile on and off.
3. Insert the SIM into the phone and check it.
4. Add one user with a name and phone number in the address book.
5. Check an incoming call.
6. Check an outgoing call.
7. Send/receive messages on that mobile.
8. Check that all the numbers/characters on the phone work fine by clicking on them.
9. Remove the user from the phone book and check that the name and phone number are removed
properly.
10. Check whether the network is working fine.
11. If it is GPRS-enabled, check for connectivity.

Test Case for a Traffic Signal



1. Verify that the signal has 3 colored lights: red, green and yellow.
2. Verify that the power supply to it is proper.
3. Verify that all three lights turn on and off properly.
4. Verify that the lights glow and dim in the standard sequence.
5. Verify that the lights glow for the specified time interval (red 1 min, yellow 10 sec, green 1 min).
6. Verify that only one green light is on at a time on the signal.

================================================================

Verify that the traffic lights have three lights (green, yellow, red).
Verify that the lights turn on in a sequence.
Verify that the lights turn on in a sequence based on the time specified (green light 1 min, yellow light
10 sec, red light 1 min).
Verify that only one light glows at a time.
Verify whether the timing of the traffic lights can be adjusted based on the traffic.
Verify whether the traffic lights in some spots are sensor-activated.



What parameters to consider for Performance Testing?


Generally we consider five parameters (a small measurement sketch follows the list):

• Response time
• Page download time
• Throughput
• Transactions per second
• Turnaround time
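
A minimal Java sketch (the transaction body is a hypothetical stand-in) showing how two of these
parameters, average response time and throughput in transactions per second, can be measured:

public class PerfProbe {
    public static void main(String[] args) {
        int iterations = 1_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            doTransaction(); // stand-in for the operation under test
        }
        long elapsedNanos = System.nanoTime() - start;
        double avgResponseMs = elapsedNanos / 1_000_000.0 / iterations;
        double throughputTps = iterations / (elapsedNanos / 1_000_000_000.0);
        System.out.printf("avg response: %.3f ms, throughput: %.1f tx/s%n",
                avgResponseMs, throughputTps);
    }

    static void doTransaction() {
        // Placeholder work; a real probe would call the system under test.
        Math.sqrt(System.nanoTime());
    }
}

Turnaround time and page download time would be measured the same way, by bracketing the relevant
operation with timestamps.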
What is conventional testing? And what is unconventional testing?
Unconventional testing is a sort of testing done by the QA people, in which they verify each and every
document right from the initial phase of the SDLC. Conventional testing, by contrast, is done by the test
engineers on the application during the testing phase of the SDLC.
What is the difference between User Controls and Master Pages?

Master pages and user controls are two different concepts.

Master pages are used to provide a consistent layout and common behaviour for multiple pages in your
application; you then add ContentPlaceHolder controls to hold each child page's custom content.

User controls: sometimes you need functionality in your web pages that is not possible with the built-in
web server controls; in that case you can create your own controls, called user controls, from the ASP.NET
built-in controls. User controls are files with the .ascx extension, and you can share them within the
application.
What is entry and exit criteria?

Level              Entry criteria               Testing done
Unit test          modules ready to check       white box testing (WBT)
Integration test   exit of unit test            WBT + BBT
System test        exit of integration test     100% BBT
UAT                exit of system test          compulsory BBT (maybe WBT)

Entry criteria is what has been agreed by the relevant parties as to what needs to be completed before a
particular phase of testing can begin. This can include fully documented release notes, completion of
other test phases, etc. Exit criteria is what needs to be completed before a test phase can finish. This can
mean a certain percentage of test coverage, or being down to only X critical bugs.
What is the default access of structure members? In C, structure members have no access specifiers at
all; the notion of a default applies to C++, where struct members are public by default.
