Selenium
• What is Selenium?
Selenium is a suite of tools for automating web application testing across many platforms.
o An Introduction to Testing Web Applications with twill and Selenium by C. Titus Brown,
Gheorghe Gheorghiu, Jason Huggins
o Java Power Tools by John Ferguson Smart
Selenium can be used for functional, regression, and load testing of web-based applications. The tool can also be used for post-release validation with continuous integration tools such as Hudson or CruiseControl.
Selenium is open source software, released under the Apache 2.0 license and can be downloaded
and used without charge.
The latest versions of Selenium Core, Selenium IDE, Selenium RC, and Selenium Grid can be found on the Selenium download page.
Selenium IDE is a Firefox add-on that records clicks, typing, and other actions to make a test,
which you can play back in the browser
Selenium Remote Control (RC) runs your tests in multiple browsers and platforms. Tweak your
tests in your preferred language.
Selenium Grid extends Selenium RC to distribute your tests across multiple servers, saving you
time by running tests in parallel.
A test engineer can record and play back tests with Selenium IDE in Firefox. A QA engineer can use the Firefox, IE 7, Safari, and Opera browsers to run the actual tests in Selenium RC.
Several programming languages are supported by Selenium Remote Control: C#, Java, Perl, PHP, Python, and Ruby.
If a QA engineer compares Selenium with HP QTP or Micro Focus SilkTest, the tremendous cost savings are easy to notice: in contrast to an expensive SilkTest or QTP license, Selenium is absolutely free. Selenium allows writing and executing test cases in various programming languages, including C#, Java, Perl, Python, PHP, and even HTML. Selenium allows simple and powerful DOM-level testing and at the same time can be used in traditional waterfall or modern Agile environments. Selenium is definitely a great fit for continuous integration.
Selenium's weak points are a tricky setup, dreary error diagnosis, and the fact that it tests only web applications.
Using Selenium IDE, a QA tester can record a test to learn the syntax of Selenium IDE commands, or to check the basic syntax for a specific type of user interface. Keep in mind that the Selenium IDE recorder is not as clever as QA testers want it to be. A quality assurance team should never treat Selenium IDE as a "record, save, and run it" tool; always expect to rework recorded test cases to make them maintainable in the future.
As with other test automation tools such as SilkTest, HP QTP, Watir, and Canoo WebTest, Selenium allows you to record, edit, and debug test cases. However, there are several problems that seriously affect the maintainability of recorded test cases.
The most obvious problem is complex IDs for HTML elements. If IDs are auto-generated, recorded test cases may fail during playback. The workaround is to use XPath to find the required HTML element.
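To make this concrete, here is a minimal sketch (using Python's standard-library XML parser on hypothetical markup; the id and attribute values are invented) of why an XPath locator anchored on a stable attribute survives id regeneration while an id-based locator does not:

```python
import xml.etree.ElementTree as ET

# A fragment with an auto-generated, unstable id (hypothetical markup).
page = """
<form>
  <input name="q" type="text" />
  <button id="btn_84310" name="submit" type="submit">Search</button>
</form>
"""

root = ET.fromstring(page)

# Fragile: depends on the generated id, which changes between builds.
fragile = root.find(".//button[@id='btn_84310']")

# Robust: anchored on the stable name attribute instead.
robust = root.find(".//button[@name='submit']")

print(robust.text)  # Search
```

In Selenium, the robust variant would be passed as the locator `xpath=//button[@name='submit']` instead of the generated id.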
Selenium supports AJAX without problems, but a QA tester should be aware that Selenium does not know when an AJAX action has completed, so clickAndWait will not work. Instead, the tester could use pause, but the snowballing effect of several pause commands really slows down the total testing time.
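A common alternative to stacking pause commands is to poll for the condition the AJAX call should produce, up to a timeout. A hedged sketch of the idea (the `response_arrived` callable is a stand-in for a real check, such as an element becoming present):

```python
import time

def wait_for(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: simulate an AJAX response that "arrives" on the third poll.
calls = {"n": 0}
def response_arrived():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for(response_arrived, timeout=5.0, interval=0.01))  # True
```

The total wait adapts to how long the AJAX call actually takes, instead of always paying the worst-case sum of fixed pauses.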
Selenium appears to be the mainstream open-source tool for browser-side testing, but there are many alternatives. Canoo WebTest is a great Selenium alternative and is probably the fastest automation tool. Another alternative is Watir, but in order to use Watir a QA tester has to learn Ruby. One more alternative is Sahi, but it has a confusing interface and a small developer community.
Ans. Selenium is a set of tools that supports rapid development of test automation scripts for web-based applications. The Selenium testing tools provide a rich set of testing functions and an easy-to-use interface for developing and running individual test cases or entire test suites. Selenium IDE has a recording feature that keeps track of user actions as they are performed.
Q5. Can tests recorded using Selenium IDE be run in other browsers?
Ans. Yes. Although Selenium IDE is a Firefox add-on, tests created in it can also be run in other browsers by using Selenium RC (Selenium Remote Control) and specifying the name of the test suite.
Ans. 1. Intelligent field selection will use IDs, names, or XPath as needed.
2. It is a record-and-playback tool, and the scripts can be written in various languages.
Using a programming language gives flexibility and extensibility in developing test logic. For example, if the application under test returns a result set and the automated test program needs to run tests on each element in the result set, the programming language's iteration/loop support can be used to iterate through the result set.
Selenium RC provides an API and library for each of its supported languages. This ability to use Selenium RC with a high-level programming language to develop test cases also allows the automated testing to be integrated with a project's automated build environment.
Ans. Selenium Grid, part of the Selenium testing suite, allows the Selenium RC solution to scale for test suites that must be run in multiple environments. Selenium Grid can be used to run multiple instances of Selenium RC at the same time.
Ans. Selenium Grid sends the tests to the hub. The tests are then redirected to an available Selenium RC, which launches the browser and runs the test. This allows tests from the entire test suite to run in parallel.
Q 11. What would you say about the flexibility of the Selenium test suite?
Ans. The Selenium testing suite is highly flexible. There are multiple ways to add functionality to Selenium, and this is perhaps Selenium's strongest characteristic. Selenium Remote Control's support for multiple programming and scripting languages allows the test automation engineer to build any logic they need into their automated tests and to use a preferred programming or scripting language of their choice. Also, the Selenium testing suite is an open-source project, where the code can be modified and improved. Selenium is a good fit for testing in a continuous integration environment and is also useful for agile testing.
Q16. What are the advantages and disadvantages of using Selenium as a testing tool?
Ans. Advantages: free; simple and powerful DOM (Document Object Model) level testing; can be used in both traditional waterfall and modern Agile environments.
Disadvantages: tricky setup; dreary error diagnosis; cannot test client-server applications.
Only web applications can be tested using the Selenium testing suite, whereas QTP can also be used for testing client-server applications. Selenium supports the following web browsers: Internet Explorer, Firefox, Safari, Opera, and Konqueror on Windows, Mac OS X, and Linux. However, QTP is limited to Internet Explorer on Windows.
QTP uses a scripting language implemented on top of VBScript, whereas the Selenium test suite has the flexibility to use many languages, such as Java, .NET, Perl, PHP, Python, and Ruby.
Only web applications can be tested using the Selenium testing suite, whereas Silk Test can also be used for testing client-server applications. Selenium supports the following web browsers: Internet Explorer, Firefox, Safari, Opera, and Konqueror on Windows, Mac OS X, and Linux. However, Silk Test is limited to Internet Explorer and Firefox.
Silk Test uses the 4Test scripting language, whereas the Selenium test suite has the flexibility to use many languages.
Selenium tool
First of all, it is better to use Selenium RC to automate multiple test cases, using the language of your choice.
You have to follow the steps below when automating multiple test cases:
1. Choose your preferred language, e.g. Ruby, JavaScript, Perl, etc.
2. Create an automation framework in your selected language. While creating the framework, pay attention to things like logical data independence, reporting, error-log handling, a centralized library, etc.
3. Record scripts through Selenium IDE and convert them to your selected language via the IDE.
4. Create a function for each test script and put shared code into the centralized library file.
5. Create a driver file that defines the execution sequence for the test scripts.
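As a sketch of step 5, the driver file can be little more than an ordered list of test functions whose results are collected for reporting. The test names and bodies here are hypothetical placeholders, not real Selenium calls:

```python
def test_login():
    assert 1 + 1 == 2  # stand-in for real Selenium steps

def test_search():
    assert "selenium".upper() == "SELENIUM"

def run_suite(tests):
    """Run each test in sequence; collect (name, status) for reporting."""
    results = []
    for test in tests:
        try:
            test()
            results.append((test.__name__, "PASS"))
        except AssertionError:
            results.append((test.__name__, "FAIL"))
    return results

# The driver: the list order is the execution sequence.
for name, status in run_suite([test_login, test_search]):
    print(name, status)
```

A real driver would import the test functions from the per-script modules of step 4 and write the results to the framework's report.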
1) Understand the basics first:
- Basics of web testing
- How Selenium works: http://seleniumhq.org/about/how.html
Jason Huggins, one of the founders of Selenium, spoke at the Agile Developer user group meeting recently.
Matt Raible of DZone gives a good summary of Jason’s comments.
Among them:
• Selenium started at Thoughtworks. They were challenged to fix an Ajax bug in their expense
reporting system. JWebUnit, HtmlUnit, JsUnit, Driftwood, and FIT did not meet their needs. They
invented Selenese as a notation for tests first.
• Selenium Core, as a JavaScript embedded test playing robot, came next. Then Selenium RC and
Grid.
• Selenium test playback is slow. Parallelization can solve some of the slowness problems.
• JavaScript sandbox, Flash, Applets, Silverlight, and HTML 5’s Canvas all present problems in
Selenium.
PushToTest integrated Selenium into TestMaker a couple of years ago. Selenium works very well for
testing Ajax applications. And, TestMaker runs Selenium tests as functional tests, load and performance
tests, and business service monitors. TestMaker runs these tests in your QA lab, in the Cloud, or both. See
http://www.pushtotest.com/products/cloudtesting
What Selenese command can be used to display the value of a variable in the log file, which can be
very valuable for debugging?
If one wanted to display the value of a variable named answer in the log file, what would the first
argument to the previous command look like?
Where did the name "Selenium" come from?
Which Selenium command(s) simulates selecting a link?
Which two commands can be used to check that an alert with a particular message popped up?
What does a comment look like in Column view?
What does a comment look like in Source view?
What are Selenium tests normally named (as displayed at the top of each test when viewed from within
a browser)?
What command simulates selecting the browser's Back button?
If the Test Case frame contains several test cases, how can one execute just the selected one of those
test cases?
What globbing functionality is NOT supported by SIDE?
What is wrong with this character class range? [A-z]
What are four ways of specifying an uppercase or lowercase M in a Selenese pattern?
What does this regular expression match?
regexp:[1-9][0-9],[0-9]{3},[0-9]{3}
regexp:[13579][02468]
regexp:August|April 5, 1908
What Selenium regular expression pattern can be used instead of the glob below to produce the same
results?
verifyTextPresent | glob:9512?
What Selenium globbing pattern can be used instead of the regexp below to produce the same
results?
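The patterns in the questions above translate directly into Python's `re` module, which can be handy for checking answers: the Selenese glob `?` corresponds to the regular-expression `.` (exactly one character), and `[A-z]` is a trap because ASCII places several punctuation characters between `Z` and `a`:

```python
import re

# regexp:[1-9][0-9],[0-9]{3},[0-9]{3} -- comma-grouped 8-digit numbers
print(bool(re.fullmatch(r"[1-9][0-9],[0-9]{3},[0-9]{3}", "12,345,678")))  # True

# regexp:[13579][02468] -- an odd digit followed by an even digit
print(bool(re.fullmatch(r"[13579][02468]", "34")))  # True

# glob:9512? is equivalent to regexp:9512. (? matches any single character)
print(bool(re.fullmatch(r"9512.", "95123")))  # True
print(bool(re.fullmatch(r"9512.", "9512")))   # False -- ? requires one char

# [A-z] is wrong: in ASCII, characters such as '[' and '_' sit between Z and a.
print(bool(re.fullmatch(r"[A-z]", "_")))  # True -- unintended match
```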
Functional (black-box) testing is a methodology for testing the behaviour of an application from the viewpoint of its functions. It validates many aspects, ranging from the aesthetics of the front end, to navigation within and between pages and forms, compliance of fields, buttons, and bars with the technical specifications, permissions for data entry, queries, and modifications, management of parameters, management of the modules that constitute the application, and the other conditions that make up the various features the system is expected to provide so that the end user can operate it normally and correctly.
To meet this objective, the tester must choose a set of inputs under certain predefined conditions within a given context, and check whether the outputs are correct or incorrect according to the expected results defined in advance between the parties (customer/supplier).
This form of testing exercises the application "from the outside", which is why it is called "black-box testing": the tests do not cover the internal paths followed by the program's procedures.
Although there are many tools for this kind of testing, the one covered here, and in future articles, is Selenium.
Selenium works directly in the web browser. Its installation is simple and its handling so intuitive that you can quickly define and configure a test case: record a journey through a page, save the sequence of steps as a test script, and then play it back whenever you want.
Selenium is an open-source tool that not only allows system testing but also facilitates acceptance testing of web applications.
It integrates with Firefox, and includes the ability to write tests directly in Java, C#, Python, and Ruby.
The solution has three basic tools: one to record a sequence of steps within a website, one to simulate the test in different browsers, and one for automated test generation.
Selenium IDE is a plug-in for Firefox which allows you to record and execute scripts directly from your browser.
Selenium RC is a library and server written in Java that allows you to run scripts locally or remotely through commands.
Selenium Grid allows coordinating multiple Selenium servers in order to run scripts on multiple platforms and devices at the same time.
Metrics are the means by which software quality can be measured; they give you confidence in the product. You may consider these product management indicators, which can be either quantitative or qualitative. They typically provide the visibility you need.
The goal is to choose metrics that will help you understand the state of your product.
A risk is the potential for loss or damage to an organization from materialized threats. Risk analysis attempts to identify all the risks and then quantify their severity. A threat, as we have seen, is a possible damaging event; if it occurs, it exploits a vulnerability in the security of a computer-based system.
Risk Identification:
1. Software Risks: Knowledge of the most common risks associated with Software development, and the
platform you are working on.
2. Business Risks: Most common risks associated with the business using the Software
3. Testing Risks: Knowledge of the most common risks associated with Software Testing for the platform
you are working on, tools being used, and test methods being applied.
4. Premature Release Risk: Ability to determine the risk associated with releasing unsatisfactory or untested software products.
5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing
and operating information technology, products and process; assessing their likelihood, and initiating
strategies to test those risks.
Traceability means that you would like to be able to trace back and forth how and where any work product
fulfills the directions of the preceding (source-) product. The matrix deals with the where, while the how
you have to do yourself, once you know the where.
Take e.g. the Requirement of User Friendliness (UF). Since UF is a complex concept, it is not solved by just
one design-solution and it is not solved by one line of code. Many partial design-solutions may contribute
to this Requirement and many groups of lines of code may contribute to it.
A Requirements-Design Traceability Matrix puts on one side (e.g. the left) the sub-requirements that together are supposed to satisfy the UF requirement, along with the other (sub-)requirements. On the other side (e.g. the top) you specify all design solutions. Now, at the cross-points of the matrix, you can mark which design solutions address (more, or less) each requirement. If a design solution does not address any requirement, it should be deleted, as it is of no value.
Having this matrix, you can check whether any requirement has at least one design solution and by
checking the solution(s) you may see whether the requirement is sufficiently solved by this (or the set of)
connected design(s).
If you have to change any requirement, you can see which designs are affected. And if you change any
design, you can check which requirements may be affected and see what the impact is.
In a Design-Code Traceability Matrix you can do the same to keep trace of how and which code solves a
particular design and how changes in design or code affect each other.
Prevents delays in the project timeline, which can be brought about by having to backtrack to fill the gaps.
The product of the above two parameters will give you the risk-exposure factor.
Risks are always measured against the business requirements for which the software has been created. So it is always very important to understand the risks that can affect the client's business, and their impact. Testers need inputs from the client as well as the developers in this regard. What you may consider a risk from a testing point of view may not be seen as one by the client. So it is necessary to associate a risk level, and also a priority, with each requirement. This will be very beneficial when suggesting a go/no-go for production.
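The risk-exposure figure mentioned above is conventionally the product of likelihood and impact; here is a minimal sketch of ranking requirements by exposure so the riskiest are tested first (the requirement names, probabilities, and costs are invented for illustration):

```python
def risk_exposure(likelihood, impact):
    """Risk exposure = probability of the risk (0-1) x cost of its impact."""
    return likelihood * impact

# Hypothetical requirements with client-supplied likelihood and impact.
requirements = [
    ("REQ-PAYMENT", 0.30, 100_000),
    ("REQ-REPORTING", 0.60, 5_000),
    ("REQ-LOGIN", 0.10, 50_000),
]

# Test the highest-exposure requirements first.
ranked = sorted(requirements, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)
for name, p, cost in ranked:
    print(name, risk_exposure(p, cost))
```

Note how a requirement with a low failure probability but a high impact cost (REQ-LOGIN) can still outrank a likelier but cheaper one.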
Software testing is not an activity to take up only when the product is ready. Effective testing begins with a proper plan starting from the user-requirements stage itself. Software testability is the ease with which a computer program can be tested. Metrics can be used to measure the testability of a product. The requirements for effective testing are given in the following sub-sections.
Operability:
The better the software works, the more efficiently it can be tested.
•The system has few bugs (bugs add analysis and reporting overhead to the test process)
•No bugs block the execution of tests
•The product evolves in functional stages (allows simultaneous development & testing)
Observability:
Controllability:
The better the software is controlled, the more the testing can be automated and optimized.
Decomposability:
By controlling the scope of testing, problems can be isolated quickly, and smarter testing can be
performed.
Simplicity:
Stability:
Understandability:
Traditional Practices
There is no simple answer for this. The 'best approach' is highly dependent on the particular organization
and project and the experience of the personnel involved.
For example, given two software projects of similar complexity and size, the appropriate test effort for one
project might be very large if it was for life-critical medical equipment software, but might be much smaller
for the other project if it was for a low-cost computer game. A test estimation approach that only
considered size and complexity might be appropriate for one project but not for the other.
Manual software testing is a necessity, and an unavoidable part of the software product
development process. How much testing you do manually, as compared to using test automation, can
make the difference between a project’s success and failure. We will discuss test automation in more
detail in a later chapter, but the top five pitfalls of manual software testing illuminate areas where
improvements can be made. The pitfalls are listed and described below.
1. Manual testing is slow and costly. Because it is very labor-intensive, it takes a long time to
complete tests. To try to accelerate testing, you may increase the headcount of the test organization. This
increases the labor as well as the communication costs.
2. Manual tests don’t scale well. As the complexity of the software increases, the complexity of the
testing problem grows exponentially. If tests are detailed and must be performed manually, performing
them can take quite a bit of time and effort. This leads to an increase in the total time devoted to testing
as well as the total cost of testing. Even with these increases in the time and cost, the test coverage goes
down as the complexity goes up because of the exponential growth rate.
3. Manual testing is not consistent or repeatable. Variations in how the tests are performed are
inevitable, for various reasons. One tester may approach and perform a certain test differently from
another, resulting in different results on the same test, because the tests are not being performed
identically. As another example, if there are differences in the location a mouse is pointed when its button
is clicked, or how fast operations are performed, these could potentially produce different results.
4. Lack of training is a common problem, although not unique to manual software testing. The staff
should be well-trained in the different phases of software testing:
– Test design
– Test execution
– Test result evaluation
5. Testing is difficult to manage. There are more unknowns and greater uncertainty in testing than in
code development. Modern software development practices are well-structured, but if you don’t have
sufficient structure in testing, it will be difficult to manage. Consider a case in which the development
phase of a project schedule slips. Since manual software testing takes more time, more resources, and is
costly, that schedule slip can be difficult to manage. A delay in getting the software to the test team on
schedule can result in significant wasted resources. Manual testing, as well as badly designed automated testing, is also not agile. Therefore, changes in test focus or product requirements make these efforts even more difficult to manage.
The concept of a Traceability Matrix is very important from the testing perspective. It is a document which maps requirements to test cases. By preparing a traceability matrix, we can ensure that we have covered all the required functionalities of the application in our test cases.
• It is a method for tracing each requirement from its point of origin, through each development
phase and work product, to the delivered product
• Can indicate through identifiers where the requirement is originated, specified, created, tested,
and delivered
• Will indicate for each work product the requirement(s) this work product satisfies
• Facilitates communications, helping customer relationship management and commitment
negotiation
The traceability matrix answers the following basic questions of any software project:
• How is it possible to ensure, for each phase of the lifecycle, that I have correctly accounted for all
the customer’s needs?
• How can I ensure that the final software product will meet the customer's needs? For example, suppose there is a requirement that when an invalid password is entered in the password field, the application throws the error message "Invalid password". Only the traceability matrix lets us make sure this requirement is captured in a test case.
• Demonstrate to the customer that the requested contents have been developed
• Ensure that all requirements are correct and included in the test plan and the test cases
• Ensure that developers are not creating features that no one has requested
• The system that is built may not have the necessary functionality to meet the customers' and users' needs and expectations. How do we identify the missing parts?
• If there are modifications in the design specifications, there is no means of tracking the changes
• If there is no mapping of test cases to the requirements, it may result in missing a major defect in
the system
• The completed system may have “Extra” functionality that may have not been specified in the
design specification, resulting in wastage of manpower, time and effort.
• If the code component that constitutes the customer’s high priority requirements is not known,
then the areas that need to be worked first may not be known thereby decreasing the chances of
shipping a useful product on schedule
• A seemingly simple request might involve changes to several parts of the system, and if a proper traceability process is not followed, the work needed to satisfy the request may not be correctly evaluated
step1: Identify all the testable requirements at a granular level from the various requirement specification documents. These documents vary from project to project. Typical requirements you need to capture are as follows:
Use cases (all the flows are captured)
Error messages
Business rules
Functional rules
SRS
FRS
And so on…
step2: In every project you will be creating test cases to test the functionality defined by the requirements. In this case you want to extend the traceability to those test cases. In the example table below, the test cases are identified with a TC_ prefix.
Put all those requirements in the top row of a spreadsheet, and use the right-hand column of the spreadsheet to jot down all the test cases you have written for each particular requirement. In most cases you will have multiple test cases written to test one requirement. See the sample spreadsheet below:
step3: Put a cross against each test case and requirement if that particular test case checks that requirement partially or completely. In the table above you can see that REQ1 UC1.1 is checked by three test cases (TC1.1.1, TC1.1.3, TC1.1.5).
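The same matrix can be modeled as a simple mapping from requirements to test cases, which makes coverage gaps trivial to detect. The IDs below follow the REQ/TC_ naming of the example but are otherwise invented:

```python
# Requirement -> test cases covering it, mirroring the spreadsheet layout.
matrix = {
    "REQ1 UC1.1": ["TC1.1.1", "TC1.1.3", "TC1.1.5"],
    "REQ1 UC1.2": ["TC1.2.1"],
    "REQ2 UC2.1": [],  # no coverage yet
}

# Any requirement with no test case against it is a coverage gap.
uncovered = [req for req, tcs in matrix.items() if not tcs]
print(uncovered)  # ['REQ2 UC2.1']
```

This is the programmatic equivalent of scanning the spreadsheet for a row with no crosses.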
Another example of traceability matrix where requirement documents (use case) are mapped back to the
test cases.
A test strategy describes the approach, objectives, and direction of the test effort. The purpose of a testing strategy or method is to minimize risk and ultimately deliver the best software to the client. The testing strategy chosen for a particular application can vary depending on the software, the amount of use, and the objectives. For example, the testing strategy for a transactional system like Oracle will be very different from the strategy developed to test an analytical tool such as a data warehouse. Likewise, testing a campus-wide purchasing system versus a tool housed with a limited number of users requires very different test strategies. Because some of these examples have higher exposure, they also carry higher risk.
These steps are also called stages or levels. The project manager should review the steps below and consider the terminology and sequence. Where it makes sense, certain phases and tasks may be deleted; in other cases, tasks and phases may be added. Some tasks may be performed in parallel, and some steps can be combined. In most cases, each phase must be completed before another can begin.
The duration of the tasks varies depending on the timing, and on how much risk the project manager is ready to absorb.
Test Preparation Phase (before testing begins)
Task - Develop test strategy
Task - Develop high-level test plan
Task - Identify the test cases
Task - Develop scenarios and test scripts
Task - Identify and share test data
Task - Identify the processes, procedures, standards, and documentation requirements
Task - Identify and create the test environment
Task - Identify test team(s)
Task - Train testers
Unit test phase - The purpose of this phase is to verify and validate that the modules function correctly. It is completed by the developers and must be finished before later phases can begin. The testing manager is not normally involved in this phase.
CRP phase (Conference Room Pilot - optional) - The purpose of this phase is to verify proof of concept. A CRP is generally necessary for new, large projects.
Assumption - The test instance is ready
Assumption - Metadata has been inserted into the test instance
Assumption - Unit tests and simulations have been completed
Assumption - Test scenarios have been identified (scripted or ad hoc)
Task - Identify CRP participants
Task - Determine and establish CRP logistics
Task - Define expectations
Task - Start the CRP
Task - Collect and document feedback
Task - End the CRP
Task - Obtain phase approval / sign-off
Task - Collect / share / integrate lessons learned; incorporate the necessary changes
Task - Tune / revise and approve the new test plan
Integration testing phase - The purpose of this phase is to verify and validate that all the modules interface and work together.
Assumption - Requirements are frozen and the design is determined
Assumption - The application is ready for integration tests
Assumption - Metadata has been populated in the test tables
Assumption - Unit testing is complete
Task - Test the system and document results using the test scripts
Task - Test interfaces
Task - Identify and report bugs
Task - Retest fixed bugs / regression test
Task - Test security
Task - Test browsers / platforms / operating systems
Task - Obtain phase approval / sign-off
Task - Collect / share / integrate lessons learned
Task - Tune / revise and approve the new test plan
System test phase - The purpose of this phase is to verify and validate that the system works as if it were in production.
One of the key reasons for doing automated testing is to ensure that time is not spent on repetitive tasks that can be completed by tools without human intervention. Automation can be one of the most effective tools in your toolbox, but it is not a silver bullet that will solve all problems and improve quality. Automation tools are obedient servants, and as testers we need to become their masters and use them properly to realize their full potential. It is very important to understand that automation tools are only as good as the way we use them. Converting test cases from manual to automated is not the best use of automation tools; they can be used in much more effective ways.
Creating a robust and useful test automation framework is a very difficult task. In the web world, this task becomes even more difficult because things might change overnight. So-called best practices of automation taken from stable desktop applications are not suitable in a web environment and will probably have a negative impact on the project's quality.
Many problems in the web world are identical to one another. For example, irrespective of the web application, we always need to validate things such as the presence of a title on all pages. Depending on your context, it may be the presence of metadata on every page, the presence of tracking code, the presence of ad code, the size and number of advertising units, and so on.
The solution presented in this article can be used to validate all, or any, of the rules mentioned above across all the pages of any domain/website. We were given a mandate to ensure that a specific tracking code is present on all the pages of a big website. In true agile fashion, once this problem was solved, the solution was extended and refactored to incorporate many rules on all the pages.
This solution was developed using Selenium Remote Control with Python as the scripting language. One of the main reasons for using tools such as Selenium RC is that they allow us to code in any language, which lets us utilize the full power of a standard language. For this solution, a Python library called Beautiful Soup was used to parse the HTML pages. The solution was later ported to another tool called Twill to make it faster; since the initial code was also developed in Python, converting it to Twill was a piece of cake.
Essentially this solution/script is a small web crawler, which visits all the pages of a website and validates certain rules. As mentioned earlier, the problem statement is very simple: "Validate certain rules on every web page of any given website." To achieve this, the following steps were followed:
1. Get a page.
2. Validate the rules on that page.
3. Get the first link, and if the link is not external and the crawler has not visited it, open it.
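The crawl loop above can be sketched with the standard library alone (the article's actual solution used Beautiful Soup and Selenium RC; this stand-in stubs out fetching with an in-memory dictionary and only shows the visited-set and external-link logic):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkParser(HTMLParser):
    """Collect href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, validate):
    """Visit every internal page once, running `validate` on each."""
    domain = urlparse(start_url).netloc
    to_visit, visited = [start_url], set()
    while to_visit:
        url = to_visit.pop()
        if url in visited:
            continue
        visited.add(url)
        html = fetch(url)          # e.g. urllib or Selenium in real use
        validate(url, html)        # apply the page rules here
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == domain:  # skip external links
                to_visit.append(absolute)
    return visited

# Tiny in-memory "site" standing in for real HTTP fetches.
site = {
    "http://example.com/": '<a href="/a">A</a><a href="http://other.com/">X</a>',
    "http://example.com/a": '<a href="/">home</a>',
}
pages = crawl("http://example.com/", site.get, lambda url, html: None)
print(sorted(pages))
```

The `validate` callback is where the rule checks from the list below would plug in.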
It is worth mentioning that the rules that can be validated using this framework are those that can be checked by looking at the source code of the page. Some of the rules that can be validated using this script are:
1. Make sure that a title is present on every page and is not generic.
2. Check for the presence of meta tags such as keywords and description on all pages.
3. Ensure that instrumentation code is present on all pages.
4. Ensure that every image has alternative text associated with it.
5. Ensure that ad code comes from the right server and carries all the relevant information we need.
6. Ensure that the banners and skyscrapers used for advertisements are properly sized.
7. Ensure that every page contains at least two advertisements and that no page (except the home page) has more than four.
8. Ensure that the master CSS is applied on all pages of a given domain.
9. Make sure that all styles come from the CSS files and that no inline styles are present on any
element of a web page.
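Several of these rules (1, 2, 4, and 9) can be checked directly from the page source. A minimal sketch using only the standard library's `html.parser` (the article used Beautiful Soup; the generic-title list here is an assumption):

```python
from html.parser import HTMLParser

class PageAuditor(HTMLParser):
    """Collects the facts the page-level rules need."""
    def __init__(self):
        super().__init__()
        self.title_parts = []
        self.meta_names = set()
        self.images_missing_alt = 0
        self.inline_styles = 0
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and "name" in attrs:
            self.meta_names.add(attrs["name"].lower())
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1          # rule 4
        if "style" in attrs:
            self.inline_styles += 1               # rule 9

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title_parts.append(data)

def audit(html, generic_titles=("untitled", "home")):
    """Return a list of rule failures for one page's HTML source."""
    auditor = PageAuditor()
    auditor.feed(html)
    title = "".join(auditor.title_parts).strip()
    failures = []
    if not title or title.lower() in generic_titles:
        failures.append("rule 1: missing or generic title")
    for name in ("keywords", "description"):
        if name not in auditor.meta_names:
            failures.append(f"rule 2: missing meta {name}")
    if auditor.images_missing_alt:
        failures.append(f"rule 4: {auditor.images_missing_alt} image(s) without alt text")
    if auditor.inline_styles:
        failures.append(f"rule 9: {auditor.inline_styles} inline style attribute(s)")
    return failures
```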
The list above should give you some idea of what can be achieved with this approach. The list can
be extended very easily; it is limited only by your imagination :)
In the next article, we will look at the code snippets and explain how easily these rules can be customized
and validated across all the pages on any given domain.
Impact Analysis Checklist for Requirements Changes
❏ Identify any existing requirements in the baseline that conflict with the proposed change.
❏ Identify any other pending requirement changes that conflict with the proposed change.
❏ What are possible adverse side effects or other risks of making the proposed change?
❏ Will the proposed change adversely affect performance requirements or other quality attributes?
❏ Will the change affect any system component that affects critical properties such as safety and
security, or involve a product change that triggers recertification of any kind?
❏ Is the proposed change feasible within known technical constraints and current staff skills?
❏ Will the proposed change place unacceptable demands on any computer resources required for the
development, test, or operating environments?
❏ How will the proposed change affect the sequence, dependencies, effort, or duration of any tasks
currently in the project plan?
❏ Will prototyping or other user input be required to verify the proposed change?
❏ How much effort that has already been invested in the project will be lost if this change is accepted?
❏ Will the proposed change cause an increase in product unit cost, such as by increasing third-party
product licensing fees?
❏ Will the change affect any marketing, manufacturing, training, or customer support plans?
❏ Identify any changes, additions, or deletions required in reports, databases, or data files.
❏ Identify the source code files that must be created, modified, or deleted.
❏ Identify existing unit, integration, system, and acceptance test cases that must be modified or
deleted.
❏ Estimate the number of new unit, integration, system, and acceptance test cases that will be required.
❏ Identify any help screens, user manuals, training materials, or other documentation that must be
created or modified.
❏ Identify any other systems, applications, libraries, or hardware components affected by the change.
❏ Identify any impact the proposed change will have on the project’s software project management
plan, software quality assurance plan, software configuration management plan, or other plans.
❏ Quantify any effects the proposed change will have on budgets of scarce resources, such as memory,
processing power, network bandwidth, real-time schedule.
❏ Identify any impact the proposed change will have on fielded systems if the affected component is not
perfectly backward compatible.
Task | Effort (Labor Hours)
Procedure:
1. Identify the subset of the above tasks that will have to be done.
2. Allocate resources to tasks.
3. Estimate effort required for pertinent tasks listed above, based on assigned resources.
4. Total the effort estimates.
5. Sequence tasks and identify predecessors.
6. Determine whether change is on the project’s critical path.
Prioritization Estimates:
Relative Benefit: (1-9)
Relative Penalty: (1-9)
Relative Cost: (1-9)
Relative Risk: (1-9)
Calculated Priority: (relative to other pending requirements)
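The worksheet above does not fix a formula. One common weighting (in the style of Karl Wiegers' requirements prioritization scheme; treating it as the formula here is an assumption) divides value by cost plus risk:

```python
def calculated_priority(benefit, penalty, cost, risk):
    """One common weighting: value (benefit + penalty) divided by the
    sum of cost and risk. All four inputs are the 1-9 estimates above."""
    return (benefit + penalty) / (cost + risk)

# Rank pending requirement changes, highest priority first (hypothetical data).
changes = {
    "change-A": (9, 7, 3, 2),   # high value, cheap, low risk
    "change-B": (4, 2, 8, 6),   # modest value, expensive, risky
}
ranked = sorted(changes, key=lambda c: calculated_priority(*changes[c]), reverse=True)
```

The ranking is relative to the other pending requirements, as the worksheet says, so only the ordering matters, not the absolute numbers.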
It has always been a source of bafflement (technical term) that, within the testing domain,
walkthroughs and peer reviews are not more widely practiced. I recall being 'invited' to a code review
where a developer went through their code line by line in a Dolby Pro Logic monotone. It was almost as
painful as the many walkthroughs of test plans I have subjected folks to.
What I took away from the meeting was how incredibly useful and interesting it 'could' have been. Here
was the opportunity to have code explained line by line, to be given a live narration of the thinking,
logic and reasoning behind what had been created. What's more, our own views and opinions could be
incorporated into what had been made.
If this were some contemporary artist or author narrating one of their works, we would be fascinated
and consider ourselves fortunate to be able to influence it. But it's just some bloke we work with
and it's code, so we fall asleep.
The problem is it can be hard to get excited about code, even harder to get excited about someone talking
about it! The reality is most Walkthroughs and Code Reviews are brain numbingly boring.
You've probably heard of and been subjected to Walkthroughs and Code Reviews of various types, the idea
that an author of something (code, plans, schedules, etc.) will sit down with an interested group and walk
them through, line by line, clause by clause, page by page, explaining what was written and why.
Occasionally asking for input, "all ok?", "Seem to make sense?" so that after say 30 minutes or maybe an
hour everyone has found they're not interested anymore and are half asleep. Actually, probably asleep. It
makes me feel tired just describing it!
Peer Code Reviews on the other hand are meant to be snappier, more active and energetic. Think more in
terms of having produced a usable piece(s) of code, say a set of core functions or an interface onto those
APIs your buddy wrote. Then with this small chunk in hand get it reviewed. The review is say 10 minutes
long, you're doing a Walkthrough but it's a lively narrative.
Why not get Peer Test Review of those 20 or so Test Cases you just put together for that module?
Incrementally delivering Ready for Execution Test Cases is a great way to help the likes of Project
Managers feel relaxed that we're making progress on our planning and prep.
Doing the same with Test Plans, Designs, Breakdowns or whatever other artefacts you produce is also a
win. This lightweight approach achieves our objectives but stops us getting bogged down in heavyweight
process.
Follow the above best practices and keep the event lively. If you really must book a meeting to
coordinate the review with several people at the same time, that's OK; just go a little overboard with your
presentation. Print copies in colour if you can, or let folks access the document on their laptops to save trees. Use the
projector to make viewing easier, and create slides that are already annotated or can be 'written on': whatever
keeps the energy levels up.
Regression testing is often seen as an area in which companies hesitate to allocate resources. We often
hear statements such as: "The developer said the defect is fixed. Do we need to test it again?" And the
answer should be: "Well, the developer probably said the product had no defects to begin with." The truth
of the matter is, in today's world of extremely complex devices and software applications, the quantity and
quality of regression testing performed on a product are directly proportional to the commitment vendors
have to their customer base. This does not mean that the more regression testing, the better. It simply
means that we must make sure that regression testing is done in the right amount and with the right
approach.
1. What do we test?
2. When do we test it?
The purpose of this article is to outline a few techniques that will help us answer these questions. The first
issue we should consider is the fact that it is not necessary to execute our regression at the end of our
testing cycle. Much of the regression effort can be accomplished simultaneously to all other testing
activities. The supporting assumption for this approach is:
"We do not wait until all testing is done to fix our defects."
Therefore, much of the regression effort can be accomplished long before the end of the project, if the
project is of reasonable length. If our testing effort will only last one week, the following techniques may
have to be modified. However, it is not usual for a product to be tested in such a short period of time.
Furthermore, as you study the techniques outlined below, you will see that as the project's length
increases, the benefits offered by these techniques also increase.
To answer the questions of what we should test and when, we will begin with a simple suite of ten tests. In
the real world, this suite would obviously be much larger, and not necessarily static, meaning that the
number of tests can increase or decrease as the need arises. After our first test run with the first beta
(which we will call "Code Drop 1") of our hypothetical software product, our matrix looks like this.
In the matrix above, we have cross-referenced the defects we found, with the tests that caused them. As
you can see, defect number 1 was caused by test 2, but it also occurred on test 3. The remaining failures
caused unique defects.
As we prepare to execute our second test run (Code Drop 2), we must decide what tests will be executed.
The rules we will use only apply to our regression effort. There are rules we can apply to the subset of
tests that have passed, in order to find out which ones we should re-execute. However, that will be the
topic of another article.
The fundamental question we must now ask is: "Have any of the defects found been fixed?" Let us
suppose that defects 1, 2, and 3 have, in fact, been reported as fixed by our developers. Let us also
suppose that three more tests have been added to our test suite. After "Code Drop 2", our matrix looks as
follows:
Of the tests that previously failed, only the tests that were associated with defects that were supposedly
fixed were executed. Test number 9, which caused defect number 4, was not executed on Code Drop 2,
because defect number 4 is not fixed.
We chose not to execute tests that had passed on Code Drop 1. This may often not be the case, since
turmoil in our code or the area's importance (such as a new feature, an improvement to an old feature, or
a feature as a key selling point of the product) may prompt us to re-execute these tests.
This simple, but efficient approach ensures that our matrix will never look like the matrix below (in order to
more clearly show the problem, we will omit the Defect # column after each code drop). We will also
consider Code Drop 5 to be our final regression pass.
We will address tests 2, 7, and 9 later, but here are a few key points to notice about this matrix:
Why were tests 1, 4, 5, 6, 10, 11, and 12 executed up to five times? They passed every single time.
Why were tests 3 and 8 executed up to five times? They first failed and were fixed. Did they need to be
executed on every code drop after the failure?
If test 13 failed, was the testing team erroneously told it had been fixed on each code drop? If not, why
was it executed four times with the same result? We can also ask the question: "Why isn't it fixed?" But we
will not concern ourselves with that issue, since we are only addressing the topic of regression.
In conclusion, we will list some general rules we can apply to our testing effort that will ensure
our regression efforts are justified and accurate. These rules are:
1. A test that has passed twice should be considered as regressed, unless turmoil in the code (or other
reasons previously stated, such as a feature's importance) indicates otherwise. By this we mean that the
only time a test should be executed more than twice is if changes to the code in the area the test
exercises (or the importance of the particular feature) justify sufficient concerns about the test's state or
the feature's condition.
2. A test that has failed once should not be re-executed unless the developer informs the test team that
the defect has been fixed. This is the case for tests 7 and 9. They should not have been re-executed until
Code Drops 4 and 5 respectively.
3. We must implement accurate algorithms to find out what tests that have already passed once should be
re-executed, in order to be aware of situations such as the one of test number 2. This test passed twice
after its initial failure and it failed again on Code Drop 4. Just as an additional note of caution: "When in
doubt, execute."
4. For tests that have already passed once, the second execution should be reserved for the final
regression pass, unless turmoil in the code indicates otherwise, or unless we do not have enough tests to
execute. However, we must be careful. Although it is true that this allows us to get some of the regression
effort out of the way earlier in the project, it may limit our ability to find defects introduced later in the
project.
5. The final regression pass should not consist of more than 30% to 40% of the total number of tests in our
suite. This subset should be allocated using the following priorities:
a. All tests that have failed more than once. By this we mean the tests that failed, the developer reported
them as fixed, and yet they failed again either immediately after they were fixed or some time during the
remainder of the testing effort.
b. All tests that failed once and then passed, once they were reported as fixed.
c. All, or a carefully chosen subset of the tests that have passed only once.
d. If there is still room to execute more tests, execute any other tests that do not fit the criteria above but
you feel should nevertheless be executed.
These common sense rules will ensure that regression testing is done smartly and in the right amount. In
an ideal world, we would have the time and the resources to test our product completely. Nevertheless,
today's world is a world of tight deadlines and even tighter budgets. Wise resource expenditure today will
ensure our ability to continue to develop reliable products tomorrow.
Exception or error handling refers to the anticipation, detection, and resolution of programming,
application, and communications errors. Specialized programs, called error handlers, are available for
some applications. The best programs of this type forestall errors if possible, recover from them when they
occur without terminating the application, or (if all else fails) gracefully terminate an affected application
and save the error information to a log file.
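A minimal error handler of the kind described might look like this in Python. The logger configuration is an assumption; in production you would attach a `FileHandler` so the error information survives in a log file:

```python
import logging

# Stream handler is a stand-in; swap in logging.FileHandler("errors.log")
# to persist error information as the paragraph above describes.
logger = logging.getLogger("error_handler_demo")
logger.addHandler(logging.StreamHandler())

def run_guarded(task, *args, fallback=None):
    """Run a task; on failure, record the full traceback and return a
    fallback value instead of terminating the application."""
    try:
        return task(*args)
    except Exception:
        logger.exception("task %r failed", getattr(task, "__name__", task))
        return fallback
```

This pattern recovers from the error without terminating the application, while the logged traceback preserves the debugging information.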
In programming, a development error is one that can be prevented. Such an error can occur in syntax or
logic. Syntax errors, which are typographical mistakes or improper use of special characters, are handled
by rigorous proofreading. Logic errors, also called bugs, occur when executed code does not produce the
expected or desired result. Logic errors are best handled by meticulous program debugging.
This can be an ongoing process that involves, in addition to the traditional debugging routine, beta testing
prior to official release and customer feedback after official release.
A run-time error takes place during the execution of a program, and usually happens because of adverse
system parameters or invalid input data. An example is the lack of sufficient memory to run an application
or a memory conflict with another program. On the Internet, run-time errors can result from electrical
noise, various forms of malware or an exceptionally heavy demand on a server. Run-time errors can be
resolved, or their impact minimized, by the use of error handler programs, by vigilance on the part of
network and server administrators, and by reasonable security countermeasures on the part of Internet
users. In runtime engine environments such as Java or .NET there exist tools that attach to the runtime
engine and every time that an exception of interest occurs they record debugging information that existed
in memory at the time the exception was thrown (call stack and heap values). These tools are called
Automated Exception Handling or Error Interception tools and they provide 'root-cause' information for
exceptions.
Usage
- Error-handling testing determines the ability of the application system to process incorrect transactions properly.
- In some systems, approximately 50% of the programming effort is devoted to handling error conditions.
Objective
- Determine that accountability for processing errors has been assigned and that the procedures provide a high
probability that errors will be properly corrected.
How to Use:
- A group of knowledgeable people is required to anticipate what can go wrong in the application system.
- All those knowledgeable about the application need to assemble to integrate their knowledge of the user
area, auditing, and error tracking.
- Logical test error conditions should then be created based on this assimilated information.
When to Use:
- Throughout SDLC.
- The impact of errors should be identified, and corrections made, to reduce errors to an acceptable level.
Example:
- Create a set of erroneous transactions, enter them into the application system, and find out whether
the system is able to identify the problems.
- Using iterative testing, enter transactions and trap errors. Correct them. Then enter transactions with
errors that were not present in the system earlier.
The point of exception handling routines is to ensure that the code can handle error conditions. In order to
establish that exception handling routines are sufficiently robust, it is necessary to present the code with a
wide spectrum of invalid or unexpected inputs, such as those created via software fault injection and
mutation-based fuzz testing. One of the most difficult types of
software for which to write exception handling routines is protocol software, since a robust protocol
implementation must be prepared to receive input that does not comply with the relevant specification(s).
In order to ensure that meaningful regression analysis can be conducted throughout a software
development lifecycle process, any exception handling verification should be highly automated, and the
test cases must be generated in a scientific, repeatable fashion. Several commercially available systems
exist that perform such testing.
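A toy illustration of mutation-based fuzzing against an exception handling routine. The byte-flip mutator and the convention that a deliberate `ValueError` means "input correctly rejected" are assumptions for the sketch:

```python
import random

def mutate(payload, n_flips=3, seed=None):
    """Make an invalid variant of a valid input by flipping bytes --
    a minimal mutation step."""
    rng = random.Random(seed)
    data = bytearray(payload)
    for _ in range(n_flips):
        data[rng.randrange(len(data))] ^= 0xFF
    return bytes(data)

def fuzz(parse, valid_inputs, rounds=100):
    """Feed mutated inputs to parse(); a clean result or a deliberate
    ValueError is fine, any other exception is a robustness failure."""
    failures = []
    for r in range(rounds):
        sample = mutate(random.choice(valid_inputs), seed=r)
        try:
            parse(sample)
        except ValueError:
            pass                      # expected rejection of bad input
        except Exception as exc:      # unhandled crash: a handling gap
            failures.append((sample, exc))
    return failures
```

Seeding each round makes the test cases repeatable, which is exactly the "scientific, repeatable fashion" the paragraph above calls for.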
Compatibility Testing
Compatibility testing ensures that an application or web site works with different browsers, operating systems, and
hardware platforms. Different versions, configurations, display resolutions, and Internet connection speeds can all
affect the behavior of the product and introduce costly and embarrassing bugs. We test for
compatibility using real test environments; that is, we test how the system performs in a particular
software, hardware, or network environment. Compatibility testing can be performed manually or
driven by an automated functional or regression test suite.
The purpose of compatibility testing is to reveal issues related to the product's interaction with other
software as well as hardware. The product compatibility is evaluated by first identifying the
hardware/software/browser components that the product is designed to support. Then a
hardware/software/browser matrix is designed that indicates the configurations on which the product will
be tested. Then, with input from the client, a testing script is designed that will be sufficient to evaluate
compatibility between the product and the hardware/software/browser matrix. Finally, the script is
executed against the matrix, and any anomalies are investigated to determine exactly where the
incompatibility lies.
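The matrix-building step can be sketched as a cross product of the supported components; the component lists and exclusions below are hypothetical:

```python
from itertools import product

def build_matrix(browsers, platforms, unsupported=frozenset()):
    """Cross every supported browser with every supported platform,
    dropping combinations known not to exist."""
    return [(b, p) for b, p in product(browsers, platforms)
            if (b, p) not in unsupported]

# Hypothetical support lists -- substitute the components your product
# is actually designed to support.
matrix = build_matrix(
    ["Firefox", "IE 7", "Safari", "Opera"],
    ["Windows XP", "Mac OS X", "Linux"],
    unsupported={("IE 7", "Mac OS X"), ("IE 7", "Linux")},
)
```

The testing script is then executed once per entry in `matrix`, and any anomaly pins the incompatibility to a specific browser/platform pair.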
1. Major features first. Create tests which will exercise all the principal features first, to give maximum
coverage. This will probably be the same as the regression test. Then exercise each feature in some depth.
2. Major use cases first. As with major features. This requires that you know both the user profile and the usage
profile. The tests must be end-to-end, such that some real-world user objective is reached.
3. Major inputs and outputs. If the application is I/O dominated, then identify the most common kinds of
inputs and outputs and create tests to exercise them.
Note that various test management tools will give you coverage metrics showing the proportion of
requirements “covered” by a test, test cases run, etc. These are of course purely arbitrary; just because a
tester has associated a test case to a requirement doesn’t mean that the requirement has been
adequately covered. That’s one of the things you as test manager must check.
Agile manifesto
The Agile Manifesto is a statement of the principles that underpin agile software development:
The QA team may want to add one more principle to the Agile Manifesto:
Craftsmanship over execution: this is meant to focus software developers on creating good code rather than
simply writing code that barely works. Both craftsmanship and execution are good things, but taking care
to create good code is viewed as more important from the testers' and customers' point of view.
1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable
software.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support
they need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be able
to maintain a constant pace indefinitely.
10. Simplicity -- the art of maximizing the amount of work not done -- is essential.
11. The best architectures, requirements, and designs emerge from self-organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its
behavior accordingly.
by Steven Woody
Software regression testing is an essential and challenging task for software test groups. By definition,
regression testing is the process of verifying that none of the existing system features have been
accidentally broken by any new features or recent bug fixes. The main challenges for regression testing
are performing as many tests as possible in as short a time as possible and finding any serious regression
defects as early as possible in the regression test cycle.
1. A final software version with all of the new features completed and with all of the major bugs
fixed becomes available from the software development team.
2. Functional tests (for new features) and regression tests (for existing features) are executed.
3. Performance, integration, interoperability, and stress tests are executed.
4. If any serious defects are found, or significant feature changes are needed, steps 1 through 3 are
repeated.
5. When all needed features are operational and no major defects remain, the software is released
from test, ready for customer use.
Lather-rinse-repeat
The goal of regression testing is to find problems in previously working software features. Although project
management may choose to defer fixes or, in some cases, not to fix problems at all, there will be some
defects found during regression testing that must be fixed before release. Then the fix-regression test
cycle repeats itself, perhaps many more times than the schedule allows.
A common approach to mitigate the risk of performing very little regression testing is to perform a more
comprehensive test on the "final" version. The problem with this is that it is impossible to know before
regression testing starts that the "final" testing won't find any major problems and require yet another
final version to be built and tested comprehensively, followed by another final version, and so on.
• Attempts to find the most serious bugs as quickly as possible, at the cost of finding the lesser
issues later in the regression test cycle
• Adapts to the various styles of regression testing: the complete regression test, the continuous
regression test, the regression smoke test, and the restartable regression test
• Does not require additional people, test equipment, or weekend work
In a word, yes. Finding the biggest problems as early as possible in the regression test cycle gives the
development team the maximum time to develop a comprehensive fix--not just a quick fix.
Immediate feedback on the software quality keeps the entire project team focused on the goal at hand,
which is to produce high-quality software as quickly as possible. In addition, immediate feedback prevents
other projects from diverting (stealing) your development resources.
Immediate feedback gives project management the maximum amount of time to react to problems.
Sometimes it is necessary to reset (lower) customer expectations. Knowing this as soon as possible makes
a difficult job easier.
Immediate feedback gives the test team greater awareness of where the problem areas are in the current
software version. This guides testers to avoid further testing of that feature until it is fixed and then to
perform additional testing of that feature after it is fixed.
Declaring the latest version "dead on arrival" can take pressure off the test team. Instead of spending
several days performing low-value tests that will have to be repeated anyway, you can spend the bulk of
your time performing, automating, and improving interesting, high-value regression tests.
The right regression tests to perform in the first hour will vary depending on the type of product, the
maturity and stability of the software, and the test resources available. The following questions will help
you select the right first-hour tests for your product.
Customer-Focused Tests
First, consider the typical use of the product from beginning to end and make sure that use is tested. Think
of global, macroscopic tests of your product. What tests would be the most meaningful to your customers?
How does the product fit into the customer's overall situation and expectation? What is the name of your
product? The first tests should make sure that the product lives up to its name. Which features would have
the greatest impact on the customer if they were broken? If stored data were lost? If security were
compromised? If service were unavailable? These critical features must be tested in the first hour.
The overall goal is to test the software version as deeply and quickly as possible. These tests may take
more than an hour, but try to prioritize and select the tests that will provide the most essential information
about the quality of the software version within a matter of a few hours.
Insert the one-hour regression test before your full regression testing but after testing the new features, as
follows:
• Test the bug fixes and test for any likely problems introduced by the fixes.
• Test any new features using a "one-hour new-feature test plan."
Continuously optimize your first-hour regression testing as bugs are found, new features are added, and
testing tools are improved.
Is your existing regression test plan aging or up to date? Is it constantly updated to integrate the new
feature tests into the regression test plan? Or are new feature tests simply tacked on to the end of the test
plan, waiting until the end to be executed?
After focusing on the first hour, ask yourself what you want to accomplish in the first four hours, the first
eight hours, and the first week. Make sure that you prioritize and rearrange your test cases so that if you
are asked to release a version a few days before all of the regression test cases can be completed, all of
the critical tests will already have been performed.
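Prioritizing so that an early cut-off still covers the critical tests can be sketched as a simple budgeted ordering; the `(name, priority, minutes)` fields are assumptions for the sketch:

```python
def plan_first_hours(tests, budget_minutes):
    """Order tests most-critical-first and cut at the time budget, so an
    early release still means every critical test was executed.

    tests is a list of (name, priority, minutes); a lower priority
    number means more critical.
    """
    plan, used = [], 0
    for name, _priority, minutes in sorted(tests, key=lambda t: t[1]):
        if used + minutes > budget_minutes:
            break
        plan.append(name)
        used += minutes
    return plan

# Hypothetical suite: the critical tests land inside the first-hour budget.
suite = [("export report", 2, 30), ("login", 0, 5),
         ("checkout", 0, 10), ("search", 1, 20)]
first_hour = plan_first_hours(suite, budget_minutes=40)
```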
Smoke testing usually refers to superficial tests of the software, such as:
The one-hour regression test approach is more focused and does not replace the broader smoke test. If
possible, the smoke test should be run in parallel with the one-hour regression test. If this is not possible,
then run the smoke test immediately after the one-hour regression test.
Sanity testing usually refers to a small set of simple tests that verify basic functionality of the software,
such as:
There is typically no capacity, scale, or performance testing involved with sanity testing. Sanity testing
provides the starting point for in-depth regression testing of each feature. Sanity testing should be
performed only after the one-hour regression test is complete. Ideally, sanity tests should be re-examined
to provide greater benefit to the regression testing, once the one-hour regression tests are in place.
The one-hour regression test differs from smoke and sanity testing in another important way. The goal of
one-hour regression testing is to find the major problems as quickly as possible by testing complex
features deeply.
New features
If there are any new features introduced during the regression test cycle, hit them hard in the first hour,
shaking out any major design problems. After the new features survive this testing, methodically test the
details of the new features using the full new-feature test plan.
Summary
The one-hour regression testing approach performs a small number of the most important and most
complex tests during the first hour after each new software version is received, with the goal of finding the
big problems early in the regression test cycle.
The various approaches to software testing can all benefit from one-hour regression testing, whether your
test group uses a traditional, agile, context-driven, or exploratory test strategy.
The one-hour regression testing approach requires a slight change of thinking for the first hour, but it can
save days or weeks of repetitious regression testing, while increasing the overall software quality of your
product.
Yes.
Regression testing is testing performed on every build change: we retest already-tested
functionality for any new bugs introduced by the change.
Retesting is testing the same functionality again, typically after a bug fix or a change in
implementation technology.
A recent survey (of small and large software testing companies) found that a good number of the
defects reported by clients were due to last-minute bug fixes creating side effects. Selecting
the test cases for regression testing is therefore not easy: it is an art, and one a QA Lead or QA Manager should
master.
To select test cases for Regression Testing, QA Lead/QA Manager should know the following:
Selecting test cases for regression testing depends more on the criticality of the bug fixes than on the criticality of
the defects themselves. The fix for a minor bug can have a major side effect, while a fix for an extreme or
high-severity defect may have only a minor side effect, or none at all. The Test Lead, engineer, or Test Manager
therefore needs to balance these aspects when selecting the test cases and test scenarios for regression testing.
Solution: build a good relationship with the development team lead or technical manager. They can
easily help the QA team identify the above. A proper impact analysis should be done.
While selecting test cases and test scenarios for regression testing, we should not select only those test
cases that failed during the regular test cycles, because those tests may have little or no
relevance to the bug fixes. We need to select more positive test cases than negative test cases for the final
regression test cycle. It is also recommended, and a software testing best practice, that the regular test
cycles (conducted before regression testing) contain the right mix of both positive and negative
test scenarios. Negative test scenarios are those test cases introduced with the intention of breaking
the application.
The same survey found that several companies keep a constant set of test cases for regression testing and execute it irrespective of the number and type of bug fixes. This approach may not find all the side effects of the fixes. It has also been observed that the effort spent executing regression test cases can be minimized if a study is done to find out which test cases are relevant for regression testing and which are not. This can be done through impact analysis.
A good approach is to plan regression testing from the beginning of the project, before the test cycles, rather than after the regular test cycles are complete. The best practice is to classify the test cases and test scenarios into priorities based on importance and customer usage. Here it is suggested that the test cases be classified into three classes:
Priority 0 – Test cases that check basic functionality; they are executed for pre-system acceptance and whenever the product goes through a major change. They basically check whether the application is stable enough for further testing. These are the sanity test cases, which deliver high project value to the client and to the entire development, testing, and quality assurance team.
Priority 1 – Test cases for functionality that is very important to the customer, where major or critical bugs have been found, and for critical functionality whose bugs were fixed in a rush.
Priority 2 – Test cases that deliver moderate project value. They are executed as part of the regular software testing cycle and selected for regression testing on a need basis.
There are various valid approaches to regression testing, decided on a case-by-case basis, and we can prioritize the test cases accordingly:
• Case 1: If the criticality and impact of the bug fixes are low, it is enough for a software tester to select a few test cases from the Test Case Database (TCDB) and execute them. These can fall under any priority (0, 1, or 2).
• Case 2: If the criticality and impact of the bug fixes are medium, we need to execute all Priority 0 and Priority 1 test cases. If the fixes call for additional Priority 2 test cases, those can also be selected and executed; including Priority 2 cases here is desirable but optional.
• Case 3: If the criticality and impact of the bug fixes are high, the testing team needs to execute all Priority 0 and Priority 1 test cases plus carefully selected Priority 2 test cases. Priority 2 cases cannot be skipped in this case, so choose them carefully.
• Case 4: The QA Lead or QA Manager can also go through the complete log of changes caused by the bug fixes (obtainable from the configuration management team) and select test cases from it. This is a detailed and sometimes complex process, but it can give very good results. Also keep the one-hour regression test strategy in mind.
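The four cases above can be sketched as a small selection routine. This is an illustrative sketch only: the TCDB structure, the priority field, and the `impacted` flag are invented stand-ins, not any real tool's API.

```python
import random

P0, P1, P2 = 0, 1, 2

def select_regression_cases(tcdb, fix_criticality, sample_size=5):
    """Pick test cases from a test case database (TCDB) based on
    the criticality of the bug fixes in the build (Cases 1-3 above)."""
    if fix_criticality == "low":
        # Case 1: a few cases from any priority are enough.
        return random.sample(tcdb, min(sample_size, len(tcdb)))
    if fix_criticality == "medium":
        # Case 2: all Priority 0 and Priority 1; Priority 2 is optional.
        return [tc for tc in tcdb if tc["priority"] in (P0, P1)]
    # Case 3 (high): all P0 and P1, plus P2 cases flagged by impact analysis.
    return [tc for tc in tcdb if tc["priority"] in (P0, P1)
            or tc.get("impacted", False)]

tcdb = [
    {"id": "TC-1", "priority": P0},
    {"id": "TC-2", "priority": P1},
    {"id": "TC-3", "priority": P2, "impacted": True},
    {"id": "TC-4", "priority": P2},
]
high_run = select_regression_cases(tcdb, "high")
# Priority 2 cases are included only when flagged as impacted by the fix.
```

Case 4 would replace the `impacted` flag with data mined from the configuration management change log.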
By definition, regression testing is a new round of software validation after a build release with bug fixes. According to Microsoft's statistics, most developers introduce one fresh defect for every three or four defects they fix, so regression testing is needed to find those newly introduced bugs.
In general, the higher the coverage of the regression suite, the lower the risk, but the longer it takes, and vice versa. If time allows, one should cover all test cases in the regression suite, but there is generally not enough time. This requires balancing the effort it takes against the coverage the regression suite provides.
When planning regression testing, the first thing to determine is the ratio of test cases to rerun. This should be based on the available time: 100% is best, but because of time constraints the ratio is generally around 60%. Then we have to determine the regression test cases' priority.
Let's look at the most common regression test selection methods:
1. First check the newly modified features (if any) in the new build release.
2. Then find the impact areas: which closely coupled areas can be affected by the introduced features? All related modules need to be retested as part of regression testing.
3. Include the main flows or highly used areas of the program. You can easily obtain the usage frequency of a particular module; if it is very high, that is an area you need to retest.
4. Next, cover the most vulnerable parts of the program, for instance security risks, data leakage, and encryption or registration.
5. If the above is done and there is still time, it is best to include some of the alternative-flow test cases found in the use cases. Alternative flows are not the happy path but other ways of using the program.
These are the regression test case selection priorities. In most organizations, teams use automation tools to automate the regression test cases, which consistently yields a good return on investment.
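Step 3 above, retesting the most frequently used areas first, can be sketched as follows. The usage counts, module names, and case IDs are invented purely for illustration:

```python
# Hypothetical usage frequencies per module (e.g. from analytics or logs).
usage = {"login": 950, "search": 700, "reports": 120, "admin": 40}

# Hypothetical regression candidates grouped by module.
cases = {
    "login": ["TC-L1", "TC-L2"],
    "search": ["TC-S1"],
    "reports": ["TC-R1"],
    "admin": ["TC-A1"],
}

def regression_order(usage, cases):
    """Return test cases ordered so the most-used modules are retested first."""
    ranked = sorted(usage, key=usage.get, reverse=True)
    return [tc for module in ranked for tc in cases[module]]

ordered = regression_order(usage, cases)
# Cases for the heavily used login module come first.
```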
What is Ramp Testing? - Continuously raising an input signal until the system breaks down.
What is Depth Testing? - A test that exercises a feature of a product in full detail.
What is Quality Policy? - The overall intentions and direction of an organization as regards quality as
formally expressed by top management.
What is Race Condition? - A cause of concurrency problems. Multiple accesses to a shared resource, at
least one of which is a write, with no mechanism used by either to moderate simultaneous access.
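A minimal sketch of the definition above: two threads perform a read-modify-write on a shared counter, and a lock is the mechanism that moderates simultaneous access. Remove the lock and updates can be lost.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    """Increment the shared counter n times under a lock."""
    global counter
    for _ in range(n):
        with lock:        # the lock serializes the read-modify-write
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock, the result is deterministic: 200000.
# Without it, `counter += 1` is not atomic and the total can come up short.
```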
What is Emulator? - A device, computer program, or system that accepts the same inputs and produces
the same outputs as a given system.
What is Dependency Testing? - Examines an application's requirements for pre-existing software, initial
states and configuration in order to maintain proper functionality.
What is Documentation testing? - The aim of this testing is to help in preparation of the cover
documentation (User guide, Installation guide, etc.) in as simple, precise and true way as possible.
What is Code style testing? - This type of testing checks the code for conformance to development standards: rules for code comments; naming of variables, classes, and functions; maximum line length; ordering of separator symbols; tab and new-line placement; and so on. There are special tools for automating code style checks.
What is scripted testing? - Scripted testing means that test cases are to be developed before tests
execution and some results (and/or system reaction) are expected to be shown. These test cases can be
designed by one (usually more experienced) specialist and performed by another tester.
• Recovery Testing: Testing how well a system recovers from crashes and failures, typically by driving it to a breaking point. The goal is to expose the weak links and to determine if the system manages to recover gracefully.
• Smoke Testing: A random test conducted before the delivery and after complete testing.
• Pilot Testing: Testing that involves the users just before actual release to ensure that users become
familiar with the release contents and ultimately accept it. Typically involves many users, is conducted
over a short period of time and is tightly controlled. (See beta testing)
• Performance Testing: Testing with the intent of determining how efficiently a product handles a
variety of events. Automated test tools geared specifically to test and fine-tune performance are used
most often for this type of testing.
• Exploratory Testing: Any testing in which the tester dynamically changes what they're doing for test
execution, based on information they learn as they're executing their tests.
• Beta Testing: Testing after the product is code complete. Betas are often widely distributed or even
distributed to the public at large.
• Gamma Testing: Gamma testing is testing of software that has all the required features but has not gone through all the in-house quality checks.
• Mutation Testing: A method of gauging test thoroughness by measuring the extent to which the test cases can discriminate the program from slight variants (mutants) of the program.
• Glass Box/Open Box Testing: Glass box testing is the same as white box testing. It is a testing
approach that examines the application's program structure, and derives test cases from the application's
program logic.
• Compatibility Testing: Testing used to determine whether other system software components such as
browsers, utilities, and competing software will conflict with the software being tested.
• Comparison Testing: Testing that compares software weaknesses and strengths to those of
competitors' products.
• Alpha Testing: Testing after code is mostly complete or contains most of the functionality and prior to
reaching customers. Sometimes a selected group of users are involved. More often this testing will be
performed in-house or by an outside testing firm in close cooperation with the software engineering
department.
• Independent Verification and Validation (IV&V): The process of exercising software with the intent
of ensuring that the software system meets its requirements and user expectations and doesn't fail in an
unacceptable manner. The individual or group doing this work is not part of the group or organization that
developed the software.
• Closed Box Testing: Closed box testing is the same as black box testing: a type of testing that considers only the functionality of the application.
• Bottom-up Testing: Bottom-up testing is a technique for integration testing. A test engineer creates
and uses test drivers for components that have not yet been developed, because, with bottom-up testing,
low-level components are tested first. The objective of bottom-up testing is to call low-level components
first, for testing purposes.
• Bug: A software bug may be defined as a coding error that causes an unexpected defect, fault or flaw. In
other words, if a program does not perform as intended, it is most likely a bug.
• Error: A mismatch between the program and its specification is an error in the program.
• Defect: Defect is the variance from a desired product attribute (it can be a wrong, missing or extra
data). It can be of two types – Defect from the product or a variance from customer/user expectations. It is
a flaw in the software system and has no impact until it affects the user/customer and operational system.
90% of all the defects can be caused by process problems.
• Failure: A defect that causes an error in operation or negatively impacts a user/ customer.
• Quality Assurance: Is oriented towards preventing defects. Quality Assurance ensures all parties
concerned with the project adhere to the process and procedures, standards and templates and test
readiness reviews.
• Quality Control: quality control or quality engineering is a set of measures taken to ensure that
defective products or services are not produced, and that the design meets performance requirements.
• Verification: Verification ensures the product is designed to deliver all functionality to the customer; it
typically involves reviews and meetings to evaluate documents, plans, code, requirements and
specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
• Validation: Validation ensures that functionality, as defined in requirements, is the intended behavior of
the product; validation typically involves actual testing and takes place after verifications are completed.
There are basically three levels of testing i.e. Unit Testing, Integration Testing and System Testing.
System Testing: To verify and validate behaviors of the entire system against the original system
objectives
Software testing is a process that identifies the correctness, completeness, and quality of software.
Branch Testing
In branch testing, test cases are designed to exercise the control flow branches or decision points in a unit. This is usually aimed at achieving a target level of decision coverage. For branch coverage, both the IF and ELSE branches must be tested. All branches and compound conditions (e.g. loops and array handling) within the unit should be exercised at least once.
Branch coverage (sometimes called Decision Coverage) measures which possible branches in flow control
structures are followed. Clover does this by recording if the Boolean expression in the control structure
evaluated to both true and false during execution.
Does branch testing come under white box testing or black box testing?
Branch testing is done as part of white box testing, where the focus is on the code. There are many other white box techniques, such as loop testing.
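As a minimal illustration of branch (decision) coverage, the function below contains a single IF/ELSE decision, so full branch coverage needs at least two tests, one driving each branch:

```python
def classify(age):
    """Toy example with one decision point."""
    if age >= 18:
        return "adult"   # true branch
    else:
        return "minor"   # false branch

# Test 1 exercises the true branch, test 2 the false branch;
# together they achieve 100% branch coverage of this unit.
assert classify(30) == "adult"
assert classify(12) == "minor"
```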
http://sites.google.com/a/softwaretestingtimes.com/osst/osst/Branch-Coverage.pdf?attredirects=0&d=1
Condition Testing
The object of condition testing is to design test cases to show that the individual components of logical
conditions and combinations of the individual components are correct. Test cases are designed to test the
individual elements of logical expressions, both within branch conditions and within other expressions in a
unit.
Condition testing is a test case design approach that exercises the logical conditions contained in a
program module. A simple condition is a Boolean variable or a relational expression, possibly with one
NOT operator. A relational expression takes the form:
E1 <relational-operator> E2
where E1 and E2 are arithmetic expressions and the relational operator is one of <, <=, =, != (nonequality), >, or >=. A compound condition is made up of two or more simple conditions, Boolean operators, and parentheses. We assume that the Boolean operators allowed in a compound condition include OR, AND, and NOT.
The condition testing method concentrates on testing each condition in a program. The purpose of
condition testing is to determine not only errors in the conditions of a program but also other errors in the
program. A number of condition testing approaches have been identified. Branch testing is the most
basic. For a compound condition, C, the true and false branches of C and each simple condition in C must
be executed at least once.
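A small sketch of branch testing on a compound condition C = (a > 0) AND (b > 0): the three tests below make C and each simple condition evaluate both true and false at least once.

```python
def both_positive(a, b):
    """Compound condition C = (a > 0) AND (b > 0)."""
    return a > 0 and b > 0

cases = [
    (5, 3, True),    # C true: both simple conditions true
    (-1, 3, False),  # C false: first simple condition false
    (5, -2, False),  # C false: second simple condition false
]
for a, b, expected in cases:
    assert both_positive(a, b) == expected
```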
Domain testing requires three or four tests for a relational expression. For a relational expression of the form
E1 <relational-operator> E2
three tests are required, making the value of E1 greater than, equal to, and less than E2, respectively.
Data definition-use testing designs test cases to test pairs of data definitions and uses. Data definition is
anywhere that the value of a data item is set. Data use is anywhere that a data item is read or used. The
objective is to create test cases that will drive execution through paths between specific definitions and
uses.
Extended random regression testing (ERRT) exposes problems that can't be found with conventional test techniques. Troubleshooting such defects can be extremely difficult and very expensive.
Repeating test cases and critical operations over and over again during long sequence testing is one way
to uncover those intermittent failures. Typically, automatically generated test cases are randomly selected
from the test repository databank and executed over a very long time.
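The random-selection loop just described can be sketched as follows. `run_case` is a placeholder for real automated test execution, and the repository contents are invented; seeding the generator lets a failing sequence be replayed exactly.

```python
import random

def run_case(case_id):
    """Placeholder for executing one automated test case from the repository."""
    return True  # pretend the case passed

def long_sequence_run(repository, iterations, seed=0):
    """Draw cases at random and execute them for a long stretch,
    recording where in the sequence any failure occurred."""
    rng = random.Random(seed)  # seeded so intermittent failures are replayable
    failures = []
    for i in range(iterations):
        case = rng.choice(repository)
        if not run_case(case):
            failures.append((i, case))
    return failures

failures = long_sequence_run(["TC-1", "TC-2", "TC-3"], iterations=10_000)
# An intermittent defect shows up as a failure deep in the sequence.
```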
To test network-centric applications, high-volume long sequence testing (LST) is an efficient technique.
McGee and Kaner explored it using what they call extended random regression (ERR) testing. A more
promising method to test complex network-centric systems is using genetic algorithms coupled with high
volume testing.
Genetic algorithms, in particular, provide a powerful search technique that is effective in very large search
spaces, as represented by system environment attributes and input parameters in the testing arena.
A test case is a detailed procedure that fully tests a feature or an aspect of a feature. Whereas the test plan describes what to test, a test case describes how to perform a particular test. You need to develop a test case for each test listed in the test plan.
Test cases should be written by a team member who understands the function or technology being tested,
and each test case should be submitted for peer review.
Organizations take a variety of approaches to documenting test cases; these range from developing
detailed, recipe-like steps to writing general descriptions. In detailed test cases, the steps describe exactly
how to perform the test. In descriptive test cases, the tester decides at the time of the test how to perform
the test and what data to use.
Detailed test cases are recommended for testing software because determining pass or fail criteria is usually easier with them. In addition, detailed test cases are reproducible and easier to automate than descriptive ones. This is particularly important if you plan to compare test results over time, such as when optimizing configurations. Detailed test cases are, however, more time-consuming to develop and maintain. On the other hand, test cases that are open to interpretation are not repeatable and can require debugging, consuming time better spent on testing.
When planning your tests, remember that it is not feasible to test everything. Instead of trying to test
every combination, prioritize your testing so that you perform the most important tests — those that focus
on areas that present the greatest risk or have the greatest probability of occurring — first.
Once the Test Lead has prepared the test plan, the individual testers' role starts with preparing test cases for each level of software testing (unit, integration, system, and user acceptance testing) and for each module.
As a tester, the best way to determine software's compliance with requirements is to design effective test cases that provide a thorough test of a unit. Various test case design techniques enable testers to develop effective test cases. Besides applying the design techniques, every tester needs to keep in mind general guidelines that aid test case design:
a. The purpose of each test case is to run the test in the simplest way possible. [Suitable techniques -
Specification derived tests, Equivalence partitioning]
b. Concentrate initially on positive testing i.e. the test case should show that the software does what it is
intended to do. [Suitable techniques - Specification derived tests, Equivalence partitioning, State-transition
testing]
c. Existing test cases should be enhanced and further test cases should be designed to show that the
software does not do anything that it is not specified to do i.e. Negative Testing [Suitable techniques -
Error guessing, Boundary value analysis, Internal boundary value testing, State transition testing]
d. Where appropriate, test cases should be designed to address issues such as performance, safety
requirements and security requirements [Suitable techniques - Specification derived tests]
e. Further test cases can then be added to the unit test specification to achieve specific test coverage
objectives. Once coverage tests have been designed, the test procedure can be developed and the tests
executed [Suitable techniques - Branch testing, Condition testing, Data definition-use testing, State-
transition testing]
Fig 1: Common Columns in Test cases that are present in all Test case formats
The name of the test case document itself follows a naming convention like the one below, so that from the name alone we can identify the project name, version number, and release date. The placeholder words should be replaced with the actual project name, version number, and release date, e.g. Bugzilla Test Cases 1.2.0.3 01_12_04.
On the Top-Left Corner we have company emblem and we will fill the details like Project ID, Project Name,
Author of Test Cases, Version Number, Date of Creation and Date of Release in this Template.
We maintain the fields Test Case ID, Requirement Number, Version Number, Type of Test Case, Test Case Name, Action, Expected Result, and Cycle #1 through Cycle #4 for each test case. Each cycle is further divided into Actual Result, Status, Bug ID, and Remarks.
Requirement Number:
It gives the reference of the requirement number in the SRS/FRD for the test case; for each test case we specify which requirement it belongs to. The advantage of maintaining this in the test case document is that, if a requirement changes in the future, we can easily estimate how many test cases will be affected.
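The impact estimate described above is straightforward once the Requirement Number column exists. A sketch, where the field names mirror the template and the data is invented:

```python
# Rows of the test case document, reduced to the two relevant columns.
test_cases = [
    {"id": "TC-01", "requirement": "REQ-4"},
    {"id": "TC-02", "requirement": "REQ-4"},
    {"id": "TC-03", "requirement": "REQ-7"},
]

def impacted_cases(test_cases, requirement):
    """List the test cases that must be revisited if a requirement changes."""
    return [tc["id"] for tc in test_cases if tc["requirement"] == requirement]

# If REQ-4 changes, TC-01 and TC-02 are affected.
affected = impacted_cases(test_cases, "REQ-4")
```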
Version Number:
Under this column we specify the version number in which that particular test case was introduced, so that we can identify how many test cases exist for each version.
Type of Test Case:
This provides the list of different types of test cases, such as GUI, Functionality, Regression, Security, System, User Acceptance, Load, and Performance, which are included in the test plan. While designing test cases we select one of these options. The main objective of this column is to predict in total how many GUI or functionality test cases exist in each module; based on this we can estimate the resources.
Test Case Name:
This gives a more specific name, such as the particular button or text box the test case belongs to; that is, we specify the name of the object it targets, e.g. the OK button or the Login form.
Action (Input):
This is a very important part of the test case because it gives a clear picture of what you are doing on the specific object; you could call it the navigation for the test case. Based on the steps written here, we perform the operations on the actual application.
Expected Result:
This is the result of the above action. It specifies what the specification or user expects from that particular action. It should be clear, and for each expectation we subdivide the test case, so that we can specify pass or fail criteria for each expectation.
Up to this point we prepare the test case document before seeing the actual application, based on the System Requirement Specification/Functional Requirement Document and the use cases. We then send the document to the concerned Test Lead for approval; he reviews it for coverage of all user requirements in the test cases and then approves it.
Now we are ready for testing with this document and wait for the actual application. Then we use the Cycle #1 columns.
Under each cycle we have Actual, Status, Bug ID, and Remarks. The number of cycles depends on the organization: some document three cycles, some maintain information for four. Here I have provided only one cycle in the template; add more cycles based on your requirements.
Actual:
We test the actual application against each test case; if it matches the expected result, we record "As Expected", otherwise we write what actually happened after performing those actions.
Status:
This simply indicates the Pass or Fail status of that particular test case. If Actual and Expected mismatch, the status is Fail; otherwise it is Pass. For passed test cases the Bug ID should be empty, and for failed test cases it should be the corresponding Bug ID from the bug report.
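The Actual/Status/Bug ID rules above can be sketched as a small helper. The field names follow the template; the expected/actual strings and the bug ID are hypothetical:

```python
def record_result(expected, actual, bug_id=None):
    """Fill one cycle's columns: Pass leaves Bug ID empty,
    Fail records what happened and references the bug report."""
    if actual == expected:
        return {"actual": "As Expected", "status": "Pass", "bug_id": None}
    return {"actual": actual, "status": "Fail", "bug_id": bug_id}

passed = record_result("Login succeeds", "Login succeeds")
failed = record_result("Login succeeds", "Error 500 shown", bug_id="BUG-112")
```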
Bug ID:
This gives the reference of the bug number in the bug report, so that the developer or tester can easily identify the bug associated with that test case.
Remarks:
Test Script Defect, Test Case Defect, and Test Case Review
During a testing phase, it is not at all uncommon for testers to face issues other than application failures. It is believed that during the first cycle of testing, about half the reported defects are related to test scripts; i.e. test script defects account for about 50% of all reported defects, while the other 50% are due to software failures and incorrect environment setup.
A test script defect is a discrepancy in the test cases prepared by software testers.
The root cause of the defects found in test scripts/test cases can be attributed to the following:
• Not fully understanding the requirements or design or any other source documents that the test
script is derived from and based on.
• Designing test cases requires a thorough understanding of the application under test. Therefore, it is imperative that test designers have a clear understanding of the requirements and design flow documents so they can write correct test cases.
• Not working with the latest version of the base documents.
• Proper change control and configuration management are absolutely necessary to prevent the pitfalls of working with old or wrong versions of documents.
• Not properly translating requirements and design flows into test cases and breaking them down into test steps.
• The same way a programmer needs to translate a software requirement or design into code, a tester must be able to analyze a requirement and derive test cases from it. This in turn requires understandable and testable requirements.
• Not realizing that the person executing the test script could be someone from outside, without knowledge of the application under test.
• Much of the time, when testers design test cases they assume the only people executing their scripts will be teammates or peers familiar with the applications. The steps are therefore condensed or merged, which can appear vague to someone less experienced with the applications, leaving them unable to follow the script to execute the tests.
• Improper or missing use cases and specification documents.
Issues found in test scripts can be categorized into three levels of severity:
Level 1: Issues in the test script that prevent the tester from carrying out the execution.
This is a serious issue with high priority, since the software cannot be tested if the test script is majorly defective, e.g. the workflow of the test case and its steps do not match what is written in the requirements or design specifications.
An example could be that the workflow and behavior of an application depend on a set of test data and values that the tester should set before execution, but the script does not list the required test data, so the tester cannot verify the workflow.
A defect should be raised and logged and the changes and corrections to the test scripts must be made
immediately during the execution phase and the test should be carried out with the new version of the test
script.
Level 2: Issues in the test script with a workaround, i.e. the tester can identify the issue and continue testing with the workaround.
This is a moderate issue with medium priority. Of course, if too many workarounds are needed during the test execution phase, the priority for fixing the test script defects becomes high.
For example, an application requires a username, a password, and a randomly generated key to verify user credentials. The script asks only for the username and password while the application also expects the random number. The tester can enter the random number along with the username and password and carry on with the rest of the script.
Before the testing phase begins, all test script documents (amongst other documents) should be subjected
to formal reviews to prevent the above issues appearing during the formal testing phase. If at all possible,
there should be a “dry-running” of the scripts before the formal test execution begins. This gives the
testers a chance to raise any uncertainties or doubts about the nature of the scripts and to minimize the
number of issues listed above.
Also testers writing the test scripts must have a thorough understanding of the applications and workflows
in order to write effective test cases and to maximize the exposure of defects.
The main reason for reviewing is to increase test case quality and therefore product quality.
As we know, testers are involved in the requirements specification review process to bring SQA knowledge to the written requirements. Being involved, they become experts on the area and on the application's functionality, and their knowledge often helps avoid introducing future defects into the functionality the developer will later code (the phase we call defect prevention).
Once the requirements are approved and baselined, testers start designing test cases, whether in their heads or as written drafts. Once all the ideas or drafts are understood and prepared, the SQA tester starts developing test cases. Each test case written is based on a specific requirement, which gives us traceability between requirements and test cases. This helps the SQA team manage the requirements coverage of what is going to be tested.
Once the test cases are developed, the SQA tester should share, distribute, and discuss them with the same team that reviewed the requirements (the SRS writer, developers, SQA testers, the implementation team, etc.). However, sometimes this is not possible: by the time the requirements are baselined, the person in charge of the SRS may have moved to another project and no longer have time to review a set of test cases. The same happens with the implementation team, who may be installing the product at a customer site. In some cases the SQA tester and the developer start their work from the requirements at roughly the same time, the developer writing code and the tester writing test cases. At other times the SQA tester starts thinking about or drafting test cases even before the developer starts coding. That means developing code and developing test cases are, and should be, separate processes.
Of course, having requirements/usability people review the test cases adds a lot of value, as does having the implementation team do the same. The problem is that this often does not happen due to lack of resources, so the test case review proceeds only with the developer involved in the same project and functionality. In any case, the developer's review of test cases should always go in the direction of adding details, parameters, or circumstances not included in the tester's written test cases, or even adding new test cases, but never modifying the sense of the test cases written by the tester.
This is the approach by which test cases defined by testers should be reviewed by the developer. Note also that when the test case writer is a beginner rather than a senior tester, or does not know the functionality well, someone from the SQA team with more experience should check the test cases before sharing them with the developer for review.
Benefits of having test case reviews, with developers included:
• Defect prevention during SRS review: the SQA tester can flag possible issues during SRS reviews before any code is written.
• Conceptual and technical coverage: requirements/usability reviewers ensure coverage from the conceptual point of view, and the developer ensures it from the technical point of view. Traceability coverage tracking is handled by traceability tools (e.g. Quality Center).
• Defect prevention during test case review: if the developer has the opportunity to check the test cases while implementing the code, it may help him notice code that could cause a defect; he can then code with those potential defects in mind.
• Developer knowledge added to test cases: the developer also has a very good understanding of the requirements (SRS), both explicit and implicit, and has analyzed them deeply in order to implement them. He can contribute better-understood details or cases not yet considered.
After the test cases have been reviewed, the SQA team receives all the feedback and decides, based on its
experience and knowledge of SQA and of the functionality, whether the feedback is applied or not. When it is not
applied, the reason should be explained and discussed with the developer, since there should be final
agreement on the test cases as written.
Use cases are popular largely because they tell coherent stories about how the system will behave in use.
The users of the system get to see just what this new system will be, and get to react early.
By definition, use cases simply follow the requirements document, so with them we concentrate on testing such as:
• functionality testing
• acceptance testing
• alpha testing, etc.
How many User Acceptance Test Cases need to be prepared for an application?
For testing projects, if we know the development effort in function points (FP), there is a rule of thumb
called the Capers Jones rule: number of test cases = (function points) ^ 1.2.
This gives the number of user acceptance test cases that can be prepared.
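This rule of thumb can be sketched as a one-line calculation (the 100-FP figure below is a made-up example, not from any real project):

```python
# A minimal sketch of the Capers Jones rule of thumb:
# estimated test cases ≈ (function points) ** 1.2
def estimated_test_cases(function_points: float) -> int:
    """Estimate how many user acceptance test cases to prepare."""
    return round(function_points ** 1.2)

# e.g. a hypothetical application sized at 100 function points
print(estimated_test_cases(100))  # → 251
```

Note the estimate grows faster than linearly: doubling the function points more than doubles the expected number of test cases.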
Metrics are the means by which software quality can be measured; they give you
confidence in the product. You may consider them product management indicators, which can be either
quantitative or qualitative. They typically provide the visibility you need.
The goal is to choose metrics that will help you understand the state of your product.
ADDITIONAL:
1) When I push the call button, does it come to the floor and open the door after stopping?
2) Do the doors stay open for at least 5 seconds?
3) When closing, do the doors reverse if someone is standing in their way?
4) Does the elevator wait for someone to push a floor button before moving?
5) Does the elevator ignore the floor button of the current floor?
6) Does the floor button light up when pressed?
7) Does the Open Door button work when the elevator is moving?
8) Does the elevator travel in a smooth fashion?
9) Is there an up button on the top floor or a down button on the bottom floor?
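Checks like these are usually run manually, but a couple of them can be sketched as automated assertions against a hypothetical elevator model (the `Elevator` class and its methods below are assumptions invented for illustration, not part of any real system):

```python
# Hypothetical elevator model, simplified so the checks above can be asserted.
class Elevator:
    def __init__(self, floors: int = 10):
        self.floors = floors
        self.current_floor = 1
        self.door_open = False
        self.pressed = set()

    def call(self, floor: int) -> None:
        # Simplified: travel instantly, then open the door (check 1).
        self.current_floor = floor
        self.door_open = True

    def press_floor_button(self, floor: int) -> None:
        # Ignore the button for the floor we are already on (check 5).
        if floor != self.current_floor:
            self.pressed.add(floor)


def test_call_opens_door_at_floor():
    lift = Elevator()
    lift.call(3)
    assert lift.current_floor == 3 and lift.door_open


def test_current_floor_button_is_ignored():
    lift = Elevator()
    lift.press_floor_button(1)  # the elevator starts on floor 1
    assert 1 not in lift.pressed


test_call_opens_door_at_floor()
test_current_floor_button_is_ignored()
```

Timing-dependent checks (doors staying open for 5 seconds, door reversal) would need a model with a clock, which is why they are more often verified manually.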
Four Test Cases on ATM, Cell Phone, Traffic Signal, Elevator – Frequently Discussed in
Interviews on Software Testing
Capability Maturity Model (CMM): A five level staged framework that describes the key
elements of an effective software process. The Capability Maturity Model covers practices for planning,
engineering and managing software development and maintenance.
Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an
effective product development and maintenance process. The Capability Maturity Model Integration covers
practices for planning, engineering and managing product development and maintenance. CMMI is the
designated successor of the CMM.
Common performance testing metrics include:
• response time
• page download time
• throughput
• transactions per second
• turnaround time
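As a rough illustration of how two of these metrics relate, here is a minimal timing sketch (the `handler` below is a stand-in for a real page request, not part of any specific load-testing tool):

```python
import time

def measure(requests, handler):
    """Measure average response time and throughput for a batch of requests."""
    latencies = []
    start = time.perf_counter()
    for req in requests:
        t0 = time.perf_counter()
        handler(req)                              # one simulated transaction
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "avg_response_time_s": sum(latencies) / len(latencies),
        "throughput_rps": len(latencies) / elapsed,  # transactions per second
    }

# hypothetical handler: pretend each request takes about 1 ms
stats = measure(range(50), lambda r: time.sleep(0.001))
print(stats)
```

Note that response time is a per-request measure while throughput is a rate over the whole run; under load they usually move in opposite directions.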
What is conventional testing, and what is unconventional testing?
Unconventional testing is a sort of testing done by the QA
people, in which they review each and every document right
from the initial phase of the SDLC.
Conventional testing, by contrast, is done by the test engineers on
the application in the testing phase of the SDLC.
What is the difference between User Controls and Master Pages