Concepts of Testing
The test plan typically documents a detailed understanding of what the eventual testing workflow will be, including:
• Scope of testing.
• Schedule.
• Test Deliverables.
• Release Criteria.
• Risks and Contingencies.
Test plan identifier
For example: "Master plan for 3A USB Host Mass Storage Driver TP_3A1.0"
Use some type of unique, company-generated number to identify this test plan, its level and the level of software that it is related to. Preferably, the test plan level will be the same as the related software level. The number may also identify whether the test
plan is a Master plan, a Level plan, an integration plan or whichever plan level it
represents. This is to assist in coordinating software and testware versions within
configuration management.
Keep in mind that test plans, like other software documentation, are dynamic in nature and must be kept up to date. Therefore, they will have revision numbers. You may want to include author and contact information, along with the revision history, as part of either the identifier section or the introduction.
References
List all documents that support this test plan. Referenced documents typically include:
• Project Plan.
• System Requirements specifications.
• High Level design document.
• Detail design document.
• Development and Test process standards.
• Methodology.
• Low Level design.
Introduction
State the purpose of the Plan, possibly identifying the level of the plan (master etc.).
This is essentially the executive summary part of the plan.
You may want to include any references to other plans, documents or items that
contain information relevant to this project/process.
Identify the objective of the plan or scope of the plan in relation to the Software
Project plan that it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and for communication and coordination of key activities.
As this is the "Executive Summary" keep information brief and to the point.
Test items
These are the things you intend to test within the scope of this test plan; essentially, a list of what is to be tested. This can be developed from the
software application inventories as well as other sources of documentation and
information.
This can be controlled on a local Configuration Management (CM) process if you have
one. This information includes version numbers, configuration requirements where
needed, (especially if multiple versions of the product are supported). It may also
include key delivery schedule issues for critical elements.
This section can be oriented to the level of the test plan. For higher levels it may be
by application or functional area, for lower levels it may be by program, unit, module
or build.
Identify what software is to be tested and what the critical areas are, such as:
1. Safety.
2. Multiple interfaces.
3. Impacts on the client.
4. Government regulations and rules.
There are also some inherent software risks, such as complexity; these need to be identified.
Another key area of risk is a misunderstanding of the original requirements. This can
occur at the management, user and developer levels. Be aware of vague or unclear
requirements and requirements that cannot be tested.
The past history of defects (bugs) discovered during Unit testing will help identify
potential areas within the software that are risky. If the unit testing discovered a
large number of defects or a tendency towards defects in a particular area of
the software, this is an indication of potential future problems. It is the
nature of defects to cluster and clump together. If it was defect ridden
earlier, it will most likely continue to be defect prone.
One good approach to defining where the risks are is to hold several brainstorming sessions:
• Start with ideas such as "what worries me about this project/application?"
Features to be tested
This is a listing of what is to be tested from the user's viewpoint of what the system
does. This is not a technical description of the software, but a USER'S view of the
functions.
Set the level of risk for each feature. Use a simple rating scale, such as High, Medium and Low (H, M, L); these levels are understandable to a user. You should be prepared to discuss why a particular level was chosen.
Sections 4 and 6 are very similar, and the only true difference is the point of view.
Section 4 is a technical type description including version numbers and other
technical information and Section 6 is from the User’s viewpoint. Users do not
understand technical software terminology; they understand functions and processes
as they relate to their jobs.
Features not to be tested
This is a listing of what is not to be tested, from both the user's viewpoint of what the system does and from a configuration management/version control view. This is not a technical description of the software, but a user's view of the functions.
Identify why the feature is not to be tested; there can be any number of reasons.
Approach (strategy)
This is your overall test strategy for this test plan; it should be appropriate to the
level of the plan (master, acceptance, etc.) and should be in agreement with all
higher- and lower-level plans. Overall rules and processes should be identified. Testers should understand what a test plan requires before attempting to define a strategy of their own; this is too important a step to learn by trial and error.
If this is a master test plan the overall project testing approach and coverage
requirements must also be identified.
Specify if there are special requirements for the testing.
• MTBF, Mean Time Between Failures - if this is a valid measurement for the test
involved and if the data is available.
• SRE, Software Reliability Engineering - if this methodology is in use and if the
information is available.
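Where MTBF is a valid measurement and the data is available, the calculation itself is straightforward. A minimal sketch follows; the operating hours and failure count are invented figures for illustration:

```python
# Illustrative sketch only: computing MTBF from test-log figures.
# The operating hours and failure count below are invented.

def mean_time_between_failures(total_operating_hours, failure_count):
    """MTBF = total operating time / number of observed failures."""
    if failure_count == 0:
        raise ValueError("MTBF is undefined with zero observed failures")
    return total_operating_hours / failure_count

# Example: 1,000 hours of test operation with 4 observed failures.
print(mean_time_between_failures(1000.0, 4))  # 1000 / 4 = 250.0 hours
```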
Specify the criteria to be used to determine whether each test item has passed or failed, including show-stopper issues; "show-stopper" severity requires definition within each testing context.
Entry & exit criteria
Specify the criteria to be used to start testing and how you know when to stop the
testing process.
Suspension criteria specify the criteria to be used to suspend all or a portion of the
testing activities while resumption criteria specify when testing can resume after it
has been suspended.
Suspension criteria assume that testing can go neither forward nor backward. A failed build alone would not suffice, as you could generally continue to use the previous build. Most major or critical defects would also not constitute suspension criteria, as other areas of the system could continue to be tested.
Test deliverables
Environmental needs
Are there any special requirements for this test plan, such as special hardware, software, test data or network configurations?
Responsibilities
Who is in charge?
Assign responsibility for the test plan to someone who has produced one before; an inexperienced owner puts both the plan and the testing effort at risk.
This issue includes all areas of the plan. Here are some examples:
• Setting risks.
• Selecting features to be tested and not tested.
• Setting overall strategy for this level of plan.
• Ensuring all required elements are in place for testing.
• Providing for resolution of scheduling conflicts, especially, if testing is done on
the production system.
• Who provides the required training?
• Who makes the critical go/no go decisions for items not covered in the test
plans?
• Who is responsible for each risk?
What are the overall risks to the project, with an emphasis on the testing process? If the testing schedule slips, the possible responses include:
• The test schedule and development schedule will move out an appropriate
number of days. This rarely occurs, as most projects tend to have fixed
delivery dates.
• The number of tests performed will be reduced.
• The number of acceptable defects will be increased.
o These two items could lower the overall quality of the delivered
product.
• Resources will be added to the test team.
• The test team will work overtime (this could affect team morale).
• The scope of the plan may be changed.
• There may be some "optimization" of resources, stretching the team across more work. This should be avoided, if possible, for obvious reasons.
Management is usually reluctant to accept scenarios such as the one above even
though they have seen it happen in the past.
The important thing to remember is that, if you do nothing at all, the usual result is
that testing is cut back or omitted completely, neither of which should be an
acceptable option.
Approvals
Who can approve the process as complete and allow the project to proceed to the
next level (depending on the level of the plan)?
At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience is:
• The audience for a unit test level plan is different from that of an integration,
system or master level plan.
• The levels and type of knowledge at the various levels will be different as well.
• Programmers are very technical but may not have a clear understanding of
the overall business process driving the project.
• Users may have varying levels of business acumen and very little technical
skills.
• Always be wary of users who claim high levels of technical skills and
programmers that claim to fully understand the business process. These types
of individuals can cause more harm than good if they do not have the skills
they believe they possess.
Glossary
Used to define terms and acronyms used in the document, and testing in general, to
eliminate confusion and promote consistent communications.
Regression Testing
Regression testing is any type of software testing which seeks to uncover
regression bugs. Regression bugs occur whenever software functionality that
previously worked as desired, stops working or no longer works in the same way that
was previously planned. Typically regression bugs occur as an unintended
consequence of program changes.
Background
Experience has shown that as software is developed, this kind of reemergence of
faults is quite common. Sometimes it occurs because a fix gets lost through poor
revision control practices (or simple human error in revision control), but often a fix
for a problem will be "fragile" in that it fixes the problem in the narrow case where it
was first observed but not in more general cases which may arise over the lifetime of
the software. Finally, it has often been the case that when some feature is
redesigned, the same mistakes will be made in the redesign that were made in the
original implementation of the feature.
Therefore, in most software development situations it is considered good practice
that when a bug is located and fixed, a test that exposes the bug is recorded and
regularly retested after subsequent changes to the program. Although this may be
done through manual testing procedures using programming techniques, it is often
done using automated testing tools. Such a test suite contains software tools that
allow the testing environment to execute all the regression test cases automatically;
some projects even set up automated systems to automatically re-run all regression
tests at specified intervals and report any regressions. Common strategies are to run
such a system after every successful compile (for small projects), every night, or
once a week. Those strategies can be automated by an external tool, such as
BuildBot.
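The practice described above, recording a test that exposes the bug and re-running it after every change, can be sketched with Python's unittest module. The pricing function and its defect are hypothetical, invented for illustration:

```python
# Hypothetical example: a defect was found where a discount over 100%
# produced a negative price. After the fix, a regression test pins the
# behaviour so the bug cannot silently reappear after later changes.
import unittest

def discounted_price(price, percent):
    """Apply a percentage discount, clamped so the result is never negative."""
    return max(price - price * percent / 100.0, 0.0)  # the fix: clamp at zero

class RegressionTests(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discounted_price(100.0, 25), 75.0)

    def test_over_100_percent_does_not_go_negative(self):
        # The recorded regression test for the original defect: before the
        # fix, a 150% discount produced a negative price.
        self.assertEqual(discounted_price(100.0, 150), 0.0)

# Re-run the recorded tests after every subsequent change to the code.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

An automated build system would run this suite after every successful compile or on a nightly schedule, reporting any regression immediately.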
Regression testing is an integral part of the extreme programming software
development method. In this method, design documents are replaced by extensive,
repeatable, and automated testing of the entire software package at every stage in
the software development cycle.
Types of regression
• Local - changes introduce new bugs.
• Unmasked - changes unmask previously existing bugs.
• Remote - changes to one part break another part of the program. For example, Module A writes to a database and Module B reads from it; if changes to what Module A writes break Module B, that is remote regression.
Test case
A test case in software engineering is a set of conditions or variables under which a tester will determine whether a requirement or use case of an application is partially or fully satisfied. It may take many test cases to determine that a requirement is fully satisfied.
Test cases are often incorrectly referred to as test scripts. Test scripts are lines of
code used mainly in automation tools.
What characterizes a formal, written test case is that there is a known input and an
expected output, which is worked out before the test is executed. The known input
should test a precondition and the expected output should test a postcondition.
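A formal, written test case with a known input and a worked-out expected output might be recorded as a structured record. The fields below follow the description above; the identifiers and data are invented:

```python
# Sketch of a formal test case record. Field names mirror the usual
# particulars (identifier, objective, preconditions, input, expected result);
# all concrete values are illustrative.
from dataclasses import dataclass

@dataclass
class FormalTestCase:
    identifier: str
    name: str
    objective: str
    preconditions: list   # known state before execution (tests a precondition)
    input_data: dict      # the known input, fixed before the test is run
    steps: list
    expected_result: str  # the expected output (tests a postcondition)

tc = FormalTestCase(
    identifier="TC-LOGIN-001",
    name="Login with valid credentials",
    objective="Verify that a registered user can log in",
    preconditions=["User 'alice' exists and is active"],
    input_data={"username": "alice", "password": "correct-horse"},
    steps=["Open the login page", "Enter the credentials", "Press 'Log in'"],
    expected_result="The user is redirected to the dashboard",
)
print(f"{tc.identifier}: {tc.name}")
```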
Test suite
In software engineering, a test suite, also known as a validation suite, is a collection
of test cases that are intended to be used as input to a software program to show
that it has some specified set of behaviors. Test suites are used to group similar test
cases together. A system might, for example, have a smoke test suite that consists only of smoke tests, or a test suite for some specific functionality in the system.
A test suite often contains detailed instructions or goals for each collection of test
cases and information on the system configuration to be used during testing. A group
of test cases may also contain prerequisite states or steps, and descriptions of the
following tests.
Collections of test cases are sometimes incorrectly termed a test plan. They may also
be called a test script, or even a test scenario.
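The grouping described above can be sketched with Python's unittest module; the test classes, suite names, and placeholder checks are invented for illustration:

```python
# Sketch: grouping related test cases into named suites. A project might
# keep a fast "smoke" suite separate from the full nightly suite.
import unittest

class LoginSmokeTest(unittest.TestCase):
    def test_login_page_loads(self):
        self.assertTrue(True)  # placeholder for a real "does it start?" check

class CheckoutTests(unittest.TestCase):
    def test_cart_total(self):
        self.assertEqual(sum([20, 5]), 25)  # placeholder functional check

def smoke_suite():
    """Only the quick, critical-path tests, run on every build."""
    suite = unittest.TestSuite()
    suite.addTest(LoginSmokeTest("test_login_page_loads"))
    return suite

def full_suite():
    """Everything, grouped for the nightly run."""
    return unittest.TestSuite([
        smoke_suite(),
        unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutTests),
    ])

result = unittest.TextTestRunner(verbosity=0).run(full_suite())
```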
Test script
A test script in Software Testing is a set of instructions that will be performed on the
System Under Test to test that the system functions as expected. These steps can be
executed manually or automatically.
There are various means for executing test scripts.
Automated Testing
Automated testing has a major advantage in that these types of tests may be executed 24/7 without the need for a continuous human presence. Another advantage over manual testing is that it is easily repeatable, and thus is favoured when doing regression testing. This is not always the case, however, as automated tests may be poorly written and can break during playback. Since most systems are designed with human interaction in mind, it is good practice for a human to test the system at some point. It is worth considering automating tests if they are to be executed several times, for example as part of regression testing. However, one should not fall into the trap of spending more time automating a test than it would take to simply execute it manually, unless it is planned to be executed several times.
Use case
A use case is a description of a system's behaviour as it responds to a request that
originates from outside of that system.
The use case technique is used in software and systems engineering to capture the
functional requirements of a system. Use cases describe the interaction between a
primary actor—the initiator of the interaction—and the system itself, represented as
a sequence of simple steps. An actor is someone or something that exists outside the system under study and takes part in a sequence of activities in a dialogue with the system to achieve some goal: actors may be end users, other systems, or hardware devices. Each use case is a complete series of events, described from the point of view of the actor.
Each use case focuses on describing how to achieve a goal or task. For most software
projects this means that multiple, perhaps dozens, of use cases are needed to define
the scope of the new system.
Use cases should not be confused with the features of the system under
consideration. A use case may be related to one or more features, and a feature may be related to one or more use cases.
A use case defines the interactions between external actors and the system under
consideration to accomplish a goal. An actor specifies a role played by a person or
thing when interacting with the system. The same person using the system may be
represented as two different actors because they are playing different roles. For
example, "Joe" could be playing the role of a Customer when using an Automated
Teller Machine to withdraw cash, or playing the role of a Bank Teller when using the
system to restock the cash drawer.
Use cases treat the system as a black box, and the interactions with the system,
including system responses, are perceived as from outside the system. This is a
deliberate policy, because it forces the author to focus on what the system must do,
not how it is to be done, and avoids the trap of making assumptions about how the
functionality will be accomplished.
Use cases may be described at the abstract level (business use case, sometimes called essential use case) or at the system level (system use case); the difference between these is the scope. A use case should:
• Describe what the system shall do for the actor to achieve a particular goal.
• Include no implementation-specific language.
• Be at the appropriate level of detail.
• Not include detail regarding user interfaces and screens. This is done in user-
interface design.
Traceability matrix
In a software development process, a traceability matrix is a table that correlates any two baselined documents that have a many-to-many relationship, in order to determine the completeness of that relationship. It is often used to map high-level requirements (sometimes known as marketing requirements) and detailed requirements of the software product to the matching parts of high-level design, detailed design, the test plan, and test cases.
Common usage is to take the identifier for each of the items of one document and
place them in the left column. The identifiers for the other document are placed
across the top row. When an item in the left column is related to an item across the
top, a mark is placed in the intersecting cell. The number of relationships is then totalled for each row and each column. A zero indicates that no relationship exists, and it must be determined whether one should be created. A large total implies that the item is too complex and should be simplified.
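The row-and-column counting described above can be sketched in a few lines of Python; the requirement and test-case identifiers are invented for illustration:

```python
# Sketch of a requirements-to-test-cases traceability matrix.
# A mark in a cell means the test case covers the requirement.
requirements = ["REQ-1", "REQ-2", "REQ-3"]
test_cases = ["TC-A", "TC-B", "TC-C"]

marks = {("REQ-1", "TC-A"), ("REQ-1", "TC-B"),
         ("REQ-2", "TC-B"), ("REQ-2", "TC-C")}

def row_total(req):
    """How many test cases cover this requirement."""
    return sum(1 for tc in test_cases if (req, tc) in marks)

for req in requirements:
    total = row_total(req)
    note = "  <- zero: missing test, or dead requirement?" if total == 0 else ""
    print(f"{req}: {total}{note}")
```

Here REQ-3 has a zero total, flagging exactly the gap the matrix exists to expose: either a test is missing or the requirement was dropped, and either way it must be investigated.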
Functional requirements
In software engineering, a functional requirement defines a function of a software system or its component. A function is described as a set of inputs, the behavior, and outputs.
Process
A typical functional requirement will contain a unique name and number, a brief
summary, and a rationale. This information is used to help the reader understand
why the requirement is needed, and to track the requirement through the
development of the system.
The core of the requirement is the description of the required behavior, which must
be a clear and readable description of the required behavior. This behavior may
come from organizational or business rules, or it may be discovered through
elicitation sessions with users, stakeholders, and other experts within the
organization. Many requirements will be uncovered during the use case
development. When this happens, the requirements analyst should create a
placeholder requirement with a name and summary, and research the details later,
to be filled in when they are better known.
Software requirements must be clear, correct, unambiguous, specific, and verifiable.
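A placeholder requirement with a unique name and number, summary, and rationale, as described above, might be modelled like this; all field names and values are invented for illustration:

```python
# Sketch of a functional requirement record created during use case
# development; the behaviour is a placeholder to be filled in later.
from dataclasses import dataclass

@dataclass
class FunctionalRequirement:
    number: str     # unique number, used to track it through development
    name: str       # unique name
    summary: str    # brief summary
    rationale: str  # why the requirement is needed
    behaviour: str  # the core: a clear description of the required behaviour

fr = FunctionalRequirement(
    number="FR-017",
    name="Password reset",
    summary="Users can reset a forgotten password",
    rationale="Reduces support calls for locked-out accounts",
    behaviour="TBD - details to be elicited with the support team",
)
print(fr.number, "-", fr.name)
```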
Quality Assurance makes sure the project will be completed based on the previously
agreed specifications, standards and functionality required without defects and
possible problems. It monitors and tries to improve the development process from
the beginning of the project to ensure this. It is oriented to "prevention".
QA is involved in the project from the beginning. This helps the teams communicate
and understand the problems and concerns, also gives time to set up the testing
environment and configuration. On the other hand, actual testing starts after the test
plans are written, reviewed and approved based on the design documentation.
Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable.
Test Plan is a document that describes the objectives, scope, approach, and focus of
a software testing effort.
A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly. A test case
should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.
Good code is code that works according to the requirements, bug free, readable,
expandable in the future and easily maintainable.
In good design, the overall structure is clear, understandable, easily modifiable, and
maintainable. Works correctly when implemented and functionality can be traced
back to customer and end-user requirements.
A good test engineer has the ability to think the unthinkable, a test-to-break attitude, a strong desire for quality, and attention to detail.
What is the Software Life Cycle?
The Software Life Cycle begins when an application is first conceived and ends when
it is no longer in use. It includes aspects such as initial concept, requirements
analysis, functional design, internal design, documentation planning, test planning,
coding, document preparation, integration, testing, maintenance, updates, retesting,
phase-out, and other aspects.
What is Inspection?
The purpose of an inspection is to find defects and problems, mostly in documents such as test plans, specifications, test cases and code. It helps to find problems and report them, but not to fix them. It is one of the most cost-effective methods of assuring software quality. Many people can join an inspection, but normally one moderator, one reader and one note-taker are mandatory.
Test automation is very valuable for long-term and ongoing projects. You can automate some or all of the tests that need to be run repeatedly, or that are difficult to test manually. It saves time and effort, and makes testing possible outside working hours and overnight. Automated tests can be used by different people, and many times in the future. In this way you also standardize the testing process and can depend on the results.
The main problem is communication. Knowing the team members and sharing as much information as possible whenever needed is very valuable for solving problems and concerns. Increasing direct communication as much as possible and setting up meetings also help to reduce miscommunication problems.
· Black box testing - You don't need to know the internal design or have deep knowledge about
the code to conduct this test. It's mainly based on functionality and specifications, requirements.
· White box testing - This test is based on knowledge of the internal design and code. Tests are
based on code statements, coding styles, etc.
· unit testing - the most 'micro' scale of testing; tests particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; it may require developing test driver modules or test harnesses.
· system testing - black-box type testing that is based on overall requirements specifications;
covers all combined parts of a system.
· end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing
of a complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.
· sanity testing or smoke testing - typically an initial testing effort to determine if a new software
version is performing well enough to accept it for a major testing effort. For example, if the new
software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting
databases, the software may not be in a 'sane' enough condition to warrant further testing in its
current state.
· regression testing - re-testing after fixes or modifications of the software or its environment. It
can be difficult to determine how much re-testing is needed, especially near the end of the
development cycle. Automated testing tools can be especially useful for this type of testing.
· acceptance testing - final testing based on specifications of the end-user or customer, or based
on use by end-users/customers over some limited period of time.
· load testing - testing an application under heavy loads, such as testing of a web site under a
range of loads to determine at what point the system's response time degrades or fails.
· stress testing - term often used interchangeably with 'load' and 'performance' testing. Also
used to describe such tests as system functional testing while under unusually heavy loads,
heavy repetition of certain actions or inputs, input of large numerical values, large complex
queries to a database system, etc.
· performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally
'performance' testing (and any other 'type' of testing) is defined in requirements documentation or
QA or Test Plans.
· usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on
the targeted end-user or customer. User interviews, surveys, video recording of user sessions,
and other techniques can be used. Programmers and testers are usually not appropriate as
usability testers.
· recovery testing - testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
· security testing - testing how well the system protects against unauthorized internal or external
access, willful damage, etc; may require sophisticated testing techniques.
· exploratory testing - often taken to mean a creative, informal software test that is not based on
formal test plans or test cases; testers may be learning the software as they test it.
· ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have
significant understanding of the software before testing it.
· alpha testing - testing of an application when development is nearing completion; minor design
changes may still be made as a result of such testing. Typically done by end-users or others, not
by programmers or testers.
· beta testing - testing when development and testing are essentially completed and final bugs
and problems need to be found before final release. Typically done by end-users or others, not by
programmers or testers.
· mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources.
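As a hand-rolled sketch of the idea (not a real mutation-testing tool, which would generate many mutants automatically, typically by rewriting the code), one can seed a single operator change by hand and check whether the existing test data detects it:

```python
# Illustrative mutation-testing sketch. The function, the seeded operator
# change, and the test data are all invented for demonstration.
def original(a, b):
    return a + b

def mutant(a, b):
    return a - b   # the deliberately introduced 'bug': + mutated to -

# The existing test data whose usefulness we want to evaluate.
test_data = [((2, 3), 5), ((0, 0), 0), ((10, -4), 6)]

def kills_mutant(fn):
    """True if at least one test case fails against fn, i.e. the test
    data detects ('kills') the mutant."""
    return any(fn(*args) != expected for args, expected in test_data)

print(kills_mutant(original))  # False: all tests pass on the real code
print(kills_mutant(mutant))    # True: the test data detects this mutant
```

If a mutant survives (no test fails), the test data has a gap: no case distinguishes the mutated behaviour from the correct one.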