
SOFTWARE TESTING

UNIT 1
TESTING CAPABILITIES

Testing is a process used to help identify the correctness, completeness and quality of developed computer software. Even so, testing can never completely establish the correctness of computer software.

There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the probing of the tester. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing connotes the dynamic analysis of the product—putting the product through its paces.

The quality of the application can and normally does vary widely from system to system, but some of the common quality attributes include reliability, stability, portability, maintainability and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and criteria.

Testing helps in verifying and validating that the software is working as it is intended to work. This involves using static and dynamic methodologies to test the application.

Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.

Software Testing Fundamentals

Testing objectives include:

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.

Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects -- it can only show that software defects are present.

When Testing should start:

Testing early in the life cycle reduces the errors. Test deliverables are associated with every phase of development. The goal of the software tester is to find bugs, find them as early as possible, and make sure they are fixed.

The number one cause of software bugs is the specification. There are several reasons specifications are the largest bug producer.

In many instances a spec simply isn't written. Other reasons may be that the spec isn't thorough enough, it's constantly changing, or it's not communicated well to the entire team. Planning software is vitally important; if it's not done correctly, bugs will be created.

The next largest source of bugs is the design. That's where the programmers lay the plan for their software. Compare it to an architect creating the blueprint for a building. Bugs occur here for the same reasons they occur in the specification: the design is rushed, changed, or not well communicated.

Coding errors may be more familiar to you if you are a programmer. Typically these can be traced to software complexity, poor documentation, schedule pressure or just plain dumb mistakes. It's important to note that many bugs that appear on the surface to be programming errors can really be traced to the specification. It's quite common to hear a programmer say, "Oh, so that's what it's supposed to do. If someone had told me that I wouldn't have written the code that way."

The other category is the catch-all for what is left. Some bugs can be blamed on false positives, conditions that were thought to be bugs but really weren't. There may be duplicate bugs, multiple ones that resulted from the same root cause. Some bugs can be traced to testing errors.

Costs: The cost of fixing a bug grows roughly tenfold with each later phase in which it is found -- exponential growth, usually drawn on a logarithmic scale. A bug found and fixed during the early stages when the specification is being written might cost next to nothing, or 10 cents in our example. The same bug, if not found until the software is coded and tested, might cost $1 to $10. If a customer finds it, the cost would easily top $100.
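To make the tenfold growth concrete, here is a minimal Python illustration using the 10-cent baseline from the example above (the phase names are only illustrative):

# Cost of fixing the same bug, growing tenfold per phase (illustrative).
phases = ["specification", "coding", "testing", "release"]
base_cost = 0.10  # dollars, the 10-cent example above

for i, phase in enumerate(phases):
    print(f"{phase:>13}: ${base_cost * 10 ** i:,.2f}")
# specification: $0.10, coding: $1.00, testing: $10.00, release: $100.00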

When to Stop Testing

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:

• Deadlines (release deadlines, testing deadlines)
• Test cases completed with certain percentages passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• The rate at which new bugs are found becomes too small
• Beta or alpha testing period ends
• The risk in the project is under the acceptable limit

Practically, we feel that the decision to stop testing is based on the level of risk acceptable to the management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resources project, risk can be deduced simply from the following measures (a rough sketch follows the list):

• Measuring test coverage
• Number of test cycles
• Number of high-priority bugs
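As a minimal sketch in Python of combining these three measures into a single indicator -- every weight and threshold below is an invented assumption for illustration, not taken from any standard:

# Hypothetical risk indicator built from the three measures above.
def release_risk(coverage_pct, test_cycles, open_high_priority_bugs):
    risk = 0.0
    if coverage_pct < 80:                   # low coverage raises risk
        risk += (80 - coverage_pct) * 0.01
    if test_cycles < 3:                     # too few full test cycles
        risk += (3 - test_cycles) * 0.2
    risk += open_high_priority_bugs * 0.3   # each open high-priority bug adds risk
    return risk

# Example: 72% coverage, 2 test cycles, 1 open high-priority bug.
print(release_risk(72, 2, 1))  # 0.08 + 0.2 + 0.3 = 0.58

Management would then compare such a number against the level of risk it is willing to accept.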
Test Strategy:

How we plan to cover the product so as to develop an adequate assessment of quality.

A good test strategy is:

Specific
Practical
Justified

The purpose of a test strategy is to clarify the major tasks and challenges of the test project.

Test Approach and Test Architecture are other terms commonly used to describe what I'm calling test strategy.

Example of a poorly stated (and probably poorly conceived) test strategy:

"We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification."

Test Strategy: Type of Project, Type of Software, when Testing will occur, Critical Success factors, Tradeoffs.

Test Plan - Why

• Identify Risks and Assumptions up front to reduce surprises later.
• Communicate objectives to all team members.
• Foundation for Test Spec, Test Cases, and ultimately the Bugs we find.

Failing to plan = planning to fail.

Test Plan - What

• Derived from the Test Approach, Requirements, Project Plan, Functional Spec., and Design Spec.
• Details the project-specific Test Approach.
• Lists general (high-level) Test Case areas.
• Includes a testing Risk Assessment.
• Includes a preliminary Test Schedule.
• Lists Resource requirements.

Test Plan

The test strategy identifies the multiple test levels that are going to be performed for the project. Activities at each level must be planned well in advance and have to be formally documented. The individual test levels are then carried out based on these individual plans.

Each level can be described in ETVX terms (Entry, Task, Validation, eXit). Entry means the entry criteria for that phase. For example, for unit testing, coding must be complete; only then can unit testing start. Task is the activity that is performed. Validation is the way in which the progress, correctness and compliance are verified for that phase. Exit gives the completion criteria of that phase, after the validation is done. For example, the exit criterion for unit testing is that all unit test cases must pass.
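As a minimal sketch of how such entry and exit gates could be checked mechanically -- the Phase class and the criteria shown are illustrative assumptions, not part of the ETVX model itself:

# Hypothetical ETVX-style gate check for the unit test phase.
from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    entry_criteria: list = field(default_factory=list)  # callables returning bool
    exit_criteria: list = field(default_factory=list)

    def can_enter(self) -> bool:
        return all(check() for check in self.entry_criteria)

    def can_exit(self) -> bool:
        return all(check() for check in self.exit_criteria)

coding_complete = lambda: True        # assumption: coding is done
all_unit_tests_pass = lambda: False   # assumption: one test still fails

unit_testing = Phase(
    name="unit testing",
    entry_criteria=[coding_complete],
    exit_criteria=[all_unit_tests_pass],
)
print(unit_testing.can_enter())  # True  -> unit testing may start
print(unit_testing.can_exit())   # False -> the phase is not complete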

Unit Test Plan {UTP}

The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it and distributes it to the individual testers. It contains the following sections.

What is to be tested?

The unit test plan must clearly specify the scope of unit testing. Normally the basic input/output of the units, along with their basic functionality, will be tested. Here the input units will mostly be tested for format, alignment, accuracy and totals. The UTP will clearly give the rules of what data types are present in the system, their format and their boundary conditions. This list may not be exhaustive, but it is better to have a complete list of these details.
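For instance, a boundary-condition unit test might look like the sketch below. The validate_amount function and its limits are invented for illustration:

import unittest

# Hypothetical unit under test: accepts amounts from 0.01 to 10000.00.
def validate_amount(amount: float) -> bool:
    return 0.01 <= amount <= 10000.00

class TestAmountBoundaries(unittest.TestCase):
    def test_lower_boundary(self):
        self.assertTrue(validate_amount(0.01))       # exactly on the boundary
        self.assertFalse(validate_amount(0.00))      # just below it

    def test_upper_boundary(self):
        self.assertTrue(validate_amount(10000.00))   # exactly on the boundary
        self.assertFalse(validate_amount(10000.01))  # just above it

if __name__ == "__main__":
    unittest.main()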

Sequence of Testing

The sequence of test activities that are to be carried out in this phase is to be listed in this section. This includes whether to execute positive test cases first or negative test cases first, whether to execute test cases based on priority, whether to execute test cases based on test groups, etc. Positive test cases prove that the system performs what it is supposed to do; negative test cases prove that the system does not perform what it is not supposed to do. Testing of the screens, files, database etc. is to be given in proper sequence.
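Reusing the hypothetical validate_amount unit from the earlier sketch, one positive and two negative test cases could look like this:

# Hypothetical unit under test, repeated so the example is self-contained.
def validate_amount(amount: float) -> bool:
    return 0.01 <= amount <= 10000.00

# Positive test case: the system performs what it is supposed to do.
assert validate_amount(100.00) is True        # a valid amount is accepted

# Negative test cases: the system does not perform what it is not supposed to do.
assert validate_amount(-5.00) is False        # negative amounts are rejected
assert validate_amount(1000000.00) is False   # amounts above the limit are rejected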

Basic Functionality of Units

This section describes how the independent functionalities of the units are tested, excluding any communication between the unit and other units. The interface part is out of scope of this test level. Apart from the above sections, the following sections are addressed, very specific to unit testing.

• Unit Testing Tools
• Priority of Program units
• Naming convention for test cases
• Status reporting mechanism
• Regression test approach
• ETVX criteria

Integration Test Plan

The integration test plan is the overall plan for carrying out the activities at the integration test level. It contains the following sections.

What is to be tested?

This section clearly specifies the kinds of interfaces that fall under the scope of testing: internal and external interfaces, with their requests and responses, are to be explained. This need not go deep into technical details, but the general approach to how the interfaces are triggered should be explained.

Sequence of Integration

When there are multiple modules present in an application, the sequence in which they are to be integrated will be specified in this section. Here the dependencies between the modules play a vital role. If a unit B has to be executed, it may need the data that is fed by unit A and unit X. In this case, units A and X have to be integrated first, and then, using that data, unit B has to be tested. This has to be stated for the whole set of units in the program. Given this correctly, the testing activities will slowly build the product, unit by unit, and then integrate them. A short sketch of deriving such an order from the dependencies appears below.
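One mechanical way to derive the integration sequence is a topological sort of the module dependencies. The example below uses the A/X/B units from the text plus one invented extra module:

from graphlib import TopologicalSorter  # Python 3.9+

# unit -> set of units whose data it depends on (A and X feed B, as above).
dependencies = {
    "B": {"A", "X"},
    "A": set(),
    "X": set(),
    "C": {"B"},  # invented extra module for illustration
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)  # e.g. ['A', 'X', 'B', 'C'] -- integrate in this sequence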

System Test Plan {STP}

The system test plan is the overall plan for carrying out the system test level activities. In the system test, apart from testing the functional aspects of the system, there are some special testing activities carried out, such as stress testing. The following are the sections normally present in a system test plan.

What is to be tested?

This section defines the scope of system testing, very specific to the project. Normally, system testing is based on the requirements. All requirements are to be verified in the scope of system testing. This covers the functionality of the product. Apart from this, whatever special testing is performed is also stated here.

Functional Groups and the Sequence

The requirements can be grouped in terms of functionality. Based on this, there may also be priorities among the functional groups. For example, in a banking application, anything related to customer accounts can be grouped into one area, anything related to inter-branch transactions may be grouped into another area, etc. In the same way, for the product being tested, these areas are to be mentioned here, and the suggested sequence of testing these areas, based on the priorities, is to be described.

Acceptance Test Plan {ATP}

The client performs the acceptance testing at their place. It will be very similar to the system test performed by the Software Development Unit. Since the client is the one who decides the format and testing methods as part of acceptance testing, there is no specific clue on the way they will carry out the testing. But it will not differ much from the system testing. Assume that all the rules which are applicable to the system test can be applied to acceptance testing also.

Since this is just one level of testing done by the client for the overall product, it may include test cases from the unit and integration test levels as well.

A sample Test Plan Outline, along with descriptions, is shown below:

Test Plan Outline

1. BACKGROUND - This item summarizes the functions of the application system and the tests to be performed.
2. INTRODUCTION
3. ASSUMPTIONS - Indicates any anticipated assumptions which will be made while testing the application.
4. TEST ITEMS - List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED - List each of the features (functions or requirements) which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or requirement which won't be tested and why not.
7. APPROACH - Describe the data flows and test philosophy. Simulation or live execution, etc. This section also mentions all the approaches which will be followed at the various stages of the test execution.
8. ITEM PASS/FAIL CRITERIA - Blanket statement or itemized list of expected outputs and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to completion? Under what circumstances may it be resumed in the middle? Establish check-points in long tests.
10. TEST DELIVERABLES - What, besides software, will be delivered? Test report. Test software.
11. TESTING TASKS - Functional tasks (e.g., equipment set up). Administrative tasks.
12. ENVIRONMENTAL NEEDS - Security clearance. Office space & equipment. Hardware/software requirements.
13. RESPONSIBILITIES - Who does the tasks in Section 11? What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18. APPROVALS

The schedule details of the various test passes, such as unit tests, integration tests and system tests, should be clearly mentioned along with the estimated efforts.
Risk Analysis:

A risk is a potential for loss or damage to an organization from materialized threats. Risk analysis attempts to identify all the risks and then quantify the severity of the risks. A threat, as we have seen, is a possible damaging event. If it occurs, it exploits vulnerability in the security of a computer based system.

Risk Identification:

1. Software Risks: Knowledge of the most common risks associated with software development, and the platform you are working on.

2. Business Risks: Knowledge of the most common risks associated with the business using the software.

3. Testing Risks: Knowledge of the most common risks associated with software testing for the platform you are working on, the tools being used, and the test methods being applied.

4. Premature Release Risk: Ability to determine the risk associated with releasing unsatisfactory or untested software products.

5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products and processes; assessing their likelihood; and initiating strategies to test those risks.

Traceability means that you would like to be able to trace back and forth how and where any workproduct fulfills the directions of the preceding (source) product. The matrix deals with the where, while the how you have to do yourself, once you know the where.

Take, e.g., the requirement of User Friendliness (UF). Since UF is a complex concept, it is not solved by just one design solution and it is not solved by one line of code. Many partial design solutions may contribute to this requirement, and many groups of lines of code may contribute to it.

A Requirements-Design Traceability Matrix puts on one side (e.g. the left) the sub-requirements that together are supposed to solve the UF requirement, along with other (sub-)requirements. On the other side (e.g. the top) you specify all design solutions. Now you can mark, at the crosspoints of the matrix, which design solutions solve (more or less) each requirement. If a design solution does not solve any requirement, it should be deleted, as it is of no value.

Having this matrix, you can check whether every requirement has at least one design solution, and by checking the solution(s) you may see whether the requirement is sufficiently solved by the connected design(s).

If you have to change any requirement, you can see which designs
are affected. And if you change any design, you can check which
requirements may be affected and see what the impact is.
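A minimal sketch of such a matrix as a plain data structure, with hypothetical requirement and design names, shows both checks described above:

# Hypothetical requirements-design traceability matrix.
matrix = {
    # requirement -> set of design solutions that (partly) solve it
    "UF-1 consistent menus": {"D-3 menu framework"},
    "UF-2 undo everywhere":  {"D-5 command pattern", "D-3 menu framework"},
    "UF-3 fast response":    set(),  # no design yet -> a gap
}

# Check that every requirement has at least one design solution.
uncovered = [req for req, designs in matrix.items() if not designs]
print("Requirements without a design:", uncovered)

# Impact analysis: which requirements are affected if design D-3 changes?
affected = [req for req, designs in matrix.items()
            if "D-3 menu framework" in designs]
print("Affected by a change to D-3:", affected)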

In a Design-Code Traceability Matrix you can do the same to keep trace of how and which code solves a particular design, and how changes in design or code affect each other.

A traceability matrix:

• Demonstrates that the implemented system meets the user requirements.
• Serves as a single source for tracking purposes.
• Identifies gaps in the design and testing.
• Prevents delays in the project timeline, which can be brought about by having to backtrack to fill the gaps.

Software Testing Life Cycle:

The test development life cycle contains the following components:

Requirements
Use Case Document
Test Plan
Test Case
Test Case execution
Report Analysis
Bug Analysis
Bug Reporting

A use case is a typical interaction scenario from a user's perspective, used for system requirements studies or testing. In other words, it is "an actual or realistic example scenario". A use case describes the use of a system from start to finish. Use cases focus attention on aspects of a system useful to people outside of the system itself.

• Users of a program are called users or clients.
• Users of an enterprise are called customers, suppliers, etc.

Use Case:

A collection of possible scenarios between the system under discussion and external actors, characterized by the goal the primary actor has toward the system's declared responsibilities, showing how the primary actor's goal might be delivered or might fail.

Use cases are goals (use cases and goals are used interchangeably) that are made up of scenarios. Scenarios consist of a sequence of steps to achieve the goal; each step in a scenario is a sub (or mini) goal of the use case. As such, each sub goal represents either another use case (a subordinate use case) or an autonomous action that is at the lowest level desired by our use case decomposition.

This hierarchical relationship is needed to properly model the requirements of a system being developed. A complete use case analysis requires several levels. In addition to the level at which a use case is operating, it is important to understand the scope it is addressing. The level and scope are important to ensure that the language and granularity of scenario steps remain consistent within the use case.

There are two scopes that use cases are written from: Strategic and
System. There are also three levels: Summary, User and Sub-
function.

Scopes: Strategic and System

Strategic Scope:

The goal (use case) is a strategic goal with respect to the system. These goals are goals of value to the organization. The use case shows how the system is used to benefit the organization. These strategic use cases will eventually use some of the same lower level (subordinate) use cases.

System Scope:

Use cases at system scope are bounded by the system under development. The goals represent specific functionality required of the system. The majority of use cases are at system scope. These use cases are often steps in strategic level use cases.

Levels: Summary Goal, User Goal and Sub-function.

Sub-function Level Use Case:

A sub goal or step is below the main level of interest to the user.
Examples are "logging in" and "locate a device in a DB". Always at
System Scope.

User Level Use Case:

This is the level of greatest interest. It represents a user task or elementary business process. A user level goal addresses the question "Does your job performance depend on how many of these you do in a day?" For example, "Create Site View" or "Create New Device" would be user level goals, but "Log In to System" would not. Always at System Scope.

Summary Level Use Case:

Written for either strategic or system scope. They represent collections of user level goals. For example, the summary goal "Configure Data Base" might include, as a step, the user level goal "Add Device to database". Either at System or Strategic Scope.

Test Documentation

Test documentation is a required tool for managing and maintaining the testing process. Documents produced by testers should answer the following questions:

• What to test? Test Plan
• How to test? Test Specification
• What are the results? Test Results Analysis Report

Bug Life cycle:

In entomology (the study of real, living bugs), the term life cycle refers to the various stages that an insect assumes over its life. If you think back to your high school biology class, you will remember that the life cycle stages for most insects are the egg, larvae, pupae and adult. It seems appropriate, given that software problems are also called bugs, that a similar life cycle system is used to identify their stages of life. Figure 18.2 shows an example of the simplest, and most optimal, software bug life cycle.

This example shows that when a bug is found by a software tester, it's logged and assigned to a programmer to be fixed. This state is called the open state. Once the programmer fixes the code, he assigns it back to the tester and the bug enters the resolved state. The tester then performs a regression test to confirm that the bug is indeed fixed and, if it is, closes it out. The bug then enters its final state, the closed state.

In some situations, though, the life cycle gets a bit more complicated.

In this case the life cycle starts out the same, with the tester opening the bug and assigning it to the programmer, but the programmer doesn't fix it. He doesn't think it's bad enough to fix and assigns it to the project manager to decide. The project manager agrees with the programmer and places the bug in the resolved state as a "won't-fix" bug. The tester disagrees, looks for and finds a more obvious and general case that demonstrates the bug, reopens it, and assigns it to the programmer to fix. The programmer fixes the bug, resolves it as fixed, and assigns it to the tester. The tester confirms the fix and closes the bug.

You can see that a bug might undergo numerous changes and iterations over its life, sometimes looping back and starting the life cycle all over again. The figure below takes the simple model above and adds to it the possible decisions, approvals, and looping that can occur in most projects. Of course, every software company and project will have its own system, but this figure is fairly generic and should cover most any bug life cycle that you'll encounter.

The generic life cycle has two additional states and extra connecting lines. The review state is where the project manager or a committee, sometimes called a Change Control Board, decides whether the bug should be fixed. In some projects all bugs go through the review state before they're assigned to the programmer for fixing. In other projects, this may not occur until near the end of the project, or not at all. Notice that the review state can also go directly to the closed state. This happens if the review decides that the bug shouldn't be fixed -- it could be too minor, is really not a problem, or is a testing error. The other new state is deferred. The review may determine that the bug should be considered for fixing at some time in the future, but not for this release of the software.

The additional line from the resolved state back to the open state covers the situation where the tester finds that the bug hasn't been fixed. It gets reopened and the bug's life cycle repeats.

The two dotted lines that loop from the closed and the deferred states back to the open state rarely occur, but are important enough to mention. Since a tester never gives up, it's possible that a bug that was thought to be fixed, tested and closed could reappear. Such bugs are often called regressions. It's possible that a deferred bug could later be proven serious enough to fix immediately. If either of these occurs, the bug is reopened and started through the process again. Most project teams adopt rules for who can change the state of a bug or assign it to someone else. For example, maybe only the project manager can decide to defer a bug, or only a tester is permitted to close a bug. What's important is that once you log a bug, you follow it through its life cycle, don't lose track of it, and provide the necessary information to drive it to being fixed and closed.
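The states and transitions described above form a small state machine. The sketch below encodes one reading of the generic life cycle; the exact transition table is illustrative, since every project adopts its own rules:

# Allowed bug state transitions, read from the generic life cycle above.
TRANSITIONS = {
    "open":     {"review", "resolved"},          # to review, or straight to a fix
    "review":   {"open", "closed", "deferred"},  # fix it, reject it, or postpone it
    "resolved": {"closed", "open"},              # verified fixed, or reopened
    "closed":   {"open"},                        # regression: the bug reappears
    "deferred": {"open"},                        # a deferred bug becomes urgent
}

class Bug:
    def __init__(self, title: str):
        self.title = title
        self.state = "open"

    def move_to(self, new_state: str) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

bug = Bug("crash when saving an empty file")
bug.move_to("resolved")  # programmer fixes it
bug.move_to("closed")    # tester confirms the fix
bug.move_to("open")      # regression: the bug reappears later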

Bug Report - Why

• Communicate bug for reproducibility, resolution, and regression.
• Track bug status (open, resolved, closed).
• Ensure bug is not forgotten, lost or ignored.

Can also be used to retroactively create a test case where none existed before.
