
DYNAMIC TESTING TECHNIQUES

Delegate Notes

Session 4, Version 3X
Xansa 2015

The need for testing techniques

Why dynamic test techniques?

- Exhaustive testing (use of all possible inputs and conditions) is impractical
  - Must use a subset of all possible test cases
  - Must have high probability of detecting faults
- Need thought processes that help us select test cases more intelligently
  - Test case design techniques are such thought processes

In Session 1 we explained that testing everything is known as exhaustive testing
(defined as exercising every combination of inputs and preconditions) and
demonstrated that it is an impractical goal. Therefore, as we cannot test everything,
we have to select a subset of all possible tests. In practice the subset we select is a
very tiny subset, and yet it has to have a high probability of finding most of the
faults in a system.
Experience and experiments have shown us that selecting a subset at random is
neither very effective nor very efficient (even if it is tool supported). We have to
select tests using some intelligent thought process. Test techniques are such thought
processes.
What is a testing technique?
A testing technique is a thought process that helps us select a good set of tests from
the total number of all possible tests for a given system. Different techniques offer
different ways of looking at the software under test, possibly challenging assumptions
made about it. Each technique provides a set of rules or guidelines for the tester to
follow in identifying test conditions and test cases. They are based on either a
behavioural or a structural model of the system. In other words, they are based on an
understanding of the system's behaviour (functions and non-functional attributes such
as performance or ease of use - what the system does) or its structure - how it does it.
There are a lot of different testing techniques and those that have been published have
been found to be successful at identifying tests that find faults. The use of testing
techniques is 'best practice' though they should not be used to the exclusion of any
other approach.

What is a testing technique?

- A procedure for selecting or designing tests
  - Based on a structural or functional model of the software
  - Successful at finding faults
  - 'Best' practice
  - A way of deriving good test cases
  - A way of objectively measuring a test effort
- Testing should be rigorous, thorough and systematic

Put simply, a testing technique is a means of identifying good tests. Recall from
Session 1 that a good test case is four things:
- effective: has potential to find faults;
- exemplary: represents other test cases;
- evolvable: easy to maintain;
- economic: doesn't cost much to use.
Advantages of Techniques

Advantages of techniques

- Different people: similar probability of finding faults
  - Gain some independence of thought
- Effective testing: find more faults
  - Focus attention on specific types of fault
  - Know you're testing the right thing
- Efficient testing: find faults with less effort
  - Avoid duplication
  - Systematic techniques are measurable
- Techniques make testing more effective and more efficient

Different people using the same technique on the same system will almost certainly
arrive at different test cases but they will have a similar probability of finding faults.
This is because the technique will guide them into having a similar or the same view
of the system and to make similar or the same assumptions.


Using a technique also gives the tester some independence of thought. Developers
who apply test techniques may find faults that they would not have found if they had
only tested intuitively.

Using techniques makes testing more effective

This means that more faults will be found. Because a technique
focuses on a particular type of fault it becomes more likely that the tests will find
more of that type of fault. By selecting appropriate testing techniques it is possible to
control more accurately what is being tested and so reduce the chance of overlap
between different test cases. You also have more confidence that you are testing the
things that are most in need of being tested.

Using techniques makes testing more efficient

This means that faults will be found with less effort. Given two sets of tests that find
the same faults in the same system, if one set costs 25% less to produce then this is
better even though they both find the same faults. Good testing not only maximises
effectiveness but also maximises efficiency, i.e. minimises cost.

Measurement

- Objective assessment of thoroughness of testing (with respect to use of each technique)
- Useful for comparison of one test effort to another, e.g.:

  Coverage measure         Project A   Project B
  Equivalence partitions   60%         40%
  Boundaries               50%         45%
  Branches                 75%         60%

Systematic techniques are measurable, meaning that it is possible to quantify the
extent of their use. This makes it possible to gain an objective assessment of the
thoroughness of testing with respect to the use of each testing technique, which is
useful for comparing one test effort to another and for providing confidence in the
adequacy of testing.
Types of Testing Technique

Three types of systematic technique

- Static (non-execution): examination of documentation, source code listings, etc.
- Functional (black box): based on behaviour / functionality of software
- Structural (white box): based on structure of software

There are many different types of software testing technique, each with its own
strengths and weaknesses. Each individual technique is good at finding particular
types of fault and relatively poor at finding other types. For example, a technique that
explores the upper and lower limits of a single input range is more likely to find
boundary value faults than faults associated with combinations of inputs. Similarly,
testing performed at different stages in the software development life cycle is going to
find different types of faults; component testing is more likely to find coding faults
than system design faults.


Each testing technique falls into one of a number of different categories. Broadly
speaking there are two main categories, static and dynamic. However, dynamic
techniques are subdivided into two more categories: behavioural (black box) and
structural (white box). Behavioural techniques can be further subdivided into
functional and non-functional techniques. Each of these is summarised below.
Static Testing Techniques

As we saw in Session 3, static testing techniques do not execute the code being
examined, and are generally used before any tests are executed on the software. They
could be called non-execution techniques. Most static testing techniques can be used
to test any form of document including source code, design, functional and
requirement specifications. However, static analysis is a tool-supported version that
concentrates on testing formal languages and so is most often used to statically test
source code.
Functional Testing Techniques (Black Box)

Functional testing techniques are also known as black-box and input / output-driven
testing techniques because they view the software as a black box with inputs and
outputs, but have no knowledge of how it is structured inside the box. In essence, the
tester is concentrating on the function of the software, that is, what it does, not how it
does it.
Structural Testing Techniques (White Box)

Structural testing techniques use the internal structure of the software to derive test
cases. They are commonly called white-box or glass-box techniques (implying
you can see into the system) since they require knowledge of how the software is
implemented, that is, how it works. For example, a structural technique may be
concerned with exercising loops in the software. Different test cases may be derived
to exercise the loop once, twice, and many times. This may be done regardless of the
functionality of the software.


Some test techniques

- Static: reviews (inspection, walkthroughs, desk-checking), static analysis, etc.
- Dynamic:
  - Functional: equivalence partitioning, boundary value analysis,
    cause-effect graphing, syntax, random, state transition
  - Structural: statement, branch/decision, data flow, branch condition,
    branch condition combination, MCDC, LCSAJ

Non-Functional Testing Techniques
Non-functional aspects of a system (also known as quality aspects) include
performance, usability, portability, maintainability, etc. This category of technique is
concerned with examining how well the system does something, not what it does or
how it does it. Techniques to test these non-functional aspects are less procedural and
less formalised than those of other categories, as the actual tests are more dependent
on the type of system, what it does and the resources available for the tests. How to
specify non-functional tests is outside the scope of the syllabus for this course, but an
approach to doing so is outlined in the supplementary section at the back of these
notes. The approach uses quality attribute templates, a technique from Tom Gilb's
book Principles of Software Engineering Management, Addison-Wesley, 1988.
Non-functional testing at system level is part of the syllabus and was covered in
Session 2 (but techniques for deriving non-functional tests are not covered).


Black Box versus White Box

Black box versus white box?

- Black box appropriate at all levels but dominates higher levels of testing
- White box used predominantly at lower levels to complement black box

(Diagram: the test levels Acceptance, System, Integration and Component, with
black box dominating at the top and white box at the bottom.)

Black box techniques are appropriate at all stages of testing (Component Testing
through to User Acceptance Testing). While individual components form part of the
structure of a system, when performing Component Testing it is possible to view the
component itself as a black box, that is, design test cases based on its functionality
without regard for its structure. Similarly, white box techniques can be used at all
stages of testing but are typically used most predominantly at Component and
Integration Testing in the Small.


Black Box Test Techniques

Black Box Design & Measurement Techniques

Techniques defined in BS 7925-2 (the slide also marks which of these are
measurement techniques):

- Equivalence partitioning
- Boundary value analysis
- State transition testing
- Cause-effect graphing
- Syntax testing
- Random testing

Also defines how to specify other techniques.

Techniques Defined in BS 7925-2
The Software Component Testing Standard BS7925-2 defines the following black-box
testing techniques:
Equivalence Partitioning;
Boundary Value Analysis;
State Transition Testing;
Cause-Effect Graphing;
Syntax Testing;
Random Testing.
The standard also defines how other techniques can be specified. This is important
since it means that anyone wishing to conform to the Software Component Testing
Standard is not restricted to using the techniques that the standard defines.
Equivalence Partitioning & Boundary Value Analysis

Equivalence partitioning
Equivalence Partitioning is a good all-round functional black-box technique. It can be
applied at any level of testing and is often a good technique to use first. It is a
common sense approach to testing, so much so that most testers practise it informally
even though they may not realise it. However, while it is better to use the technique
informally than not at all, it is much better to use the technique in a formal way to
attain the full benefits that it can deliver.



The idea behind the technique is to divide or partition a set of test conditions into
groups or sets that can be considered the same or equivalent, hence 'equivalence
partitioning'. Equivalence partitions are also known as equivalence classes, the two
terms mean exactly the same thing.
The benefit of doing this is that we need test only one condition from each partition.
This is because we are assuming that all the conditions in one partition will be treated
in the same way by the software. If one condition in a partition works, we assume all
of the conditions in that partition will work and so there is no point in testing any of
these others. Conversely, if one of the conditions in a partition does not work, then
we assume that none of the conditions in that partition will work so again there is no
point in testing any more in that partition. Of course these are simplifying
assumptions that may not always be right but writing them down at least gives others
the chance to challenge the assumptions and hopefully help identify more accurate
equivalence partitions.

Equivalence partitioning (EP)

- Divide (partition) the inputs, outputs, etc. into areas which are the same (equivalent)
- Assumption: if one value works, all will work
- One from each partition is better than all from one

(Diagram: an input valid from 0 to 100, giving an invalid partition below 0, a
valid partition from 0 to 100, and an invalid partition from 101 upwards.)

For example, a savings account in a bank earns a different rate of interest depending
on the balance in the account. In order to test the software that calculates the interest
due we can identify the ranges of balance values that each earns a different rate of
interest. For example, if a balance in the range 0 to 100 has a 3% interest rate, a
balance between 100 and 1,000 has a 5% interest rate, and balances of 1,000 and
over have a 7% interest rate, we would initially identify three equivalence partitions:
0 - 100, 100.01 - 999.99, and 1,000 and above. When designing the test cases
for this software we would ensure that these three equivalence partitions were each
covered once. So we might choose to calculate the interest on balances of 50, 260
and 1,348. Had we not identified these partitions it is possible that at least one
of them could have been missed at the expense of testing another one several times
over (such as with the balances of 30, 140, 250, and 400).
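
The partition-based selection above can be sketched in code. This is a minimal
illustration, assuming a hypothetical interest_rate function standing in for the
software under test; the rates and partition values are those from the example.

# A minimal sketch of equivalence partitioning for the savings account
# example. interest_rate is a hypothetical stand-in for the code under test.

def interest_rate(balance):
    """Return the interest rate for a given account balance."""
    if balance <= 100:
        return 0.03   # partition 1: 0 - 100
    elif balance < 1000:
        return 0.05   # partition 2: 100.01 - 999.99
    else:
        return 0.07   # partition 3: 1,000 and above

# One representative value per partition is enough under the EP assumption.
partition_tests = [
    (50, 0.03),     # mid-partition value for 0 - 100
    (260, 0.05),    # mid-partition value for 100.01 - 999.99
    (1348, 0.07),   # mid-partition value for 1,000 and above
]

for balance, expected in partition_tests:
    actual = interest_rate(balance)
    assert actual == expected, f"balance {balance}: got {actual}, expected {expected}"
print("all equivalence partition tests passed")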


Boundary value analysis

Boundary value analysis (BVA)

- Faults tend to lurk near boundaries
  - Good place to look for faults
- Test values on both sides of boundaries

(Diagram: for the 0 to 100 example, boundary values at 0 and 100 in the valid
partition and at 101 in the upper invalid partition.)

Boundary value analysis is based on testing on and around the boundaries between
partitions. If you have done "range checking", you were probably using the boundary
value analysis technique, even if you weren't aware of it. Note that we have both
valid boundaries (in the valid partitions) and invalid boundaries (in the invalid
partitions).
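
As a minimal sketch, assuming the same 0 to 100 valid range as the diagram and a
hypothetical accept function standing in for the range check under test:

# A minimal sketch of boundary value analysis for an input that is valid
# in the range 0 to 100. accept is a hypothetical stand-in for the check.

def accept(value):
    """Return True if the value is within the valid range 0 to 100."""
    return 0 <= value <= 100

# Test values on both sides of each boundary:
boundary_tests = [
    (-1, False),   # invalid boundary below the valid partition
    (0, True),     # valid lower boundary
    (100, True),   # valid upper boundary
    (101, False),  # invalid boundary above the valid partition
]

for value, expected in boundary_tests:
    assert accept(value) == expected, f"boundary value {value} misclassified"
print("all boundary value tests passed")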


Examples of Boundary Value Analysis and Equivalence Partitioning

Example: loan application

Inputs:
- Customer name: 2-64 chars.
- Account number: 4 digits, 1st non-zero
- Loan amount requested: 200 to 5000
- Term of loan: 1 to 10 years
- Monthly repayment: minimum 10

Outputs: term, repayment, interest rate, total paid back.

Customer name

- Number of characters: < 2 (invalid), 2 to 64 (valid), > 64 (invalid)
- Valid characters: A-Z, a-z and space; any other character is invalid

Condition       Valid partitions   Invalid partitions   Valid boundaries   Invalid boundaries
Customer name   2 to 64 chars;     < 2 chars;           2 chars;           1 char;
                valid chars        > 64 chars;          64 chars           65 chars;
                                   invalid chars                           0 chars



Account number

- First character: non-zero (valid), zero (invalid)
- Number of digits: 4 (valid), fewer or more than 4 (invalid)

Condition        Valid partitions   Invalid partitions   Valid boundaries   Invalid boundaries
Account number   4 digits,          < 4 digits;          1000;              3 digits;
                 1st non-zero       > 4 digits;          9999               5 digits;
                                    1st digit = 0;                          0 digits
                                    non-digit



Loan amount

- 199 (invalid) | 200 to 5000 (valid) | 5001 (invalid)

Condition     Valid partitions   Invalid partitions   Valid boundaries   Invalid boundaries
Loan amount   200 - 5000         < 200;               200;               199;
                                 > 5000;              5000               5001
                                 0;
                                 non-numeric;
                                 null


Condition template

Condition   Valid          Tag   Invalid         Tag   Valid        Tag   Invalid      Tag
            partitions           partitions            boundaries         boundaries
Customer    2 - 64 chars   V1    < 2 chars       X1    2 chars      B1    1 char       D1
name        valid chars    V2    > 64 chars      X2    64 chars     B2    65 chars     D2
                                 invalid char    X3                       0 chars      D3
Account     4 digits       V3    < 4 digits      X4    1000         B3    3 digits     D4
number      1st non-zero   V4    > 4 digits      X5    9999         B4    5 digits     D5
                                 1st digit = 0   X6                       0 digits     D6
                                 non-digit       X7
Loan        200 - 5000     V5    < 200           X8    200          B5    199          D7
amount                           > 5000          X9    5000         B6    5001         D8
                                 0               X10
                                 non-integer     X11
                                 null            X12

Design Test Cases

Design test cases

Test case 1 - new tags covered: V1, V2, V3, V4, V5, ...
  Description:      Name: John Smith; Acc no: 1234; Loan: 2500; Term: 3 years
  Expected outcome: Term: 3 years; Repayment: 79.86; Interest rate: 10%;
                    Total paid: 2874.96

Test case 2 - new tags covered: B1, B3, B5, ...
  Description:      Name: AB; Acc no: 1000; Loan: 200; Term: 1 year
  Expected outcome: Term: 1 year; Repayment: 17.92; Interest rate: 7.5%;
                    Total paid: 215.00

Having identified the conditions that you wish to test, the next step is to design the
test cases. The more test conditions that can be covered in a single test case, the fewer
the test cases that are needed.



Generally, each test case for invalid conditions should cover only one condition. This
is because programs typically stop processing input as soon as they encounter the first
fault. However, if it is known that the software under test is required to process all
input regardless of its validity it is sensible to continue as before and design test cases
that cover as many invalid conditions in one go as possible. In either case, there
should be separate test cases covering valid and invalid conditions.
The test cases to cover the boundary conditions are done in a similar way.
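
A sketch of these design rules, using tags from the loan application condition
template; the valid test data comes from the slides, while the invalid inputs
are illustrative assumptions.

# A minimal sketch of the test case design rules above, using the tags
# from the loan application condition template.

# One valid test case can cover many valid conditions at once:
valid_case = {
    "name": "John Smith",   # covers V1 (2-64 chars) and V2 (valid chars)
    "account": "1234",      # covers V3 (4 digits) and V4 (1st non-zero)
    "loan": 2500,           # covers V5 (200 - 5000)
}

# Each invalid test case covers only ONE invalid condition, because
# processing typically stops at the first fault it encounters:
invalid_cases = [
    {"name": "A", "account": "1234", "loan": 2500},           # X1: < 2 chars
    {"name": "John Smith", "account": "0234", "loan": 2500},  # X6: 1st digit = 0
    {"name": "John Smith", "account": "1234", "loan": 100},   # X8: < 200
]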
Why do both EP and BVA?

Why do both EP and BVA?

- If you do boundaries only, you have covered all the partitions as well
  - Technically correct and may be OK if everything works correctly!
  - If the test fails, is the whole partition wrong, or is a boundary in the
    wrong place? Have to test mid-partition anyway
- Testing only extremes may not give confidence for typical use scenarios
  (especially for users)
- Boundaries may be harder (more costly) to set up

Technically, because every boundary is in some partition, if you did only boundary
value analysis (BVA) you would also have tested every equivalence partition (EP).
However, this approach causes problems when a value fails: was it only the
boundary value that failed, or did the whole partition fail? Also, by testing only
boundaries we would probably not give the users much confidence, as we are
using extreme values rather than normal values. The boundaries may be more
difficult (and therefore more costly) to set up as well.
We recommend that you test the partitions separately from boundaries - this means
choosing partition values that are NOT boundary values.


Test objectives?

(The condition template gives, for each condition: valid partitions, invalid
partitions, valid boundaries and invalid boundaries, each with a tag.)

- For a thorough approach: VP, IP, VB, IB
- Under time pressure, depends on your test objective:
  - minimal user-confidence: VP only?
  - maximum fault finding: VB first (plus IB?)

What partitions and boundaries you exercise and which first depends on your
objectives. If your goal is the most thorough approach, then follow the traditional
approach and test valid partitions, then invalid partitions, then valid boundaries and
finally invalid boundaries. However if you are under time pressure and cannot test
everything (and who isn't), then your objective will help you decide what to test. If
you are after user confidence with minimum tests, you may do valid partitions only.
If you want to find as many faults as possible as quickly as possible, you may start
with invalid boundaries.


State Transition Testing

State transition testing (analysis)

Uses a model that shows:
- states the software may occupy
- transitions between the states
- events which cause the transitions
- actions that result from the transitions

(Diagram: an ATM fragment with states "Wait for card" and "Wait for PIN".
Card inserted: ask for PIN, move to "Wait for PIN". Invalid PIN: beep, stay
in "Wait for PIN". Cancel: return card, back to "Wait for card". Valid PIN:
ask amount.)

This technique is not specifically required by the ISEB syllabus, although we do need
to cover one additional black box technique. Therefore the details of this technique
should not be in the exam; only an awareness of another functional technique is required.
State transition testing is used where some aspect of the system can be described in
what is called a finite state machine. This simply means that the system can be in a
(finite) number of different states, and the transitions from one state to another are
determined by the rules of the machine. This is the model on which the system and
the tests are based.
Any system where you get a different output for the same input, depending on what
has happened before, is a finite state system. For example, if you request to withdraw
100 from a bank ATM, you may be given cash. Later you may make exactly the
same request but be refused the money (because your balance is insufficient). This
later refusal is because the state of your bank account had changed from having
sufficient funds to cover the withdrawal to having insufficient funds. The transaction
that caused your account to change its state was probably the earlier withdrawal.
Another example is a word processor. If a document is open, you are able to Close it.
If no document is open, then Close is not available. After you choose Close once,
you cannot choose it again for the same document unless you open that document. A
document thus has two states: open and closed.
A state transition model has four basic parts (a minimal sketch follows this list):
- the states that the software may occupy (open/closed or funded/insufficient funds);
- the transitions from one state to another (not all transitions are allowed);
- the events that cause a transition (withdrawing money, closing a file);
- the actions that result from a transition (an error message, or being given
  your cash).
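
A minimal sketch of such a model in code, using the ATM card/PIN example from the
slide; the dictionary encoding is an assumption, while the states, events and
actions are those shown in the diagram.

# A minimal sketch of a state transition model for the ATM example:
# each (state, event) pair maps to the resulting action and new state.

transitions = {
    ("wait for card", "card inserted"): ("ask for PIN", "wait for PIN"),
    ("wait for PIN", "invalid PIN"): ("beep", "wait for PIN"),
    ("wait for PIN", "valid PIN"): ("ask amount", "select transaction"),
    ("wait for PIN", "cancel"): ("return card", "wait for card"),
}

def step(state, event):
    """Apply an event to a state; unknown pairs are invalid transitions."""
    if (state, event) not in transitions:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")
    return transitions[(state, event)]

# Example: a valid PIN entered after an invalid one.
state = "wait for card"
for event in ["card inserted", "invalid PIN", "valid PIN"]:
    action, state = step(state, event)
    print(f"{event} -> action: {action}, new state: {state}")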


Note that a transition does not need to move to a different state; it can return to
the same state. For example, an invalid input would be likely to produce an error
message as the action, but the transition would be back to the same state the system
was in before.
Deriving test cases from the state transition model is a black box approach.
Measuring how much you have tested (covered) is getting close to a white box
perspective. However, state transition testing is generally regarded as a black box
technique.

State transition testing (design)

- Test cases designed to achieve required coverage:
  - State transitions (0-switch)
  - Transition pairs (1-switch)
  - Transition triples (2-switch)
  - Etc.
- Limitation of switch coverage: covers only valid transitions
- A more complete test set will test for possible invalid transitions
  - Use a state table to identify invalid transitions

You can design tests to test every transition shown in the model. If every (valid)
transition is tested, this is known as 0-switch coverage. You could also test a series
of transitions through more than one state. If you covered all of the pairs of two valid
transitions, you would have 1-switch coverage, covering the sets of 3 transitions
would give 2-switch coverage, etc.
However, deriving tests only from the model may omit the negative tests, where we
could try to generate invalid transitions. In order to see the total number of
combinations of states and transitions, both valid and invalid, a state table can be
used.
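
As a sketch of how this can be mechanised (an illustrative encoding, not an
algorithm from the standard), the valid transition table yields the 0-switch and
1-switch items, and the full state table view yields the invalid transitions:

# A minimal sketch of deriving switch coverage items and invalid
# transitions from a transition table (reusing the ATM example).

states = ["wait for card", "wait for PIN"]
events = ["card inserted", "invalid PIN", "valid PIN", "cancel"]

# (state, event) -> new state; "valid PIN" leads out of this fragment.
valid = {
    ("wait for card", "card inserted"): "wait for PIN",
    ("wait for PIN", "invalid PIN"): "wait for PIN",
    ("wait for PIN", "valid PIN"): "select transaction",
    ("wait for PIN", "cancel"): "wait for card",
}

# 0-switch coverage: every single valid transition.
zero_switch = list(valid)

# 1-switch coverage: every pair of consecutive valid transitions.
one_switch = [
    (t1, t2)
    for t1 in valid
    for t2 in valid
    if valid[t1] == t2[0]  # second transition starts where the first ends
]

# State table view: every (state, event) pair NOT in the valid table
# is a candidate negative (invalid transition) test.
invalid = [(s, e) for s in states for e in events if (s, e) not in valid]

print("0-switch items:", len(zero_switch))
print("1-switch items:", len(one_switch))
print("invalid transitions to try:", invalid)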


White Box Test Techniques

Design and Measurement Techniques

Techniques defined in BS 7925-2 (the slide also marks which of these are
measurement techniques):

- Statement testing
- Branch / decision testing
- Data flow testing
- Branch condition testing
- Branch condition combination testing
- Modified condition decision testing
- LCSAJ testing

Also defines how to specify other techniques.

White box techniques are normally used after an initial set of tests has been derived
using black box techniques. They are most often used to measure "coverage" - how
much of the structure has been exercised or covered by a set of tests.
Coverage measurement is best done using tools, and there are a number of such tools
on the market. These tools can help to increase productivity and quality. They
increase quality by ensuring that more structural aspects are tested, so faults on those
structural paths can be found. They increase productivity and efficiency by
highlighting tests that may be redundant, i.e. testing the same structure as other tests
(although this is not necessarily a bad thing!).


What are Coverage Techniques?

Using structural coverage

(Diagram: tests are derived from the spec and run against the software; the
results are checked and the coverage measured. If coverage is not OK, more
tests are added, using stronger structural techniques (different structural
elements) to increase coverage, until there are enough tests.)

Coverage techniques serve two purposes: test measurement and test case design.
They are often used in the first instance to assess the amount of testing performed by
tests derived from functional techniques. They are then used to design additional tests
with the aim of increasing the test coverage.
Coverage techniques are a good way of generating additional test cases that are
different from existing tests and in any case they help ensure breadth of testing in the
sense that test cases that achieve 100% coverage in any measure will be exercising all
parts of the software.

The test coverage trap

(Diagram: a graph of functional testedness against structural testedness
(% statement, % decision, % condition combination). Tests can exercise the
function with insufficient structure, or the structure with insufficient
function; better testing improves both.)

- Coverage is not thoroughness
- 100% coverage does not mean 100% tested!



There is also danger in these techniques. 100% coverage does not mean 100% tested.
Coverage techniques measure only one dimension of a multi-dimensional concept. Two
different test cases may achieve exactly the same coverage but the input data of one
may find an error that the input data of the other doesn't. Furthermore, coverage
techniques measure coverage of the software code that has been written, they cannot
say anything about the software that has not been written. If a function has not been
implemented, only functional testing techniques will reveal the omission.
In common with all structural testing techniques, coverage techniques are best used on
areas of software code where more thorough testing is required. Safety critical code,
code that is vital to the correct operation of a system, and complex pieces of code are
all examples of where structural techniques are particularly worth applying. They
should always be used in addition to functional testing techniques rather than as an
alternative to them.
Types of Coverage
Test coverage can be measured based on a number of different structural elements in
software. The simplest of these is statement coverage which measures the number of
executable statements executed by a set of tests and is usually expressed in terms of
the percentage of all executable statements in the software under test. In fact, all
coverage techniques yield a result which is the number of elements covered expressed
as a percentage of the total number of elements.
Statement coverage is the simplest and perhaps the weakest of all coverage
techniques. The adjectives 'weak' and 'strong' applied to coverage techniques refer to
their likelihood of finding errors. The stronger a technique, the more errors you can
expect to find with test cases designed using that technique with the same measure of
coverage.
There are a lot of structural elements that can be used for coverage. Each technique
uses a different element; the most popular are described in later sections.
Besides statement coverage, there are a number of different types of control flow
coverage techniques most of which are tool supported. These include branch or
decision coverage, LCSAJ (linear code sequence and jump) coverage, condition
coverage and condition combination coverage. Any representation of a system is in
effect a model against which coverage may be assessed. Call tree coverage is another
example for which tool support is commercially available.
Another popular, but often misunderstood, coverage measure is path coverage. Path
coverage is usually taken to mean branch or decision coverage because both these
techniques seek to cover 100% of the paths through the code. However, strictly
speaking for any code that contains a loop, path coverage is impossible since a path
that travels round the loop say 3 times is different from the path that travels round the
same loop 4 times. This is true even if the rest of the paths are identical. So if it is
possible to travel round the loop an unlimited number of times then there are an
unlimited number of paths through that piece of code. For this reason it is more
correct to talk about independent path segment coverage though the shorter term
path coverage is frequently used.
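
To make this concrete, here is a small illustrative fragment (not from the course
material) in which each different number of loop iterations is a distinct path:

# A minimal sketch showing why full path coverage is impossible with
# loops: every different number of iterations is a different path.

def sum_until_zero(values):
    total = 0
    for v in values:      # the loop: one path per number of iterations
        if v == 0:
            break         # an early exit multiplies the paths further
        total += v
    return total

# Each of these inputs drives a distinct path through the same code:
# 0 iterations, 1 iteration, 3 iterations, and 3 iterations ending in a
# break. With unbounded input length the number of paths is unbounded.
for data in [[], [5], [1, 2, 3], [1, 2, 0, 9]]:
    print(data, "->", sum_until_zero(data))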



There is currently very little tool support available for data flow coverage techniques,
though tool support is growing. Data flow coverage techniques include definitions,
uses, and definition-use pairs.
Other, more specific, coverage measures include things like database structural
elements (records, fields, and sub-fields) and files. State transition coverage is also
possible. It is worth checking for any new tools, as the test tool market can develop
quite rapidly.
How to Measure Coverage
For most practical purposes coverage measurement is something that requires tool
support. However, knowledge of steps needed to measure coverage is useful in
understanding the relative merits of each technique.
1. Decide on the structural element to be used.
2. Count the structural elements.
3. Instrument the code.
4. Run the tests for which the coverage measure is required.
5. Using the output from the instrumentation, determine the percentage of
   elements exercised.

Instrumenting the code (step 3) means inserting code alongside each structural
element in order to record that the associated structural element has been exercised.
Determining the actual coverage measure (step 5) is then a matter of analysing the
recorded information.
When a specific coverage measure is required or desired but not attained, additional
test cases have to be designed with the aim of exercising some or all of the structural
elements not yet reached. These are then run through the instrumented code and a
new coverage measure determined. This is repeated until the required coverage
measure is achieved.
Finally, the instrumentation should be removed. In practice the instrumentation
should be applied to a copy of the source code, so that the copy can simply be
discarded once coverage measurement is finished. This avoids any errors that could
be made when removing instrumentation. In any case, all the tests ought to be re-run
on the un-instrumented code.
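
As a minimal sketch of steps 3 to 5, Python's built-in trace hook can stand in for
source instrumentation; real coverage tools instrument and count far more
carefully, so this only illustrates the principle.

# A minimal sketch of measuring statement (line) coverage using
# Python's built-in trace hook in place of source instrumentation.
import sys

def program(a):                     # the code under test
    b = 0
    if a > 6:
        b = a
    return b

executed = set()

def tracer(frame, event, arg):
    # Record each line of `program` as it is executed (step 4).
    if event == "line" and frame.f_code is program.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
program(7)                          # run the test
sys.settrace(None)

# Step 5: executed lines as a percentage of the program's lines.
total_lines = 4                     # executable lines in `program`, counted by hand
coverage = 100 * len(executed) / total_lines
print(f"statement coverage: {coverage:.0f}%")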


Statement Coverage

Statement coverage

- Percentage of executable statements exercised by a test suite:

  statement coverage = (number of statements exercised /
                        total number of statements) x 100%

- Example: a program has 100 statements; tests exercise 87 statements;
  statement coverage = 87%
- Statement coverage is normally measured by a software tool
- Typical ad hoc testing achieves 60 - 75%

Statement coverage is the number of executable statements exercised by a test or test
suite, expressed as a percentage. It is calculated by:

  statement coverage = (number of statements exercised /
                        total number of statements) x 100%

Typical ad hoc testing achieves 60% to 75% statement coverage.

Example of statement coverage

  1  read(a)
  2  IF a > 6 THEN
  3    b = a
  4  ENDIF
  5  print b

(Slide table: test case, input, expected output. Any input where a > 6
exercises all the statements.)

As all 5 statements are covered by this test case, we have achieved 100%
statement coverage.


Branch & Decision Testing / Coverage

Decision coverage (branch coverage)

- Percentage of decision outcomes exercised by a test suite:

  decision coverage = (number of decision outcomes exercised /
                       total number of decision outcomes) x 100%

- Example: a program has 120 decision outcomes; tests exercise 60 decision
  outcomes; decision coverage = 50%
- Decision coverage is normally measured by a software tool
- Typical ad hoc testing achieves 40 - 60%

Branch coverage is the number of decision outcomes exercised by a test or test
suite, expressed as a percentage. It is calculated by:

  branch coverage = (number of decision outcomes exercised /
                     total number of decision outcomes) x 100%

Typical ad hoc testing achieves 40% to 60% branch coverage. Branch coverage is
stronger than statement coverage since it may require more test cases to achieve
100% coverage. For example, consider the code segment shown below.

  if a > b
    c = 0
  endif
To achieve 100% statement coverage of this code segment just one test case is
required which ensures variable a contains a value that is greater than the value of
variable b. However, branch coverage requires each decision to have had both a true
and false outcome. Therefore, to achieve 100% branch coverage, a second test case is
necessary. This will ensure that the decision statement if a > b has a false outcome.
Note that 100% branch coverage guarantees 100% statement coverage.
Branch and decision coverage are actually slightly different for less than 100%
coverage, but at 100% coverage they give exactly the same results.
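
The same point in runnable form, as a sketch; the function is a direct
transliteration of the segment above.

# A minimal sketch of the code segment above, transliterated to Python.

def segment(a, b):
    c = None
    if a > b:
        c = 0
    return c

# One test case (a > b) achieves 100% statement coverage:
segment(5, 3)

# Branch coverage also needs the decision's false outcome, so a second
# test case (a <= b) is required for 100% branch coverage:
segment(3, 5)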


Paths Through Code

(Diagram: flowgraphs of sequential and branching code fragments, with the
numbered paths through each and the question of how many paths there are.)

Paths Through Code With Loops

(Diagram: a flowgraph containing a loop; the paths 1, 2, 3, 4, 5, 6, 7, 8, ...
continue for as many times as it is possible to go round the loop, which can
be unlimited, i.e. infinite.)


Example 1

  Wait for card to be inserted
  IF card is a valid card THEN
    display "enter PIN number"
    IF PIN is valid THEN
      select transaction
    ELSE (otherwise)
      display "PIN invalid"
    ENDIF
  ELSE (otherwise)
    reject card
  ENDIF
  End

(Flowchart: Wait -> Valid card? If yes, Display "Enter PIN" then Valid PIN? -
yes leads to Select transaction, no to Display "PIN invalid". If the card is
not valid, Reject card. All paths lead to End.)

Example 2

  Read A
  IF A > 0 THEN
    IF A = 21 THEN
      Print "Key"
    ENDIF
  ENDIF

(Flowchart: Read -> A>0? - yes -> A=21? - yes -> Print -> End; both "no"
branches go straight to End.)

Cyclomatic complexity:
Minimum tests to achieve statement coverage:
Minimum tests to achieve branch coverage:


Example 3

  Read A
  Read B
  IF A > 0 THEN
    IF B = 0 THEN
      Print "No values"
    ELSE
      Print B
      IF A > 21 THEN
        Print A
      ENDIF
    ENDIF
  ENDIF

(Flowchart: Read -> A>0? - yes -> B=0? - yes -> Print "No values"; no ->
Print B, then A>21? - yes -> Print A. All paths lead to End.)

Cyclomatic complexity:
Minimum tests to achieve statement coverage:
Minimum tests to achieve branch coverage:

Example 4

  Read A
  Read B
  IF A < 0 THEN
    Print "A negative"
  ELSE
    Print "A positive"
  ENDIF
  IF B < 0 THEN
    Print "B negative"
  ELSE
    Print "B positive"
  ENDIF

Cyclomatic complexity:
Minimum tests to achieve statement coverage:
Minimum tests to achieve branch coverage:


Example 5

  Read A
  Read B
  IF A < 0 THEN
    Print "A negative"
  ENDIF
  IF B < 0 THEN
    Print "B negative"
  ENDIF

Cyclomatic complexity:
Minimum tests to achieve statement coverage:
Minimum tests to achieve branch coverage:

Example 6

  Read A
  IF A < 0 THEN
    Print "A negative"
  ENDIF
  IF A > 0 THEN
    Print "A positive"
  ENDIF

Cyclomatic complexity:
Minimum tests to achieve statement coverage:
Minimum tests to achieve branch coverage:


Error Guessing
Although it is true that testing should be rigorous, thorough and systematic, this is not
all there is to testing. There is a definite role for non-systematic techniques.

Non-systematic test techniques

- Trial and error / ad hoc
- Error guessing / experience-driven
- User testing
- Unscripted testing

A testing approach that is only rigorous, thorough and systematic is incomplete.

Many people confuse error guessing with ad hoc testing. Ad hoc testing is unplanned
and usually done before (or instead of) rigorous testing. Error guessing is done last as
a supplement to rigorous techniques.

Error-guessing

- Always worth including
  - After systematic techniques have been used
  - Can find some faults that systematic techniques can miss
  - A 'mopping up' approach
  - Supplements systematic techniques
- Not a good approach to start testing with

Error guessing is a technique that should always be used after other more formal
techniques. The success of error guessing is very much dependent on the skill of the
tester, as good testers know where the errors are most likely to lurk.



Some people seem to be naturally good at testing and others are good testers because
they have a lot of experience either as a tester or working with a particular system and
so are able to pinpoint its weaknesses. That is why error guessing is best done after
more formal techniques. In using other techniques the tester is likely to gain a better
understanding of the system, what it does and how it works. With a better
understanding anyone is likely to be more able to think of ways in which the system
may not work properly.

Error guessing: deriving test cases

Consider:
- past failures
- intuition
- experience
- brainstorming
- "What is the craziest thing we can do?"

There are no rules for error guessing. The tester is encouraged to think of situations in
which the software may not be able to cope. Typical conditions to try include divide
by zero, blank (or no) input, empty files and the wrong kind of data (e.g. alphabetic
characters where numeric are required). If anyone ever says of a system, or the
environment in which it is to operate, "That could never happen", it might be a good
idea to test that condition, as such assumptions about what will and will not happen
in the live environment are often the cause of failures.
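
As an illustrative sketch, an error-guessing checklist can be captured as a list
of awkward inputs to throw at the software; the process function here is a
hypothetical stand-in for the software under test.

# A minimal sketch of an error-guessing checklist as data: awkward
# inputs that experience suggests are worth trying.

def process(data):
    """Hypothetical input handler; should reject bad input gracefully."""
    if data is None or data == "":
        raise ValueError("no input supplied")
    return int(data) * 2

error_guesses = [
    None,          # no input at all
    "",            # blank input
    "abc",         # alphabetic where numeric is required
    "0",           # zero (a favourite divide-by-zero trigger elsewhere)
    " 42 ",        # stray whitespace
]

for guess in error_guesses:
    try:
        print(repr(guess), "->", process(guess))
    except (ValueError, TypeError) as exc:
        print(repr(guess), "-> rejected:", exc)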
Other names for error guessing include experience-driven testing, heuristic testing and
lateral testing.


Summary Quiz
Test techniques are:

Black box based on:

White box based on:

Error guessing:

Error guessing best used:
