Session 4, Version 3X
1 of 32
Xansa 2015
Dynamic Testing Techniques
The need for testing techniques
Put simply, a testing technique is a means of identifying good tests. Recall from
Session 1 that a good test case is four things:
effective - has potential to find faults;
exemplary - represents other test cases;
evolvable - easy to maintain;
economic - doesn't cost much to use.
Advantages of Techniques
Different people using the same technique on the same system will almost certainly
arrive at different test cases but they will have a similar probability of finding faults.
This is because the technique will guide them into having a similar or the same view
of the system and to make similar or the same assumptions.
Using techniques makes testing more effective
This means that more faults will be found. Because a technique focuses on a
particular type of fault it becomes more likely that the tests will find more of that
type of fault. By selecting appropriate testing techniques it is possible to control
more accurately what is being tested and so reduce the chance of overlap between
different test cases. You also have more confidence that you are testing the things
that are most in need of being tested.
Using techniques makes testing more efficient
This means that faults will be found with less effort. Given two sets of tests that find
the same faults in the same system, if one set costs 25% less to produce then this is
better even though they both find the same faults. Good testing not only maximises
effectiveness but also maximises efficiency, i.e. minimises cost.
Measurement
For example, coverage measures achieved on two projects:
Project A: 60% equivalence partitions; 50% boundaries; 75% branches.
Project B: 40% equivalence partitions; 45% boundaries; 60% branches.
Static (non-execution): examination of documentation, source code listings, etc.
There are many different types of software testing technique, each with its own
strengths and weaknesses. Each individual technique is good at finding particular
types of fault and relatively poor at finding other types. For example, a technique that
explores the upper and lower limits of a single input range is more likely to find
boundary value faults than faults associated with combinations of inputs. Similarly,
testing performed at different stages in the software development life cycle is going to
find different types of faults; component testing is more likely to find coding faults
than system design faults.
Each testing technique falls into one of a number of different categories. Broadly
speaking there are two main categories, static and dynamic. However, dynamic
techniques are subdivided into two more categories: behavioural (black box) and
structural (white box). Behavioural techniques can be further subdivided into
functional and non-functional techniques. Each of these is summarised below.
Static Testing Techniques
As we saw in Session 3, static testing techniques do not execute the code being
examined, and are generally used before any tests are executed on the software. They
could be called non-execution techniques. Most static testing techniques can be used
to test any form of document including source code, design, functional and
requirement specifications. However, static analysis is a tool-supported form of
static testing that concentrates on testing formal languages and so is most often used
to statically test source code.
Functional Testing Techniques (Black Box)
Functional testing techniques are also known as black-box and input / output-driven
testing techniques because they view the software as a black box with inputs and
outputs, but have no knowledge of how it is structured inside the box. In essence, the
tester is concentrating on the function of the software, that is, what it does, not how it
does it.
Structural Testing Techniques (White Box)
Structural testing techniques use the internal structure of the software to derive test
cases. They are commonly called white-box or glass-box techniques (implying
you can see into the system) since they require knowledge of how the software is
implemented, that is, how it works. For example, a structural technique may be
concerned with exercising loops in the software. Different test cases may be derived
to exercise the loop once, twice, and many times. This may be done regardless of the
functionality of the software.
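As a sketch of the idea, the hypothetical function below contains a single loop; the structural tests are chosen purely by how many times the loop body executes (never, once, many times), regardless of what the function is for:

```python
# A hypothetical function with a single loop. The tests below are derived
# structurally, from the loop's iteration count, not from the functionality.
def sum_first_n(values, n):
    total = 0
    for i in range(min(n, len(values))):   # the loop being exercised
        total += values[i]
    return total

assert sum_first_n([5, 6, 7], 0) == 0     # loop body never entered
assert sum_first_n([5, 6, 7], 1) == 5     # loop body executed once
assert sum_first_n([5, 6, 7], 3) == 18    # loop body executed many times
```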
(Diagram: the main categories and techniques.
Static: Reviews (Inspection, Walkthroughs, Desk-checking), Static Analysis, etc.
Functional: Equivalence Partitioning, Boundary Value Analysis, Cause-Effect Graphing, Random, Syntax, State Transition.
Structural: Statement, Branch/Decision, Branch Condition, Branch Condition Combination, MCDC, LCSAJ, Data flow.)
Non-Functional Testing Techniques
Non-functional aspects of a system (also known as quality aspects) include
performance, usability, portability, maintainability, etc. This category of technique is
concerned with examining how well the system does something, not what it does or
how it does it. Techniques to test these non-functional aspects are less procedural and
less formalised than those of other categories, as the actual tests are more dependent
on the type of system, what it does and the resources available for the tests. How to
specify non-functional tests is outside the scope of the syllabus for this course, but an
approach to doing so is outlined in the supplementary section at the back of the notes.
The approach uses quality attribute templates, a technique from Tom Gilb's book
Principles of Software Engineering Management, Addison-Wesley, 1988.
Non-functional testing at system level is part of the syllabus and was covered in
Session 2 (but techniques for deriving non-functional tests are not covered).
(Diagram: test levels from Component through Integration and System to Acceptance.)
Black box techniques are appropriate at all stages of testing (Component Testing
through to User Acceptance Testing). While individual components form part of the
structure of a system, when performing Component Testing it is possible to view the
component itself as a black box, that is, to design test cases based on its functionality
without regard for its structure. Similarly, white box techniques can be used at all
stages of testing, but are typically used predominantly at Component and
Integration Testing in the Small.
Black Box Test Techniques
Techniques Defined in BS 7925-2
The Software Component Testing Standard BS7925-2 defines the following black-box
testing techniques:
Equivalence Partitioning;
Boundary Value Analysis;
State Transition Testing;
Cause-Effect Graphing;
Syntax Testing;
Random Testing.
The standard also defines how other techniques can be specified. This is important
since it means that anyone wishing to conform to the Software Component Testing
Standard is not restricted to using the techniques that the standard defines.
Equivalence Partitioning & Boundary Value Analysis
Equivalence partitioning
Equivalence Partitioning is a good all-round functional black-box technique. It can be
applied at any level of testing and is often a good technique to use first. It is a
common sense approach to testing, so much so that most testers practise it informally
even though they may not realise it. However, while it is better to use the technique
informally than not at all, it is much better to use the technique in a formal way to
attain the full benefits that it can deliver.
(Diagram: the valid partition runs from 0 to 100; the invalid partition starts at 101.)
For example, a savings account in a bank earns a different rate of interest depending
on the balance in the account. In order to test the software that calculates the interest
due, we can identify the ranges of balance values that each earn a different rate of
interest. For example, if a balance in the range 0 to 100 has a 3% interest rate, a
balance between 100 and 1,000 has a 5% interest rate, and balances of 1,000 and
over have a 7% interest rate, we would initially identify three equivalence partitions:
0 - 100, 100.01 - 999.99, and 1,000 and above. When designing the test cases
for this software we would ensure that each of these three equivalence partitions was
covered once. So we might choose to calculate the interest on balances of 50, 260
and 1,348. Had we not identified these partitions, it is possible that at least one
of them could have been missed at the expense of testing another one several times
over (such as with the balances of 30, 140, 250, and 400).
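As a sketch (the rates and partition values follow the example above, but the implementation itself is hypothetical), one representative test per partition covers all three partitions:

```python
# Hypothetical implementation of the interest-rate rules from the example.
def interest_rate(balance):
    if 0 <= balance <= 100:
        return 0.03          # partition 0 - 100
    elif balance < 1000:
        return 0.05          # partition 100.01 - 999.99
    else:
        return 0.07          # partition 1000 and above

# One representative value per equivalence partition:
assert interest_rate(50) == 0.03
assert interest_rate(260) == 0.05
assert interest_rate(1348) == 0.07
```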
Boundary value analysis
(Diagram: boundary values on and around the 0 to 100 partition: 0 and 100 are in the valid partition; 101 is in the invalid partition.)
Boundary value analysis is based on testing on and around the boundaries between
partitions. If you have done "range checking", you were probably using the boundary
value analysis technique, even if you weren't aware of it. Note that we have both
valid boundaries (in the valid partitions) and invalid boundaries (in the invalid
partitions).
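A minimal sketch of boundary value tests for the 0 to 100 partition above (the range-checking function is hypothetical):

```python
# A hypothetical range check for the valid partition 0 to 100. Boundary value
# tests sit on each boundary and just outside it, covering both the valid
# boundaries and the invalid ones.
def in_valid_range(balance):
    return 0 <= balance <= 100

assert in_valid_range(0) is True        # valid boundary (lower edge)
assert in_valid_range(100) is True      # valid boundary (upper edge)
assert in_valid_range(-1) is False      # invalid boundary below the partition
assert in_valid_range(101) is False     # invalid boundary above the partition
```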
Examples of Boundary Value Analysis and Equivalence Partitioning
The loan application form has the following input fields:
Customer name
Account number (4 digits, 1st non-zero)
Loan amount requested (200 to 5000)
Term of loan (1 to 10 years)
Monthly repayment (minimum 10)
and the following output fields: Term, Repayment, Interest rate, Total paid back.
Customer name
Conditions: number of characters; valid characters.
Valid partitions: 2 to 64 chars; valid chars (A-Z, a-z, -, space).
Invalid partitions: < 2 chars; > 64 chars; invalid chars (any other character).
Valid boundaries: 2 chars; 64 chars.
Invalid boundaries: 0 chars; 1 char; 65 chars.
Account number
Conditions: first character (valid: non-zero; invalid: zero); number of digits.
Valid partitions: 4 digits, 1st non-zero.
Invalid partitions: < 4 digits; > 4 digits; 1st digit = 0; non-digit.
Valid boundaries: 1000; 9999.
Invalid boundaries: 0 digits; 3 digits; 5 digits.
Loan amount
Valid partitions: 200 - 5000.
Invalid partitions: < 200; > 5000; 0; non-numeric; null.
Valid boundaries: 200; 5000.
Invalid boundaries: 199; 5001.
Condition template

Customer name
Valid partitions: 2 - 64 chars (V1); valid chars (V2).
Invalid partitions: < 2 chars (X1); > 64 chars (X2); invalid char (X3).
Valid boundaries: 2 chars (B1); 64 chars (B2).
Invalid boundaries: 1 char (D1); 65 chars (D2); 0 chars (D3).

Account number
Valid partitions: 4 digits (V3); 1st non-zero (V4).
Invalid partitions: < 4 digits (X4); > 4 digits (X5); 1st digit = 0 (X6); non-digit (X7).
Valid boundaries: 1000 (B3); 9999 (B4).
Invalid boundaries: 3 digits (D4); 5 digits (D5); 0 digits (D6).

Loan amount
Valid partitions: 200 - 5000 (V5).
Invalid partitions: < 200 (X8); > 5000 (X9); 0 (X10); non-integer (X11); null (X12).
Valid boundaries: 200 (B5); 5000 (B6).
Invalid boundaries: 199 (D7); 5001 (D8).
Design Test Cases

Test case 1
Description: Name: John Smith; Acc no: 1234; Loan: 2500; Term: 3 years.
Expected outcome: Term: 3 years; Repayment: 79.86; Interest rate: 10%; Total paid: 2874.96.
New tags covered: V1, V2, V3, V4, V5, ...

Test case 2
Description: Name: AB; Acc no: 1000; Loan: 200; Term: 1 year.
Expected outcome: Term: 1 year; Repayment: 17.92; Interest rate: 7.5%; Total paid: 215.00.
New tags covered: B1, B3, B5, ...
Having identified the conditions that you wish to test, the next step is to design the
test cases. The more test conditions that can be covered in a single test case, the fewer
the test cases that are needed.
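This idea of packing many condition tags into each test case can be sketched as a greedy selection (the candidate tests and their tag sets are invented for illustration; each invalid tag appears in its own candidate so a failure can be attributed to it):

```python
# A sketch of packing condition tags into few test cases. Candidate names
# and tag sets are hypothetical, loosely following the V/X/B/D tag scheme.
candidates = {
    "valid_everything": {"V1", "V2", "V3", "V4", "V5"},
    "name_too_short":   {"X1", "V3", "V4", "V5"},
    "bad_account":      {"V1", "V2", "X6", "V5"},
}

def fewest_tests(candidates, required):
    """Greedily pick candidate tests until every required tag is covered."""
    chosen, uncovered = [], set(required)
    while uncovered:
        best = max(candidates, key=lambda name: len(candidates[name] & uncovered))
        if not candidates[best] & uncovered:
            break                      # remaining tags are not coverable
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# All five valid tags fit into a single test case:
assert fewest_tests(candidates, {"V1", "V2", "V3", "V4", "V5"}) == ["valid_everything"]
```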
Technically, because every boundary is in some partition, if you did only boundary
value analysis (BVA) you would also have tested every equivalence partition (EP).
However, this approach causes problems when a value fails: was it only the
boundary value that failed, or did the whole partition fail? Also, by testing only
boundaries we would probably not give the users much confidence, as we are
using extreme values rather than normal values. The boundaries may also be more
difficult (and therefore more costly) to set up.
We recommend that you test the partitions separately from the boundaries - this
means choosing partition values that are NOT boundary values.
Test objectives?
(Slide: the condition template with tagged valid and invalid partitions and boundaries.)
What partitions and boundaries you exercise and which first depends on your
objectives. If your goal is the most thorough approach, then follow the traditional
approach and test valid partitions, then invalid partitions, then valid boundaries and
finally invalid boundaries. However if you are under time pressure and cannot test
everything (and who isn't), then your objective will help you decide what to test. If
you are after user confidence with minimum tests, you may do valid partitions only.
If you want to find as many faults as possible as quickly as possible, you may start
with invalid boundaries.
State Transition Testing
(Diagram: state "Wait for card"; the event "Card inserted" moves the machine to state "Wait for PIN" with the action "Ask for PIN"; the event "Valid PIN" leads on with the action "Ask amount"; the event "Cancel" returns the card.)
This technique is not specifically required by the ISEB syllabus, although we do need
to cover one additional black box technique. Therefore the details of this technique
should not be in the exam; only an awareness of it as another functional technique is
needed.
State transition testing is used where some aspect of the system can be described in
what is called a finite state machine. This simply means that the system can be in a
(finite) number of different states, and the transitions from one state to another are
determined by the rules of the machine. This is the model on which the system and
the tests are based.
Any system where you get a different output for the same input, depending on what
has happened before, is a finite state system. For example, if you request to withdraw
100 from a bank ATM, you may be given cash. Later you may make exactly the
same request but be refused the money (because your balance is insufficient). This
later refusal is because the state of your bank account had changed from having
sufficient funds to cover the withdrawal to having insufficient funds. The transaction
that caused your account to change its state was probably the earlier withdrawal.
Another example is a word processor. If a document is open, you are able to Close it.
If no document is open, then Close is not available. After you choose Close once,
you cannot choose it again for the same document unless you open that document. A
document thus has two states: open and closed.
A state transition model has four basic parts:
the states that the software may occupy (open/closed, or funded/insufficient funds);
the transitions from one state to another (not all transitions are allowed);
the events that cause a transition (withdrawing money, closing a file);
the actions that result from a transition (an error message, or being given your cash).
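The four parts can be sketched as a table of transitions, using the ATM example above (state and event names are illustrative):

```python
# Each entry maps (state, event) to the resulting (new state, action);
# transitions not listed are not allowed. A sketch of the ATM fragment.
transitions = {
    ("wait_for_card", "card_inserted"): ("wait_for_pin",  "ask for PIN"),
    ("wait_for_pin",  "valid_pin"):     ("ask_amount",    "ask amount"),
    ("wait_for_pin",  "cancel"):        ("wait_for_card", "return card"),
}

def step(state, event):
    """Apply one event to the machine; disallowed transitions raise."""
    if (state, event) not in transitions:
        raise ValueError(f"no transition for event {event!r} in state {state!r}")
    return transitions[(state, event)]

# Inserting a card while waiting for one is a valid transition:
assert step("wait_for_card", "card_inserted") == ("wait_for_pin", "ask for PIN")
```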
Limitation of switch coverage: it covers only valid transitions.
You can design tests to test every transition shown in the model. If every (valid)
transition is tested, this is known as 0-switch coverage. You could also test a series
of transitions through more than one state. If you covered all of the pairs of two valid
transitions, you would have 1-switch coverage, covering the sets of 3 transitions
would give 2-switch coverage, etc.
However, deriving tests only from the model may omit the negative tests, where we
could try to generate invalid transitions. In order to see the total number of
combinations of states and transitions, both valid and invalid, a state table can be
used.
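Such a state table can be sketched by enumerating every state/event pair and marking the pairs with no valid transition as candidates for negative tests (a sketch continuing the ATM example; names are illustrative):

```python
# A state table lists every state/event combination, valid or not. The
# pairs with no valid transition are the negative tests that a
# diagram-only approach would miss.
states = ["wait_for_card", "wait_for_pin"]
events = ["card_inserted", "valid_pin", "cancel"]
valid = {
    ("wait_for_card", "card_inserted"): "wait_for_pin",
    ("wait_for_pin", "valid_pin"):      "ask_amount",
    ("wait_for_pin", "cancel"):         "wait_for_card",
}

for state in states:
    for event in events:
        outcome = valid.get((state, event), "INVALID (negative test)")
        print(f"{state:14} + {event:13} -> {outcome}")
```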
White Box Test Techniques
White box techniques are normally used after an initial set of tests has been derived
using black box techniques. They are most often used to measure "coverage" - how
much of the structure has been exercised or covered by a set of tests.
Coverage measurement is best done using tools, and there are a number of such tools
on the market. These tools can help to increase productivity and quality. They
increase quality by ensuring that more structural aspects are tested, so faults on those
structural paths can be found. They increase productivity and efficiency by
highlighting tests that may be redundant, i.e. tests that exercise the same structure as
other tests (although this is not necessarily a bad thing!).
What are Coverage Techniques?
(Diagram: run the tests against the software and check the results; measure what's covered; if coverage is not OK, design more tests; coverage is increased by applying stronger structural techniques that target different structural elements.)
Coverage techniques serve two purposes: test measurement and test case design.
They are often used in the first instance to assess the amount of testing performed by
tests derived from functional techniques. They are then used to design additional tests
with the aim of increasing the test coverage.
Coverage techniques are a good way of generating additional test cases that are
different from existing tests. In any case, they help ensure breadth of testing, in the
sense that a set of test cases achieving 100% coverage in some measure will have
exercised all the parts of the software identified by that measure.
(Diagram: coverage is not thoroughness. A set of tests can exercise the function with insufficient structure, or exercise the structure (% statement, % decision, % condition combination) with insufficient function.)
Instrumenting the code (step 3) means inserting code alongside each structural
element in order to record that the associated structural element has been exercised.
Determining the actual coverage measure (step 5) is then a matter of analysing the
recorded information.
When a specific coverage measure is required or desired but not attained, additional
test cases have to be designed with the aim of exercising some or all of the structural
elements not yet reached. These are then run through the instrumented code and a
new coverage measure determined. This is repeated until the required coverage
measure is achieved.
Finally, the instrumentation should be removed. In practice, the instrumentation
should be applied to a copy of the source code, so that the copy can simply be
deleted once you have finished measuring coverage. This avoids any errors that could
be made when removing instrumentation. In any case, all the tests ought to be re-run
on the un-instrumented code.
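A minimal sketch of what instrumentation does (the code under test is hypothetical; real coverage tools insert and manage such probes automatically):

```python
# Instrumentation sketch: a probe call is inserted alongside each structural
# element - here, each decision outcome - to record that it was exercised.
executed = set()

def probe(label):
    executed.add(label)

def classify(a):                # instrumented copy of the code under test
    if a > 6:
        probe("a>6 True")       # inserted instrumentation
        result = "big"
    else:
        probe("a>6 False")      # inserted instrumentation
        result = "small"
    return result

classify(10)                    # exercises only the True outcome
print(f"coverage: {len(executed) / 2:.0%}")   # 2 instrumented outcomes in total
classify(3)                     # now the False outcome is exercised as well
print(f"coverage: {len(executed) / 2:.0%}")
```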
Statement Coverage

Statement coverage is normally measured by a software tool. Example: a program
has 100 statements; the tests exercise 87 statements; statement coverage = 87%.

read (a)
IF a > 6 THEN
  b = a
ENDIF
print b

(Exercise: complete a table of test cases with input and expected output.)
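For illustration, the pseudocode above can be translated into Python and its statement coverage measured with a simple line tracer. This is a sketch of what a coverage tool does; real tools are far more robust. Note the pseudocode leaves b undefined when a <= 6, so the translation initialises it to None:

```python
import sys

def program(a):        # Python version of the pseudocode above (a sketch)
    b = None           # the pseudocode leaves b undefined when a <= 6
    if a > 6:
        b = a
    return b

executed = set()       # line numbers exercised so far

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "program":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
program(3)             # with a = 3 the statement "b = a" is never reached
sys.settrace(None)

print(f"statements exercised: {len(executed)} of 4")   # 3 of 4 -> 75%
```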
Decision Coverage (Branch Coverage)

Decision coverage is normally measured by a software tool. Each decision has a
True and a False outcome. Example: a program has 120 decision outcomes; the tests
exercise 60 decision outcomes; decision coverage = 50%.
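A small sketch of counting decision outcomes (the function is hypothetical; each IF contributes a True and a False outcome, so two decisions give four outcomes):

```python
def grade(score):
    """Hypothetical example with two decisions -> four decision outcomes."""
    if score >= 50:          # decision 1: True / False
        result = "pass"
    else:
        result = "fail"
    if score >= 80:          # decision 2: True / False
        result += " with distinction"
    return result

# Each test exercises one outcome of each decision:
assert grade(60) == "pass"                    # D1=True, D2=False: 2 of 4 = 50%
assert grade(90) == "pass with distinction"   # adds D2=True: 3 of 4 = 75%
assert grade(10) == "fail"                    # adds D1=False: 4 of 4 = 100%
```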
Example 1

(Flowchart: Wait; Valid card? - Yes: display "Enter PIN", No: reject card; Valid PIN? - Yes: select transaction, ELSE (otherwise): display "PIN invalid"; ELSE (otherwise): reject card; End.)

Example 2

Read A
IF A > 0 THEN
  IF A = 21 THEN
    Print "Key"
  ENDIF
ENDIF

Cyclomatic complexity:
Minimum tests to achieve:
  Statement coverage:
  Branch coverage:
Example 3

Read A
Read B
IF A > 0 THEN
  IF B = 0 THEN
    Print "No values"
  ELSE
    Print B
    IF A > 21 THEN
      Print A
    ENDIF
  ENDIF
ENDIF

Cyclomatic complexity:
Minimum tests to achieve:
  Statement coverage:
  Branch coverage:

Example 4

Read A
Read B
IF A < 0 THEN
  Print "A negative"
ELSE
  Print "A positive"
ENDIF
IF B < 0 THEN
  Print "B negative"
ELSE
  Print "B positive"
ENDIF

Cyclomatic complexity:
Minimum tests to achieve:
  Statement coverage:
  Branch coverage:
Example 5

Read A
Read B
IF A < 0 THEN
  Print "A negative"
ENDIF
IF B < 0 THEN
  Print "B negative"
ENDIF

Cyclomatic complexity:
Minimum tests to achieve:
  Statement coverage:
  Branch coverage:

Example 6

Read A
IF A < 0 THEN
  Print "A negative"
ENDIF
IF A > 0 THEN
  Print "A positive"
ENDIF

Cyclomatic complexity:
Minimum tests to achieve:
  Statement coverage:
  Branch coverage:
Error Guessing
Although it is true that testing should be rigorous, thorough and systematic, this is not
all there is to testing. There is a definite role for non-systematic techniques.
Many people confuse error guessing with ad hoc testing. Ad hoc testing is unplanned
and usually done before (or instead of) rigorous testing. Error guessing is done last as
a supplement to rigorous techniques.
Error-guessing
Error guessing is a technique that should always be used after other more formal
techniques. The success of error guessing is very much dependent on the skill of the
tester, as good testers know where the errors are most likely to lurk.
Consider:
past failures;
intuition;
experience;
brainstorming;
"What is the craziest thing we can do?"
There are no rules for error guessing. The tester is encouraged to think of situations in
which the software may not be able to cope. Typical conditions to try include divide
by zero, blank (or no) input, empty files and the wrong kind of data (e.g. alphabetic
characters where numeric are required). If anyone ever says of a system or the
environment in which it is to operate, "That could never happen", it might be a good
idea to test that condition, as such assumptions about what will and will not happen in
the live environment are often the cause of failures.
Other names for error guessing include experience-driven testing, heuristic testing and
lateral testing.
Summary Quiz

Test techniques are:

Error guessing: