Test Design Techniques: INF3121/ 12.02.2015 / © Raluca Florea


INF3121 : Software Testing

12. 02. 2015


Lecture 4

Test design techniques

Lecturer: Raluca Florea



Overview

1. The test development process


2. Categories of test design techniques
3. Specification-based techniques (black-box)
4. Structure-based techniques (white-box)
5. Experience-based techniques
6. Choosing test techniques



• 1.1 Background
• 1.2 Test analysis
• 1.3 Test design
• 1.4 Test implementation

1. The test development process



1. The test development process

1.1 Background
• The test design process can be done in different ways, from very
informal (little or no documentation), to very formal.
• The level of formality depends on the context of the testing, including:

– The maturity of the development process
– The maturity of the testing process
– The organization
– Time constraints
– The people involved



1. The test development process

1.2 Test analysis


• The test basis documentation is analyzed in order to determine what
to test, i.e. to identify the test conditions.

• Test condition (Def.) =
an item or event that could be verified by one or more test cases
Examples:
– A function
– A transaction
– A quality characteristic (e.g. security)
– Other structural elements (e.g. menus in web pages)



1. The test development process

1.3 Test design


• During test design, test cases and test data are created and specified.

• Test case = a set of pre-conditions, inputs, expected results and
post-conditions, developed to cover certain test condition(s) (cf. IEEE 829).

• Expected results include:
– outputs
– changes to data and states
– any other consequence of the test

If expected results have not been defined, then a plausible but erroneous result
may be interpreted as the correct one.
Expected results should ideally be defined prior to test execution.
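The four parts of a test case can be sketched as a simple record; the field names and the interest-rate example below are illustrative, not taken from IEEE 829.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A test case per the definition above: pre-conditions, inputs,
    expected results and post-conditions, covering a test condition."""
    test_condition: str
    preconditions: list
    inputs: dict
    expected_results: dict   # outputs, data/state changes, other consequences
    postconditions: list

# Hypothetical example: interest calculation for a savings account
tc = TestCase(
    test_condition="3% interest for balances in [$0, $100]",
    preconditions=["account exists", "account is open"],
    inputs={"balance": 50.00},
    expected_results={"interest_rate": 0.03},
    postconditions=["balance unchanged"],
)
```

Note that the expected results are written down before execution, as the slide above recommends.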



1. The test development process

1.4 Test implementation


• During test implementation, the test cases are developed, organized,
prioritized and implemented in the test procedure specification.

• The test procedure (= manual test script) specifies the sequence of
actions for the execution of a test.
• Test script (= automated test procedure)
If tests are run using a test execution tool, the sequence of actions is specified in a test
script.
• The test execution schedule defines:
The order of execution of the test procedures & (possibly) automated test scripts
When they are to be carried out
By whom they are to be executed.

The test execution schedule will take into account such factors as:
- regression tests,
- prioritization,
- technical and logical dependencies.
2. Categories of test design techniques



2. Categories of test design techniques

• Common features of black-box techniques:


We use models (formal or informal) to specify the problem to be solved, the software or its
components.
We systematically derive the test cases from these models.

• Common features of white-box techniques:


The test cases are derived from information about how the software is constructed,
for example code and design.

For the existing test cases, we can measure the coverage of the software

Further test cases can be derived systematically to increase coverage.

• Common features of experience-based techniques:


The test cases are derived from the knowledge and experience of people:
• Knowledge of testers, developers, users and other stakeholders about the software, its
usage and its environment.
• Knowledge about likely defects and their distribution



• 3.1 Equivalence partitioning
• 3.2 Boundary value analysis
• 3.3 Decision table testing
• 3.4 State transition testing
• 3.5 Use case testing

3. Specification-based (black-box) techniques



3. Specification-based techniques

3.1 Equivalence partitioning


Technique:
Inputs/Outputs/Internal values of the software are divided into groups that are
expected to exhibit similar behavior.

Equivalence partitions (or classes) can be found for both valid data and invalid data,
i.e. values that should be rejected.
Notes:
•Tests can be designed to cover partitions.
•Equivalence partitioning is applicable at all levels of testing.
•Equivalence partitioning as a technique can be used to achieve input and output coverage.
Exercise:
A savings account in a bank earns a different rate of interest depending on the balance in the account.
In order to test the software that calculates the interest due, we can identify the ranges of balance values that
earn the different rates of interest.
If a balance in the range $0 up to $100 has a 3% interest rate, a balance over $100 and up to $1000 has a
5% interest rate, and balances of $1000 and over have a 7% interest rate, we would initially identify three
valid equivalence partitions and one invalid partition, as shown below.

... -$0.01        | $0.00 ... $100.00       | $100.01 ... $999.99     | $1000.00 ...
Invalid partition | Valid (for 3% interest) | Valid (for 5% interest) | Valid (for 7% interest)
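The exercise can be sketched in code. `interest_rate` below is a hypothetical implementation of the stated rules, and one representative value is picked per partition:

```python
def interest_rate(balance):
    """Hypothetical implementation of the rules in the exercise."""
    if balance < 0:
        raise ValueError("invalid balance")
    if balance <= 100.00:
        return 0.03          # $0 up to $100
    if balance < 1000.00:
        return 0.05          # over $100 and up to $1000
    return 0.07              # $1000 and over

# One representative value per equivalence partition
representatives = {
    "invalid (negative)": -10.00,
    "valid, 3% interest": 50.00,
    "valid, 5% interest": 500.00,
    "valid, 7% interest": 2000.00,
}

assert interest_rate(representatives["valid, 3% interest"]) == 0.03
assert interest_rate(representatives["valid, 5% interest"]) == 0.05
assert interest_rate(representatives["valid, 7% interest"]) == 0.07
```

One test per partition is enough to cover all four partitions; any other value from the same partition is expected to behave the same way.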



3. Specification-based techniques

3.2 Boundary value analysis


Boundary value analysis: analysis at the edges of each equivalence partition.
Why: because results at the partition edges are more likely to be incorrect.

The maximum and minimum values of a partition are its boundary values.

Valid and invalid boundary:


-A boundary value for a valid partition is a valid boundary value;
-The boundary of an invalid partition is an invalid boundary value.
Tests can be designed to cover both valid and invalid boundary values.
Notes:
•Boundary value analysis can be applied at all test levels.
•It is relatively easy to apply and its defect-finding capability is high.
•Detailed specifications are helpful.
•This technique is often considered an extension of equivalence partitioning.
•Boundary values may also be used for test data selection.
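For the savings-account exercise, the boundary values of each partition (to the $0.01 precision used there) can be listed and checked against the same hypothetical `interest_rate` rules:

```python
def interest_rate(balance):
    """Hypothetical implementation of the exercise's interest rules."""
    if balance < 0:
        raise ValueError("invalid balance")
    if balance <= 100.00:
        return 0.03
    if balance < 1000.00:
        return 0.05
    return 0.07

# (boundary value, expected rate); None marks an invalid boundary value
boundary_tests = [
    (-0.01, None),                    # invalid boundary just below $0.00
    (0.00, 0.03), (100.00, 0.03),     # edges of the 3% partition
    (100.01, 0.05), (999.99, 0.05),   # edges of the 5% partition
    (1000.00, 0.07),                  # lower edge of the 7% partition
]

for balance, expected in boundary_tests:
    if expected is None:
        try:
            interest_rate(balance)
            assert False, "invalid balance was accepted"
        except ValueError:
            pass  # invalid boundary correctly rejected
    else:
        assert interest_rate(balance) == expected
```

Note how the boundary tests extend the equivalence-partitioning tests: each partition now contributes its minimum and maximum value rather than one arbitrary representative.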



3. Specification-based techniques

3.3 Decision table testing


• Decision tables are a good way to:
– capture system requirements that contain logical conditions,
– and to document internal system design.
• The input conditions and actions are most often stated in such a way
that they can either be true or false (Boolean).
• The strength of decision table testing is that it creates combinations of
conditions that might not otherwise have been exercised during testing.
It may be applied to all situations when the action of the software
depends on several logical decisions.
• Example: SF wine monopoly store

                         Rule 1      Rule 2      Rule 3      Rule 4
  Conditions
    Oslo resident?       False       True        True        True
    Over 18 years?       Don't care  False       True        True
    Happy hour?          Don't care  Don't care  False       True
  Actions
    Can buy wine?        False       False       True        True
    Offer 10% discount?  False       False       False       True
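The table can be turned directly into test cases, one per rule; `can_buy_wine` below is a hypothetical implementation of the tabulated logic.

```python
def can_buy_wine(oslo_resident, over_18, happy_hour):
    """Returns (can_buy, discount) per the decision table above."""
    can_buy = oslo_resident and over_18
    discount = can_buy and happy_hour
    return can_buy, discount

# One test per rule; "don't care" conditions are fixed to arbitrary values
assert can_buy_wine(False, True, True) == (False, False)   # Rule 1
assert can_buy_wine(True, False, True) == (False, False)   # Rule 2
assert can_buy_wine(True, True, False) == (True, False)    # Rule 3
assert can_buy_wine(True, True, True) == (True, True)      # Rule 4
```

Each rule (column) becomes exactly one test case, which is what makes the technique systematic: no condition combination the table names is left unexercised.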



3. Specification-based techniques

3.4 State transition testing


• Why? Because a system may exhibit a different
response depending on current conditions or
previous history (its state).
• It allows the tester to view:
– the software in terms of its states,
– transitions between states,
– the inputs or events that trigger state changes (transitions)
– the actions which may result from those transitions.
• The states of the system under test are separate, identifiable
and finite in number.
• A state table can highlight possible transitions that are
invalid.
• Tests can be designed:
– To cover a typical sequence of states,
– To cover every state,
– To exercise every transition,
– To exercise specific sequences of transitions
– To test invalid transitions.
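A state table and the tests derived from it can be sketched as follows; the ATM-like states and events are illustrative.

```python
# State table: valid transitions as (current state, event) -> next state
TRANSITIONS = {
    ("idle", "insert_card"): "card_inserted",
    ("card_inserted", "pin_ok"): "authenticated",
    ("card_inserted", "pin_wrong"): "idle",
    ("authenticated", "eject_card"): "idle",
}

def next_state(state, event):
    """Apply one transition; anything not in the table is invalid."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")
    return TRANSITIONS[(state, event)]

# Test a typical sequence of states (covers every state)
state = "idle"
for event in ["insert_card", "pin_ok", "eject_card"]:
    state = next_state(state, event)
assert state == "idle"

# Test an invalid transition: it must be rejected
try:
    next_state("idle", "eject_card")
    assert False, "invalid transition was accepted"
except ValueError:
    pass
```

Because every missing table entry is invalid by construction, the same table drives both the valid-sequence tests and the invalid-transition tests.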



3. Specification-based techniques

3.5 Use case testing


• Use case = describes interactions between actors (including users and
the system), which produce a result of value to a system user.

• Each use case has pre-conditions, which need to be met for a use case to work successfully.
• Each use case terminates with post-conditions, which are the observable results and final state of the
system after the use case has been completed.

• A use case usually has a mainstream (i.e. most likely) scenario, and sometimes alternative
branches.
• Use cases are very useful for designing acceptance tests with customer/user participation.
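A use-case-based test exercises the mainstream scenario and an alternative branch, and checks pre- and post-conditions; the "withdraw cash" use case below is illustrative.

```python
class Account:
    """Hypothetical system under test for a 'withdraw cash' use case."""
    def __init__(self, balance):
        self.balance = balance   # pre-condition: the account exists

    def withdraw(self, amount):
        if amount <= self.balance:   # mainstream (most likely) scenario
            self.balance -= amount
            return "cash dispensed"
        return "declined"            # alternative branch: insufficient funds

# Mainstream scenario: post-condition is the reduced balance
acc = Account(100)
assert acc.withdraw(40) == "cash dispensed"
assert acc.balance == 60

# Alternative branch: post-condition is an unchanged balance
assert acc.withdraw(500) == "declined"
assert acc.balance == 60
```

Checking the post-condition (the final balance) in both scenarios mirrors the definition above: a use-case test is not done until the observable end state has been verified.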



• 4.1 Statement testing & coverage
• 4.2 Decision testing & coverage
• 4.3 Other structure-based techniques

4. Structure-based (white-box) techniques



4. Structure-based techniques

Background
• Structure-based testing (white-box) is based on an identified
structure of the software.
– Component level: the structure is that of the code itself:
- statements, decisions or branches.
– Integration level: the structure may be a call tree (a diagram in which modules
call other modules).
– System level: the structure may be a menu structure, business process or web
page structure.



4. Structure-based techniques

4.1 Statement testing & coverage

• In component testing, statement coverage is the percentage of executable
statements that have been exercised by a test case suite.
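Statement coverage can be measured by tracing which lines a test suite actually executes; the sketch below uses Python's `sys.settrace` on a three-statement function (real projects would use a coverage tool instead).

```python
import sys

def absolute(x):
    if x < 0:        # statement 1
        x = -x       # statement 2
    return x         # statement 3

executed = set()

def tracer(frame, event, arg):
    # Record the line number of every statement executed inside absolute()
    if event == "line" and frame.f_code.co_name == "absolute":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
absolute(5)          # skips 'x = -x': only 2 of 3 statements executed
coverage_one_test = len(executed) / 3
absolute(-5)         # now all 3 statements have been exercised
sys.settrace(None)
coverage_two_tests = len(executed) / 3

assert coverage_one_test < coverage_two_tests == 1.0
```

The first test alone leaves one statement unexercised; the second input is derived specifically to raise the coverage figure, which is the white-box workflow described above.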



4. Structure-based techniques

4.2 Decision testing & coverage

• Decision coverage = the assessment of the percentage of decision
outcomes (e.g. the True and False options of an IF statement) that have
been exercised by a test case suite.

• Decision coverage is stronger than statement coverage: 100% decision
coverage guarantees 100% statement coverage, but not vice versa.
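The asymmetry is easiest to see on a function with one IF and no ELSE; the example and the quoted percentages are illustrative.

```python
def apply_discount(total):
    discount = 0
    if total > 100:      # decision: True / False outcomes
        discount = 10
    return total - discount

# A single test with total=200 executes every statement (100% statement
# coverage) but exercises only the True outcome of the decision
# (50% decision coverage).
assert apply_discount(200) == 190

# A second test for the False outcome raises decision coverage to 100%.
assert apply_discount(50) == 50
```

The empty False branch is invisible to statement coverage, which is exactly why 100% statement coverage does not imply 100% decision coverage.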



4. Structure-based techniques

4.3 Other structure-based techniques

• There are stronger levels of structural coverage beyond decision
coverage, e.g. multiple condition coverage.

• The concept of coverage can also be applied at other test levels
(e.g. the integration level), where the percentage of modules,
components or classes that have been exercised by a test case suite
could be expressed as module, component or class coverage.



5. Experience-based techniques



5. Experience-based techniques

Experience-based testing
– Tests are derived from the tester’s skill and intuition, and their experience with similar
applications and technologies.
– Especially applied after more formal approaches have been used.

1. Error guessing = a commonly used experience-based technique.


– Generally testers anticipate defects based on experience.
– A structured approach to the error guessing technique is to enumerate a
list of possible errors and to design tests that attack these errors.
– This systematic approach is called fault attack.
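A fault attack can be sketched as an enumerated list of likely bad inputs thrown at a function; `parse_positive_int` and the attack list below are illustrative, not a standard catalogue.

```python
def parse_positive_int(text):
    """Hypothetical function under attack: parse a strictly positive integer."""
    if text is None:
        raise ValueError("missing input")
    text = text.strip()
    if not text.isdigit():
        raise ValueError("not a number")
    value = int(text)
    if value == 0:
        raise ValueError("must be positive")
    return value

# Enumerated list of likely errors: empty, whitespace-only, missing,
# zero, negative and non-numeric input
fault_attacks = ["", "   ", None, "0", "-17", "abc"]

for attack in fault_attacks:
    try:
        parse_positive_int(attack)
        assert False, f"defect found: accepted {attack!r}"
    except ValueError:
        pass  # input correctly rejected

# Sanity check: valid input still works
assert parse_positive_int("42") == 42
```

The list of anticipated errors is the reusable asset here: the same attacks can be replayed against every input-parsing routine in the product.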

2. Exploratory testing = concurrent test design, test execution, test
logging and learning, based on a test charter containing test
objectives, and carried out within time-boxes.
• It is most useful:
– where there are few or inadequate specifications
– under severe time pressure,
– to complement other, more formal testing.
It can serve to help ensure that the most serious defects are found.
6. Choosing test techniques



6. Choosing test techniques

• The choice of which test techniques to use depends on a number of
factors, including:
– type of system
– regulatory standards
– customer requirements
– contractual requirements
– test objectives
– level of risk
– type of risk
– documentation available
– knowledge of the testers
– time and budget
– development life-cycle
– use case models
– previous experience with types of defects found

