
Software Quality Assurance
UNIT 3: Unit Testing: Boundary Value Testing, Equivalence Class Testing, Decision Table–Based
Testing, Path Testing, Data Flow Testing

Compiled By: Asst. Prof. Seema Vishwakarma
Vidyalankar School of Information Technology
Wadala (E), Mumbai
Seema.vishwakarma@vsit.edu.in
www.vsit.edu.in
Certificate

This is to certify that the e-book titled “Software Quality Assurance” comprises all
elementary learning tools for a better understanding of the relevant concepts. This e-book is
comprehensively compiled as per the predefined eight parameters and guidelines.

Ms. Seema Vishwakarma
Assistant Professor
Department of IT
Date: 18-03-2021

DISCLAIMER: The information contained in this e-book is compiled and distributed for
educational purposes only. This e-book has been designed to help learners understand relevant
concepts with a more dynamic interface. The compiler of this e-book and Vidyalankar School
of Information Technology give full and due credit to the authors of the contents, developers
and all websites from wherever information has been sourced. We acknowledge our gratitude
towards the websites YouTube, Wikipedia, and Google search engine. No commercial benefits
are being drawn from this project.

Unit III
Contents:

Unit Testing: Boundary Value Testing: Normal Boundary Value Testing, Robust Boundary
Value Testing, Worst-Case Boundary Value Testing, Special Value Testing, Examples,
Random Testing, Guidelines for Boundary Value Testing.

Equivalence Class Testing: Equivalence Classes, Traditional Equivalence Class Testing,
Improved Equivalence Class Testing, Edge Testing, Guidelines and Observations.

Decision Table–Based Testing: Decision Tables, Decision Table Techniques,
Cause-and-Effect Graphing, Guidelines and Observations.

Path Testing: Program Graphs, DD-Paths, Test Coverage Metrics, Basis Path Testing,
Guidelines and Observations.

Data Flow Testing: Define/Use Testing, Slice-Based Testing, Program Slicing Tools.

Recommended Books:

1. Software Testing and Continuous Quality Improvement, William E. Lewis, CRC Press, Third Edition
2. Software Testing: Principles, Techniques and Tools, M. G. Limaye, TMH
3. Foundations of Software Testing, Dorothy Graham, Erik van Veenendaal, Isabel Evans, Rex Black, Cengage Learning, Third Edition
4. Software Testing: A Craftsman’s Approach, Paul C. Jorgensen, CRC Press, Fourth Edition

Prerequisites and Linking:

1. Semester 4, Software Engineering: Different types of tests

Unit 3: Unit Testing: Boundary Value Testing, Equivalence Class Testing,


Decision Table–Based Testing, Path Testing, Data Flow Testing
BOUNDARY VALUE TESTING

• Testing is one of the most important aspects of the software creation process. When
testing software, the crucial step is designing the test cases. One of the methods for
test case design is boundary value analysis.

There are four variations of boundary value testing:


• Normal boundary value testing
• Robust boundary value testing
• Worst-case boundary value testing
• Robust Worst-case boundary value testing

NORMAL BOUNDARY VALUE TESTING


• The underlying principle behind boundary value testing is that errors tend to occur near
the extreme values of an input variable.
• The basic idea of boundary value analysis is to use input variable values at their
minimum, just above the minimum, a nominal value, just below their maximum, and at
their maximum.
Boundary value analysis is based on the “critical fault assumption,” also called the
“single fault assumption.”
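
To make this concrete, the following is a minimal Python sketch (the helper names are illustrative and integer-valued ranges are assumed). Under the single fault assumption, each variable in turn takes its five boundary values while all other variables stay at their nominal values, which yields 4n + 1 distinct test cases for n variables.

def bva_values(lo, hi):
    # The five normal boundary values: min, min+, nom, max-, max.
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

def normal_bva_cases(ranges):
    # ranges: one (min, max) pair per input variable.
    nominals = [(lo + hi) // 2 for lo, hi in ranges]
    cases = {tuple(nominals)}                # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in bva_values(lo, hi):
            case = list(nominals)
            case[i] = v                      # vary one variable at a time
            cases.add(tuple(case))
    return sorted(cases)

# Two variables on [1, 100] give 4(2) + 1 = 9 test cases.
print(len(normal_bva_cases([(1, 100), (1, 100)])))   # 9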

GENERALISING BOUNDARY VALUE ANALYSIS


There are two ways to generalize boundary value analysis:
1) By the number of variables
2) By the range of the variables

• Generalizing by the number of variables is simple and is the same as the boundary
value analysis technique under the critical fault assumption.

• Generalizing by the range of a variable depends on the type or nature of the variable.
If, for example, we have variables for the year, month, and day, a programming
language may encode the month so that January corresponds to 1, February
corresponds to 2, and so on.

LIMITATIONS OF BOUNDARY VALUE ANALYSIS

• Boundary value analysis works well when the program to be tested is a function of
several independent variables.
• Boundary value analysis does not test dependencies between inputs, so for
dependent variables the generated test cases can miss deficiencies in the results.

ROBUST BOUNDARY VALUE TESTING

• Robust boundary value testing is a simple extension of normal boundary value testing.
In addition to the five boundary value analysis values of a variable (min, min+, nom,
max-, max), the extremes are exceeded with a value slightly greater than the maximum
(max+) and a value slightly less than the minimum (min–).
WORST-CASE BOUNDARY VALUE TESTING

• In worst-case boundary value testing, for each variable we use the five-element set
{min, min+, nom, max-, max}. We then take the Cartesian product of these sets to
generate test cases.
• Worst-case testing for a function of n variables generates 5^n test cases.
• Figure 4.4 shows the worst-case test cases for a function of two variables.

ROBUST WORST-CASE BOUNDARY VALUE TESTING


• Robust worst-case testing uses the seven-element sets (min-, min, min+, nom, max-,
max, max+) that are used in robustness testing.
• It takes the Cartesian product of these seven-element sets, so 7^n test cases are
generated for n variables.
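
Both worst-case variants amount to a Cartesian product of per-variable value sets, which can be sketched as follows (a minimal Python illustration; the helper names and integer ranges are assumptions):

from itertools import product

def five_values(lo, hi):
    # min, min+, nom, max-, max
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

def seven_values(lo, hi):
    # min-, min, min+, nom, max-, max, max+ (adds the two invalid extremes)
    return [lo - 1] + five_values(lo, hi) + [hi + 1]

def worst_case_cases(ranges, robust=False):
    values = seven_values if robust else five_values
    return list(product(*(values(lo, hi) for lo, hi in ranges)))

ranges = [(1, 100), (1, 100)]
print(len(worst_case_cases(ranges)))               # 5**2 = 25 test cases
print(len(worst_case_cases(ranges, robust=True)))  # 7**2 = 49 test cases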
SPECIAL VALUE TESTING

• Special value testing is a form of functional testing.


• Special value testing occurs when a tester uses domain knowledge and experience
with similar programs to devise test cases.
• Even though special value testing is highly subjective, it often results in a set of test
cases that is more effective in revealing faults than the test sets generated by boundary
value methods.

GUIDELINES FOR BOUNDARY VALUE TESTING


• The test methods based on the input domain of a function are the most rudimentary of
all specification-based testing methods.
• A common assumption of boundary value analysis is that the input variables are
independent; when this assumption does not hold, the methods generate
unsatisfactory test cases.
• The tester should develop test cases to check that error messages are generated when
they are appropriate, and are not falsely generated.
• Boundary value analysis can also be used for internal variables, such as loop control
variables, indices and pointers.
• Robustness testing is a good choice for testing internal variables.
EQUIVALENCE CLASS TESTING

• Boundary value analysis suffers from massive redundancy and can leave gaps. The
motivations for using equivalence class testing are:
1) Sense of complete testing: equivalence classes form a partition of a set, that is,
a collection of mutually disjoint subsets whose union is the entire set.
2) Avoidance of redundancy: the disjointness assures a form of non-redundancy.

TRADITIONAL EQUIVALENCE CLASS TESTING


WEAK NORMAL EQUIVALENCE CLASS TESTING

• The word ‘weak’ refers to the single fault assumption.
• This type of testing is accomplished by using one value from each
equivalence class in a test case.

STRONG NORMAL EQUIVALENCE CLASS TESTING

• This type of testing is based on the multiple fault assumption theory.


• Strong equivalence class testing is based on the Cartesian Product of the
partition subsets.
• The Cartesian product guarantees that we have a notion of “completeness” in
two senses:
1) It covers all the equivalence classes.
2) It has one of each possible combination of inputs.
WEAK ROBUST EQUIVALENCE CLASS TESTING

• The word ‘weak’ again means the single fault assumption, and the word ‘robust’
refers to consideration of invalid values.
• The process of weak robust equivalence class testing is a simple extension of
that for weak normal equivalence class testing.

STRONG ROBUST EQUIVALENCE CLASS TESTING

• ‘Robust’ means consideration of invalid values, and ‘strong’ means the multiple
fault assumption.
• We obtain test cases from each element of the Cartesian product of all the
equivalence classes, both valid and invalid.
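
The difference between the weak and strong normal forms can be sketched in a few lines of Python (the class representative values below are illustrative, chosen in the spirit of the NextDate classes):

from itertools import product, zip_longest

# One representative value per equivalence class, per variable.
month_reps = [4, 1, 2]          # 30-day month, 31-day month, February
day_reps   = [15, 29, 30, 31]   # D1, D2, D3, D4

# Weak normal: one value from each class per test case (single fault
# assumption); the case count equals the size of the largest partition,
# reusing a value from the smaller partition to fill the remaining cases.
weak = list(zip_longest(month_reps, day_reps, fillvalue=month_reps[-1]))

# Strong normal: the Cartesian product of the partition subsets
# (multiple fault assumption).
strong = list(product(month_reps, day_reps))

print(len(weak))     # 4 test cases
print(len(strong))   # 3 * 4 = 12 test cases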
EDGE TESTING

• Edge testing is a hybrid of boundary value analysis and equivalence class testing.
• Equivalence classes group values that are “treated the same,” which suggests that
there may be faults near the boundaries of the classes; edge testing exercises these
potential faults.
• For the example in Figure 5.2, a full set of edge testing test values is as follows:
Normal test values for x1: {a, a+, b–, b, b+, c–, c, c+, d–, d}
Robust test values for x1: {a–, a, a+, b–, b, b+, c–, c, c+, d–, d, d+}
Normal test values for x2: {e, e+, f–, f, f+, g–, g}
Robust test values for x2: {e–, e, e+, f–, f, f+, g–, g, g+}
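
A minimal sketch of generating such edge test values from a variable's class boundaries (integer variables are assumed, with +1 and -1 standing in for the "+" and "-" neighbors; the helper name is illustrative):

def edge_values(boundaries, robust=False):
    # boundaries: sorted class endpoints, e.g. [a, b, c, d] for x1.
    values = set()
    for i, b in enumerate(boundaries):
        values.add(b)
        if i > 0 or robust:                  # b- lies inside the domain,
            values.add(b - 1)                # or is always added when robust
        if i < len(boundaries) - 1 or robust:
            values.add(b + 1)
    return sorted(values)

# For x1 with boundaries a=1, b=10, c=20, d=30:
print(edge_values([1, 10, 20, 30]))                # 10 normal values
print(edge_values([1, 10, 20, 30], robust=True))   # 12 robust values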

GUIDELINES AND OBSERVATIONS

The observations about, and guidelines for, equivalence class testing are as follows:
1) The weak forms of equivalence class testing (normal or robust) are not as
comprehensive as the corresponding strong forms.
2) If the implementation language is strongly typed (and invalid values cause run-time
errors), it makes no sense to use the robust forms.
3) If error conditions are a high priority, the robust forms are appropriate.
4) Equivalence class testing is appropriate when input data is defined in terms of intervals
and sets of discrete values. This is certainly the case when system malfunctions can
occur for out-of-limit variable values.
5) Equivalence class testing is strengthened by a hybrid approach with boundary value
testing.
6) Equivalence class testing is indicated when the program function is complex.
7) Strong equivalence class testing makes a presumption that the variables are
independent, and the corresponding multiplication of test cases raises issues of
redundancy.
8) When in doubt, the best bet is to try to second-guess aspects of any reasonable
implementation. This is sometimes known as the “competent programmer hypothesis.”
9) The difference between the strong and weak forms of equivalence class testing is
helpful in the distinction between progression and regression testing.

DECISION TABLE-BASED TESTING

DECISION TABLE

• Decision tables have been used to represent and analyze complex logical
relationships.
• A decision table has four portions:
1) The part to the left of the bold vertical line is the stub portion.
2) The right is the entry portion.
3) The part above the bold horizontal line is the condition portion.
4) The part below the bold horizontal line is the action portion.
• Thus, we can refer to the condition stub, the condition entries, the action
stub, and the action entries.
• A column in the entry portion is a rule. Rules indicate which actions, if any, are
taken.

• Decision tables in which all the conditions are binary are called Limited Entry Decision
Tables (LEDTs). If conditions are allowed to have several values, the resulting tables
are called Extended Entry Decision Tables (EEDTs).
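
A limited entry decision table can be kept directly as data, which makes it easy to check for completeness and to look rules up. The sketch below uses simplified triangle-problem conditions (the conditions and actions are illustrative, and valid triangle inputs are assumed):

# Condition stub: three binary conditions.
CONDITIONS = ("a == b", "b == c", "a == c")

# Each rule is one column of the table: condition entries -> action entry.
RULES = {
    (True,  True,  True ): "Equilateral",
    (True,  False, False): "Isosceles",
    (False, True,  False): "Isosceles",
    (False, False, True ): "Isosceles",
    (False, False, False): "Scalene",
    # The remaining combinations contradict transitivity of equality.
    (True,  True,  False): "Impossible",
    (True,  False, True ): "Impossible",
    (False, True,  True ): "Impossible",
}

def classify(a, b, c):
    return RULES[(a == b, b == c, a == c)]

assert len(RULES) == 2 ** len(CONDITIONS)   # the table is complete
print(classify(3, 3, 3))   # Equilateral
print(classify(3, 4, 5))   # Scalene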
TEST CASES FOR THE NEXTDATE FUNCTION

• The NextDate function shows the problem of dependencies in the input domain.
• Decision tables can highlight such dependencies.
• Three attempts at a decision table formulation are shown below.

FIRST TRY

M1 = {month: month has 30 days}
M2 = {month: month has 31 days}
M3 = {month: month is February}
D1 = {day: 1 ≤ day ≤ 28}
D2 = {day: day = 29}
D3 = {day: day = 30}
D4 = {day: day = 31}
Y1 = {year: year is a leap year}
Y2 = {year: year is not a leap year}

Table 6.12 First Try Decision Table (Partial)
(Blank entries are false; this partial table covers the 30-day months, M1, and only two day classes of the 31-day months, M2.)

Rule:             1  2  3  4  5  6  7  8  9 10 11 12
C1: month in M1?  T  T  T  T  T  T  T  T
C2: month in M2?                          T  T  T  T
C3: month in M3?
C4: day in D1?    T  T                    T  T
C5: day in D2?          T  T                    T  T
C6: day in D3?                T  T
C7: day in D4?                      T  T
C8: year in Y1?   T     T     T     T     T     T
C9: year in Y2?      T     T     T     T     T     T
A1: Impossible                      X  X
A2: Next Date     X  X  X  X  X  X        X  X  X  X


SECOND TRY

• An extended entry decision table is used, so the equivalence classes must form a true
partition of the input domain.
• If there were any “overlaps” among the rule entries, we would have a redundant case
in which more than one rule could be satisfied. Here, Y2 is the set of years between
1812 and 2012 that are evenly divisible by four, excluding the year 2000.
M1 = {month: month has 30 days}
M2 = {month: month has 31 days}
M3 = {month: month is February}
D1 = {day: 1 ≤ day ≤ 28}
D2 = {day: day = 29}
D3 = {day: day = 30}
D4 = {day: day = 31}
Y1 = {year: year = 2000}
Y2 = {year: year is a non-century leap year}
Y3 = {year: year is a common year}

The resulting extended entry decision table, one rule per line (— means the year class is irrelevant; ? marks action entries that cannot be determined with these equivalence classes):

Rule 1 (M1, D1, —): increment day
Rule 2 (M1, D2, —): increment day
Rule 3 (M1, D3, —): reset day, increment month
Rule 4 (M1, D4, —): impossible
Rule 5 (M2, D1, —): increment day
Rule 6 (M2, D2, —): increment day
Rule 7 (M2, D3, —): increment day
Rule 8 (M2, D4, —): reset day; increment month?, reset month?, increment year?
(M2 does not separate December from the other 31-day months)
Rule 9 (M3, D1, Y1): increment day
Rule 10 (M3, D1, Y2): increment day
Rule 11 (M3, D1, Y3): increment day?, reset day?, increment month?
(D1 does not separate day 28 from days 1–27 in a common year)
Rule 12 (M3, D2, Y1): reset day, increment month
Rule 13 (M3, D2, Y2): reset day, increment month
Rule 14 (M3, D2, Y3): impossible
Rule 15 (M3, D3, —): impossible
Rule 16 (M3, D4, —): impossible

THIRD TRY

• The third try is very specific about days and months, and we revert to the simpler
leap year/non-leap year condition of the first try, so the year 2000 gets no
special attention.
M1 = {month: month has 30 days}
M2 = {month: month has 31 days except December}
M3 = {month: month is December}
M4 = {month: month is February}
D1 = {day: 1 ≤ day ≤ 27}
D2 = {day: day = 28}
D3 = {day: day = 29}
D4 = {day: day = 30}
D5 = {day: day = 31}
Y1 = {year: year is a leap year}
Y2 = {year: year is a common year}

The 22 rules of the third try, one per line (— means the year class is irrelevant):

Rule 1 (M1, D1, —): increment day
Rule 2 (M1, D2, —): increment day
Rule 3 (M1, D3, —): increment day
Rule 4 (M1, D4, —): reset day, increment month
Rule 5 (M1, D5, —): impossible
Rule 6 (M2, D1, —): increment day
Rule 7 (M2, D2, —): increment day
Rule 8 (M2, D3, —): increment day
Rule 9 (M2, D4, —): increment day
Rule 10 (M2, D5, —): reset day, increment month
Rule 11 (M3, D1, —): increment day
Rule 12 (M3, D2, —): increment day
Rule 13 (M3, D3, —): increment day
Rule 14 (M3, D4, —): increment day
Rule 15 (M3, D5, —): reset day, reset month, increment year
Rule 16 (M4, D1, —): increment day
Rule 17 (M4, D2, Y1): increment day
Rule 18 (M4, D2, Y2): reset day, increment month
Rule 19 (M4, D3, Y1): reset day, increment month
Rule 20 (M4, D3, Y2): impossible
Rule 21 (M4, D4, —): impossible
Rule 22 (M4, D5, —): impossible

CAUSE-AND-EFFECT GRAPHING

• In the early years of computing, the software community borrowed many ideas
from the hardware community. In some cases this worked well, but in others,
the problems of software just did not fit well with established hardware
techniques. Cause-and-effect graphing is a good example of this.
• In a cause-and-effect graph, if there is a problem at an output, the path(s)
back to the inputs that affected the output can be retraced.

GUIDELINES AND OBSERVATIONS

• Decision table-based testing works well for applications in which a lot of decision
making takes place (the triangle problem), and also for applications in which
important logical relationships exist among input variables (the NextDate
function). It is not worthwhile for applications like the commission problem.
• The decision table technique is suited for applications like:
1) Prominent if–then–else logic
2) Logical relationships among input variables
3) Calculations involving subsets of the input variables
4) Cause-and-effect relationships between inputs and outputs
5) High cyclomatic complexity
• Decision tables do not scale up very well. To deal with large tables,
extended entry decision tables can be used, repeating patterns should be
looked for, and so on.
• The first set of conditions and actions identified may be unsatisfactory, so
iterate until a satisfactory decision table is obtained.

PATH TESTING

PROGRAM GRAPH
Definition
• Given a program written in an imperative programming language, its program graph is
a directed graph in which nodes are statement fragments, and edges represent flow of
control.
• If i and j are nodes in the program graph, an edge exists from node i to node j if and
only if the statement corresponding to node j can be executed immediately after the
statement corresponding to node i.
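
As a small illustration (the statement numbering and the max-of-two example are assumptions, not the triangle program from the text), a program graph can be held as an adjacency list:

# 1: read a, b   2: if a > b   3: max = a   4: else: max = b   5: print(max)
program_graph = {
    1: [2],
    2: [3, 4],   # decision node: outdegree 2
    3: [5],
    4: [5],      # nodes 3 and 4 both flow into the join at node 5
    5: [],       # sink node: outdegree 0
}

nodes = len(program_graph)
edges = sum(len(succ) for succ in program_graph.values())
print(nodes, edges)   # 5 nodes, 5 edges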

STYLE CHOICES FOR PROGRAM GRAPH

Program graph for Triangle problem


DD-PATHS

• A DD-path is also known as a decision-to-decision path.
• A DD-path is a sequence of nodes in a program graph such that one of the following
cases holds:
Case 1: It consists of a single node with indeg = 0.
Case 2: It consists of a single node with outdeg = 0.
Case 3: It consists of a single node with indeg ≥ 2 or outdeg ≥ 2.
Case 4: It consists of a single node with indeg = 1 and outdeg = 1.
Case 5: It is a maximal chain of length ≥ 1.
• Given a program written in an imperative language, its DD-path graph is the directed
graph in which nodes are the DD-paths of its program graph, and edges represent control
flow between successor DD-paths.
• DD-paths are also known as segments.

TEST COVERAGE METRICS

• Test coverage metrics are a device to measure the extent to which a set of test cases
covers (or exercises) a program.

Basis Path Testing

• The basis path method enables the test case designer to derive a logical complexity
measure of a procedural design and use this measure as a guide for defining a basis set
of execution paths.
• Test cases derived to exercise the basis set are guaranteed to execute every statement
in the program at least one time during testing.
• Basis path testing was first proposed by McCabe.

McCabe’s Basis Path Testing

• McCabe based his view of testing on graph theory, in particular on the notion of
cyclomatic complexity.
• Cyclomatic complexity gives the number of linearly independent paths in the program.
• The number of linearly independent paths from the source node to the sink node of the
graph is given by the cyclomatic complexity formula V(G) = e – n + 2p,
where e is the number of edges,
n is the number of nodes, and
p is the number of connected regions (usually 1).

For the example graph with e = 10 edges and n = 7 nodes, the cyclomatic complexity is:

V(G) = e – n + 2p = 10 – 7 + 2(1) = 5

• A strongly connected graph is created by adding an edge from the (every) sink node to
the (every) source node.
• For the strongly connected graph, the cyclomatic complexity is calculated as
V(G) = e – n + p. The added edge brings the edge count to 11, so:

V(G) = e – n + p = 11 – 7 + 1 = 5
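
A minimal sketch that reproduces the two calculations above (the 7-node, 10-edge graph is illustrative, chosen to match the quoted figures rather than taken from the text's diagram):

def cyclomatic(graph, p=1, strongly_connected=False):
    n = len(graph)
    e = sum(len(succ) for succ in graph.values())
    # V(G) = e - n + 2p for an ordinary program graph;
    # V(G) = e - n + p once a sink-to-source edge makes it strongly connected.
    return e - n + (p if strongly_connected else 2 * p)

g = {1: [2, 3], 2: [4, 5], 3: [5, 6], 4: [6], 5: [6], 6: [7, 2], 7: []}
print(cyclomatic(g))                              # 10 - 7 + 2(1) = 5

g[7] = [1]   # add the edge from the sink back to the source: e becomes 11
print(cyclomatic(g, strongly_connected=True))     # 11 - 7 + 1 = 5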

GUIDELINES AND OBSERVATIONS

• Path-based testing provides a crosscheck on specification-based testing; it helps to
reveal gaps and redundancies.
• When we find that the same program path is traversed by several functional test cases,
we suspect that this redundancy is not revealing new faults.
• When we fail to attain DD-path coverage, we know that there are gaps in the functional
test cases.
• Choose multiple-condition coverage for modules with complex logic, while modules
with extensive iteration might be tested with loop coverage techniques.
• This is probably the best view of structural testing: use the properties of the source code
to identify appropriate coverage metrics, and then use these as a crosscheck on
functional test cases.

DATA FLOW TESTING

• Data flow testing focuses on the points at which variables receive values and the points
at which these values are used.
• Data flow testing is suitable for object-oriented code.
• The basic idea behind this form of testing is to reveal coding errors and mistakes
that may result in improper implementation and usage of data variables or data
values in the code, i.e., data anomalies such as the following (illustrated in the
sketch after this list):
a) A variable that is defined but never used (referenced)
b) A variable that is used before it is defined
c) A variable that is defined twice before it is used
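
The sketch below shows each anomaly in a few lines of Python (the variable names are illustrative; calling the function raises UnboundLocalError because of anomaly (b)):

def anomaly_examples():
    a = 10        # (a) 'a' is defined here but never used afterwards

    print(b)      # (b) 'b' is used before it is defined ...
    b = 5         #     ... its definition only appears here

    c = 1         # (c) 'c' is defined ...
    c = 2         #     ... and defined again before any use
    return c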

DEFINE/USE TESTING

The following definitions refer to a program P that has a program graph G(P) and a set of
program variables V. The program graph G(P) is constructed with statement fragments as nodes
and edges that represent node sequences. G(P) has a single-entry node and a single-exit node.

Definition
A usage node USE(v, n) is a predicate use (denoted as P-use) if and only if the statement n is
a predicate statement; otherwise, USE(v, n) is a computation use (denoted C-use).
The nodes corresponding to predicate uses always have an outdegree ≥ 2, and nodes
corresponding to computation uses always have an outdegree ≤ 1.

Definition
A definition/use path with respect to a variable v (denoted du-path) is a path in PATHS(P) such
that, for some v ∈ V, there are define and usage nodes DEF(v, m) and USE(v, n) such that m
and n are the initial and final nodes of the path.

Definition
A definition-clear path with respect to a variable v (denoted dc-path) is a definition/use path in
PATHS(P) with initial and final nodes DEF(v, m) and USE(v, n) such that no other node in the
path is a defining node of v.
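
A small worked illustration of these definitions (the five-line program, the node numbers, and the DEF/USE sets are assumptions made for the sketch):

# 1: x = input()      DEF(x, 1)
# 2: y = x + 1        DEF(y, 2); C-use of x at node 2
# 3: if y > 10:       P-use of y at node 3 (predicate node, outdegree 2)
# 4:     print(y)     C-use of y at node 4
# 5: print(x)         C-use of x at node 5
DEF = {"x": [1], "y": [2]}
USE = {"x": [(2, "C"), (5, "C")], "y": [(3, "P"), (4, "C")]}

# A du-path pairs a defining node with a usage node of the same variable.
du_paths = [(v, d, u) for v in DEF for d in DEF[v] for (u, _) in USE[v]]
print(du_paths)   # [('x', 1, 2), ('x', 1, 5), ('y', 2, 3), ('y', 2, 4)]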

SLICE-BASED TESTING

• The basic idea of slice-based testing is to divide the program into slices (components)
that contribute to the value of a variable.
• A program slice is a set of program statements that may affect the
computation of variable v at statement s.
• The behavior of the slice must correspond to the behavior of the original program.
• The slice should exclude all non-executable statements, as well as O-use (output),
L-use (location), and I-use (iteration) nodes.
• A formal definition: given a program P and a set V of variables in P, a slice on the
variable set V at statement n, written S(V, n), is the set of all statements in P prior
to n that contribute to the values of variables in V at statement n.
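
For straight-line code, S(V, n) can be sketched as a backward walk over per-statement DEF/USE sets (a minimal illustration under that simplifying assumption; real slicers must also follow control dependences):

# Program: 1: a = input()  2: b = input()  3: c = a + 1
#          4: d = b * 2    5: print(c)
DEFS = {1: {"a"}, 2: {"b"}, 3: {"c"}, 4: {"d"}, 5: set()}
USES = {1: set(), 2: set(), 3: {"a"}, 4: {"b"}, 5: {"c"}}

def backward_slice(variables, n):
    # S(V, n): statements prior to n that contribute to V's values at n.
    needed = set(variables)
    sliced = set()
    for stmt in range(n - 1, 0, -1):         # walk backwards from n
        if DEFS[stmt] & needed:              # stmt defines something needed
            sliced.add(stmt)
            needed = (needed - DEFS[stmt]) | USES[stmt]
    return sliced

print(sorted(backward_slice({"c"}, 5)))      # S({c}, 5) = [1, 3]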
PROGRAM SLICING TOOLS

• Program slicing is not feasible as a manual approach.
• There are a few program slicing tools; most are academic or experimental, and
only a very few are commercial.
• Program slicing is used to improve the program understanding that maintenance
programmers need. JSlice is appropriate for object-oriented software.
• The following are a few program slicing tools, as shown in Table 8.1.

Tool/Product   Language   Static/Dynamic?
Kamkar         Pascal     Dynamic
Spyder         ANSI C     Dynamic
Unravel        ANSI C     Static
CodeSonar®     C, C++     Static
Indus/Kaveri   Java       Static
JSlice         Java       Dynamic
SeeSlice       C          Dynamic

Objectives

1) Boundary value testing is also called _________________.

2) Boundary value analysis is based on the __________ fault assumption.

3) Special value testing is also called __________ testing.

4) In boundary value analysis there is massive ____________ and ___________, so there
is a motivation for using equivalence class testing.

5) The word ___________ means single fault assumption.

6) The Cartesian product guarantees that we have a notion of ______________.

7) _______ testing is a hybrid of boundary value analysis and equivalence class testing.

8) DD path is also known as ______________.

9) Test coverage metrics are a device to measure the extent to which a set of
___________ covers (or exercises) a program.

10) Cyclomatic complexity defines the number of ________________ paths in the program.
