SQA Unit 3
Software Quality Assurance
UNIT 3: Unit Testing: Boundary Value Testing, Equivalence Class Testing, Decision Table–Based
Testing, Path Testing, Data Flow Testing
This is to certify that the e-book titled “Software Quality Assurance” comprises all elementary learning tools for a better understanding of the relevant concepts. This e-book is comprehensively compiled as per the predefined eight parameters and guidelines.
DISCLAIMER: The information contained in this e-book is compiled and distributed for
educational purposes only. This e-book has been designed to help learners understand relevant
concepts with a more dynamic interface. The compiler of this e-book and Vidyalankar School
of Information Technology give full and due credit to the authors of the contents, developers
and all websites from wherever information has been sourced. We acknowledge our gratitude
towards the websites YouTube, Wikipedia, and Google search engine. No commercial benefits
are being drawn from this project.
Unit III
Contents:
Unit Testing: Boundary Value Testing: Normal Boundary Value Testing, Robust Boundary
Value Testing, Worst-Case Boundary Value Testing, Special Value Testing, Examples,
Random Testing, Guidelines for Boundary Value Testing.
Path Testing: Program Graphs, DD-Paths, Test Coverage Metrics, Basis Path Testing,
Guidelines and Observations.
Data Flow Testing: Define/Use Testing, Slice-Based Testing, Program Slicing Tools.
BOUNDARY VALUE TESTING
• Testing is one of the most important aspects of the software creation process. When testing software, the crucial step is designing the test cases. One method of test case design is boundary value analysis.
• The range of a variable depends on its type or nature. For example, if a program has variables for the year, month, and day, many programming languages encode the month so that January corresponds to 1, February corresponds to 2, and so on.
• Boundary value analysis works well when the program to be tested is a function of
several independent variables.
• Boundary value analysis does not test dependencies between inputs. When variables are dependent, this can leave deficiencies in the test results.
• Robust boundary value testing is a simple extension of normal boundary value testing.
In addition to the five boundary value analysis values of a variable (min, min+, nom,
max-, max), the extremes are exceeded with a value slightly greater than the maximum
(max+) and a value slightly less than the minimum (min–).
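The normal and robust value sets described above can be sketched in code (a minimal illustration, not from the text; the variable ranges in the usage are hypothetical). Under the single-fault assumption, each test case varies one variable while the others stay at their nominal values:

```python
# Sketch: generating normal and robust boundary value test cases under
# the single-fault assumption -- one variable at a time is pushed to a
# boundary while the rest stay nominal.

def boundary_values(lo, hi, robust=False):
    """Boundary test values for one variable."""
    nom = (lo + hi) // 2
    vals = [lo, lo + 1, nom, hi - 1, hi]       # min, min+, nom, max-, max
    if robust:
        vals = [lo - 1] + vals + [hi + 1]      # add min- and max+
    return vals

def bva_test_cases(ranges, robust=False):
    """ranges: list of (min, max) pairs, one per variable."""
    noms = [(lo + hi) // 2 for lo, hi in ranges]
    cases = {tuple(noms)}                      # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in boundary_values(lo, hi, robust):
            case = list(noms)
            case[i] = v
            cases.add(tuple(case))
    return sorted(cases)

# Hypothetical month/day ranges: for n variables this yields
# 4n + 1 normal cases and 6n + 1 robust cases.
print(len(bva_test_cases([(1, 12), (1, 31)])))               # 9
print(len(bva_test_cases([(1, 12), (1, 31)], robust=True)))  # 13
```

The 4n + 1 and 6n + 1 counts follow because the nominal value of each variable coincides with the single all-nominal case.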
WORST-CASE BOUNDARY VALUE TESTING
• In Worst-Case Boundary Value Testing for each variable, we use the five-element set
that contains min, min+, nom, max-, max. Then we take the Cartesian product of these
sets to generate test cases.
• Worst-case testing for a function of n variables generates 5^n test cases.
• Figure 4.4 shows the worst-case test cases for a function of two variables.
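The Cartesian-product construction can be sketched as follows (an illustration, not from the text; the two ranges are hypothetical month/day limits):

```python
# Sketch: worst-case boundary value testing takes the Cartesian product
# of each variable's five-element set {min, min+, nom, max-, max},
# giving 5^n test cases for n variables.
from itertools import product

def worst_case_test_cases(ranges):
    five = [[lo, lo + 1, (lo + hi) // 2, hi - 1, hi] for lo, hi in ranges]
    return list(product(*five))

cases = worst_case_test_cases([(1, 12), (1, 31)])
print(len(cases))   # 5^2 = 25 test cases for two variables
```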
EQUIVALENCE CLASS TESTING
• Boundary value analysis involves massive redundancy and can leave gaps. The motivation for using equivalence class testing is:
1) A sense of complete testing: equivalence classes form a partition of a set, that is, a collection of mutually disjoint subsets whose union is the entire set.
2) Avoiding redundancy: the disjointness assures a form of non-redundancy.
• The word ‘weak’ refers to the single fault assumption, and the word ‘robust’ refers to the inclusion of invalid values.
• The process of weak robust equivalence class testing is a simple extension of
that for weak normal equivalence class testing.
• ‘Robust’ means invalid values are considered, and ‘strong’ refers to the multiple fault assumption.
• We obtain test cases from each element of the Cartesian product of all the
equivalence classes, both valid and invalid.
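The weak/strong distinction can be sketched for valid classes (a minimal illustration; the class representatives below are hypothetical, one value chosen from each equivalence class):

```python
# Sketch: weak normal equivalence class testing covers every class at
# least once (one class per variable at a time), while strong normal
# testing takes the Cartesian product of the classes.
from itertools import product, zip_longest

x1_classes = [5, 15, 25]   # representatives of three classes of x1
x2_classes = [2, 8]        # representatives of two classes of x2

# Weak normal: max(3, 2) = 3 cases, padding the shorter list
weak = list(zip_longest(x1_classes, x2_classes, fillvalue=x2_classes[-1]))

# Strong normal: every combination of classes -> 3 * 2 = 6 cases
strong = list(product(x1_classes, x2_classes))

print(len(weak), len(strong))   # 3 6
```

The strong form is the one that multiplies test cases when variables are independent, as noted in the guidelines below.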
EDGE TESTING
• Edge testing is a hybrid of boundary value analysis and equivalence class testing.
• These classes refer to variables that are “treated the same”.
• This suggests that there may be faults near the boundaries of the classes, and edge
testing will exercise these potential faults.
• For the example in Figure 5.2, a full set of edge testing test values is as follows:
Normal test values for x1: {a, a+, b–, b, b+, c–, c, c+, d–, d}
Robust test values for x1: {a–, a, a+, b–, b, b+, c–, c, c+, d–, d, d+}
Normal test values for x2: {e, e+, f–, f, f+, g–, g}
Robust test values for x2: {e–, e, e+, f–, f, f+, g–, g, g+}
The observations about, and guidelines for, equivalence class testing are as follows:
1) The weak forms of equivalence class testing (normal or robust) are not as
comprehensive as the corresponding strong forms.
2) If the implementation language is strongly typed (and invalid values cause run-time
errors), it makes no sense to use the robust forms.
3) If error conditions are a high priority, the robust forms are appropriate.
4) Equivalence class testing is appropriate when input data is defined in terms of intervals
and sets of discrete values. This is certainly the case when system malfunctions can
occur for out-of-limit variable values.
5) Equivalence class testing is strengthened by a hybrid approach with boundary value
testing.
6) Equivalence class testing is indicated when the program function is complex.
7) Strong equivalence class testing makes a presumption that the variables are
independent, and the corresponding multiplication of test cases raises issues of
redundancy.
8) When in doubt, the best bet is to try to second-guess aspects of any reasonable
implementation. This is sometimes known as the “competent programmer hypothesis.”
9) The difference between the strong and weak forms of equivalence class testing is
helpful in the distinction between progression and regression testing.
DECISION TABLE
• Decision tables have been used to represent and analyze complex logical relationships.
• A decision table has four portions:
1) The part to the left of the bold vertical line is the stub portion.
2) The right is the entry portion.
3) The part above the bold horizontal line is the condition portion.
4) The part below the bold horizontal line is the action portion.
• Thus, we can refer to them as condition stub, the condition entries, the action
stub, and the action entries.
• A column in the entry portion is a rule. Rules indicate which actions, if any, are
taken.
• Decision tables in which all the conditions are binary are called Limited Entry Decision Tables (LEDTs). If conditions are allowed to have several values, the resulting tables are called Extended Entry Decision Tables (EEDTs).
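The four-part structure can be sketched as a small limited entry decision table in code (the conditions c1, c2 and actions a1–a3 are hypothetical placeholders, not from the text):

```python
# Sketch: a limited entry decision table (all conditions binary).
# The condition names form the condition stub, the True/False tuples
# are the condition entries (one rule per column in a printed table),
# and each rule maps to the actions it selects.

RULES = [
    # (c1, c2) -> actions
    ((True,  True),  ["a1"]),
    ((True,  False), ["a2"]),
    ((False, True),  ["a2"]),
    ((False, False), ["a3"]),
]

def actions_for(c1, c2):
    """Return the actions of the (single) rule matching the conditions."""
    for entry, actions in RULES:
        if entry == (c1, c2):
            return actions
    return []

print(actions_for(True, False))   # ['a2']
```

Because the rules partition the condition space, exactly one rule fires for any input, which is the non-redundancy property discussed for equivalence classes.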
TEST CASES FOR THE NEXTDATE FUNCTION
• The NextDate function shows the problem of dependencies in the input domain.
• Decision tables can highlight such dependencies.
• Three tries at a decision table formulation are shown below.
FIRST TRY
• An extended entry decision table is used, so the equivalence classes should form a true partition of the input domain.
• If there were any “overlaps” among the rule entries, we would have a redundant case in which more than one rule could be satisfied. Here, Y2 is the set of years between 1812 and 2012 that are evenly divisible by four, excluding the year 2000.
M1 = {month: month has 30 days}
M2 = {month: month has 31 days}
M3 = {month: month is February}
D1 = {day: 1 ≤ day ≤ 28}
D2 = {day: day = 29}
D3 = {day: day = 30}
D4 = {day: day = 31}
Y1 = {year: year = 2000}
Y2 = {year: year is a non-century leap year}
Y3 = {year: year is a common year}
                1   2   3   4   5   6   7   8
c1: Month in    M1  M1  M1  M1  M2  M2  M2  M2
c2: Day in      D1  D2  D3  D4  D1  D2  D3  D4
c3: Year in     —   —   —   —   —   —   —   —
Actions
a1: Impossible              X

                9   10  11  12  13  14  15  16
c1: Month in    M3  M3  M3  M3  M3  M3  M3  M3
c2: Day in      D1  D1  D1  D2  D2  D2  D3  D4
c3: Year in     Y1  Y2  Y3  Y1  Y2  Y3  —   —
Actions
a1: Impossible                      X   X   X
THIRD TRY
• The third try is very specific about days and months, and we revert to the simpler leap year/common year condition of the first try, so the year 2000 gets no special attention.
M1 = {month: month has 30 days}
M2 = {month: month has 31 days except December}
M3 = {month: month is December}
M4 = {month: month is February}
D1 = {day: 1 ≤ day ≤ 27}
D2 = {day: day = 28}
D3 = {day: day = 29}
D4 = {day: day = 30}
D5 = {day: day = 31}
Y1 = {year: year is a leap year}
Y2 = {year: year is a common year}
                    1   2   3   4   5   6   7   8   9   10
c1: Month in        M1  M1  M1  M1  M1  M2  M2  M2  M2  M2
c2: Day in          D1  D2  D3  D4  D5  D1  D2  D3  D4  D5
c3: Year in         —   —   —   —   —   —   —   —   —   —
Actions
a1: Impossible                      X
a2: Increment day   X   X   X           X   X   X   X
a3: Reset day                   X                       X
a4: Increment month             X                       X
a5: Reset month
a6: Increment year

                    11  12  13  14  15  16  17  18  19  20  21  22
c1: Month in        M3  M3  M3  M3  M3  M4  M4  M4  M4  M4  M4  M4
c2: Day in          D1  D2  D3  D4  D5  D1  D2  D2  D3  D3  D4  D5
c3: Year in         —   —   —   —   —   —   Y1  Y2  Y1  Y2  —   —
Actions
a1: Impossible                                          X   X   X
a2: Increment day   X   X   X   X       X   X
a3: Reset day                       X           X   X
a4: Increment month                             X   X
a5: Reset month                     X
a6: Increment year                  X
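The third-try actions can be translated directly into code. This is an illustrative sketch, not the text's implementation; it folds the 22 rules into a few branches, and the leap-year test here also handles century years, consistent with 2000 being a leap year:

```python
# Sketch: NextDate following the third-try decision table. Each rule's
# actions (increment/reset day, month, year) become branches.

def is_leap(year):                     # distinguishes Y1 from Y2
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def next_date(month, day, year):
    days_in = {1: 31, 2: 29 if is_leap(year) else 28, 3: 31, 4: 30,
               5: 31, 6: 30, 7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}
    if day < 1 or day > days_in[month]:
        raise ValueError("impossible date")   # action a1
    if day < days_in[month]:
        return month, day + 1, year           # a2: increment day
    if month < 12:
        return month + 1, 1, year             # a3 + a4: reset day, increment month
    return 1, 1, year + 1                     # a3 + a5 + a6: reset day and month, increment year

print(next_date(2, 28, 2001))   # (3, 1, 2001) -- Feb 28 in a common year
print(next_date(12, 31, 1999))  # (1, 1, 2000) -- the a5/a6 rule
```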
• In the early years of computing, the software community borrowed many ideas
from the hardware community. In some cases this worked well, but in others,
the problems of software just did not fit well with established hardware
techniques. Cause-and-effect graphing is a good example of this.
• In Cause-and-effect graphs if there is any problem at an output, the path(s)
back to the inputs that affected the output can be retraced.
• Decision table-based testing works well for applications in which a lot of decision making takes place (the triangle problem) and for applications in which important logical relationships exist among the input variables (the NextDate function). It is not worthwhile for applications like the commission problem.
• The decision table technique is suited for applications like:
1) Prominent if–then–else logic
2) Logical relationships among input variables
3) Calculations involving subsets of the input variables
4) Cause-and-effect relationships between inputs and outputs
5) High cyclomatic complexity
• Decision tables do not scale up very well. To deal with large tables, extended entry decision tables can be used and repeating patterns should be looked for.
• The first set of conditions and actions identified may be unsatisfactory, so iterate until a satisfactory decision table is obtained.
PATH TESTING
PROGRAM GRAPH
Definition
• Given a program written in an imperative programming language, its program graph is
a directed graph in which nodes are statement fragments, and edges represent flow of
control.
• If i and j are nodes in the program graph, an edge exists from node i to node j if and
only if the statement corresponding to node j can be executed immediately after the
statement corresponding to node i.
• Test coverage metrics are a device to measure the extent to which a set of test cases
covers (or exercises) a program.
• The basis path method enables the test case designer to derive a logical complexity
measure of a procedural design and use this measure as a guide for defining a basis set
of execution paths.
• Test cases derived to exercise the basis set are guaranteed to execute every statement
in the program at least one time during testing.
• Basis path testing was first proposed by McCabe.
• McCabe based his view of testing on graph theory, specifically on cyclomatic complexity.
• Cyclomatic complexity gives the number of linearly independent paths in the program.
• The number of linearly independent paths from the source node to the sink node of the graph is given by the cyclomatic complexity formula V(G) = e − n + 2p,
where e is the number of edges,
n is the number of nodes, and
p is the number of connected components (usually 1).
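The formula can be sketched on a small program graph (the graph below is a hypothetical if-then-else, not an example from the text):

```python
# Sketch: computing V(G) = e - n + 2p for a program graph stored as an
# adjacency list. Node 1 is a predicate (two outgoing edges); node 4
# is the single sink.

graph = {
    1: [2, 3],   # predicate node: two outgoing edges
    2: [4],
    3: [4],
    4: [],       # sink node
}

n = len(graph)                                  # number of nodes
e = sum(len(succ) for succ in graph.values())   # number of edges
p = 1                                           # one connected component
v_of_g = e - n + 2 * p
print(v_of_g)   # 4 - 4 + 2 = 2 linearly independent paths
```

A basis set of two paths for this graph would be 1-2-4 and 1-3-4, matching V(G) = 2.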
DATA FLOW TESTING
• Data flow testing focuses on the points at which variables receive values and the points at which those values are used.
• Data flow testing is suitable for object-oriented code.
• The basic idea behind this form of testing is to reveal coding errors and mistakes that may result in improper implementation and usage of data variables or data values in the code, i.e., data anomalies, such as
a) A variable that is defined but never used (referenced)
b) A variable that is used before it is defined
c) A variable that is defined twice before it is used
DEFINE/USE TESTING
The following definitions refer to a program P that has a program graph G(P) and a set of
program variables V. The program graph G(P) is constructed with statement fragments as nodes
and edges that represent node sequences. G(P) has a single-entry node and a single-exit node.
Definition
A node n in G(P) is a defining node of the variable v ∈ V, written DEF(v, n), if and only if the value of v is defined at the statement fragment corresponding to node n. A node n is a usage node of v, written USE(v, n), if and only if the value of v is used at that statement fragment.
Definition
A usage node USE(v, n) is a predicate use (denoted P-use) if and only if statement n is a predicate statement; otherwise, USE(v, n) is a computation use (denoted C-use).
The nodes corresponding to predicate uses always have an outdegree ≥ 2, and nodes
corresponding to computation uses always have an outdegree ≤ 1.
Definition
A definition/use path with respect to a variable v (denoted du-path) is a path in PATHS(P) such
that, for some v ∈ V, there are define and usage nodes DEF(v, m) and USE(v, n) such that m
and n are the initial and final nodes of the path.
Definition
A definition-clear path with respect to a variable v (denoted dc-path) is a definition/use path in
PATHS(P) with initial and final nodes DEF(v, m) and USE(v, n) such that no other node in the
path is a defining node of v.
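The definitions above can be illustrated on a tiny program (the statement fragments, node numbers, and variables below are invented for the illustration, not from the text):

```python
# Sketch: DEF and USE nodes recorded by hand for a four-node program.
#
#  node 1: x = input()      DEF(x, 1)
#  node 2: y = x + 1        DEF(y, 2), C-use USE(x, 2)
#  node 3: if y > 10:       P-use USE(y, 3)  (outdegree 2)
#  node 4:     print(y)     C-use USE(y, 4)

DEF = {("x", 1), ("y", 2)}
USE = {("x", 2, "C"), ("y", 3, "P"), ("y", 4, "C")}

# du-paths for y: from its defining node 2 to each usage node.
du_paths_y = [(2, 3), (2, 4)]
# Both are definition-clear paths: no node between node 2 and the
# use redefines y.
print(du_paths_y)
```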
• The basic idea of slice-based testing is to divide the program into slices (components) that contribute to the value of a variable.
• A program slice is a set of program statements that may affect the computation of variable v at statement s.
• The behavior of the slice must correspond to the behavior of the original program.
• The slice should exclude all non-executable statements, as well as O-use (output), L-use (location), and I-use (iteration) nodes.
• A formal definition is:
Given a program P and a set of variables V in P, a slice on the variable set V at statement n, written S(V, n), is the set of all statements in P prior to n that contribute to the values of the variables in V at statement n.
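As a worked illustration of S(V, n) on a hypothetical five-statement straight-line program (statement numbers and variables are invented for the example):

```python
# Sketch: a slice computed by hand for a tiny program.
#
#  1: a = 2
#  2: b = 3
#  3: c = a + b
#  4: d = a * 2
#  5: print(c)
#
# S({c}, 5): the statements prior to line 5 that contribute to the
# value of c there. Statement 4 defines d, which never affects c,
# so it is excluded from the slice.
slice_c_at_5 = {1, 2, 3}
print(sorted(slice_c_at_5))   # [1, 2, 3]
```

Executing only the slice gives the same value of c at statement 5 as the original program, which is the behavioral-correspondence requirement stated above.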
PROGRAM SLICING TOOLS
Objectives
7) _______ testing is a hybrid of boundary value analysis and equivalence class testing.
9) Test coverage metrics are a device to measure the extent to which a set of
___________ covers (or exercises) a program.