White and Black Box Testing
Procedure Average:
i = 1;
total.input = total.valid = 0;
sum = 0;
DO WHILE value[i] <> -999 AND total.input < 100
increment total.input by 1;
IF value[i] >= minimum AND value[i] <= maximum
THEN increment total.valid by 1;
sum = sum + value[i];
ELSE skip
ENDIF
increment i by 1;
ENDDO
IF total.valid > 0
THEN average = sum/total.valid;
ELSE average = -999;
ENDIF
END Average;
Answer
Procedure Average:
i = 1; (1)
total.input = total.valid = 0;
sum = 0;
WHILE value[i] <> -999 (2) AND total.input < 100 (3)
increment total.input by 1; (4)
IF value[i] >= minimum (5) AND value[i] <= maximum (6)
THEN increment total.valid by 1; (7)
sum = sum + value[i];
ELSE skip
ENDIF (8)
increment i by 1;
ENDDO (9)
IF total.valid > 0 (10)
THEN average = sum/total.valid; (11)
ELSE average = -999; (12)
ENDIF (13)
END Average;
V(G) = 6 regions
V(G) = 17 edges - 13 nodes + 2 = 6
V(G) = 5 predicate nodes + 1 = 6
Path 1: 1-2-10-11-13
Path 2: 1-2-10-12-13
Path 3: 1-2-3-10-11-13
Path 4: 1-2-3-4-5-8-9-2-. . .
Path 5: 1-2-3-4-5-6-8-9-2-. . .
Path 6: 1-2-3-4-5-6-7-8-9-2-. . .
The ellipsis (. . .) following paths 4, 5, and 6 indicates that any path through the remainder of the
control structure is acceptable.
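The procedure can be sketched as runnable Python (an illustrative translation, not the original notation; a list-bounds check is added because Python, unlike the pseudocode, would raise an IndexError past the end of the list). The test inputs are chosen to exercise some of the basis paths above:

```python
# An illustrative Python sketch of Procedure Average.
# Sentinel -999 and the 100-input cap follow the pseudocode.
def average(values, minimum, maximum):
    i = 0
    total_input = total_valid = 0
    total = 0
    while i < len(values) and values[i] != -999 and total_input < 100:
        total_input += 1
        if minimum <= values[i] <= maximum:  # range check (nodes 5 and 6)
            total_valid += 1
            total += values[i]
        i += 1
    # node 10: avoid division by zero when no valid value was seen
    return total / total_valid if total_valid > 0 else -999

assert average([-999], 0, 10) == -999        # loop exits immediately
assert average([2, 4, -999], 0, 10) == 3     # valid values accumulated
assert average([50, -999], 0, 10) == -999    # out-of-range input only
```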
Q-1)
begin
int a, b, power; 1
float c; 2
input (a, b); 3
if (b < 0) 4
power = -b; 5
else
power = b; 6
c = 1; 7
while (power != 0) 8
{
c = c * a; 9
power = power - 1; 10
}
if (b < 0) 11
c = 1/c; 12
output(c); 13
end
V(G) = No of regions = 4
V(G) = No. of predicate nodes + 1 = 3 + 1 = 4
V(G) = E – N + 2 = 15 – 13 + 2 = 4
Path 1 : 1 – 2 – 3 – 4 – 5 – 7 – 8 – 9 – 10 – 8 – 11 – 12 – 13 – 14
Path 2 : 1 – 2 – 3 – 4 – 6 – 7 – 8 – 9 – 10 – 8 – 11 – 12 – 13 – 14
Path 3 : 1 – 2 – 3 – 4 – 6 – 7 – 8 – 11 – 12 – 13 – 14
Path 4 : 1 – 2 – 3 – 4 – 6 – 7 – 8 – 11 – 13 – 14
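A runnable Python sketch of this routine (an illustrative translation; the comments map statements to the numbered nodes). Note that Path 3 is infeasible in practice: it requires b >= 0 at node 4 but b < 0 at node 11, which no input can satisfy; basis paths derived from V(G) are not guaranteed to be executable.

```python
# Illustrative translation of the Q-1 routine: computes a raised to the power b.
def power_calc(a, b):
    power = -b if b < 0 else b   # nodes 4-6: take the absolute exponent
    c = 1.0                      # node 7
    while power != 0:            # node 8
        c = c * a                # node 9
        power = power - 1        # node 10
    if b < 0:                    # node 11
        c = 1 / c                # node 12: negative exponent means reciprocal
    return c                     # node 13: output(c)

assert power_calc(2, 3) == 8.0    # b > 0, loop taken (Path 2 shape)
assert power_calc(2, -2) == 0.25  # b < 0, reciprocal branch (Path 1 shape)
assert power_calc(7, 0) == 1.0    # loop body never runs (Path 4 shape)
```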
Q-2)
int binsearch(int X, int V[ ], int n) { 1
int low, high, mid; 2
low = 0; 3
high = n - 1; 4
while (low <= high) { 5
mid = (low + high)/2; 6
if (X < V[mid]) 7
high = mid - 1; 8
else if (X > V[mid]) 9
low = mid + 1; 10
else
return mid; 11
} 12
return -1; 13
}
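An illustrative Python translation of binsearch; by the formulas used above, its three predicate nodes (the while, the if, and the else-if) give V(G) = 3 + 1 = 4.

```python
# Illustrative Python version of the Q-2 binary search.
def binsearch(x, v):
    low, high = 0, len(v) - 1
    while low <= high:             # predicate node 1
        mid = (low + high) // 2
        if x < v[mid]:             # predicate node 2
            high = mid - 1
        elif x > v[mid]:           # predicate node 3
            low = mid + 1
        else:
            return mid             # found: return the index
    return -1                      # not found

assert binsearch(7, [1, 3, 7, 9]) == 2   # hit in the middle
assert binsearch(2, [1, 3, 7, 9]) == -1  # miss
```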
Q-3)
1 begin
2 float staff_dis, markedPrice, amount, discountPrice, finalPrice
3 staff_dis = 0.1
4 markedPrice = 0
5 read (finalPrice)
6 while (finalPrice != -1) do
7 markedPrice = markedPrice + finalPrice
8 read (finalPrice)
9 end do while
10 print (markedPrice)
11 if (markedPrice > 25.00) then
12 discountPrice = (staff_dis * markedPrice) + 0.75
13 else
14 discountPrice = staff_dis * markedPrice
15 end if
16 print (discountPrice)
17 amount = markedPrice - discountPrice
18 print (amount)
19 end
V(G) = No of regions = 3
V(G) = No of predicate nodes + 1 = 2 + 1 = 3
V(G) = E – N + 2 = 19 – 18 + 2 = 3
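The Q-3 algorithm can be sketched as runnable Python (an illustrative translation: the sentinel-driven read loop is modeled as iteration over a list of prices, and totalPrice on line 17 is taken to mean markedPrice, the only total the algorithm accumulates):

```python
# Illustrative Python sketch of the Q-3 staff-discount algorithm.
def checkout(prices):
    staff_dis = 0.1
    marked_price = 0.0
    for price in prices:                 # models: while (finalPrice != -1) read loop
        marked_price += price
    if marked_price > 25.00:             # the decision node (line 11)
        discount_price = staff_dis * marked_price + 0.75
    else:
        discount_price = staff_dis * marked_price
    amount = marked_price - discount_price
    return marked_price, discount_price, amount

m, d, a = checkout([10, 20])   # over 25.00: extra 0.75 added to the discount
assert m == 30.0 and abs(d - 3.75) < 1e-9 and abs(a - 26.25) < 1e-9
```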
Let us understand data flow testing with an example. Consider the algorithm from Q-3 above.
The corresponding CFG is shown in Fig. 3.12. With respect to the finalPrice variable, the defining nodes are (finalPrice, 5) and (finalPrice, 8), and the usage nodes are (finalPrice, 6) and (finalPrice, 7). Hence, the definition-use paths are DU(5, 6), DU(5, 6, 7), DU(8, 9, 6) and DU(8, 9, 6, 7). The definition-clear paths are the same as the definition-use paths, since no interior node of these paths redefines finalPrice.
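The def/use information above can be encoded as data and the definition-clear property checked mechanically (a small illustrative sketch; the node numbers are the algorithm's line numbers):

```python
# def and use nodes for finalPrice, taken from the algorithm's line numbers
DEFS = {5, 8}    # nodes that define finalPrice (the two reads)
USES = {6, 7}    # nodes that use finalPrice
DU_PATHS = [(5, 6), (5, 6, 7), (8, 9, 6), (8, 9, 6, 7)]

for path in DU_PATHS:
    # each DU path starts at a definition and ends at a use
    assert path[0] in DEFS and path[-1] in USES
    # definition-clear: finalPrice is not redefined after the initial def
    assert not any(node in DEFS for node in path[1:])
```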
In most cases, a code coverage system gathers information about the running program and combines it with source code information to generate a report about the test suite's code coverage.
Statement Coverage
Decision Coverage
Branch Coverage
Condition Coverage
What is Statement Coverage?
In white box testing, the tester concentrates on how the software works; in other words, on the internal working of the source code in terms of control flow graphs or flow charts.
Generally, if we look at the source code of any software, there will be a wide variety of elements like operators, functions, loops, exception handlers, etc. Based on the input to the program, some of the code statements may not be executed. The goal of statement coverage is to cover all the possible paths, lines, and statements in the code.
Scenario 1:
If A = 3, B = 9
The statements marked in yellow are those executed for this scenario.
Scenario 2:
If A = -3, B = -9
The statements marked in yellow are those executed for this scenario.
Overall, considering both scenarios together, all the statements are covered, so we can conclude that overall statement coverage is 100%.
Statement coverage helps to uncover:
1. Unused statements
2. Dead code
3. Unused branches
Decision Coverage
Decision coverage reports the true or false outcome of each Boolean expression. In this coverage, expressions can sometimes get complicated, so it is very hard to achieve 100% coverage. That is why there are many different methods of reporting this metric; all of them focus on covering the most important combinations. Decision coverage offers better sensitivity to control flow than statement coverage.
Demo(int a) {
    if (a > 5)
        a = a * 3
    print(a)
}
Scenario 1:
Value of a is 2
The code highlighted in yellow will be executed. Here the "No" outcome of
the decision If (a>5) is checked.
Scenario 2:
Value of a is 6
The code highlighted in yellow will be executed. Here the "Yes" outcome of
the decision If (a>5) is checked.
Scenario    Value of a    Output    Decision coverage
1           2             2         50%
2           6             18        50%
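The Demo routine and the two scenarios can be written as a runnable Python sketch (an illustrative translation):

```python
# The Demo routine above in Python; the two scenarios exercise both
# outcomes of the decision (a > 5), giving 100% decision coverage together.
def demo(a):
    if a > 5:
        a = a * 3
    return a

assert demo(2) == 2     # scenario 1: "No" outcome of (a > 5)
assert demo(6) == 18    # scenario 2: "Yes" outcome of (a > 5)
```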
Branch Coverage
In the branch coverage, every outcome from a code module is tested. For
example, if the outcomes are binary, you need to test both True and False
outcomes.
It helps you to ensure that every possible branch from each decision
condition is executed at least a single time.
By using the branch coverage method, you can also measure the fraction of independent code segments. It also helps you to find sections of code that don't have any branches.
Demo(int a) {
    if (a > 5)
        a = a * 3
    print(a)
}
Scenario    Value of a    Output    Decision coverage    Branch coverage
1           2             2         50%                  33%
2           6             18        50%                  67%
Condition Coverage
Condition coverage, or expression coverage, reveals how the variables or sub-expressions in a conditional statement are evaluated. In this coverage, only expressions with logical operands are considered. For example, if an expression has Boolean operations like AND, OR or XOR, the combinations of its operand values indicate the total possibilities. Condition coverage offers better sensitivity to the control flow than decision coverage, but it does not guarantee full decision coverage.
Example: for a decision with two Boolean operands A and B, the possible operand value combinations are TT, FF, TF and FT.
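A short Python sketch of these operand combinations, using a or b as a hypothetical decision, also illustrates the caveat above: two tests can give full condition coverage while leaving decision coverage incomplete.

```python
# Enumerate the operand combinations (TT, TF, FT, FF) for a two-operand
# Boolean decision, as condition coverage considers them.
from itertools import product

def decision(a, b):
    return a or b

# all four operand combinations and their decision outcomes
combos = {(a, b): decision(a, b) for a, b in product([True, False], repeat=2)}

# These two tests give each operand both values (full condition coverage),
# yet the decision evaluates to True both times, so the False outcome of
# the decision is never exercised.
tests = [(True, False), (False, True)]
outcomes = {decision(a, b) for a, b in tests}
assert outcomes == {True}
```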
Program Slicing
Program slicing is a technique that allows the programmer/tester to focus on only the relevant part of the source code, the part that performs a specific computation or needs testing focus. Hence, from the entire source, we slice out the code that should be tested for a behaviour, making the focus smaller rather than dealing with the entire source code, which might contain statements irrelevant to the test cases. In this way, the sliced code is a subset of the actual source code.
For example, suppose your source code computes a wrong value of a variable y at some line number 200, even though the statement at line 200 itself is correct. How can you minimize the number of statements to investigate? The solution is to focus only on the statements that influence the value of variable y and skip the rest.
Program slicing is used in debugging, testing and software maintenance. Two different types of program slicing are static slicing and dynamic slicing.
Static slicing: Static slicing uses only statically available information. A static slice of a program consists of all the executable statements that can affect the value of a variable in a statement s for any possible input. It is defined by a slicing criterion C = (s, v), where s is a statement in program P and v is a variable in s. Static slices are computed by backtracking the dependencies: to compute the static slice for (s, v), we find all the program statements that directly affect the value of v before statement s is reached. Recursively, each statement si with a variable vi that transitively affects the value of v is also added to the slice.
Algorithm 3.4
Now, suppose we want to compute the slice for C = (write(prod), prod), i.e. C = (11, prod). Lines 4 and 7 directly affect the value of prod, and the variables i and no indirectly affect the value of prod. Hence, the slice for C = (write(prod), prod) will be as follows:
Consider another example where we want to compute the slice for C = (3, add). This falls under the classification of forward slicing, since we ask which later statements will be affected by add, present at line number 3. So the forward slice of S contains all the statements that transitively depend on S.
Dynamic slicing: A dynamic slice contains the executable statements at a program point p that actually affect the value of a variable v for one particular execution e of the program, rather than all the statements that may have affected the variable's value at p for any arbitrary execution. The slicing criterion here is C = (i, v, p), where i is an input, v is a variable and p is a program point. The slicing uses the execution history of the program for the input i.
Consider the following algorithm:
1 read (n)
2 for i = 1 to n do
3 x = 10
4 if condition1 then
5 if condition2 then
6 x = 20
7 else
8 x = 30
9 result = x
10 write (result)
Algorithm 3.5
Assuming condition1 and condition2 are both true, the execution history for n = 1 consists of all the statements that get executed: lines 1, 2, 3, 4, 5, 6 and 9. Control then goes back to line 2, where i is incremented to 2, which fails the loop condition (i <= n); hence, the execution history jumps directly to line 10.
Now, of the execution history, the lines that actually affect the value of the variable result form the dynamic slice:
1 read (n)
2 for i = 1 to n do
4 if condition1 then
5 if condition2 then
6 x = 20
9 result = x
10 write (result)
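The execution-history reasoning above can be sketched in Python (an illustrative instrumentation of Algorithm 3.5 that records the executed line numbers):

```python
# Illustrative instrumentation of Algorithm 3.5: run it for given inputs
# and record the execution history as a list of the pseudocode line numbers.
def execution_history(n, condition1, condition2):
    history = [1]                 # 1: read(n)
    i = 1
    result = None
    while True:
        history.append(2)         # 2: for-loop test
        if i > n:
            break
        history.append(3)         # 3: x = 10
        x = 10
        history.append(4)         # 4: if condition1
        if condition1:
            history.append(5)     # 5: if condition2
            if condition2:
                history.append(6) # 6: x = 20
                x = 20
            else:
                history.append(8) # 8: x = 30
                x = 30
        history.append(9)         # 9: result = x
        result = x
        i += 1
    history.append(10)            # 10: write(result)
    return history, result

# n = 1 with both conditions true reproduces the history described above
hist, result = execution_history(1, True, True)
assert hist == [1, 2, 3, 4, 5, 6, 9, 2, 10]
assert result == 20
```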
Hence, this reduces the time the developer/tester spends executing test cases by constricting the focus of analysis.
These extreme ends, like Start-End, Lower-Upper, Maximum-Minimum and Just Inside-Just Outside values, are called boundary values, and the testing is called "boundary testing".
The basic idea in boundary value testing is to select input variable values at
their:
1. Minimum
2. Just above the minimum
3. A nominal value
4. Just below the maximum
5. Maximum
In boundary testing, equivalence class partitioning plays an important role; boundary testing comes after equivalence class partitioning.
Equivalence class partitioning is a black box technique (the code is not visible to the tester) which can be applied at all levels of testing, like unit, integration, system, etc. In this technique, you divide the set of test conditions into partitions that can be considered the same.
It divides the input data of the software into different equivalence data classes. You can apply this technique wherever there is a range in the input field.
Order Pizza:
1. Any number greater than 10 entered in the Order Pizza field (say 11) is considered invalid.
2. Any number less than 1, that is 0 or below, is considered invalid.
3. Numbers 1 to 10 are considered valid.
4. Any 3-digit number, say -100, is invalid.
We cannot test all the possible values; if we did, the number of test cases would exceed 100. To address this problem, we use the equivalence partitioning hypothesis, where we divide the possible order values into groups or sets, as shown below, within which the system behavior can be considered the same.
The divided sets are called equivalence partitions or equivalence classes. Then we pick only one value from each partition for testing. The hypothesis behind this technique is that if one condition/value in a partition passes, all the others will also pass. Likewise, if one condition in a partition fails, all the other conditions in that partition will fail.
In our earlier example, instead of checking one value from inside each partition, you check the values at the partition boundaries, like 0, 1, 10, 11 and so on. As you may observe, you test values at both valid and invalid boundaries. Boundary value analysis is also called range checking.
Equivalence partitioning and boundary value analysis(BVA) are closely related and
can be used together at all levels of testing.
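A minimal sketch of these boundary-value test points for the Order Pizza field (the validator is a hypothetical stand-in for the system under test):

```python
# Hypothetical validator for the Order Pizza field (valid range 1-10),
# tested at the boundary values of each equivalence partition.
def is_valid_order(quantity):
    return 1 <= quantity <= 10

assert not is_valid_order(0)      # just below the minimum
assert is_valid_order(1)          # minimum
assert is_valid_order(10)         # maximum
assert not is_valid_order(11)     # just above the maximum
assert not is_valid_order(-100)   # a value deep in the invalid partition
```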
For example, consider an Enter Password field (say, one that accepts 6 to 10 characters). That means results for values within each of the partitions 0-5, 6-10 and 11-14 should be equivalent.
Summary:
The combinational test technique, as the name suggests, is a technique of combining data/entities as input parameters for testing, to increase the scope. This technique is beneficial when we have to test with a huge amount of data having many permutations and combinations.
The beauty of this technique is that it maximizes the coverage with a comparatively smaller number of test cases. The pairs of parameters which are identified should be independent of each other. It's a black box technique, so like other black box techniques, we don't need to have implementation knowledge of the system. The point here is to identify the correct pairs of input parameters.
There are many techniques of combinatorial test design (CTD), of which OATS (orthogonal array testing technique) is widely used.
In this case:
1. The number of independent variables (factors) is 4.
2. Each variable can take 3 values (displayed, not displayed, and error message).
3. The orthogonal array would therefore be of type 3^4.
4. Google and find an appropriate array for 4 factors and 3 levels. For this example, I am referencing the below table.
7. Based on the table above, design your test cases. Also look out for the special test cases / left-over test cases.
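The referenced array table is not reproduced in the text; as an illustration, the following hardcodes the standard L9(3^4) orthogonal array for 4 factors at 3 levels (levels coded 0, 1, 2) and verifies its pairwise property:

```python
# The standard L9 orthogonal array: 9 runs, 4 factors, 3 levels each.
from itertools import combinations

L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Every pair of factors sees all 3 x 3 = 9 level combinations exactly once,
# so 9 test cases cover all pairs instead of the full 3^4 = 81 combinations.
for c1, c2 in combinations(range(4), 2):
    pairs = {(row[c1], row[c2]) for row in L9}
    assert len(pairs) == 9
```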
Conclusion:
None of the testing techniques provides a guarantee of 100% coverage. Each technique has its own way of selecting the test conditions. Along similar lines, there are some limitations of using this technique:
Testing will fail if we fail to identify the good pairs.
There is a probability of not identifying the most important combination, which can result in missing a defect.
This technique will fail if we do not know the interactions between the pairs.
Applying only this technique will not ensure complete coverage.
It can find only those defects which arise due to pairs of input parameters.