Shivansh Exp 9 and 10 SE Lab
1. Objectives:
o Determine the effectiveness of equivalence class partitioning in identifying
representative test cases.
o Evaluate the fault detection capability of the test suite derived from equivalence
class partitioning.
o Analyze the relationship between the number of test cases and the fault
detection rate.
2. Setup:
o Select a software program or module for testing.
o Identify the input conditions and their valid and invalid ranges or values.
o Choose a programming language and develop one or more faulty versions of the
program with seeded defects.
3. Equivalence Class Partitioning:
o Define equivalence classes for each input condition based on valid, invalid, and
boundary value ranges.
o Derive test cases by selecting representative values from each equivalence class.
o Document the equivalence classes, test cases, and expected results.
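The partitioning steps above can be sketched in Python. The program under test here is a hypothetical age validator with a valid range of 1 to 120; both the module and the range are assumptions chosen only to illustrate the technique.

```python
# Sketch of equivalence class partitioning for a hypothetical
# age-validation module (valid range 1..120 is an assumption).

def is_valid_age(age: int) -> bool:
    """Program under test: accepts ages 1..120 inclusive."""
    return 1 <= age <= 120

# Equivalence classes for the 'age' input condition:
#   EC1 (valid):   1 <= age <= 120
#   EC2 (invalid): age < 1
#   EC3 (invalid): age > 120
equivalence_classes = {
    "EC1_valid":       {"representative": 35,  "expected": True},
    "EC2_below_range": {"representative": -5,  "expected": False},
    "EC3_above_range": {"representative": 150, "expected": False},
}

# Derive one test case per class and document the expected result.
for name, case in equivalence_classes.items():
    actual = is_valid_age(case["representative"])
    print(name, "PASS" if actual == case["expected"] else "FAIL")
```

One representative value per class is the minimum; step 4 below then runs this suite against the correct and faulty versions.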
4. Test Suite Execution:
o Execute the derived test suite against the original (correct) version of the
program.
o Record the number of test cases that passed and failed.
o Execute the test suite against the faulty version(s) of the program.
o Record the number of test cases that detected faults and those that did not.
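A minimal sketch of this execution step, again assuming a hypothetical age validator; the seeded fault is an off-by-one defect introduced purely for illustration.

```python
# Step 4 sketch: run one test suite against a correct and a
# seeded-fault version of a hypothetical age validator.

def is_valid_age(age: int) -> bool:          # original (correct) version
    return 1 <= age <= 120

def is_valid_age_faulty(age: int) -> bool:   # seeded off-by-one defect
    return 1 < age <= 120                    # wrongly rejects age == 1

# (input, expected) pairs derived from the equivalence classes.
test_suite = [(35, True), (-5, False), (150, False), (1, True), (120, True)]

def run(program):
    """Return (passed, failed) counts for the suite against `program`."""
    passed = sum(1 for inp, expected in test_suite if program(inp) == expected)
    return passed, len(test_suite) - passed

print("original:", run(is_valid_age))         # → (5, 0): all tests pass
print("faulty:  ", run(is_valid_age_faulty))  # → (4, 1): one test detects the fault
```

The failing count against the faulty version is what gets recorded as a fault detection.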
5. Data Collection and Analysis:
o Collect data on the number of test cases derived from each equivalence class.
o Analyze the fault detection rate of the test suite.
o Investigate the relationship between the number of test cases and the fault
detection rate.
o Identify any missing or inadequate test cases based on the faults detected or not
detected.
6. Comparison and Evaluation:
o Compare the fault detection rate of the equivalence class partitioning test suite
with other testing techniques (e.g., boundary value analysis, random testing).
o Evaluate the effectiveness and efficiency of equivalence class partitioning in
identifying representative test cases and detecting faults.
7. Reporting and Recommendations:
o Document the experiment methodology, results, and conclusions.
o Provide recommendations for improving the equivalence class partitioning
process or combining it with other testing techniques.
o Suggest areas for further investigation or additional experiments.
8. Validation and Replication:
o Validate the experiment by repeating it on different programs or modules.
o Replicate the experiment with different input conditions, equivalence classes,
or seeded faults to ensure the reliability of the results.
Shivansh Thapliyal 2101330100213
Equivalence class partitioning is a software testing technique that divides the input domain of
a program into classes of equivalent inputs. Each class represents a set of inputs that should
produce the same output from the system under test.
Input Parameters: Identify the input parameters of the system under test, such as user inputs,
configuration settings, or any other relevant data.
Equivalence Classes: Divide the possible values of each input parameter into equivalence
classes. An equivalence class is a set of inputs that should produce the same output and are
likely to exhibit similar behavior from the system. The equivalence classes should cover both
valid and invalid inputs.
Representative Values: Choose representative values from each equivalence class to create
test cases. These values should adequately represent the behavior of the entire class.
Test Cases: For each input parameter, create test cases that cover different combinations of
values from their respective equivalence classes. Ensure that you include both boundary
values and values from the middle of each range.
Execute Test Cases: Execute the designed test cases on the system under test and observe the
outputs. Compare the actual outputs with expected outputs to determine if the system behaves
as expected.
Evaluate Coverage: Assess the coverage achieved by the test suite by analyzing how many
equivalence classes have been tested and how thoroughly they have been covered. Adjust the
test suite as necessary to improve coverage.
Here's a simplified example to illustrate the process:
Scenario: Testing a login system with two input parameters: username and password.
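The login scenario can be sketched as follows. The validation rules (a non-empty username and a password of 8 to 16 characters) are assumptions made for illustration, not requirements from any real system.

```python
# Equivalence classes for a hypothetical login form:
#   username: non-empty (valid) vs empty (invalid)
#   password: length 8..16 (valid) vs too short / too long (invalid)

def login_accepts(username: str, password: str) -> bool:
    return len(username) > 0 and 8 <= len(password) <= 16

test_cases = [
    # (username, password, expected)    classes covered
    ("alice", "s3cretpass", True),      # valid username x valid password
    ("",      "s3cretpass", False),     # invalid (empty) username
    ("alice", "short",      False),     # invalid password: too short
    ("alice", "x" * 20,     False),     # invalid password: too long
]

for user, password, expected in test_cases:
    assert login_accepts(user, password) == expected
print("all equivalence-class test cases behave as expected")
```

One representative per class keeps the suite small while still covering every valid and invalid partition of both parameters.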
Practical 10:
Boundary value analysis (BVA) is a software testing technique that focuses
on testing boundaries of input domains. It aims to identify errors at the
boundaries rather than in the centre of input ranges. Here's how you can
design test cases using boundary value analysis:
Input Parameters: Determine the input parameters or variables that have boundaries that need
to be tested.
Boundary Values: For each input parameter, identify the boundaries and determine the values
immediately before and after those boundaries. These values are the focus of boundary value
analysis.
Test Cases: Create test cases that cover the boundary values identified in step 2. Each test case
should test one boundary value at a time, focusing on the transitions from one state to another.
Additional Test Cases: In addition to the boundary values, include test cases for values just
inside and just outside the boundary values. This ensures thorough testing of the system's
behaviour around the boundaries.
Execute Test Cases: Execute the designed test cases on the system under test and observe the
behaviour at the boundaries.
Evaluate Coverage: Assess the coverage achieved by the test suite by analyzing how many
boundary values have been tested and how thoroughly they have been covered. Adjust the test
suite as necessary to improve coverage.
Here's a simplified example to illustrate the process:
Scenario: Testing a system that calculates the area of a rectangle based on its length and
width.
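The rectangle scenario can be sketched with boundary value analysis as follows; the accepted range of 1 to 100 for each dimension is an assumption chosen for illustration.

```python
# BVA sketch for a hypothetical rectangle-area function that accepts
# lengths and widths in the range 1..100 (the range is an assumption).

def rectangle_area(length: int, width: int) -> int:
    if not (1 <= length <= 100 and 1 <= width <= 100):
        raise ValueError("dimension out of range")
    return length * width

# Boundary values for one dimension: just below, on, and just above
# each boundary of the valid range.
boundary_values = [0, 1, 2, 99, 100, 101]

for value in boundary_values:
    try:
        rectangle_area(value, 50)   # vary one dimension at a time
        outcome = "accepted"
    except ValueError:
        outcome = "rejected"
    print(f"length={value:3d}: {outcome}")  # 0 and 101 rejected, rest accepted
```

Testing one boundary value per case, with the other dimension held at a mid-range value, matches the "one transition at a time" rule from the steps above.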