Shivansh Exp 9 and 10 SE Lab

Shivansh Thapliyal 2101330100213

Practical 9: To design an experiment for creating a test suite using the equivalence class partitioning technique, you can follow these steps:

1. Objectives:
o Determine the effectiveness of equivalence class partitioning in identifying
representative test cases.
o Evaluate the fault detection capability of the test suite derived from equivalence
class partitioning.
o Analyze the relationship between the number of test cases and the fault
detection rate.
2. Setup:
o Select a software program or module for testing.
o Identify the input conditions and their valid and invalid ranges or values.
o Choose a programming language and develop one or more faulty versions of the program
with seeded defects (a code sketch illustrating steps 2-5 follows this list).
3. Equivalence Class Partitioning:
o Define equivalence classes for each input condition based on valid, invalid, and
boundary value ranges.
o Derive test cases by selecting representative values from each equivalence class.
o Document the equivalence classes, test cases, and expected results.
4. Test Suite Execution:
o Execute the derived test suite against the original (correct) version of the
program.
o Record the number of test cases that passed and failed.
o Execute the test suite against the faulty version(s) of the program.
o Record the number of test cases that detected faults and those that did not.
5. Data Collection and Analysis:
o Collect data on the number of test cases derived from each equivalence class.
o Analyze the fault detection rate of the test suite.
o Investigate the relationship between the number of test cases and the fault
detection rate.
o Identify any missing or inadequate test cases based on the faults detected or not
detected.
6. Comparison and Evaluation:
o Compare the fault detection rate of the equivalence class partitioning test suite
with other testing techniques (e.g., boundary value analysis, random testing).
o Evaluate the effectiveness and efficiency of equivalence class partitioning in
identifying representative test cases and detecting faults.
7. Reporting and Recommendations:
o Document the experiment methodology, results, and conclusions.
o Provide recommendations for improving the equivalence class partitioning
process or combining it with other testing techniques.
o Suggest areas for further investigation or additional experiments.
8. Validation and Replication:
o Validate the experiment by repeating it on different programs or modules.
o Replicate the experiment with different input conditions, equivalence classes,
or seeded faults to ensure the reliability of the results.
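To make steps 2-5 concrete, the Python sketch below uses a simple grading function as a stand-in program under test, seeds a single off-by-one defect into a faulty copy, and runs an equivalence-class test suite against both versions to measure fault detection. The function, the equivalence classes, and the seeded defect are hypothetical choices made for illustration; any small program with documented input ranges would do.

```python
# Minimal sketch of the experiment (steps 2-5). The program under test,
# its equivalence classes, and the seeded defect are illustrative choices.

def grade(marks):
    """Correct version: marks in 40-100 pass, 0-39 fail."""
    if not 0 <= marks <= 100:
        raise ValueError("marks out of range")
    return "pass" if marks >= 40 else "fail"

def grade_faulty(marks):
    """Faulty version with a seeded off-by-one defect at the pass boundary."""
    if not 0 <= marks <= 100:
        raise ValueError("marks out of range")
    return "pass" if marks > 40 else "fail"  # seeded defect: > instead of >=

# Test suite: one representative value per equivalence class.
# (equivalence class, input, expected outcome)
test_cases = [
    ("valid-fail (0-39)",   20,  "fail"),
    ("valid-pass (40-100)", 40,  "pass"),
    ("invalid (< 0)",       -5,  ValueError),
    ("invalid (> 100)",     150, ValueError),
]

def run_suite(program):
    """Return how many test cases the given program version passes."""
    passed = 0
    for name, value, expected in test_cases:
        try:
            outcome = program(value)
        except ValueError:
            outcome = ValueError
        passed += (outcome == expected)
    return passed

correct_passes = run_suite(grade)
faulty_passes = run_suite(grade_faulty)
detected = correct_passes - faulty_passes  # cases exposing the seeded fault
print(f"Fault detection rate: {detected}/{len(test_cases)}")
```

In a full run of the experiment, several faulty versions with different seeded defects would be executed, and the detection rate analyzed against the number of test cases drawn from each class (steps 5-6).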

Equivalence class partitioning is a software testing technique that divides the input domain of
a program into classes of equivalent inputs. Each class represents a set of inputs that should
produce the same output from the system under test.
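For example, if a field accepts ages from 18 to 60, the values 18 to 60 form one valid class, while values below 18 and values above 60 form two invalid classes; a single representative from each class usually suffices.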

Input Parameters: Identify the input parameters to be tested, such as user inputs, configuration settings, or any other relevant data.
Equivalence Classes: Divide the possible values of each input parameter into equivalence
classes. An equivalence class is a set of inputs that should produce the same output and are
likely to exhibit similar behavior in the system under test. The equivalence classes should
cover both valid and invalid inputs.
Representative Values: Choose representative values from each equivalence class to create
test cases. These values should adequately represent the behavior of the entire class.
Test Cases: For each input parameter, create test cases that cover different combinations of
values from their respective equivalence classes. Ensure that you include both boundary
values and values from the middle of each range.
Execute Test Cases: Execute the designed test cases on the system under test and observe the
outputs. Compare the actual outputs with the expected outputs to determine whether the system
behaves as expected.
Evaluate Coverage: Assess the coverage achieved by the test suite by analyzing how many
equivalence classes have been tested and how thoroughly they have been covered. Adjust the
test suite as necessary to improve coverage.
Here's a simplified example to illustrate the process:
Scenario: Testing a login system with two input parameters: username and password.

Identify Input Parameters:
Username (string)
Password (string)
Define Equivalence Classes:
Username:
Valid usernames (e.g., alphanumeric characters, underscores)
Invalid usernames (e.g., empty string, special characters)
Password:
Valid passwords (e.g., within the allowed minimum and maximum length)
Invalid passwords (e.g., empty string, too short, too long)
Select Representative Values:
Valid usernames: "user123", "john_doe"
Invalid usernames: "", "user@#$%"
Valid passwords: "password123", "securePassword"
Invalid passwords: "", "pass"
Design Test Cases:
Test Case 1: Valid username, valid password
Test Case 2: Invalid username, valid password
Test Case 3: Valid username, invalid password
Test Case 4: Invalid username, invalid password
Execute Test Cases:
Execute each test case on the login system and observe the behavior.
Evaluate Coverage:
Check whether all equivalence classes have been covered by the test cases. If not, add
additional test cases to improve coverage.
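As a minimal sketch of how these four test cases might be automated, the code below assumes a hypothetical validate_login(username, password) as the system under test; the validation rules (alphanumeric or underscore usernames, passwords of 8 to 20 characters) are illustrative assumptions, not part of the practical.

```python
import re

def validate_login(username, password):
    """Hypothetical system under test: accept usernames made of letters,
    digits, or underscores, and passwords of 8-20 characters."""
    if not re.fullmatch(r"\w+", username):
        return False
    return 8 <= len(password) <= 20

# One representative value per equivalence class, matching Test Cases 1-4.
cases = [
    # (username,  password,       expected)
    ("user123",   "password123",  True),   # valid username, valid password
    ("user@#$%",  "password123",  False),  # invalid username, valid password
    ("user123",   "pass",         False),  # valid username, invalid password
    ("",          "",             False),  # invalid username, invalid password
]

for username, password, expected in cases:
    actual = validate_login(username, password)
    status = "OK" if actual == expected else "FAIL"
    print(f"{status}: login({username!r}, {password!r}) -> {actual}")
```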

Practical 10:
Boundary value analysis (BVA) is a software testing technique that focuses
on testing the boundaries of input domains. It aims to identify errors at the
boundaries rather than in the center of input ranges. Here's how you can
design test cases using boundary value analysis:

Input Parameters: Determine the input parameters or variables that have boundaries that need
to be tested.
Boundary Values: For each input parameter, identify the boundaries and determine the values
immediately before and after those boundaries. These values are the focus of boundary value
analysis.
Test Cases: Create test cases that cover the boundary values identified in step 2. Each test case
should test one boundary value at a time, focusing on the transitions from one state to another.
Additional Test Cases: In addition to the boundary values, include test cases for values just
inside and just outside the boundary values. This ensures thorough testing of the system's
behavior around the boundaries.
Execute Test Cases: Execute the designed test cases on the system under test and observe the
behavior at the boundaries.
Evaluate Coverage: Assess the coverage achieved by the test suite by analyzing how many
boundary values have been tested and how thoroughly they have been covered. Adjust the test
suite as necessary to improve coverage.
Here's a simplified example to illustrate the process:
Scenario: Testing a system that calculates the area of a rectangle based on its length and
width.

Identify Input Parameters:
Length (numeric)
Width (numeric)
Determine Boundary Values:
For Length:
Lower boundary: 0
Upper boundary: Maximum allowed length
Values just inside and just outside the boundaries
For Width (similar to Length)

Design Test Cases:
Test Case 1: Length = 0, Width = (any valid value)
Test Case 2: Length = 1, Width = (any valid value)
Test Case 3: Length = (maximum allowed length - 1), Width = (any valid value)
Test Case 4: Length = maximum allowed length, Width = (any valid value)
Test Case 5: Length = (maximum allowed length + 1), Width = (any valid value)
(Similar test cases for Width)
Execute Test Cases:
Execute each test case on the system and observe the behavior, specifically focusing on how
the system handles values at the boundaries.
Evaluate Coverage:
Check whether all boundary values have been covered by the test cases. If not, add additional
test cases to improve coverage.
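As a minimal sketch, the code below encodes these boundary-value cases for a hypothetical rectangle_area(length, width) whose dimensions must lie in the closed range 1 to 1000; the maximum of 1000 is an assumed value, since the practical leaves the maximum allowed length unspecified.

```python
MAX_DIM = 1000  # assumed maximum allowed length/width, for illustration

def rectangle_area(length, width):
    """Hypothetical system under test: area of a rectangle whose
    dimensions must lie in the closed range [1, MAX_DIM]."""
    if not (1 <= length <= MAX_DIM and 1 <= width <= MAX_DIM):
        raise ValueError("dimension out of range")
    return length * width

VALID_WIDTH = 10  # any in-range value, held fixed while Length varies

# Boundary values for Length: on, just inside, and just outside each boundary.
boundary_cases = [
    (0,           False),  # just below the lower boundary -> rejected
    (1,           True),   # on the lower boundary -> accepted
    (2,           True),   # just inside the lower boundary
    (MAX_DIM - 1, True),   # just inside the upper boundary
    (MAX_DIM,     True),   # on the upper boundary -> accepted
    (MAX_DIM + 1, False),  # just above the upper boundary -> rejected
]

for length, should_accept in boundary_cases:
    try:
        rectangle_area(length, VALID_WIDTH)
        accepted = True
    except ValueError:
        accepted = False
    status = "OK" if accepted == should_accept else "FAIL"
    print(f"{status}: Length={length}, Width={VALID_WIDTH} -> accepted={accepted}")
```

Holding Width at a fixed valid value isolates the Length boundaries; the same cases would then be repeated with the roles reversed to cover the Width boundaries.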
