
Important Software Test Metrics and Measurements Explained with
Examples and Graphs
In software projects, it is most important to measure the quality, cost, and effectiveness of the project and its processes. Without measuring these, a project cannot be completed successfully.

This article explains software test metrics and measurements and how to use them in the software testing process.

There is a famous statement: "We can't control things which we can't measure."
Here, controlling the project means that a project manager/lead can identify deviations from the test plan as soon as possible and react in time. Generating test metrics based on the project's needs is very important for achieving the desired quality of the software being tested.
Definitions and Formulas for Calculating Metrics:
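For the worked examples below, assume the following sample data (these figures are the ones used in the calculations):

Total no. of Test cases written = 100
No. of Test cases executed = 65
No. of Test cases not executed = 35
No. of Test cases Passed = 30
No. of Test cases Failed = 26
No. of Test cases Blocked = 9
No. of Defects identified = 30 (6 Critical, 10 High, 6 Medium, 8 Low)
No. of Requirements = 5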
#1) %ge Test cases Executed: This metric is used to obtain the
execution status of the test cases in terms of percentage.
%ge Test cases Executed = (No. of Test cases executed / Total
no. of Test cases written) * 100. So, from the above data,
%ge Test cases Executed = (65 / 100) * 100 = 65%
#2) %ge Test cases not executed: This metric is used to obtain the
pending execution status of the test cases in terms of percentage.
%ge Test cases not executed = (No. of Test cases not executed /
Total no. of Test cases written) * 100. So, from the above data,
%ge Test cases not executed = (35 / 100) * 100 = 35%
#3) %ge Test cases Passed: This metric is used to obtain the Pass %ge of
the executed test cases.
%ge Test cases Passed = (No. of Test cases Passed / Total no. of Test
cases Executed) * 100. So, from the above data,
%ge Test cases Passed = (30 / 65) * 100 = 46%
#4) %ge Test cases Failed: This metric is used to obtain the Fail %ge of
the executed test cases.
%ge Test cases Failed = (No. of Test cases Failed / Total no. of Test
cases Executed) * 100.
So, from the above data,
%ge Test cases Failed = (26 / 65) * 100 = 40%
#5) %ge Test cases Blocked: This metric is used to obtain the blocked
%ge of the executed test cases. A detailed report can be submitted by
specifying the actual reason of blocking the test cases.
%ge Test cases Blocked = (No. of Test cases Blocked / Total no. of Test
cases Executed) * 100.
So, from the above data,
%ge Test cases Blocked = (9 / 65) * 100 = 14%
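As a quick illustration, here is a minimal Python sketch (the variable names are illustrative, not from the original) that computes the five test case metrics above from the sample data:

# Sample data from the worked examples above
total_written = 100
executed = 65
passed = 30
failed = 26
blocked = 9
not_executed = total_written - executed

# Execution-status metrics are relative to the total test cases written
pct_executed = executed / total_written * 100          # 65.0
pct_not_executed = not_executed / total_written * 100  # 35.0

# Pass/fail/blocked metrics are relative to the executed count
pct_passed = passed / executed * 100    # ~46.15
pct_failed = failed / executed * 100    # ~40.0
pct_blocked = blocked / executed * 100  # ~13.85

print(f"Executed: {pct_executed:.0f}%, Passed: {pct_passed:.0f}%, "
      f"Failed: {pct_failed:.0f}%, Blocked: {pct_blocked:.0f}%")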

#6) Defect Density = No. of Defects identified / Size
(Here Size is considered as the number of requirements. Hence the Defect
Density is calculated as the number of defects identified per requirement.
Similarly, Defect Density can be calculated as the number of defects
identified per 100 lines of code [OR] the number of defects identified per
module, etc.) So, from the above data,
Defect Density = (30 / 5) = 6
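The same calculation as a small Python sketch, with size measured in requirements as in the example (the names are illustrative):

defects_found = 30
size_in_requirements = 5
defect_density = defects_found / size_in_requirements  # 6.0 defects per requirement

# With a different size measure, only the denominator changes,
# e.g. defects per 100 lines of code or defects per module.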
#7) Defect Removal Efficiency (DRE) = (No. of Defects found during
QA testing / (No. of Defects found during QA testing + No. of Defects
found by End user)) * 100
DRE is used to identify the test effectiveness of the system.
Suppose, during development & QA testing, we identified 100 defects.
After the QA testing, during Alpha & Beta testing, the end user / client
identified 40 defects, which could have been identified during the QA
testing phase.
Now, the DRE will be calculated as
DRE = [100 / (100 + 40)] * 100 = [100 / 140] * 100 = 71%
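As a sketch, the same formula as a small Python helper (the function name is illustrative):

def defect_removal_efficiency(found_in_qa, found_by_end_user):
    # Share of all known defects that were caught before release
    return found_in_qa / (found_in_qa + found_by_end_user) * 100

print(defect_removal_efficiency(100, 40))  # 71.42..., i.e. ~71%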
#8) Defect Leakage: Defect Leakage is the metric which is used to
identify the efficiency of the QA testing, i.e., how many defects were
missed / slipped during the QA testing.
Defect Leakage = (No. of Defects found in UAT / No. of Defects found
in QA testing) * 100
Suppose, During Development & QA testing, we have identified 100
defects.
After the QA testing, during Alpha & Beta testing, end user / client
identified 40 defects, which could have been identified during QA
testing phase.
Defect Leakage = (40 /100) * 100 = 40%
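A matching Python sketch (again with illustrative names):

def defect_leakage(found_in_uat, found_in_qa):
    # Share of defects that slipped past QA and surfaced in UAT
    return found_in_uat / found_in_qa * 100

print(defect_leakage(40, 100))  # 40.0

Note that DRE (71%) and Defect Leakage (40%) use different denominators, so the two percentages are not complements of each other.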

#9) Defects by Priority: This metric is used to identify the number of
defects identified by the severity / priority of the defect, which is
used to judge the quality of the software.
%ge Critical Defects = No. of Critical Defects identified / Total no. of
Defects identified * 100
From the data available in the above table,
%ge Critical Defects = 6/ 30 * 100 = 20%
%ge High Defects = No. of High Defects identified / Total no. of
Defects identified * 100
From the data available in the above table,
%ge High Defects = 10/ 30 * 100 = 33.33%
%ge Medium Defects = No. of Medium Defects identified / Total no. of
Defects identified * 100
From the data available in the above table,
%ge Medium Defects = 6/ 30 * 100 = 20%
%ge Low Defects = No. of Low Defects identified / Total no. of Defects
identified * 100
From the data available in the above table,
%ge Low Defects = 8/ 30 * 100 = 27%
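The same breakdown as a short Python sketch (the counts are the ones from the example above):

from collections import Counter

# Severity counts would normally come from the defect tracker
severity_counts = Counter({"Critical": 6, "High": 10, "Medium": 6, "Low": 8})
total_defects = sum(severity_counts.values())  # 30

for severity, count in severity_counts.items():
    print(f"%ge {severity} Defects = {count / total_defects * 100:.2f}%")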
Software Test Release Metrics
Some of the software test release related metrics are given below. However, they vary
from company to company and from project to project.
Test Cases executed
General Guidelines:
1. All the test cases should be executed at least once, i.e., 100% test case execution.
2. Pass test cases >= 98% (this number can vary).
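A tiny sketch of such a release gate check in Python (the thresholds are the illustrative ones above):

executed_pct = 100.0   # all test cases executed at least once
pass_pct = 98.5        # passed / executed * 100

release_ok = executed_pct == 100.0 and pass_pct >= 98.0
print("Ready for release" if release_ok else "Hold the release")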

Effort Distribution
General Guidelines:
1. Sufficient effort has been spent on all the phases, components/modules of the
software/product under test.
2. This needs to be quantified as (Effort spent per module / Effort spent on all
modules)*100
Example: effort needs to be quantified for each phase, like requirements analysis,
design (test case design), execution, etc.
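A minimal Python sketch of that quantification, with made-up hours:

# Effort (in hours) spent per phase; the figures are illustrative only
effort_hours = {
    "Requirements analysis": 40,
    "Test case design": 80,
    "Execution": 120,
    "Reporting": 20,
}
total_effort = sum(effort_hours.values())

for phase, hours in effort_hours.items():
    print(f"{phase}: {hours / total_effort * 100:.1f}% of total effort")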

Open Defects with priority


General Guidelines:
1. If we plot a graph with number of open defects against time, it should show a
downward trend.
2. There should not be any open show stoppers/blockers before release. So 0
blockers in open state.
3. Close to 0 critical/major defects before release. In practice this is rarely 0,
as remaining fixes are postponed to the next release as long as they are acceptable to ship with.

Defect Removal Efficiency % (DRE %)

Definition: DRE % indicates how effectively defects are identified and removed,
both at the phase level and at the project level.
Conclusion from DRE %:
If Over-all Project DRE is between 90%-100%, then the efficiency is said to be at
High/Good level.
If Over-all Project DRE is between 75%-90%, then the efficiency is said to be at
Medium/Moderate level.
If Over-all Project DRE is below 75%, then the efficiency is said to be at
Low/Alarming level.
Note: This percentage may vary from organization to organization and from project to
project.
The DRE for each phase and overall project is calculated and given to the
management at the end of the project.
DRE % = (Defects removed during a phase) / (Defects present at the start of the phase + Defects introduced during the phase) X 100
Example:
Rows show the phase in which defects were detected; columns show the phase in which they were introduced:

Phase Detected....Requirement...Design...Code/Unit Test
Requirement.......10............--.......--
Design............3.............18.......--
Coding............0.............4........26
Testing...........2.............5........8
Customer..........1.............2........7
DRE % Requirement Phase = 10 / (10+3+0+2+1) x 100 = 62.50 %
DRE % Design Phase = (3+18) / (3+0+2+1+18+4+5+2) x 100 = 60.00 %
DRE % Coding Phase = (0+4+26) / (0+2+1+4+5+2+26+8+7) x 100 = 54.54 %
DRE % Testing Phase = (2+5+8) / (2+1+5+2+8+7) x 100 = 60.00 %
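These phase figures can be reproduced with a short Python sketch. The dictionary encodes the table above (outer key: phase detected, inner key: phase introduced), and the denominator follows the corrected per-phase formula given earlier:

# Defect counts from the table: matrix[phase_detected][phase_introduced]
matrix = {
    "Requirement": {"Requirement": 10},
    "Design":      {"Requirement": 3, "Design": 18},
    "Coding":      {"Requirement": 0, "Design": 4, "Code/Unit Test": 26},
    "Testing":     {"Requirement": 2, "Design": 5, "Code/Unit Test": 8},
    "Customer":    {"Requirement": 1, "Design": 2, "Code/Unit Test": 7},
}
detection_order = ["Requirement", "Design", "Coding", "Testing", "Customer"]
# First phase in which defects of each origin can be detected
first_detectable = {"Requirement": 0, "Design": 1, "Code/Unit Test": 2}

for i, phase in enumerate(detection_order[:-1]):  # Customer = after release
    removed_now = sum(matrix[phase].values())
    # Defects present in this phase: introduced by now, not yet removed
    available = sum(
        count
        for j, det in enumerate(detection_order) if j >= i
        for origin, count in matrix[det].items()
        if first_detectable[origin] <= i
    )
    print(f"DRE % {phase} Phase = {removed_now / available * 100:.2f} %")

# Output: 62.50, 60.00, 54.55, 60.00 (the 54.54 above is the truncated form)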

Defect Removal Efficiency % (DRE %) for the project


DRE % = (Total No. Of Defects before Release or Delivery) / (Total No. Of
Defects for the Project) X 100
For the above mentioned example,
Over-all DRE % = (10+3+2+18+4+5+26+8) / (10+3+2+1+18+4+5+2+26+8+7)
x 100 = 88.37 %
In other words,
Defect Removal Efficiency = D1 / (D1 + D2), where
D1 = defects found before release (delivery)
D2 = defects found after release by the customer
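Continuing the Python sketch above, the overall project DRE falls out of the same matrix:

total_defects = sum(sum(row.values()) for row in matrix.values())  # 86
found_by_customer = sum(matrix["Customer"].values())               # 10
overall_dre = (total_defects - found_by_customer) / total_defects * 100
print(f"Over-all DRE % = {overall_dre:.2f} %")  # 88.37 %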

Effort Metrics
Effort Slippage % = (Actual Effort - Planned Effort) / (Planned Effort) X 100
Note:
The efforts are always in terms of hours.
If the outcome is negative, we conclude that less effort was spent than planned.
If the outcome is positive, we conclude that more effort was spent than planned.
In both cases, please specify the reason while giving the metrics to management.

Schedule Metrics
Schedule Slippage % = (Actual Schedule - Planned Schedule) / Planned Schedule X 100
Note:
The schedule is always in terms of no. of days/hours.
If the outcome is negative, conclude that the project is ahead of schedule.
If the outcome is positive, conclude that the project is behind schedule.
In both cases, please specify the reason while giving the metrics to management.
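Both slippage metrics share the same shape; a minimal Python sketch with made-up figures:

def slippage_pct(actual, planned):
    # Positive = overrun versus plan; negative = under plan / ahead of plan
    return (actual - planned) / planned * 100

print(slippage_pct(actual=550, planned=500))  # effort in hours: 10.0 %
print(slippage_pct(actual=45, planned=50))    # schedule in days: -10.0 %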

Software Test Metrics and Measurement: Progress Metrics
In my last post, I mentioned that software test metrics can be used for different
purposes. One of them was tracking progress. Usually when we track progress, it
is related to time or some other unit that indicates a schedule. Most often we use
progress metrics to track planned versus actual over time. What we track
depends on our role. If we are financial people, then we track money spent. But
for software quality assurance, we want to track progress of such things as
defects, test cases, man-hours, etc. Basically anything that is related to results or
effort spent to get those results. For instance, here are a few:
man-hours/test case executed: The natural tendency when driving costs down is to
force this as low as possible, but remember that going faster and executing more
tests does not automatically translate into higher-quality software.
planned hours/actual hours: We track the effort we plan versus what we actually
spend, not only to see whether we are planning our resources well, but also to
spot deviations and then find out why they exist, which could point to problems.
If we find that planned versus actual deviates on certain days of the week, or
that deviations occur only for certain testers who are working on specific parts
of the software, this is useful information.
test cases executed/planned: This just keeps us on track to make sure we get
the bare minimum done in terms of getting our test cases executed. If we find
that it takes too long, on a repeated basis, then something needs to be changed.
Or if we go faster than normal on a regular basis, then this may point to a
problem with the test cases (especially if they find no defects).
test cases executed/defects found: This is a metric that indicates how good
our test cases are at finding defects. However, test cases run with no defects
found, or with a low ratio, do not mean there are no defects in the software.
Here is a generic chart which shows planned versus actual in terms of test cases
executed.
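Since the chart itself is generic, here is a minimal matplotlib sketch (the daily figures are made up) that produces such a planned-versus-actual chart:

import matplotlib.pyplot as plt

days = list(range(1, 11))
planned = [10 * d for d in days]                   # plan: 10 test cases/day
actual = [8, 18, 25, 36, 48, 55, 64, 76, 85, 97]   # made-up actuals

plt.plot(days, planned, marker="o", label="Planned")
plt.plot(days, actual, marker="s", label="Actual")
plt.xlabel("Day")
plt.ylabel("Test cases executed (cumulative)")
plt.title("Planned vs Actual Test Case Execution")
plt.legend()
plt.show()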
