
Unit-2
Software Metrics
A software metric is a measure of software characteristics that are quantifiable or
countable. Software metrics are valuable for many reasons, including measuring
software performance, planning work items, and measuring productivity, among many
other uses.
Within the software development process there are many metrics, all of which are
interconnected. Software metrics relate to the four functions of management:
planning, organization, control, and improvement.
Software metrics can be classified into three categories −
 Product metrics − Describes the characteristics of the product such as size,
complexity, design features, performance, and quality level.
 Process metrics − These characteristics can be used to improve the
development and maintenance activities of the software.
 Project metrics − These metrics describe the project characteristics and
execution. Examples include the number of software developers, the staffing
pattern over the life cycle of the software, cost, schedule, and productivity.
Some metrics belong to multiple categories. For example, the in-process quality
metrics of a project are both process metrics and project metrics.
Software quality metrics are a subset of software metrics that focus on the quality
aspects of the product, process, and project. These are more closely associated with
process and product metrics than with project metrics.
Software quality metrics can be further divided into three categories −
 Product quality metrics
 In-process quality metrics
 Maintenance quality metrics
Product Quality Metrics
These metrics include the following −
 Mean Time to Failure
 Defect Density
 Customer Problems
 Customer Satisfaction
Mean Time to Failure
It is the mean time between failures. This metric is mostly used with safety-critical
systems such as air traffic control systems, avionics, and weapons systems.
Defect Density
It measures the defects relative to the software size, expressed in lines of code,
function points, etc.; i.e., it measures code quality per unit of size. This metric is used
in many commercial software systems.
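As an illustration, here is a minimal Python sketch of the calculation; the function name and the sample figures are hypothetical, not from the text:

```python
def defect_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC); sample figures are made up."""
    return defects_found / (lines_of_code / 1000)

# Hypothetical example: 127 defects found in a 58,000-line system
print(defect_density(127, 58_000))  # ~2.19 defects per KLOC
```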

Customer Problems
It measures the problems that customers encounter when using the product. It captures
the customer's perspective on the problem space of the software, including
non-defect-oriented problems together with defect problems.
The problems metric is usually expressed in terms of Problems per User-Month
(PUM).
PUM = Total problems that customers reported (true defect and non-defect-oriented
problems) for a time period ÷ Total number of license-months of the software during
the period
Where,
Number of license-months of the software = Number of installed licenses of the
software × Number of months in the calculation period
PUM is usually calculated for each month after the software is released to the market,
and also for monthly averages by year.
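A small Python sketch of the PUM calculation follows; the sample figures are hypothetical:

```python
def problems_per_user_month(total_problems, installed_licenses, months):
    """PUM = reported problems / (installed licenses x months in the period)."""
    license_months = installed_licenses * months
    return total_problems / license_months

# Hypothetical example: 90 reported problems, 1,500 installed licenses, 3 months
print(problems_per_user_month(90, 1_500, 3))  # 0.02 problems per user-month
```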
Customer Satisfaction
Customer satisfaction is often measured using customer survey data on the following
five-point scale −
 Very satisfied
 Satisfied
 Neutral
 Dissatisfied
 Very dissatisfied
Satisfaction with the overall quality of the product and its specific dimensions is usually
obtained through various methods of customer surveys. Based on the five-point-scale
data, several metrics with slight variations can be constructed and used, depending on
the purpose of analysis. For example −
 Percent of completely satisfied customers
 Percent of satisfied customers
 Percent of dissatisfied customers (dissatisfied and very dissatisfied)
 Percent of non-satisfied customers (neutral, dissatisfied, and very dissatisfied)
Usually, the percent satisfaction is used.
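To make the variations concrete, here is a Python sketch that derives these percentages from five-point-scale counts; the survey numbers are invented for illustration, and the category groupings follow the definitions above:

```python
def satisfaction_percentages(counts):
    """Derive the common satisfaction metrics from five-point-scale counts."""
    total = sum(counts.values())
    completely_satisfied = counts["very_satisfied"] / total * 100
    satisfied = (counts["very_satisfied"] + counts["satisfied"]) / total * 100
    dissatisfied = (counts["dissatisfied"] + counts["very_dissatisfied"]) / total * 100
    non_satisfied = 100 - satisfied  # neutral + dissatisfied + very dissatisfied
    return completely_satisfied, satisfied, dissatisfied, non_satisfied

survey = {"very_satisfied": 120, "satisfied": 180, "neutral": 60,
          "dissatisfied": 30, "very_dissatisfied": 10}
print(satisfaction_percentages(survey))  # (30.0, 75.0, 10.0, 25.0)
```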
In-process Quality Metrics
In-process quality metrics deal with tracking defect arrivals during formal machine
testing. For some organizations, these metrics include −
 Defect density during machine testing
 Defect arrival pattern during machine testing
 Phase-based defect removal pattern
 Defect removal effectiveness
Defect density during machine testing
Defect rate during formal machine testing (testing after code is integrated into the
system library) is correlated with the defect rate in the field. A higher defect rate found
during testing indicates that the software has experienced higher error injection
during its development process, unless the higher testing defect rate is due to an
extraordinary testing effort.
This simple metric of defects per KLOC or function point is a good indicator of quality,
while the software is still being tested. It is especially useful to monitor subsequent
releases of a product in the same development organization.
Defect arrival pattern during machine testing
The overall defect density during testing will provide only the summary of the defects.
The pattern of defect arrivals gives more information about different quality levels in the
field. It includes the following −
 The defect arrivals or defects reported during the testing phase by time interval
(e.g., week). Not all of these will be valid defects.
 The pattern of valid defect arrivals when problem determination is done on the
reported problems. This is the true defect pattern.
 The pattern of defect backlog over time. This metric is needed because
development organizations cannot investigate and fix all the reported problems
immediately. This is a workload statement as well as a quality statement. If the
defect backlog is large at the end of the development cycle and a lot of fixes
have yet to be integrated into the system, the stability of the system (hence its
quality) will be affected. Retesting (regression test) is needed to ensure that
targeted product quality levels are reached.
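As a simple illustration of tracking arrivals and backlog by time interval, the following Python sketch uses invented weekly counts:

```python
# Hypothetical weekly counts during the testing phase
arrivals = [12, 18, 25, 30, 22, 15, 9, 5]    # defects reported per week
closures = [5, 10, 15, 20, 25, 20, 15, 10]   # fixes completed per week

backlog = 0
for week, (opened, closed) in enumerate(zip(arrivals, closures), start=1):
    backlog += opened - closed
    print(f"Week {week}: arrivals={opened}, closures={closed}, backlog={backlog}")
```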
Phase-based defect removal pattern
This is an extension of the defect density metric during testing. In addition to testing, it
tracks the defects at all phases of the development cycle, including the design reviews,
code inspections, and formal verifications before testing.
Because a large percentage of programming defects is related to design problems,
conducting formal reviews or functional verifications at the front end enhances the
defect removal capability of the process and reduces errors in the software. The
pattern of phase-based defect removal reflects the overall defect removal ability of the
development process.
With regard to the metrics for the design and coding phases, in addition to defect rates,
many development organizations use metrics such as inspection coverage and
inspection effort for in-process quality management.
Defect removal effectiveness
It can be defined as follows −
Defect removal effectiveness = (Defects removed during a development phase ÷
Defects latent in the product at that phase) × 100%
The latent defects for a given phase are estimated as the defects removed during the
phase plus the defects found later.
This metric can be calculated for the entire development process, for the front-end
before code integration and for each phase. It is called early defect removal when
used for the front-end and phase effectiveness for specific phases. The higher the
value of the metric, the more effective the development process and the fewer the
defects passed to the next phase or to the field. This metric is a key concept of the
defect removal model for software development.
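A minimal Python sketch of the calculation, assuming latent defects are estimated as defects removed in the phase plus defects found later; the sample numbers are hypothetical:

```python
def defect_removal_effectiveness(removed_in_phase, found_later):
    """DRE = defects removed in the phase / defects latent at the phase, as a percent."""
    latent = removed_in_phase + found_later  # estimate of defects latent at the phase
    return removed_in_phase / latent * 100

# Hypothetical example: design reviews removed 40 defects; 10 escaped to later phases
print(defect_removal_effectiveness(40, 10))  # 80.0 (percent)
```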
Maintenance Quality Metrics
Although little can be done during this phase to alter the inherent quality of the
product, the following metrics track how quickly and how well defects are fixed −
 Fix backlog and backlog management index
 Fix response time and fix responsiveness
 Percent delinquent fixes
 Fix quality
Fix backlog and backlog management index
Fix backlog is related to the rate of defect arrivals and the rate at which fixes for
reported problems become available. It is a simple count of reported problems that
remain at the end of each month or each week. Using it in the format of a trend chart,
this metric can provide meaningful information for managing the maintenance process.
Backlog Management Index (BMI) is used to manage the backlog of open and
unresolved problems.
BMI = (Number of problems closed during the month ÷ Number of problem arrivals
during the month) × 100%
If BMI is larger than 100, the backlog is reduced; if BMI is less than 100, the backlog
increased.
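A short Python sketch of the BMI calculation with invented monthly counts:

```python
def backlog_management_index(closed_in_month, arrivals_in_month):
    """BMI > 100 means the backlog shrank; BMI < 100 means it grew."""
    return closed_in_month / arrivals_in_month * 100

print(backlog_management_index(55, 50))  # 110.0 -> backlog reduced
print(backlog_management_index(40, 50))  # 80.0  -> backlog increased
```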
Fix response time and fix responsiveness
The fix response time metric is usually calculated as the mean time of all problems
from open to close. Short fix response time leads to customer satisfaction.
The important elements of fix responsiveness are customer expectations, the agreed-
to fix time, and the ability to meet one's commitment to the customer.
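As an illustration, the mean open-to-close time can be computed as in the following Python sketch; the dates are hypothetical:

```python
from datetime import date

# Hypothetical (opened, closed) dates for problems fixed in the period
fixes = [(date(2023, 1, 2), date(2023, 1, 9)),
         (date(2023, 1, 5), date(2023, 1, 8)),
         (date(2023, 1, 10), date(2023, 1, 24))]

mean_days = sum((closed - opened).days for opened, closed in fixes) / len(fixes)
print(f"Mean fix response time: {mean_days:.1f} days")  # 8.0 days
```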
Percent delinquent fixes
It is calculated as follows −
Percent delinquent fixes = (Number of fixes that exceeded the response time criteria
by severity level ÷ Total number of fixes delivered in a specified time) × 100%
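A brief Python sketch of the calculation; the turnaround times and the criterion are hypothetical:

```python
def percent_delinquent(fix_turnaround_days, criterion_days):
    """Share of fixes that exceeded the agreed response time criterion."""
    delinquent = sum(1 for days in fix_turnaround_days if days > criterion_days)
    return delinquent / len(fix_turnaround_days) * 100

# Hypothetical example: severity-2 fixes against a 10-day criterion
print(percent_delinquent([3, 7, 12, 9, 15], criterion_days=10))  # 40.0 (percent)
```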

Fix Quality
Fix quality or the number of defective fixes is another important quality metric for the
maintenance phase. A fix is defective if it did not fix the reported problem, or if it fixed
the original problem but injected a new defect. For mission-critical software, defective
fixes are detrimental to customer satisfaction. The metric of percent defective fixes is
the percentage of all fixes in a time interval that are defective.
A defective fix can be recorded in two ways: Record it in the month it was discovered
or record it in the month the fix was delivered. The first is a customer measure; the
second is a process measure. The difference between the two dates is the latent
period of the defective fix.
Usually, the longer the latency, the more customers are affected. If the number of fixes
is large, even a small value of this percentage metric represents a sizable number of
defective fixes, so the percentage alone can paint an overly optimistic picture. The
quality goal for the maintenance process, of course, is zero defective fixes without
delinquency.
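The latent period described above can be computed as the gap between the delivery month and the discovery month, as in this Python sketch; the records are invented:

```python
# Hypothetical records: (month fix delivered, month the defective fix was discovered)
defective_fixes = [("2023-01", "2023-03"), ("2023-02", "2023-02"), ("2023-04", "2023-09")]

def latency_months(delivered, discovered):
    """Latent period of a defective fix, in whole months."""
    dy, dm = map(int, delivered.split("-"))
    fy, fm = map(int, discovered.split("-"))
    return (fy - dy) * 12 + (fm - dm)

for delivered, discovered in defective_fixes:
    print(f"{delivered} -> {discovered}: latent for {latency_months(delivered, discovered)} months")
```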
Software Quality Indicators
A Software Quality Indicator can be calculated to provide an indication of the quality of the system by assessing
system characteristics.

Method

Assemble a quality indicator from factors that can be determined automatically with commercial or custom code
scanners, such as the following:

· cyclomatic complexity of code,
· unused/unreferenced code segments (these should be eliminated over time),
· average number of application calls per module (complexity is directly proportional to the number of calls),
· size of compilation units (reasonably sized units have approximately 20 functions (or paragraphs), or about
2,000 lines of code; these guidelines will vary greatly by environment),
· use of structured programming constructs (e.g., elimination of GOTOs and circular procedure calls).
These measures apply to traditional 3GL environments and are more difficult to determine in environments
using object-oriented languages, 4GLs, or code generators.
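One way to assemble such an indicator is as a weighted average of normalized factor scores. The Python sketch below is only illustrative: the factor names, scores, and equal weights are assumptions, and real scores would come from the code scanners mentioned above:

```python
# Hypothetical normalized factor scores (0 = worst, 1 = best) from code scanners
factors = {
    "cyclomatic_complexity": 0.70,  # inverse-scaled: lower complexity -> higher score
    "unused_code":           0.90,  # fewer dead segments -> higher score
    "calls_per_module":      0.60,
    "compilation_unit_size": 0.80,
    "structured_constructs": 0.95,  # e.g., absence of GOTOs
}
weights = {name: 1.0 for name in factors}  # equal weighting as a starting point

indicator = sum(weights[n] * factors[n] for n in factors) / sum(weights.values())
print(f"Software Quality Indicator: {indicator:.2f}")  # 0.79
```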
Tips and Hints
With existing software, the Software Quality Indicator could also include a measure of the reliability of the code.
This can be determined by keeping a record of how many times each module has to be fixed in a given time period.
There are other factors which contribute to the quality of a system such as:
· procedure re-use,
· clarity of code and documentation,
· consistency in the application of naming conventions,
· adherence to standards,
· consistency between documentation and code,
· the presence of current unit test plans.


These factors are harder to determine automatically. However, with the introduction of CASE tools and reverse-
engineering tools, and as more of the design and documentation of a system is maintained in structured repositories,
these measures of quality will be easier to determine, and could be added to the indicator.

Quality Indicators

The four quality indicators are based primarily on the measurement of software change
across evolving baselines of engineering data (such as design models and source
code).

1. Change Traffic and Stability

Overall change traffic is one specific indicator of progress and quality. Change traffic is defined
as the number of software change orders opened and closed over the life cycle (Figure 13-5).
This metric can be collected by change type, by release, across all releases, by team, by
components, by subsystem, and so forth. Coupled with the work and progress metrics, it
provides insight into the stability of the software and its convergence toward stability (or
divergence toward instability). Stability is defined as the relationship between opened versus
closed SCOs. The change traffic relative to the release schedule provides insight into schedule
predictability, which is the primary value of this metric and an indicator of how well the process
is performing. The next three quality metrics focus more on the quality of the product.

Figure 13-5. Stability expectation over a healthy project's life cycle
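As a rough illustration of the stability metric, this Python sketch tallies opened versus closed SCOs per week from an invented change log:

```python
# Hypothetical SCO log: (week number, event type)
sco_events = [(1, "opened"), (1, "opened"), (2, "opened"), (2, "closed"),
              (3, "closed"), (3, "closed"), (4, "opened"), (4, "closed")]

open_scos = 0
for week in range(1, 5):
    opened = sum(1 for w, kind in sco_events if w == week and kind == "opened")
    closed = sum(1 for w, kind in sco_events if w == week and kind == "closed")
    open_scos += opened - closed
    print(f"Week {week}: opened={opened}, closed={closed}, still open={open_scos}")
```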

2. Breakage and Modularity

Breakage is defined as the average extent of change, which is the amount of software baseline
that needs rework (in SLOC, function points, components, subsystems, files, etc.). Modularity is
the average breakage trend over time. For a healthy project, the trend expectation is decreasing
or stable (Figure 13-6).
This indicator provides insight into the benign or malignant character of software change. In a
mature iterative development process, earlier changes are expected to result in more scrap than
later changes. Breakage trends that are increasing with time clearly indicate that product
maintainability is suspect.
3. Rework and Adaptability

Rework is defined as the average cost of change, which is the effort to analyze, resolve, and
retest all changes to software baselines. Adaptability is defined as the rework trend over time.
For a healthy project, the trend expectation is decreasing or stable.

Not all changes are created equal. Some changes can be made in a staff-hour, while others
take staff-weeks. This metric provides insight into rework measurement. In a mature iterative
development process, earlier changes (architectural changes, which affect multiple components
and people) are expected to require more rework than later changes (implementation changes,
which tend to be confined to a single component or person). Rework trends that are increasing
with time clearly indicate that product maintainability is suspect.
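A toy Python check of the adaptability trend, using invented average rework costs per iteration:

```python
# Hypothetical average rework cost (staff-hours per change) by iteration:
# early architectural changes are costly, later implementation changes cheaper
rework_by_iteration = [40, 32, 20, 12, 8]

deltas = [b - a for a, b in zip(rework_by_iteration, rework_by_iteration[1:])]
trend = "decreasing/stable (healthy)" if all(d <= 0 for d in deltas) else "increasing (suspect)"
print("Adaptability trend:", trend)
```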

4. MTBF and Maturity


MTBF is the average usage time between software faults. In rough terms, MTBF is computed by
dividing the test hours by the number of type 0 and type 1 SCOs. Maturity is defined as the
MTBF trend over time (Figure 13-8).
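In line with the rough computation described above, a minimal Python sketch (the sample figures are hypothetical):

```python
def mtbf(test_hours, type0_scos, type1_scos):
    """Rough MTBF: usage (test) hours divided by the count of serious faults."""
    return test_hours / (type0_scos + type1_scos)

# Hypothetical example: 1,200 test hours, 3 type 0 and 9 type 1 SCOs
print(mtbf(1_200, 3, 9))  # 100.0 hours between faults
```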
Early insight into maturity requires that an effective test infrastructure be established.
Conventional testing approaches for monolithic software programs focused on achieving
complete test coverage of every line of code, every branch, and so forth.
Figure 13-8. Maturity expectation over a healthy project's life cycle
In today's distributed and componentized software systems, such complete test
coverage is achievable only for discrete components. Systems of components are
more efficiently tested by using statistical techniques.
Consequently, the maturity metrics measure statistics over usage time rather than product
coverage.

Software errors can be categorized into two types: deterministic and nondeterministic.
Physicists would characterize these as Bohr-bugs and Heisen-bugs, respectively. Bohr-bugs
represent a class of errors that always result when the software is stimulated in a certain way.
These errors are predominantly caused by coding errors, and changes are typically isolated to a
single component. Heisen-bugs are software faults that are coincidental with a certain
probabilistic occurrence of a given situation. These errors are almost always design errors
(frequently requiring changes in multiple components) and typically are not repeatable even
when the software is stimulated in the same apparent way. To provide adequate test coverage
and resolve the statistically significant Heisen-bugs, extensive statistical testing under realistic
and randomized usage scenarios is necessary.

Conventional software programs executing a single program on a single processor
typically contained only Bohr-bugs. Modern, distributed systems with numerous interoperating
components executing across a network of processors are vulnerable to Heisen-bugs, which
are far more complicated to detect, analyze, and resolve. The best way to mature a software
product is to establish an initial test infrastructure that allows execution of randomized usage
scenarios early in the life cycle and continuously evolves the breadth and depth of usage
scenarios to optimize coverage across the reliability-critical components.

As baselines are established, they should be continuously subjected to test scenarios. From this
base of testing, reliability metrics can be extracted. Meaningful insight into product maturity can
be gained by maximizing test time (through independent test environments, automated
regression tests, randomized statistical testing, after-hours stress testing, etc.). This testing
approach provides a powerful mechanism for encouraging automation in the test activities as
early in the life cycle as practical. This technique could also be used for monitoring performance
improvements and measuring reliability.
