
SQM Notes

Software Quality: The degree to which a system, component, or process meets specific requirements or
customer/user needs/expectations.
The goal of software quality is to determine:
1. How well designed is the software?
2. How well does the software conform to that design?

Software Quality Attributes: Software quality can be measured in terms of attributes. The attribute
domains that need to be defined for a given software product are as follows:
1. Functionality: The degree to which the purpose of the software is satisfied.
2. Usability: The degree to which the software is easy to use.
3. Testability: The ease with which the software can be tested to demonstrate its faults.
4. Reliability: The degree to which the software performs failure-free functions.
5. Maintainability: The ease with which faults can be located and fixed, the quality of the software can be
improved, or the software can be modified in the maintenance phase.
6. Adaptability: The degree to which the software is adaptable to different technologies and platforms.

• Functionality:
➢ Completeness - The degree to which the software is complete.
➢ Correctness - The degree to which the software is correct.
➢ Efficiency - The degree to which the software performs its functions with a minimal amount of resources.
➢ Traceability - The degree to which a requirement is traceable to the software design and source
code.
➢ Security - The degree to which the software is able to prevent unauthorized access to the program
data.
• Usability:
➢ Learnability - The degree to which the software is easy to learn.
➢ Operability - The degree to which the software is easy to operate.
➢ User-friendliness - The degree to which the interfaces of the software are easy to use and
understand.
➢ Installability - The degree to which the software is easy to install.
➢ Satisfaction - The degree to which the user feels satisfied with the software.
• Testability:
➢ Verifiability - The degree to which the software deliverable meets the specified standards,
procedures and process.
➢ Validatability - The ease with which the software can be executed to demonstrate whether the
established testing criteria are met.
• Reliability:
➢ Robustness - The degree to which the software performs reasonably under unexpected
circumstances.
➢ Recoverability - The speed with which the software recovers after the occurrence of a failure.
• Maintainability:
➢ Agility - The degree to which the software is quick to change or modify.
➢ Modifiability - The degree to which the software is easy to implement, modify and test in the
maintenance phase.
➢ Readability - The degree to which the software documents and programs are easy to understand
so that the faults can be easily located and fixed in the maintenance phase.
➢ Flexibility - The ease with which changes can be made in the software in the maintenance phase.
• Adaptability:
➢ Portability - The ease with which the software can be transferred from one platform to another
platform.
➢ Interoperability - The degree to which the system is compatible with other systems.

There are two goals of the software quality system (SQS):


➢ The first goal is to build quality into the software from the beginning. This means assuring that the
problem or need to be addressed is clearly and accurately stated, and that the requirements for
the solution are properly defined, expressed, and understood.
➢ The second goal of the SQS is to keep that quality in the software throughout the software life cycle
(SLC).

Elements of a Software Quality System:


1. Standards, processes and metrics
• Standards – They provide procedures that must be enforced during the SDLC. They are defined by
IEEE, ANSI and ISO. They should be supported by well-defined, concrete and effective processes so
that they are effectively implemented. Standards include the following:
➢ Necessity: No standard will be observed for long if there is no real reason for its existence.
➢ Feasibility: Common sense tells us that if it is not possible to comply with the tenets of a
standard, then it will be ignored.
➢ Measurability: It must be possible to demonstrate that the standard is being followed.
• Process – It is a collection of activities that are required to produce a good-quality product. An
effective process is well-practiced, enforced, documented and measured.
• Metric – Metrics are used to measure the effectiveness of the software processes and practices
followed during software development.
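As an illustration of such metrics, here is a minimal sketch of two commonly used measures, defect density and defect removal efficiency. The formulas are standard, but the function names and the sample numbers are invented for this example.

```python
# Illustrative sketch: two common software quality metrics.
# Function names and sample figures are assumptions, not from any standard.

def defect_density(defects_found, kloc):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

def defect_removal_efficiency(found_before_release, found_after_release):
    """Fraction of total known defects caught before release."""
    total = found_before_release + found_after_release
    return found_before_release / total

# Example: 45 defects in a 15 KLOC module; 40 caught pre-release, 5 post-release.
print(defect_density(45, 15))            # 3.0 defects/KLOC
print(defect_removal_efficiency(40, 5))  # approximately 0.889
```

Tracking such values over successive releases is what makes them useful: a rising defect density or a falling removal efficiency signals a weakening process.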
2. Reviews and audits
• Reviews – They are conducted as a part of verification activities. They are effective as they are
conducted in early phases of SDLC. They minimize the occurrence of faults in earlier phases of
software development (before validation). They are cost-effective. They are of 2 types:
➢ Formal Review – They are carried out at the end of an SDLC phase.
➢ Informal Review – They are carried out throughout the SDLC and are also known as in-process
reviews. They include walkthroughs and inspections.
a) Walkthrough – It is a scheduled review, usually conducted in peer groups. The author of
the component or code unit explains it to the group. The goal is defect removal.
b) Inspection – It is a more structured type of walkthrough. The format of the meeting and the
roles of the participants are more strictly defined, and formal records are prepared. The goal is
defect removal.
• Audit – It is a thorough review of a software product to check its quality, progress, standards and
regulations. It basically checks the health of a product and ensures that everything is going as
planned. It is of 2 types:
➢ Physical Audit (PA) – It is a formal audit. It compares the final form of the code against the final
documentation for that code. Its goal is to ensure that the two products, documentation and code,
are in agreement before they are released to the user.
➢ Functional Audit (FA) – It is also a formal audit. It compares the test results with the currently
approved requirements to assure that all requirements have been satisfied.
3. Software testing
• Test planning begins during the requirements phase. As each requirement is generated, a
corresponding method of testing that requirement is considered.
• A requirement is faulty if it is not testable.
• Actual testing begins with debugging and early unit and module testing.
• Detailed test procedures are required for test execution.
• Test report documents the actual results of testing efforts as it progresses.
• A report of the expected result, the actual result and the test conductor's conclusion concerning the
success of the test is prepared for each test execution.
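The per-test report described above can be sketched as a small record type. The field names here are assumptions for illustration, not from any particular standard.

```python
# Hypothetical sketch of a per-test execution record: expected result,
# actual result, and the conclusion about the success of the test.
from dataclasses import dataclass

@dataclass
class TestRecord:
    test_id: str
    expected: str
    actual: str

    @property
    def conclusion(self) -> str:
        # The test passes only when actual and expected results agree.
        return "PASS" if self.expected == self.actual else "FAIL"

record = TestRecord("TC-01", expected="balance = 100", actual="balance = 100")
print(record.conclusion)  # PASS
```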
4. Defect management and trend analysis
• Defect analysis is the combination of defect detection, correction and defect trend analysis.
• Recording of defects and their solutions can do the following:
➢ Prevent defects from remaining unsolved for inappropriate lengths of time
➢ Prevent unwarranted changes
➢ Point out inherently weak areas in the software
➢ Provide analysis data for development process evaluation and correction
➢ Provide warnings of potential defects through analysis of defect trends
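Defect trend analysis of the kind described in the last point can be sketched as follows; the 10% thresholds and sample counts are illustrative, not prescribed by any standard.

```python
# Minimal sketch of defect trend analysis: compare recent weekly defect
# counts against earlier ones to give an early warning of a worsening trend.
# The 10% tolerance band is an invented example threshold.

def defect_trend(weekly_counts):
    """Return 'rising', 'falling', or 'stable' for a list of weekly defect counts."""
    half = len(weekly_counts) // 2
    early = sum(weekly_counts[:half]) / half
    late = sum(weekly_counts[half:]) / (len(weekly_counts) - half)
    if late > early * 1.1:
        return "rising"
    if late < early * 0.9:
        return "falling"
    return "stable"

print(defect_trend([4, 5, 6, 9, 11, 14]))  # rising
```

A "rising" result on such a chart is exactly the warning of potential defects that the notes mention, and points the team at the inherently weak areas of the software.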
5. Configuration management
• Configuration identification is the naming and documentation of each component (document,
unit, module, subsystem, and system) so that at any given time the particular component of
interest can be uniquely identified.
• Configuration control is the activity that prevents unauthorized changes to any software product.
Early in the SLC, documentation is the primary product. Configuration control takes on an
increasingly formal role as the documents move from draft to final form.
• Configuration accounting keeps track of the status of each component.
• The latest version or update of each software component is recorded.
• Thus, when changes or other activities are necessary with respect to the component, the correct
version of the component can be located and used.
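Configuration accounting, as described above, amounts to keeping a version history per component so that the correct version can be located on demand. A minimal sketch (the class and method names are invented for illustration):

```python
# Hypothetical sketch of configuration accounting: record every version of
# each component so the latest (or any earlier) version can be located.

class ConfigurationAccounting:
    def __init__(self):
        # component name -> list of (version, location) in recording order
        self._history = {}

    def record(self, component, version, location):
        self._history.setdefault(component, []).append((version, location))

    def latest(self, component):
        """Return the most recently recorded (version, location) pair."""
        return self._history[component][-1]

ca = ConfigurationAccounting()
ca.record("login-module", "1.0", "repo/tags/login-1.0")
ca.record("login-module", "1.1", "repo/tags/login-1.1")
print(ca.latest("login-module"))  # ('1.1', 'repo/tags/login-1.1')
```

Real configuration management tools add authorization checks (configuration control) on top of this bookkeeping, so that recording a new version requires an approved change request.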

6. Risk management activities


• Risks range from the simple, such as the availability of trained personnel to undertake the project,
to the threatening, such as improper implementation of complicated algorithms, to the deadly,
such as failure to detect an alarm in a nuclear plant.
• Risk management includes identification of the risk; determining the probability, cost, or threat of
the risk; and taking action to eliminate, reduce, or accept the risk.
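The "determining the probability, cost, or threat" step is commonly quantified as risk exposure = probability of occurrence x cost if the risk occurs. A hedged sketch, with invented risks and numbers:

```python
# Sketch of the quantitative step of risk management: compute risk exposure
# for each identified risk and rank them. The risks, probabilities, and
# costs below are invented for illustration.

def risk_exposure(probability, cost):
    """Expected loss from a risk: probability of occurrence times cost."""
    return probability * cost

risks = [
    ("trained personnel unavailable", 0.30, 50_000),
    ("algorithm implemented incorrectly", 0.10, 200_000),
]

# Rank by exposure so mitigation effort goes to the biggest expected loss first.
ranked = sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True)
print(ranked[0][0])  # algorithm implemented incorrectly
```

Note how ranking by exposure differs from ranking by probability alone: the low-probability algorithm risk outranks the more likely staffing risk because its cost is so much higher.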
7. Supplier control/Vendor Management
• Following are 3 basic types of purchased software:
➢ Off-the-shelf Software – It is the package we buy at the store. For example, Microsoft Office,
Adobe Photoshop, virus checkers.
➢ Tailored Shell Software – A basic, existing framework is purchased and the vendor then adds
specific capabilities as required by the contract.
➢ Contracted Software – This software is contractually specified and provided by a third-party
developer.
8. Training

• Training assures that the people involved with software development, and those people using the
software once it is developed, are able to do their jobs correctly.
• It is important to the quality of the software that the producers be educated in the use of the
various development tools at their disposal.
• The proper use of the software once it has been developed and put into operation is another area
requiring education.
9. Documentation

• During the software development phases, the SRS document, SDD, test plans and test cases, user
manuals and system guides etc. must be produced.
• The specified documentation standards must be followed in order to create these documents.
• The documentation helps strongly in the debugging process and hence the maintenance of the
product.
10. Safety and security

• Security:
➢ The highest quality software system is of no use if the data centre in which it is to be used is
damaged or destroyed.
➢ Another frequent damager of the quality of output of an otherwise high-quality software
system is data that has been unknowingly modified.
➢ Additionally, though not really a software quality issue per se, is the question of theft of data.
➢ Finally, the recent onslaught of hackers and software attackers and the burgeoning
occurrences of viruses also need to be considered.
➢ The software quality practitioner is responsible for alerting management to the absence, or
apparent inadequacy, of security provisions in the software.
➢ In addition, the software quality practitioner must raise the issue of data centre security and
disaster recovery to management's attention.
• Safety:
➢ Every software project must consciously consider the safety implications of the software and
the system of which it is a part.
➢ The project management plan should include a paragraph describing the safety issues to be
considered. If appropriate, a software safety plan should be prepared.

Bug - A bug refers to a defect, which means that the software product or application is not working as
per the agreed requirements. When we have any type of logical error, it causes our code to break,
which results in a bug.
Defect - A defect refers to a situation when the application is not working as per the requirement and the
actual and expected result of the application or software are not in sync with each other.
Error - An error is a situation that happens when the development team or the developer fails to understand
a requirement definition, and that misunderstanding gets translated into buggy code.
Fault – Sometimes, due to certain factors such as a lack of resources or not following proper steps, a fault
occurs in software, which means that the logic needed to handle errors was not incorporated into the
application. It mainly happens due to invalid documented steps or a lack of data definitions.
Failure - It is the accumulation of several defects that ultimately leads to software failure, resulting in the
loss of information in critical modules and making the system unresponsive. It is very rare.

Process Vs Procedure
• A process is a series of related tasks that turns input into output.
• A process describes a series of events leading to achieving a specific objective.
• A process normally operates at a higher level.
• You complete a process to achieve your desired outcome (or achieve your end goal).
• The procedure is a way of undertaking a process or part of a process.
• There can be one or more procedures in a process or procedures from other processes may be
referenced in a process.
• The procedure is an instructional guideline that guides you through each step of the process.
Phases of Software Development Life Cycle:
1. Requirement analysis phase—gathering and documentation of requirements.
2. Design phase—preliminary and detailed design of the software.
3. Implementation and unit testing phase—development of source code and initial testing of
independent units.
4. Integration and system testing phase—testing the integrated parts of various units and the system as
a whole.
5. Operational phase—delivering and installing the software at the customer’s site.
6. Maintenance phase—removing defects, accommodating changes and improving the quality of the
software after it goes into the operational phase.
Software Quality Assurance: It is the planned and systematic pattern of actions required to ensure quality
in the software. The term quality assurance includes the following:

• Actions taken to ensure that standards and procedures are adhered to.


• Policy, procedures and systematic actions are established.
• Assurance that an appropriate development methodology is in place.
• Assurance that effective reviews and audits are conducted.
• Appropriate documentation to support maintenance and enhancement.
• Setting up the software configuration management to control the change.
• Well-planned testing performed.
SQA deals with following risks:

• Technical risks – Software will not perform as intended or be hard to operate, modify and/or maintain.
• Programmatic risks – Project will overrun cost or schedule.
The goal of SQA is to reduce these risks by:

• Monitoring the software and development process appropriately.


• Ensuring the compliance with standards and procedures for software and process.
• Ensuring that inadequacies in product, process, or standards are brought to management's attention
so that they may be fixed.
Responsibilities in SQA are:

• Prepare SQA plan for the project.


• Review all development and quality plans for completeness.
• Participate in the development of the project's software process description.
• Participate as inspection moderators in design and code inspection.
• Review all test plans for adherence to set standards and procedures.
• Review samples of all test results to determine adherence to plan.
• Review all documented procedures.
• Review all project phases and write down any non-compliance.

SQA Life Cycle:


• Initialization phase
➢ Writing and reviewing management plan
➢ Verifying specified standards, procedures and processes in plan.
• Requirement phase
➢ Assuring that software requirements are complete and testable.
➢ Assuring that software requirements are properly expressed as functional, performance and
interface requirements.
• Preliminary design phase
➢ Assuring adherence to approved design standards as described in the management plan.
➢ Assuring all software requirements are assigned to software components etc.
• Detailed design phase
➢ Assuring that approved design standards are followed.
➢ Assuring that allocated modules and results of design inspections are included.
• Implementation phase
➢ Auditing the results of coding and design activities, including the schedule of the software
development phase and the status of all deliverable items.
➢ Auditing configuration management activity and nonconformance reporting and corrective
action system.
• Integration and Testing phase
➢ Assuring readiness of all deliverable items and, after testing, that the test reports are complete
and correct.
➢ Assuring that all tests are run according to test plans and procedures, and that all
nonconformances are reported and corrected.
• Acceptance and delivery phase
➢ Assuring the performance of a final configuration audit.
➢ Assuring readiness of all the deliverable items.

SQA Activities
1. Revision:
a. A draft version of an artifact/document submitted by the group is evaluated.
b. Reviewers are allowed to post their comments about possible flaws in the documentation.
c. If there is any contradiction, it is discussed.
d. A review is composed of walkthroughs and inspections.
2. Process Evaluation:
a. Define the process standards such as how reviews should be conducted and when reviews
should be held.
b. Monitor the development process to ensure that the standards are being followed.
c. Report on the development process to software project management and to the customer.
3. Software Standards:
a. Standards provide an encapsulation of the best, or at least the most appropriate, practice.
b. Documentation standards specify the form and contents for planning and control of the
documentation.
c. Design standards specify the form and content of design of the product and documentation
of the design.
d. Code standards specify the language in which the code is to be written and define any
restriction on use of language features.
SQA system components can be classified into 6 classes:
1) Pre-project components
2) Software project life cycle components
3) Infrastructure components for error prevention and improvements
4) Management SQA components
5) SQA standards, system certification and assessment components
6) Organizing for SQA- the human components

McCall’s Software Quality Model:


McCall proposed a software quality model in 1977 which included many quality factors (McCall 1977). The
aim of this software quality model is to reduce the gap between users and developers. The model groups
its quality factors into three categories:

• Product Operation - Operational characteristics of the software.


• Product Revision - The extent to which the software can be modified.
• Product Transition - The quality of the software to adapt to new environment.

Boehm Software Quality Model:


McCall's model primarily focuses on precise measurement of high-level characteristics, whereas Boehm's
quality model is based on a wider range of characteristics.
The Boehm's model has three levels for quality attributes. These levels are divided based on their
characteristics. These levels are:

• Primary Uses (high level characteristics)


➢ As-is utility - The extent to which we can use the software as-is.
➢ Maintainability - The effort required to detect and fix an error during maintenance.
➢ Portability - The effort required to change the software to fit in a new environment.
• Intermediate Constructs (mid-level characteristics)
➢ Portability
➢ Reliability
➢ Efficiency
➢ Usability (Human Engineering)
➢ Testability
➢ Understandability
➢ Modifiability
• Primitive Constructs (primitive characteristics)
➢ Device independence
➢ Accuracy
➢ Completeness
➢ Consistency
➢ Device efficiency
➢ Accessibility
➢ Communicativeness
➢ Self-descriptiveness
➢ Legibility
➢ Structuredness
➢ Conciseness
➢ Augment-ability
Advantages of Boehm’s Model:

• It focuses on, and tries to satisfy, the needs of the user.


• It focuses on software maintenance cost effectiveness.
Disadvantages of Boehm’s Model:

• It doesn't suggest how to measure the quality characteristics.


• It is difficult to evaluate the quality of software using the top-down approach.

ISO 9126 Standard:


ISO 9126 is an international standard published by ISO/IEC. It consists of the six characteristics given below.
The major difference between ISO 9126 and the McCall and Boehm quality models is that ISO 9126 follows a
strict hierarchy: each attribute is related to one characteristic only.
1. Functionality:
• It is an essential feature of any software product that achieves the basic purpose for which the
software is developed.
• For example, the LMS should be able to maintain book details, maintain member details, issue
book, return book, reserve book, etc.
• Functionality includes the essential features that a product must have. It includes suitability,
accuracy, interoperability and security.
2. Reliability:
• Once the functionality of the software has been completed, reliability is defined as the
capability of defect-free operation of the software for a specified period of time under given conditions.
• One of the important features of reliability is fault tolerance.
• For example, if the system crashes, then when it recovers the system should be able to continue
its normal functioning.
• Other features of reliability are maturity and recoverability.
3. Usability:
• The ease with which the software can be used for each specified function is another attribute of
ISO 9126.
• The ability to learn, understand and operate the system is the sub-characteristics of usability.
• For example, the ease with which the operation of cash withdrawal function of an ATM system
can be learned is a part of usability attribute.
4. Efficiency:
• This characteristic concerns the performance of the software and the resources used by the
software under specified conditions.
• For example, if a system takes 15 minutes to respond, then the system is not efficient.
• Efficiency includes time behaviour and resource behaviour.
5. Maintainability:
• The ability to detect and correct faults in the maintenance phase is known as maintainability.
• Maintainability is affected by the readability, understandability and modifiability of the source
code.
• The ability to diagnose the system for identification of cause of failures (analysability), the effort
required to test a software (testability) and the risk of unexpected effect of modifications
(stability) are the sub-characteristics of maintainability.
6. Portability: This characteristic refers to the ability to transfer the software from one platform or
environment to another platform or environment.
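The strict hierarchy above can be sketched as a simple lookup table. The six characteristic names follow the text; the sub-characteristic lists are drawn partly from the descriptions above and partly from the standard, and should be treated as illustrative rather than authoritative.

```python
# Sketch of the strict ISO 9126 hierarchy: each sub-characteristic belongs
# to exactly one characteristic. Sub-lists are illustrative, not exhaustive.

ISO_9126 = {
    "Functionality": ["Suitability", "Accuracy", "Interoperability", "Security"],
    "Reliability": ["Maturity", "Fault tolerance", "Recoverability"],
    "Usability": ["Understandability", "Learnability", "Operability"],
    "Efficiency": ["Time behaviour", "Resource behaviour"],
    "Maintainability": ["Analysability", "Changeability", "Stability", "Testability"],
    "Portability": ["Adaptability", "Installability", "Replaceability"],
}

def characteristic_of(sub):
    """Return the single characteristic a sub-characteristic belongs to."""
    for characteristic, subs in ISO_9126.items():
        if sub in subs:
            return characteristic
    return None

print(characteristic_of("Testability"))  # Maintainability
```

The one-to-one mapping is what distinguishes ISO 9126 from McCall and Boehm, where the same criterion can contribute to several higher-level factors.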

ISO 9000
• The International Organization for Standardization (ISO) has made several attempts to improve
quality with the ISO 9000 series. It is a well-known and widely used series.
• The ISO 9000 series of standards is not only for software, but it is a series of five related standards
that are applicable to a wide range of applications such as industrial tasks including design,
production, installation and servicing. ISO 9001 is the standard that is applicable to software quality.
• The aim of ISO 9001 is to define, understand, document, implement, maintain, monitor, improve and
control the following processes:
1. Management responsibility
2. Quality system
3. Contract review
4. Design control
5. Document control
6. Purchasing
7. Purchaser-supplied product
8. Product identification and traceability
9. Process control
10. Inspection and testing
11. Inspection, measuring and test equipment
12. Inspection and test status
13. Control of nonconforming product
14. Corrective action
15. Handling, storage, packaging and delivery
16. Quality records
17. Internal quality audits
18. Training
19. Servicing
20. Statistical techniques
• They are not specific to any one industry and can be applied to organizations of any size.
• ISO 9000 can help a company satisfy its customers, meet regulatory requirements, and achieve
continual improvement.
• It should be considered to be a first step or the base level of a quality system.
• Quality management ensures that an organization, product or service consistently functions well. It
has four main components: quality planning, quality assurance, quality control and quality
improvement.
• Quality Management System (QMS) is simply defined as a formalized system that documents the
guidelines, processes, procedures, and responsibilities for achieving policies and objectives that
control, maintain or bring in quality in the product or service as per the agreed upon standards.
• Within the ISO 9000 series, standard ISO 9001 for quality system is most applicable to software
development.
• The ISO 9000 family contains these standards:
➢ ISO 9001:2015: Quality Management Systems—Requirements
➢ ISO 9000:2015: Quality Management Systems—Fundamentals and Vocabulary (definitions)
➢ ISO 9004:2009: Quality Management Systems—Managing for the Sustained Success of an
Organization (continuous improvement)
➢ ISO 19011:2011: Guidelines for Auditing Management Systems

ISO 9001
ISO 9001 is defined as the international standard that specifies requirements for a quality management
system (QMS). ISO 9001 was first published in 1987 by the International Organization for Standardization
(ISO). The current version of ISO 9001 was released in September 2015. Here are the key components and
principles of ISO 9001:
1. Process Approach: ISO 9001 encourages organizations to focus on their core processes and how they
interact to deliver quality.
2. Customer Focus: Prioritize understanding and meeting customer needs to enhance satisfaction.
3. Leadership: Top management plays a vital role in aligning quality with the organization's strategy.
4. Involvement of People: Empower and involve employees to contribute to quality improvement.
5. Process Improvement: Continuously assess and enhance processes and the overall quality
management system.
6. Evidence-Based Decision Making: Make decisions based on data and evidence to improve quality.
7. Relationship Management: Manage relationships with suppliers and stakeholders to support quality
goals.
8. Documentation: Maintain necessary documentation for consistency and traceability.
9. Risk-Based Thinking: Identify and address risks and opportunities for better quality outcomes.
10. Audit and Certification: Organizations can seek ISO 9001 certification to demonstrate their
commitment to quality management to customers and stakeholders.
Capability Maturity Model:
It is a strategy for improving the software process, irrespective of the actual life cycle model used. The
maturity levels are:

• Maturity level 1: Initial (Ad hoc Process)


➢ This is the lowest level; at this level organizations do not have a stable environment for
following software engineering and management practices.
➢ This leads to ineffective planning and management of systems. At this level, everything is
carried out on an ad hoc basis.
➢ The success of a project depends on a competent manager and a good software project team.
➢ The absence of sound management principles cannot be compensated for even by strong
engineering principles.
➢ The capability of the software process is undisciplined at this level, as the processes are
continuously changing or being modified as software development progresses.
➢ The performance depends completely upon individuals rather than the organization's abilities.
As the process depends totally upon individuals, it changes when the staff changes.
➢ Thus, the time and cost of development are unpredictable.
• Maturity level 2: Repeatable (Basic Process Management)
➢ At level 2, procedures and policies for managing the software product are established. Planning
and managing of new software projects are not ad hoc; rather, they are based on similar past
software projects.
➢ The aim is to develop effective software process and practices that enable the organization to
repeat the successful past practices.
➢ An effective process is one which is efficient, defined, documented, measured and has the
ability for improvement. Unlike level 1, at this level the managers identify problems and take
corrective action to prevent these problems from turning into crises.
➢ The project managers keep track of cost and time. Standards are defined, and the organization
ensures that these standards are actually followed by the project team.
➢ Hence, the process maturity level 2 can be described as disciplined because it is well planned,
stable and past successful practices are repeatable.
• Maturity level 3: Defined (Process Definition)
➢ At maturity level 3, the established processes are documented. Software processes
established at this level are used by project managers and technical staff to improve the
performance more effectively.
➢ These are known as standard software processes.
➢ The organizations also conduct training programmes so that staff and managers are well
aware of the processes and understand their assigned tasks.
➢ Different projects in the organization modify the standard processes and practices to construct
their own defined software processes and practices. These defined software processes are
specific to the requirement of individual characteristics of the given project.
➢ A well-defined process includes inputs, standards, practices and procedures to carry out the
work, verification procedures, outputs and criteria of completion. The maturity of a process
at level 3 can be described as standard and consistent.
• Maturity level 4: Managed (Process Measurement)
➢ At this level, goals for the quality of the software product are set. The organization's
measurement program measures the quality of the software process and standards.
➢ Hence, the software process and standards are evaluated using measures, and this helps in
giving an indication of the trends of quality in the process and standards followed in the organization.
➢ Limits specifying thresholds are established; when these limits are exceeded, corrective
action is taken.
• Maturity level 5: Optimizing (Process Control)
➢ At level 5, the focus of the organization is on continuous improvement of the software
processes.
➢ The software projects identify the causes of software defects and evaluate software
processes to prevent defects from recurring.
➢ The process maturity at level 5 can be defined as 'continuously improving'. Improvement
occurs due to advances in existing processes and innovations using new technologies.

UNDERSTAND Tool
• Understand is a customizable integrated development environment (IDE) that enables static code
analysis through an array of visuals, documentation, and metric tools.
• It was built to help software developers comprehend, maintain, and document their source code.
• It enables code comprehension by providing flow charts of relationships and building a dictionary of
variables and procedures from a provided source code.
• Understand provides tools for metrics and reports, standards testing, documentation, searching,
graphing, and code knowledge.
• It is provided by SciTools.
Working of Understand tool
1. Parsing Source Code: Understand parses source code files from various programming languages,
extracting information about variables, functions, classes, relationships, and more.
2. Building a Database: The tool creates a database containing the extracted information. This database
stores the code's structural and semantic details, forming a foundation for analysis.
3. Visualization: Understand generates visual representations of the codebase's structure, like
dependency graphs, call trees, and inheritance hierarchies. These visuals help developers grasp the
architecture and relationships within the code.
4. Metrics and Statistics: The tool calculates code metrics like complexity, coupling, and cohesion. It
provides insights into code quality and potential areas for improvement.
5. Searching and Navigating: Developers can use Understand's search and navigation features to locate
specific code elements, references, or occurrences. This helps in understanding how different parts of
the code interact.
6. Cross-Referencing: Understand allows developers to cross-reference different elements, enabling
them to see where a function is called, where a variable is used, and so on.
7. Custom Queries: Developers can create custom queries to extract specific information from the
codebase. This is useful for generating custom reports or conducting in-depth analysis.
8. Language Support: Understand supports a wide range of programming languages, making it versatile
for analyzing different types of projects.
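Understand itself is a commercial tool, but the parse-then-measure workflow of steps 1–4 above can be sketched with Python's standard ast module: parse source code, walk the syntax tree, and compute a crude complexity metric per function. This is a toy illustration of the idea, not how Understand is actually implemented.

```python
# Toy sketch of static code analysis: parse source, walk the tree, and
# estimate each function's cyclomatic complexity as 1 + number of branch
# points (if/for/while). Real tools like Understand do far more.
import ast

SOURCE = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    return "C"
"""

def function_metrics(source):
    tree = ast.parse(source)
    metrics = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, (ast.If, ast.For, ast.While))
                           for n in ast.walk(node))
            metrics[node.name] = 1 + branches
    return metrics

print(function_metrics(SOURCE))  # {'grade': 3}
```

The elif above parses as a nested If node, so grade has two branch points and a complexity of 3; this is the same kind of per-function metric Understand stores in its database and surfaces in its reports.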

Comparison between McCall Model, Boehm Model, and ISO 9126 Model
• Origin and Era:
➢ McCall Model: Proposed in the 1970s.
➢ Boehm Model: Developed in the 1970s and revised over time.
➢ ISO 9126 Model: Developed in the 1990s and revised as ISO/IEC 25000.
• Focus:
➢ McCall Model: Emphasizes three quality categories: Product, Operation, and Transition.
➢ Boehm Model: Emphasizes a cost and schedule estimation model with a focus on the development process.
➢ ISO 9126 Model: Emphasizes software product quality attributes and characteristics.
• Categories:
➢ McCall Model: Three categories, each with several subcharacteristics - Product: 11 subcharacteristics, Operation: 6 subcharacteristics, Transition: 7 subcharacteristics.
➢ Boehm Model: Defines software quality characteristics grouped into: Product, Computer Program, and Development Process.
➢ ISO 9126 Model: Divides software quality into six main categories: Functionality, Reliability, Usability, Efficiency, Maintainability, and Portability.
• Measurement:
➢ McCall Model: Provides a qualitative framework for software assessment, with no specific metrics or measurement methods.
➢ Boehm Model: Focuses on estimating software size, effort, and schedule using metrics like lines of code and function points.
➢ ISO 9126 Model: Offers a structured approach to assessing software quality using specific metrics, such as defect density, response time, and adherence to standards.
• Use Case:
➢ McCall Model: Used as a conceptual framework for discussing software quality but lacks concrete measurement guidelines.
➢ Boehm Model: Primarily used for cost and schedule estimation and project management.
➢ ISO 9126 Model: Widely used for assessing and specifying software quality in a measurable and structured way.
• Evolution:
➢ McCall Model: Has had limited practical application and has not evolved significantly over time.
➢ Boehm Model: Has evolved into various iterations, such as COCOMO (COnstructive COst MOdel), to address different development environments.
➢ ISO 9126 Model: Has evolved into the ISO/IEC 25000 series, which includes the SQuaRE (Software Quality Requirements and Evaluation) standards.
• Modern Relevance:
➢ McCall Model: Limited practical relevance today due to its qualitative nature and lack of specific metrics.
➢ Boehm Model: Some concepts, such as the COCOMO model, are still used for cost estimation, but the original Boehm Model is less relevant.
➢ ISO 9126 Model: Still relevant today as the foundation for defining and assessing software quality attributes, particularly in the ISO/IEC 25000 standards.
Relationship Between Reliability and Maintainability
• Reliability Affects Maintainability:
➢ High Reliability: When a software system is highly reliable and has fewer defects or failures,
it often requires fewer maintenance activities. Users experience fewer problems, which
means there are fewer defects to fix and less need for corrective maintenance.
➢ Low Reliability: On the other hand, a less reliable system with frequent failures and defects
places a higher maintenance burden on the development team. Corrective maintenance
efforts become more time-consuming, and users may be frustrated by recurring issues.
• Maintainability Affects Reliability:
➢ High Maintainability: A system designed for easy maintenance allows developers to quickly
address and fix issues. It also makes it easier to implement preventive maintenance, such as
applying patches or updates to address vulnerabilities. As a result, a highly maintainable
system tends to have improved reliability over time.
➢ Low Maintainability: A system with poor maintainability can hinder the resolution of defects
or the implementation of necessary updates. This can lead to a degradation in reliability, as
unresolved issues accumulate and new problems emerge.

Why Customer Satisfaction is a Prime Concern for Developers


1. Repeat Business and Loyalty:
➢ Satisfied customers are more likely to become repeat customers. They continue to use your
product or service and may even upgrade or purchase additional offerings.
➢ Customer loyalty can lead to long-term relationships, reducing the need for costly customer
acquisition efforts. Loyal customers also tend to refer others, contributing to organic growth.
2. Competitive Advantage:
➢ In competitive markets, customer satisfaction can set your product or service apart.
Customers are more likely to choose a brand they trust and have had positive experiences
with.
➢ High customer satisfaction can be a unique selling point (USP) that helps your product stand
out in the marketplace.
3. Increased Revenue and Profitability:
➢ Customer satisfaction is directly linked to revenue and profitability. Satisfied customers are
more likely to make larger purchases and become premium subscribers.
➢ Happy customers are willing to pay a premium for quality, which can lead to higher profit
margins.
4. Word of Mouth and Referrals:
➢ Satisfied customers are your best advocates. They are more likely to recommend your
product or service to friends, family, and colleagues, leading to positive word-of-mouth
marketing.
➢ Referrals from happy customers can drive organic growth and reduce marketing expenses.
5. Positive Online Reviews and Ratings:
➢ In the age of the internet, customer satisfaction can be publicly visible through online reviews
and ratings. Positive reviews and high ratings can significantly impact potential customers'
decisions.
➢ Negative reviews, on the other hand, can harm your reputation and deter potential
customers.
Software Quality Assurance Plan
The purpose of SQAP is to define the techniques, procedures, and methodologies that will be used to
assure timely delivery of the software that meets specified requirements within project resources.
The basic objectives of SQA planning may include to:
• Determine if there is a problem.
• Evaluate the effect of change in the development process.
• Examine the effectiveness of testing, inspection, or other analysis techniques.
SQA Plan Outline:
1. Overview
1.1 Scope - The purpose of this standard is to provide uniform, minimum acceptable
requirements for preparation and content of Software Quality Assurance Plans.
2. References
3. Definitions and acronyms
3.1 Definitions
3.2 Acronyms
4. Software Quality Assurance Plan
4.1 Purpose (Section 1 of the SQAP)

• This section shall delineate the specific purpose and scope of the particular SQAP.
• It shall list the name(s) of the software items covered by the SQAP and the intended
use of the software.
• It shall state the portion of the software life cycle covered by the SQAP for each
software item specified.
4.2 Reference documents (Section 2 of the SQAP)- This section shall provide a complete list of
documents referenced elsewhere in the text of the SQAP.
4.3 Management (Section 3 of the SQAP) - This section shall describe organization, tasks, and
responsibilities.
4.3.1 Organization

• This paragraph shall depict the organizational structure that influences and
controls the quality of the software.
• This shall include a description of each major element of the organization together
with the delegated responsibilities.
4.3.2 Tasks

• That portion of the software life cycle covered by the SQAP.


• The tasks to be performed with special emphasis on software quality assurance
activities.
• The relationships between these tasks and the planned major checkpoints.
4.3.3 Responsibilities – This paragraph shall identify the specific organizational
elements responsible for each task.
4.4 Documentation (Section 4 of the SQAP)
4.4.1 Purpose

• This section shall identify the documentation governing the development,


verification and validation, use and maintenance of software.
• State how the documents are to be checked for adequacy. This shall include the
criteria and the identification of the review or audit by which the adequacy of each
document shall be confirmed, with reference to Section 6 of the SQAP.
4.4.2 Minimum Documents Requirements
4.4.2.1 SRS - The SRS shall clearly and precisely describe each of the essential
requirements (functions, performances, design constraints, and attributes) of
the software and the external interfaces.
4.4.2.2 SDD - The SDD shall depict how the software will be structured to satisfy
the requirements in the SRS. The SDD shall describe the components and
subcomponents of the software design, including databases and internal
interfaces.
4.4.2.3 Software Verification and Validation Plan (SVVP)
4.5 Standards, practices, conventions, and metrics (Section 5 of the SQAP)
4.6 Reviews and audits (Section 6 of the SQAP)
4.7 Test (Section 7 of the SQAP)
4.8 Problem reporting and corrective action (Section 8 of the SQAP)
4.9 Tools, techniques, and methodologies (Section 9 of the SQAP)
4.10 Code control (Section 10 of the SQAP)
4.11 Media control (Section 11 of the SQAP)
4.12 Supplier control (Section 12 of the SQAP)
4.13 Records collection, maintenance, and retention (Section 13 of the SQAP)
4.14 Training (Section 14 of the SQAP)
4.15 Risk management (Section 15 of the SQAP)

Software Metrics
1. Pressman explained: “A measure provides a quantitative indication of the extent, amount,
dimension, capacity, or size of some attribute of the product or process”.
2. Measurement is the act of determining a measure.
3. The metric is a quantitative measure of the degree to which a system, component, or process
possesses a given attribute.
4. Fenton defined measurement as “It is the process by which numbers or symbols are assigned to
attributes of entities in the real world in such a way as to describe them according to clearly
defined rules”.
5. Software metrics can be defined as “The continuous application of measurement-based
techniques to the software development process and its products to supply meaningful and
timely management information, together with the use of those techniques to improve that
process and its products”.

Applications of Software Metrics


1. Quality Assessment:
• Measure the quality of software code and identify areas for improvement.
• Evaluate the effectiveness of testing processes by tracking defect density, defect removal
efficiency, and defect arrival rate.
2. Productivity Measurement:
• Assess the productivity of development teams by measuring lines of code (LOC) produced
per unit of time.
• Evaluate the efficiency of development processes through metrics like function points per
person-month.
3. Code Complexity Analysis:
• Use metrics such as cyclomatic complexity to evaluate the complexity of the software code.
• Identify parts of the code that may be prone to errors or difficult to maintain.
4. Code Maintainability:
• Assess the maintainability of software by measuring code churn (frequency of changes) and
code stability.
• Identify areas of the code that may require refactoring for better maintainability.
5. Defect Tracking:
• Monitor defect trends over time to identify patterns and assess the effectiveness of defect
resolution efforts.
• Measure defect density and open defect counts to gauge the software's reliability.
6. Effort Estimation:
• Use historical metrics to estimate the effort required for future development projects.
• Employ function points or story points to estimate project size and complexity.
7. Software Size Measurement:
• Estimate and measure the size of software components using metrics like lines of code (LOC),
function points, or source lines of code (SLOC).
• Use size metrics to compare different software components or projects.
8. Risk Management:
• Identify and assess potential risks by analyzing metrics related to project progress, resource
utilization, and code quality.
• Use metrics to prioritize and manage risks throughout the software development lifecycle.
9. Process Improvement:
• Assess the effectiveness of development processes by measuring metrics like lead time, cycle
time, and throughput.
• Identify bottlenecks and areas for improvement in the development workflow.

Types of Software Testing Metrics


1. Product Metrics– Product Metrics quantify the features of a software product. First, the size and
complexity of the product, and second, the dependability and quality of the software are the
primary features that are emphasized.
2. Process Metrics– Unlike Product metrics, process metrics assess the characteristics of software
development processes. Multiple factors can be emphasized, such as identifying defects or errors
efficiently. In addition to fault detection, it emphasizes techniques, methods, tools, and overall
software process reliability.
3. Internal Metrics– Using Internal Metrics, all properties crucial to the software developer are
measured. Lines of Code (LOC) is an example of an internal metric.
4. External Metrics– Utilising External Metrics, all user-important properties are measured.
5. Project Metrics– The project managers use this metric system to monitor the project’s progress.
Utilizing past project references to generate data. Time, cost, labor, etc., are among the most
important measurement factors.

Halstead Metrics
According to Halstead, "A computer program is an implementation of an algorithm considered to be
a collection of tokens which can be classified as either operators or operands." Halstead's software
metrics are a set of measures proposed by Maurice Halstead to evaluate the complexity of a software
program. These metrics are based on the number of distinct operators and operands in the program
and are used to estimate the effort required to develop and maintain the program.
1. Program Volume (V): Proportional to program size, represents the size, in bits, of space necessary
for storing the program. This parameter is dependent on specific algorithm implementation. The
properties V, N, and the number of lines in the code are shown to be linearly connected and
equally valid for measuring relative program size.
V = N * log2(n)
The unit of measurement of volume is the common unit for size, “bits”. It is the actual size of a
program if a uniform binary encoding for the vocabulary is used. The estimated number of
delivered errors is B = Volume / 3000.
2. Program Level (L): To rank the programming languages, the level of abstraction provided by the
programming language, Program Level (L) is considered. The value of L ranges between zero and
one, with L=1 representing a program written at the highest possible level (i.e., with minimum
size). The higher the level of a language, the less effort it takes to develop a program using that
language.
L = V*/V
And estimated program level is
L^ = (2 * n2) / (n1 * N2)
3. Program Difficulty (D): This parameter shows how difficult to handle the program is. The difficulty
level or error-proneness (D) of the program is proportional to the number of the unique operator
in the program.
D = (n1/2) * (N2/n2) = 1/L
As the volume of the implementation of a program increases, the program level decreases and
the difficulty increases. Thus, programming practices such as redundant usage of operands, or
the failure to use higher-level control constructs will tend to increase the volume as well as the
difficulty.
4. Programming Effort (E): Measures the amount of mental activity needed to translate the existing
algorithm into implementation in the specified program language. The unit of measurement of E
is elementary mental discriminations.
E=V/L=D*V
5. Program Length: According to Halstead, the first Hypothesis of software science is that the length
of a well-structured program is a function only of the number of unique operators and operands.
N = N1+N2

And estimated program length is denoted by N^:


N^ = n1 * log2(n1) + n2 * log2(n2)
The following alternate expressions have been published to estimate program length:
NJ = log2 (n1!) + log2 (n2!)
NB = n1 * log2n2 + n2 * log2n1
NC = n1 * sqrt(n1) + n2 * sqrt(n2)
NS = (n * log2n) / 2
6. Potential Minimum Volume (V*): The potential minimum volume V* is defined as the volume of
the shortest program in which a problem can be coded.
V* = (2 + n2*) * log2 (2 + n2*)
Here, n2* is the count of unique input and output parameters
7. Size of Vocabulary/Token Count (n): The size of the vocabulary of a program, which consists of
the number of unique tokens used to build a program, is defined as:
n=n1+n2
where,
n=vocabulary of a program
n1=number of unique operators
n2=number of unique operands
8. Language Level: Shows the algorithm implementation program language level. The same
algorithm demands additional effort if it is written in a low-level program language. For example,
it is easier to program in Pascal than in Assembler.
lambda = V / (D * D) = L * V* = L^2 * V
9. Programming Time: Shows time (in minutes) needed to translate the existing algorithm into
implementation in the specified program language.

T = E / (f * S)

The concept of the processing rate of the human brain, developed by psychologist John Stroud,
is also used. Stroud defined a moment as the time required by the human brain to carry out the
most elementary decision. The Stroud number S is therefore the number of Stroud moments per
second, with 5 <= S <= 20. The value of S has been empirically derived from psychological
reasoning, and its recommended value for programming applications is 18, which Halstead uses.
Stroud number S = 18 moments / second
seconds-to-minutes factor f = 60
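The Halstead formulas above can be collected into one short, hedged Python sketch; the token counts passed in are hypothetical, and in practice they would come from applying the counting rules that follow:

```python
import math

def halstead(n1, n2, N1, N2, S=18, f=60):
    """Basic Halstead measures from token counts.

    n1, n2: unique operators / operands; N1, N2: total operators / operands.
    S is Stroud's number (moments per second); f converts seconds to minutes.
    """
    n = n1 + n2                                       # vocabulary
    N = N1 + N2                                       # program length
    N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length
    V = N * math.log2(n)                              # volume, in bits
    D = (n1 / 2) * (N2 / n2)                          # difficulty
    L = 1 / D                                         # program level
    E = D * V                                         # effort
    T = E / (f * S)                                   # programming time, minutes
    B = V / 3000                                      # estimated delivered errors
    return {"n": n, "N": N, "N_hat": N_hat, "V": V,
            "D": D, "L": L, "E": E, "T": T, "B": B}

# Hypothetical counts for a small function:
m = halstead(n1=10, n2=7, N1=25, N2=18)
```

Note how every derived measure flows from just the four counts, which is why the counting rules matter so much in practice.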

Counting rules for C language

1. Comments are not considered.


2. Identifier and function declarations are not considered.
3. All the variables and constants are considered operands.
4. Global variables used in different modules of the same program are counted as multiple
occurrences of the same variable.
5. Local variables with the same name in different functions are counted as unique operands.
6. Function calls are considered as operators.
7. All looping statements e.g., do {...} while ( ), while ( ) {...}, for ( ) {...}, all control statements e.g., if
( ) {...}, if ( ) {...} else {...}, etc. are considered as operators.
8. In control construct switch ( ) {case:...}, switch as well as all the case statements are considered
as operators.
9. The reserve words like return, default, continue, break, sizeof, etc., are considered as operators.
10. All the brackets, commas, and terminators are considered as operators.
11. GOTO is counted as an operator, and the label is counted as an operand.
12. The unary and binary occurrences of "+" and "-" are counted as distinct operators. Similarly,
"*" as a multiplication operator and "*" as a dereference operator are dealt with separately.
13. In the array variables such as "array-name [index]" "array-name" and "index" are considered as
operands and [ ] is considered an operator.
14. In structure variables such as "struct-name.member-name" or "struct-name -> member-name",
struct-name and member-name are considered as operands and '.', '->' are taken as operators.
The same member names in different structure variables are counted as unique operands.
15. All hash (preprocessor) directives are ignored.
Information Flow Metrics
1. Component: Any element identified by decomposing a (software) system into its constituent
parts.
2. Cohesion: The degree to which a component performs a single function.
3. Coupling: The term used to describe the degree of linkage between one component to and others
in the same system.
Basic Information Flow Model
Information Flow metrics are applied to the Components of a system design. For a component 'A'
in such a design, we can define three measures, but remember that these are the simplest models
of Information Flow.
1. 'FAN IN' is simply a count of the number of other components that can call, or pass control, to
Component A.
2. 'FANOUT' is the number of Components that are called by component A.
3. ‘INFORMATION FLOW’ is derived from the first two by using the following formula. We will call
this measure the INFORMATION FLOW index of component A, abbreviated as IF(A).
IF(A) = [FAN IN(A) x FAN OUT(A)]^2

The following is a step-by-step guide to deriving these most simple IF metrics.


1. Note the level of each Component in the system design.
2. For each Component, count the number of calls to that Component - this is the FAN IN of that
Component. Some organizations allow more than one Component at the highest level in the
design, so for Components at the highest level which should have a FAN IN of zero, assign a FAN
IN of one. Also note that a simple model of FAN IN can penalize reused Components.
3. For each Component, count the number of calls from the Component. For Components that call
no others, assign a FAN OUT value of one.
4. Calculate the IF value for each Component using the above formula.
5. Sum the IF values for all Components within each level; this is called the LEVEL SUM.
6. Sum the IF values for the total system design; this is called the SYSTEM SUM.
7. For each level, rank the Component in that level according to FAN IN, FAN OUT and IF values.
Three histograms or line plots should be prepared for each level.
8. Plot the LEVEL SUM values for each level using a histogram or line plot.
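The simple model above can be sketched in a few lines of Python; the call graph here is a made-up example, and the FAN IN / FAN OUT adjustments follow steps 2 and 3 of the guide:

```python
# Hypothetical call graph of a small design: caller -> list of callees.
calls = {
    "Main": ["A", "B", "C"],
    "A": ["C"],
    "B": ["C"],
    "C": [],
}

def if_metrics(calls):
    # FAN OUT: number of calls made; Components that call none get 1 (step 3)
    fan_out = {c: len(callees) or 1 for c, callees in calls.items()}
    # FAN IN: number of calls received; uncalled top-level Components get 1 (step 2)
    fan_in = {c: 0 for c in calls}
    for callees in calls.values():
        for callee in callees:
            fan_in[callee] += 1
    fan_in = {c: count or 1 for c, count in fan_in.items()}
    # IF(X) = [FAN IN(X) x FAN OUT(X)]^2
    return {c: (fan_in[c] * fan_out[c]) ** 2 for c in calls}

scores = if_metrics(calls)
system_sum = sum(scores.values())  # the SYSTEM SUM (step 6)
```

With this graph, "Main" and "C" score highest (9 each) because they concentrate the flow, which is exactly what the IF index is meant to surface.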

Sophisticated Information Flow Model


a = the number of components that call A.
b = the number of parameters passed to A from components higher in the hierarchy.
c = the number of parameters passed to A from components lower in the hierarchy.
d = the number of data elements read by component A.
Then:
FAN IN(A) = a + b + c + d
e = the number of components called by A;
f = the number of parameters passed from A to components higher in the hierarchy;
g = the number of parameters passed from A to components lower in the hierarchy;
h = the number of data elements written to by A.
Then:
FAN OUT(A) = e + f + g + h

The Amount of Data


One method for determining the amount of data is to count the number of entries in the cross-reference list.
A variable is a string of alphanumeric characters that is defined by a developer and that is used to
represent some value during either compilation or execution.
Halstead introduced a metric that he referred to as n2 to be a count of the operands in a program -
including all variables, constants, and labels. Thus,
n2 = VARS + unique constants + labels
Live Variable

• A variable is live from the beginning of a procedure to the end of the procedure.
• A variable is live at a particular statement only if it is referenced a certain number of statements
before or after that statement.
• A variable is live from its first to its last references within a procedure.
• It is thus possible to define the average number of live variables (LV), which is the sum of the
counts of live variables over all executable statements divided by the number of executable
statements in a procedure:
LV = (total count of live variables over all executable statements) / (number of executable statements)
• This is a complexity measure for data usage in a procedure or program.
For a program of m modules, the program-level value is the mean of the module values:
LV(program) = (LV1 + LV2 + ... + LVm) / m
where LVi is the average live variable metric computed for the ith module.

Variable Span
The size of a span indicates the number of statements that pass between successive uses of a
variable. The average span size (SP) for a program with n spans can be computed by using the
equation:
SP = (SP1 + SP2 + ... + SPn) / n
where SPi is the size of the ith span.
Program Weakness
A program consists of modules. Using the average number of live variables (LV) and the average
life of variables (gamma), the module weakness has been defined as:
WM = LV * gamma
A program is normally a combination of various modules; hence, program weakness can be a useful
measure and is defined as:
WP = (WM1 + WM2 + ... + WMm) / m
where,
WMi : weakness of the ith module
WP: weakness of the program
m: number of modules in the program
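The data-usage measures above reduce to simple averages. The sketch below assumes WM = LV * gamma (with gamma the average life of a variable) and WP as the mean of the module weaknesses; the per-statement live-variable counts are hypothetical:

```python
def avg_live_variables(live_counts):
    # LV: total live-variable count over all executable statements,
    # divided by the number of executable statements in the module
    return sum(live_counts) / len(live_counts)

def module_weakness(live_counts, avg_life):
    # Assumed form: WM = LV * gamma, gamma being the average life of a variable
    return avg_live_variables(live_counts) * avg_life

def program_weakness(module_weaknesses):
    # WP: mean of the module weaknesses WM1..WMm
    return sum(module_weaknesses) / len(module_weaknesses)

# Hypothetical per-statement live-variable counts for two modules:
wm1 = module_weakness([2, 3, 1, 2], avg_life=4.0)  # LV = 2.0
wm2 = module_weakness([1, 1, 2], avg_life=3.0)     # LV = 4/3
wp = program_weakness([wm1, wm2])
```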
Object-Oriented Metrics
Terminologies:

1. Coupling Metrics
i. Response for a Class (RFC): Number of methods (internal and external) that can be invoked in response to a message sent to an object of the class.
ii. Data Abstraction Coupling (DAC): Number of Abstract Data Types in a class.
iii. Coupling between Objects (CBO): Number of other classes to which it is coupled.
iv. Message Passing Coupling (MPC): Number of send statements defined in a class.
v. Coupling Factor (CF): Ratio of actual number of coupling in the system to the max. possible
coupling.
2. Cohesion Metrics
i. Lack of Cohesion in Methods (LCOM): The number of pairs of methods with no common attribute usage minus the number of pairs of methods that share at least one attribute (taken as zero if the difference is negative).
ii. Tight Class Cohesion (TCC): Percentage of pairs of public methods of the class with common
attribute usage.
iii. Loose Class Cohesion (LCC): Same as TCC except that this metric also considers indirectly
connected methods.
iv. Information based Cohesion (ICH): Number of invocations of other methods of the same class,
weighted by the number of parameters of the invoked method.
3. Inheritance Metrics
i. Depth of Inheritance Tree (DIT): The maximum length from the node to the root of the tree
ii. Average Inheritance Depth (AID): It is calculated as-
AID = (sum of inheritance depths of all classes) / (total number of classes)
iii. Number of Children (NOC): It counts immediate subclasses.


iv. Number of Ancestors (NOA): It counts number of base classes of a class.
v. Number of Parents (NOP): It counts the number of classes that a class directly inherits
(multiple inheritance).
vi. Number of Descendants (NOD): It counts number of subclasses of a class (both directly and
indirectly).
vii. Number of Methods Overridden (NMO): When a method in a subclass has the same name
and type (signature) as in the superclass, then the method in the superclass is said to be
overridden by the method in the subclass.
viii. Number of Methods Added (NMA): It counts the number of new methods (neither overridden
nor inherited) added in a class.
ix. Number of Methods Inherited (NMI): It counts the number of methods a class inherits from
its super classes.
x. Specialization Index (SIX): It is calculated as-
SIX = (NMO * DIT) / (NMO + NMA + NMI)
xi. Class to Leaf Depth (CLD): It measures the maximum number of levels in the inheritance
hierarchy which is below the class.
xii. Attribute Inheritance Factor (AIF): Ratio of the sum of inherited attributes in all classes of the
system to the total number of attributes for all classes.

xiii. Method Inheritance Factor (MIF): Ratio of the sum of inherited methods in all classes of the
system to the total number of methods for all classes.

4. Size Metrics
i. Number of Methods per Class (NOM): It is defined as the number of local methods defined in
a class.
ii. Number of Attributes per Class (NOA): It is defined as the sum of the number of instance
variables and number of class variables.
iii. Weighted Methods per Class (WMC): It is the sum of the complexities of all methods
implemented within a class.
iv. SIZE1: It is the number of semicolons in a class.
v. SIZE2: It is NOA + NOM.
5. Reuse Metrics
i. Reuse Ratio (U): It is given as-
U = (number of superclasses) / (total number of classes)
ii. Specialization Ratio (S): It is given as-
S = (number of subclasses) / (number of superclasses)
iii. Function Template Factor (FTF): It is defined as the ratio of the number of functions using
function templates to the total number of functions.

iv. Class Template Factor (CTF): It is defined as the ratio of the number of classes using class
templates to the total number of classes.
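Several of the inheritance metrics can be illustrated directly with Python's class machinery; the Shape hierarchy below is a made-up example, and `object` is taken as the root at depth 0:

```python
# A made-up hierarchy used only to illustrate the inheritance metrics.
class Shape: ...
class Polygon(Shape): ...
class Triangle(Polygon): ...
class Circle(Shape): ...

def dit(cls):
    # Depth of Inheritance Tree: maximum length from the class to the root
    # (Python's implicit root `object` sits at depth 0 here)
    return max((dit(base) + 1 for base in cls.__bases__), default=0)

def noc(cls):
    # Number of Children: immediate subclasses only
    return len(cls.__subclasses__())

def nod(cls):
    # Number of Descendants: direct and indirect subclasses
    children = cls.__subclasses__()
    return len(children) + sum(nod(c) for c in children)
```

For example, `dit(Triangle)` is 3 (Triangle -> Polygon -> Shape -> object), while `noc(Shape)` counts only Polygon and Circle, and `nod(Shape)` also includes Triangle.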

Software Quality metrics based on Defects


According to IEEE/ANSI standard, defect can be defined as "an accidental condition that causes a unit
of the system to fail to function as required".
A fault can cause many failures; hence, there is no one-to-one correspondence between faults and
failures.
1. Defect density: Defect density can be measured as the ratio of number of defects encountered
to the size of the software. Size of the software is usually measured in terms of thousands of
lines of code (KLOC) and is given as:
Defect Density = Number of Defects/KLOC
2. Phase based defect density: It is an extension of the defect density metric. Defect density can
be tracked at various phases of software development, including verification activities such as
reviews, inspections, and formal reviews before the start of validation testing.
3. Defect Removal Effectiveness (DRE): Latent defects for a given phase are not known; thus, they
are estimated as the sum of defects removed during the phase and defects detected later. The
higher the value of this metric, the more efficient and effective is the process followed in a
particular phase.
DRE = Defects removed in a given life cycle phase/Latent defects
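Both defect metrics are simple ratios; a minimal sketch with hypothetical defect and size numbers:

```python
def defect_density(defects, loc):
    # Defect Density = number of defects / KLOC
    return defects / (loc / 1000)

def dre(removed_in_phase, detected_later):
    # Latent defects are estimated as defects removed in the phase
    # plus defects detected in later phases
    return removed_in_phase / (removed_in_phase + detected_later)

dd = defect_density(30, 15000)  # 30 defects in 15 KLOC -> 2.0 per KLOC
eff = dre(80, 20)               # 80 removed now, 20 found later -> 0.8
```

A DRE of 0.8 means the phase removed 80% of the defects estimated to have been present when it began.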

Usability Metrics
Usability metrics measure the ease of use, learnability and user satisfaction for a given software.
1. Task Effectiveness: It is commonly measured as-
Task Effectiveness = (Quantity * Quality) / 100
where quantity measures the amount of the task completed by a user and quality measures the
degree to which the output produced by the user satisfies the goals of the task. Both quantity
and quality are represented as percentages.
2. Temporal Efficiency: It is measured as-

3. Productive Period: It is measured as-

4. Relative User Efficiency: It is measured as-

5. Time required to learn a system


6. Total increase in productivity by the use of the system
7. Response time
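The formulas for these metrics appear as images in the original notes; the sketch below assumes the common MUSiC-style definitions (task effectiveness as quantity times quality over 100, efficiency as effectiveness per unit time), so treat it as an illustration rather than the notes' exact formulas:

```python
def task_effectiveness(quantity, quality):
    # Quantity and quality are percentages; assumed form: (quantity * quality) / 100
    return quantity * quality / 100

def temporal_efficiency(effectiveness, task_time):
    # Assumed form: effectiveness achieved per unit of task time
    return effectiveness / task_time

def relative_user_efficiency(user_efficiency, expert_efficiency):
    # A user's efficiency as a percentage of an expert's on the same task
    return 100.0 * user_efficiency / expert_efficiency
```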

Testing Metrics
Testing metrics can be used to monitor the status of testing and provide an indication of the
quality of the product. Testing coverage metrics can be used to monitor the amount of testing
being done. These include the following basic coverage metrics:
1. Statement coverage metric describes the degree to which statements are covered while testing.
2. Branch coverage metric determines whether each branch in the source code has been tested.
3. Operation coverage metric determines whether every operation of a class has been tested.
4. Condition coverage metric determines whether each condition is evaluated both for true and for
false.
5. Path coverage metric determines whether each path of the control flow graph has been exercised
or not.
6. Loop coverage metric determines how many times a loop is covered.
7. Multiple condition coverage metric determines whether every possible combination of
conditions is covered.
8. Test Focus (TF): It is measured as-

9. Fault Coverage Metric (FCM): It is measured as-

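All of the coverage metrics above share the same covered-over-total form; a minimal sketch with hypothetical counts from one test-suite run:

```python
def coverage_pct(covered, total):
    # Generic covered/total ratio, expressed as a percentage; the same
    # form serves statement, branch, condition, and path coverage.
    return 100.0 * covered / total

# Hypothetical results from one test-suite run:
statement_cov = coverage_pct(covered=45, total=50)  # 90.0
branch_cov = coverage_pct(covered=14, total=20)     # 70.0
```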
Software Quality Management


Software quality management (SQM) is a management process that aims to develop and manage
the quality of software so as to ensure that the product meets the quality standards expected by
the customer while also meeting any necessary regulatory and developer requirements. SQM
involves the implementation of processes, standards, and methodologies to plan, monitor, and
control the quality of software products.
Quality Management Activities
1. Quality Planning
• Define the quality objectives and standards for the software project.
• Identify quality requirements based on user needs and project specifications.
• Develop a Quality Management Plan outlining the processes and resources needed for
quality assurance.
• Establish measurable quality goals and criteria for acceptance.
2. Quality Assurance
• Ensure that the defined processes are being followed and result in the desired quality of the
software.
• Conduct process audits to verify compliance with defined processes.
• Implement formal reviews and inspections to identify defects early in the development cycle.
• Perform static analysis of code and other project artifacts.
• Conduct training and awareness programs to enhance the skills of the development team.
3. Quality Control
• Verify and validate that the software product meets specified quality requirements.
• Execute dynamic testing to identify and correct defects.
• Use testing techniques such as unit testing, integration testing, system testing, and
acceptance testing.
• Monitor and measure key quality indicators during the development process.
• Implement automated testing tools to increase testing efficiency.
