
Software Project Management

5. SOFTWARE QUALITY ASSURANCE


Software quality assurance is a series of activities undertaken throughout the software
engineering process; it spans the entire software engineering life cycle. The objective of the
software quality assurance process is to produce high-quality software that meets customer
requirements by applying various procedures and standards.
Quality is defined as conformance to the stated and implied needs of the customer.
Quality also refers to the measurable characteristics of a software product and these can be
compared based on a given set of standards. In the same way, software quality can be defined
as conformance to explicitly stated and implicitly stated functional requirements. Here, the
explicitly stated functional requirement can be derived from the requirements stated by the
customer, which are generally documented in some form. Implicit requirements are those that
are not stated explicitly but are nevertheless intended. They include the standards that a
software development company adheres to during the development process, as well as
expectations such as good maintainability.
Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements, and is maintainable. However, as discussed above, quality is a subjective term.
It depends on who the ‘customer’ is and on their overall influence in the scheme of things.
Each type of ‘customer’ will have its own slant on ‘quality’; the end-user, for example, might
define quality as something user-friendly and bug-free.
Good quality software satisfies both explicit and implicit requirements. Software
quality is a complex mix of characteristics that varies from application to application and
from customer to customer.
5.1 ATTRIBUTES OF QUALITY
The following are some of the attributes of quality:
➢ Auditability: The ease with which the software can be checked for conformance to a
standard.
➢ Compatibility: The ability of two or more systems or components to perform their
required functions while sharing the same hardware or software environment.
➢ Completeness: The degree to which all of the software’s required functions and
design constraints are present and fully developed in the requirements specification,
design document, and code.
➢ Consistency: The degree of uniformity, standardization, and freedom from
contradiction among the documents or parts of a system or component.
➢ Correctness: The degree to which a system or component is free from faults in its
specification, design, and implementation. The degree to which software,
documentation, or other items meet specified requirements.
➢ Feasibility: The degree to which the requirements, design, or plans for a system or
component can be implemented under existing constraints.

➢ Modularity: The degree to which a system or computer program is composed of
discrete components such that a change to one component has minimal impact on
other components.
➢ Predictability: The degree to which the functionality and performance of the
software are determinable for a specified set of inputs.
➢ Robustness: The degree to which a system or component can function correctly in the
presence of invalid inputs or stressful environmental conditions.
➢ Structuredness: The degree to which the SDD (System Design Document) and code
possess a definite pattern in their interdependent parts. This implies that the design
has proceeded in an orderly and systematic manner (e.g., top-down, bottom-up). The
modules are cohesive and the software has minimized coupling between modules.
➢ Testability: The degree to which a system or component facilitates the establishment
of test criteria and the performance of tests to determine whether those criteria have
been met.
➢ Traceability: The degree to which a relationship can be established between two or
more products of the development process. The degree to which each element in a
software development product establishes its reason for existing (e.g., the degree to
which each element in a bubble chart references the requirement that it satisfies). For
example, the system’s functionality must be traceable to user requirements.
➢ Understandability: The degree to which the meaning of the SRS, SDD, and code is
clear and understandable to the reader.
➢ Verifiability: The degree to which the SRS, SDD, and code have been written to
facilitate verification and testing.
5.2 Causes of Error in Software
o Misinterpretation or miscommunication of customers’ requirements
o Incomplete/erroneous system specification
o Error in logic
o Not following programming/software standards
o Incomplete testing
o Inaccurate documentation/no documentation
o Deviation from specification
o Error in data modeling and representation.
5.3 QA ACTIVITIES
Measurement of Software Quality (Quality metrics)
Software quality is a set of characteristics that can be measured in all phases of software
development.
5.3.1 Defect metrics
• Number of design changes required
• Number of errors in the code
• Number of bugs during different stages of testing
• Reliability metrics: these measure the mean time to failure (MTTF), i.e., the expected time
between two successive failures.
Many models have been developed to determine software defects/failures. All these models
describe the occurrence of defects/failures as a function of time. This allows us to define
reliability. These models are based on certain assumptions, which are described below:
➢ The failures are independent of each other, i.e., one failure has no impact on another
failure (s).
➢ The inputs are random samples.
➢ Failure intervals are independent and all software failures are observed.
➢ The time between failures is exponentially distributed.
The following formula gives the cumulative number of defects observed at time ‘t’:
D(t) = Td (1 – e^(–bct))
where
D(t) = cumulative number of defects observed at time t
Td = total number of defects
‘b’ and ‘c’ are constants that depend on historical data of similar software for which the
model is being applied.
The mean time to failure (MTTF) can then be found as:
MTTF(t) = e^(bct) / (c Td)
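As an illustration, the following Python sketch evaluates both formulas. This is a minimal sketch only: the values of Td, b, and c below are hypothetical, whereas in practice they would be fitted from historical defect data of similar projects.

    import math

    def cumulative_defects(t, total_defects, b, c):
        # D(t) = Td * (1 - e^(-b*c*t)): defects observed up to time t
        return total_defects * (1 - math.exp(-b * c * t))

    def mean_time_to_failure(t, total_defects, b, c):
        # MTTF(t) = e^(b*c*t) / (c * Td)
        return math.exp(b * c * t) / (c * total_defects)

    # Hypothetical values: 100 total defects; b and c fitted from past projects
    Td, b, c = 100, 0.05, 0.5
    for t in (10, 20, 40):
        print(t, round(cumulative_defects(t, Td, b, c), 1),
              round(mean_time_to_failure(t, Td, b, c), 4))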
5.3.2. Maintainability metrics
Complexity metrics are used to determine the maintainability of software.
The complexity of software can be measured from its control flow.
Consider the graph in Figure 1. Each node represents one program segment and edges
represent the control flow. The complexity of the software module represented by the graph
can be given by simple formulae of graph theory as follows:
V(G) = e – n + 2, where
V(G) = Cyclomatic complexity of the program
e = number of edges
n = number of nodes

Figure 1: A Software Module

Applying the above equation, the complexity V(G) of the graph in Figure 1 is found to be 3.
Cyclomatic complexity has been related to programming effort, maintenance effort, and
debugging effort. Although cyclomatic complexity measures control-flow complexity, it says
little about a program that contains few or no conditions. The information flow within a
program can provide an additional measure of program complexity.
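A minimal Python sketch of this calculation is given below. The control-flow graph used here is hypothetical (it is not the graph of Figure 1); nodes represent program segments and edges represent transfers of control between them.

    def cyclomatic_complexity(edges, nodes):
        # V(G) = e - n + 2 for a single connected control-flow graph
        return len(edges) - len(nodes) + 2

    # Hypothetical control-flow graph of a small module
    nodes = {1, 2, 3, 4, 5}
    edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2)]
    print(cyclomatic_complexity(edges, nodes))   # 6 - 5 + 2 = 3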
5.3.3 Integrity.
Software integrity has become increasingly important in the age of hackers and firewalls.
This attribute measures a system's ability to withstand attacks (both accidental and
intentional) to its security. Attacks can be made on all three components of software:
programs, data, and documents. To measure integrity, two additional attributes must be
defined: threat and security. The threat is the probability (which can be estimated or derived
from empirical evidence) that an attack of a specific type will occur within a given time.
Security is the probability (which can be estimated or derived from empirical evidence) that
the attack of a specific type will be repelled.
The integrity of a system can then be defined as
Integrity = summation [1 – threat × (1 – security)]
where threat and security are summed over each type of attack.
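For example, with illustrative values of threat = 0.25 and security = 0.95 for a single attack type, integrity works out to 1 – 0.25 × (1 – 0.95) = 0.9875, i.e., roughly 0.99. The following minimal Python sketch assumes one (threat, security) pair per attack type; the numbers are purely illustrative.

    def integrity(attack_types):
        # attack_types: list of (threat, security) pairs, one per type of attack
        return sum(1 - threat * (1 - security) for threat, security in attack_types)

    print(integrity([(0.25, 0.95)]))   # 0.9875, i.e., roughly 0.99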
5.3.4 Correctness.
A program must operate correctly or it provides little value to its users. Correctness is the
degree to which the software performs its required function. The most common measure for
correctness is defects per KLOC, where a defect is defined as a verified lack of conformance
to requirements. When considering the overall quality of a software product, defects are those
problems reported by a user of the program after the program has been released for general
use. For quality assessment purposes, defects are counted over a standard period of time,
typically one year.
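As a simple illustration with hypothetical figures, defects per KLOC can be computed as follows:

    def defects_per_kloc(defects_reported, lines_of_code):
        # verified defects reported per thousand lines of code over the assessment period
        return defects_reported / (lines_of_code / 1000)

    # Hypothetical figures: 96 verified defects reported in one year
    # against a 64,000-line product
    print(defects_per_kloc(96, 64000))   # 1.5 defects per KLOC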
5.3.5 Usability
The catchphrase "user-friendliness" has become ubiquitous in discussions of software
products. If a program is not user-friendly, it is often doomed to failure, even if the functions
that it performs are valuable. Usability is an attempt to quantify user-friendliness and can be
measured in terms of four characteristics:
(1) The physical and/or intellectual skill required to learn the system,
(2) The time required to become moderately efficient in the use of the system,
(3) The net increase in productivity (over the approach that the system replaces) measured
when the system is used by someone who is moderately efficient, and
(4) A subjective assessment (sometimes obtained through a questionnaire) of users' attitudes
toward the system.
5.3.6 Defect Removal Efficiency:
A quality metric that provides benefit at both the project and process level is defect removal
efficiency (DRE). In essence, DRE is a measure of the filtering ability of quality assurance
and control activities as they are applied throughout all process framework activities.
When considered for a project as a whole, DRE is defined in the following manner:
DRE = E/(E + D)
where E is the number of errors found before delivery of the software to the end-user and D
is the number of defects found after delivery.
The ideal value for DRE is 1; that is, no defects are found after delivery of the software.
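For example, with illustrative figures of 90 errors found before delivery and 10 defects reported after delivery, DRE = 90 / (90 + 10) = 0.9. A minimal sketch of the calculation:

    def defect_removal_efficiency(errors_before_delivery, defects_after_delivery):
        # DRE = E / (E + D)
        return errors_before_delivery / (errors_before_delivery + defects_after_delivery)

    print(defect_removal_efficiency(90, 10))   # 0.9: 90% of defects caught before release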
5.4 Important Parameters for Measurement of Software Quality
➢ The extent to which the software satisfies user requirements; these requirements form the
foundation for measuring software quality.
➢ Use of specific standards for building the software product. Standards could be the
organization’s standards or standards referred to in a contractual agreement.
➢ Implicit requirements are not stated by the user but are essential for quality software.


5.5 Objective of Software Quality Assurance


The aim of the Software Quality Assurance process is to develop a high-quality
software product. The purpose of Software Quality Assurance is to provide management with
appropriate visibility into the process being used by the software project and into the
products being built.
Software Quality Assurance involves reviewing and auditing the software products
throughout the development lifecycle to verify that they conform to explicit requirements as
well as to implicit requirements such as applicable procedures and standards. Compliance with agreed-
upon standards and procedures is evaluated through process monitoring, product evaluation,
and audits. SQA is a planned, coordinated, and systematic action necessary to provide
adequate confidence that a software product conforms to established technical requirements.
Software Quality Assurance is a set of activities designed to evaluate the process by which
software is developed and/or maintained.
5.6 The Process of Software Quality Assurance
➢ Defines the requirements for software-controlled system fault/failure detection, isolation,
and recovery;
➢ Reviews the software development processes and products for software error prevention
and/or controlled change to reduced-functionality states; and
➢ Defines the process for measuring and analyzing defects as well as reliability and
maintainability factors.
Software engineers, project managers, customers, and Software Quality Assurance groups are
involved in software quality assurance activity. The role of various groups in software quality
assurance is as follows:
➢ Software engineers: They ensure that appropriate methods are applied to develop the
software, perform testing of the software product and participate in formal technical
reviews.
➢ SQA group: They assist the software engineer in developing a high-quality product.
They plan quality assurance activities and report the results of the review.
5.7 What is a Software Review?
A software review can be defined as a filter for the software engineering process. The purpose
of any review is to discover errors in the analysis, design, coding, testing, and implementation
phases of the software development cycle. The other purpose of a review is to see whether
procedures are applied uniformly and in a manageable manner.
Reviews are basically of two types: informal technical reviews and formal technical reviews.
➢ Informal Technical Review: Informal meeting and informal desk checking.
➢ Formal Technical Review: A formal software quality assurance activity through
various approaches such as structured walkthroughs, inspection, etc.
5.7.1 FORMAL TECHNICAL REVIEW: FTR is a software quality assurance activity
performed by software engineering practitioners to improve software product quality. The
product is scrutinized for completeness, correctness, consistency, technical feasibility,
efficiency, and adherence to the standards and guidelines established by the client organization.
The different types of formal technical review are the following:
(A) Structured walkthrough is a review of the formal deliverables produced by the project
team. Participants of this review typically include end-users and management of the client
organization, management of the development organization, and sometimes auditors, as well
as members of the project team. As such, these reviews are more formal, with a predefined
agenda, which may include presentations, overheads, etc.
(B) An inspection is more formalized than a ‘Structured walkthrough’, typically with 3-8
people. The subject of the inspection is typically a document such as a requirements
specification or a test plan, and the purpose is to find problems and see what’s missing, not to
suggest rectification or fixing. The result of the inspection meeting should be a written report.
(C) Verification: Verification generally involves reviews to evaluate whether correct
methodologies have been followed by checking documents, plans, code, requirements, and
specifications. This can be done with checklists, walkthroughs, etc.
(D) Validation: Validation typically involves actual testing and takes place after verifications
are completed.
5.7.2 Objectives of Formal Technical Review
o To uncover errors in logic or implementation
o To ensure that the software has been represented according to predefined standards
o To ensure that the software under review meets the requirements
o To make the project more manageable.
5.8 Software Quality Standards
➢ Software standards help an organization adopt a uniform approach to designing,
developing, and maintaining software.
➢ There are several standards for software quality and software quality assurance.
➢ Once an organization decides to establish a software quality assurance process, standards
may be followed to establish and operate different software development activities and
support activities.
➢ Many organizations have developed standards on quality in general and on software
quality in particular.
5.8.1 CAPABILITY MATURITY MODEL INTEGRATION (CMMI)
The Software Engineering Institute (SEI) has developed what was called the ‘Capability
Maturity Model’ (CMM), now called the Capability Maturity Model Integration (CMMI), to
help organizations improve their software development processes.
It is a model of 5 levels of process maturity that determine the effectiveness of an
organization in delivering quality software. Organizations can receive CMMI ratings by
undergoing assessments by qualified auditors. The organizations are rated as CMM Level 1,
CMM Level 2, etc. by evaluating their organizational process maturity.
➢ SEI-CMM Level 1: Characterised by an unorganized, chaotic process, periodic panics, and
heroic efforts by individuals to complete projects. Successes depend on individuals and
may not be repeatable.
➢ SEI-CMM Level 2: Software project tracking, requirements management, realistic
planning, and configuration management processes are in place; successful practices can
be repeated.
➢ SEI-CMM Level 3: Standard software development and maintenance processes are
integrated throughout an organization; a Software Engineering Process Group is in place
to oversee software processes, and training programs are used to ensure understanding
and compliance.
➢ SEI-CMM Level 4: Metrics are used to track productivity, processes, and products.
Project performance is predictable, and quality is consistently high.
➢ SEI-CMM Level 5: The focus is on continuous process improvement. The impact of new
processes and technologies can be predicted and effectively implemented when required.
5.9 THE INTERNATIONAL ORGANISATION FOR STANDARDISATION (ISO)
➢ The International Organisation for Standardisation (ISO) developed the ISO 9001:2000
standard (which replaces the previous set of three standards of 1994) that helps the
organization to establish, operate, maintain and review a quality management system that
is assessed by outside auditors.
➢ The standard is generic and can be applied to any organization involved in production,
manufacturing, or services, including an organization providing software services.
➢ It covers documentation, design, development, production, testing, installation, servicing,
and other processes.
➢ It may be noted that ISO certification does not necessarily indicate quality products.
➢ It only indicates that the organization follows a well-documented established process.

Table 1: List of standards


TESTING VS. QUALITY CONTROL & ASSURANCE AND AUDIT


We need to understand that software testing is different from software quality assurance,
software quality control, and software auditing.
➢ Software quality assurance - This is the monitoring of the software development process,
by which it is ensured that all measures are taken as per the standards of the
organization. This monitoring is done to make sure that proper software development
methods were followed.
➢ Software quality control - This is a system to maintain the quality of software products.
It may include functional and non-functional aspects of a software product, which
enhance the goodwill of the organization. This system makes sure that the customer
receives a quality product for their requirement and that the product is certified as ‘fit for
use’.
➢ Software audit - This is a review of the procedure used by the organization to develop
the software. A team of auditors, independent of the development team, examines the
software process, procedures, requirements, and other aspects of the SDLC. The purpose of a
software audit is to check that both the software and its development process conform to
standards, rules, and regulations.
CASE TOOLS:
CASE tools are the software engineering tools that permit collaborative software
development and maintenance. CASE tools support almost all the phases of the software
development life cycle such as analysis, design, etc., including umbrella activities such as
project management, configuration management, etc. CASE tools, in general, support
standard software development methods such as Jackson Structured Programming or the
Structured Systems Analysis and Design Method. CASE tools follow a typical process for
the development of the system, for example, for developing database applications, CASE
tools may support the following development steps:
• Creation of data flow and entity models
• Establishing a relationship between requirements and models
• Development of top-level design
• Development of functional and process description
• Development of test cases.
Based on the above specifications, CASE tools can help in automatically generating
database tables, forms, reports, and user documentation.
Thus, CASE tools:
• support collaborative development of software systems and may improve the quality of the
software
• help in automating the software development life cycle through the use of certain standard
methods
• create an organization-wide environment that minimizes repetitive work
• help developers to concentrate more on top-level and more creative problem-solving tasks
• support and improve the quality of documentation (completeness and non-ambiguity), the
testing process (by providing automated checking), project management, and software
maintenance.


Figure 2: The CASE tool environment


Most of the CASE tools include one or more of the following types of tools:
• Analysis tools
• Repository to store all diagrams, forms, models and report definitions, etc.
• Diagramming tools
• Screen and report generators
• Code generators
• Documentation generators
• Reverse Engineering tools (that take source code as input and produce graphical and textual
representations of program design-level information)
• Re-engineering tools (that take source code as input, analyze it, and interactively alter an
existing system to improve quality and/or performance).
Categories of CASE Tools: Based on their activities, CASE tools are sometimes classified
into the following categories:
1. Upper CASE tools
2. Lower CASE tools
3. Integrated CASE tools.
Upper CASE: Upper CASE tools mainly focus on the analysis and design phases of software
development. They include tools for analysis modeling, reports, and form generation.
Lower CASE: Lower CASE tools support the implementation of system development. They
include tools for coding, configuration management, etc.
Integrated CASE Tools: Integrated CASE tools provide linkages between the lower and
upper CASE tools, thus creating a cohesive environment for software development in which
code corresponding to a design developed in an upper CASE tool may be generated
automatically by lower CASE tools.
SOFTWARE QUALITY AND CASE TOOLS
Software quality is sacrificed by many developers for more functionality, faster development,
and lower cost. However, one must realize that a good quality product enhances the speed of
software development. It reduces the cost and allows enhancement and functionality with
ease as it is a better-structured product. You need to pay for poor quality in terms of more
maintenance time and cost. Can good quality be built into software simply by enhancing
testing? The answer is NO. A high-quality software development process is most
important for the development of quality software products. Software quality involves
functionality, usability, reliability, performance, scalability, support, and security.
Integrated CASE tools:
• help in the development of a quality product as they support a standard methodology and
process of software development
• support an exhaustive change management process
• contain easy-to-use visual modeling tools incorporating continuous quality assurance.
Quality in such tools is addressed in all life cycle phases, viz., analysis, design, development,
test, and deployment. Quality is essential in all the life cycle phases:
Analysis: A poor understanding of the analysis requirements may lead to a poor product.
CASE tools help in reflecting the system requirements accurately and in a simple way. They
also support requirements analysis and coverage checking against the models that have been
created, and they help in resolving ambiguity in the requirements, thus producing high-quality
requirements.
Design: In design, the prime focus of quality starts with the testing of the architecture of
the software. CASE tools help in detecting, isolating, and resolving structural deficiencies
during the design process. On average, a developer makes 100 to 150 errors for every
thousand lines of code. Assuming only 5% of these errors are serious, software of ten
thousand lines of code may still contain around 50 to 75 serious coding errors. One of the
newer software development processes, the Agile process, helps in reducing such problems
by asking developers to design their test items first, before coding.
A very good approach, supported by CASE tools especially for the development of C,
C++, Java, or .NET code, is to provide a set of automatic run-time tools for the
development of reliable and high-performance applications.
Testing: Functionality and performance testing is an integral part of ensuring a high-
quality product. CASE tools support automated testing tools that help in testing the software,
thus helping to improve the quality of testing. CASE tools enhance the speed, breadth,
and reliability of these test procedures. Such tools are especially important in the case of
web-based systems, where scalability and reliability are two major design issues.
Deployment: After proper testing, the software goes through the deployment phase, in
which the system is made operational. A system failure should not result in a complete
failure of the software on restart; CASE tools also help in this phase. In addition, they
support configuration management to handle any change that has to be made to the
software.
Quality is teamwork: It involves integrating the workflows of various individuals and
establishing traceability and communication of information, all of which can be achieved by
sharing work documents and keeping them under configuration control.


5.10 Software Reliability:


Unlike the reliability of a hardware device, software reliability is difficult to
measure. In the software engineering environment, software reliability is defined as the
probability that software will provide failure-free operation in a fixed environment for a fixed
interval of time.
Software reliability is typically measured per unit of time, whereas the probability of failure
is generally time-independent. These two measures can be easily related if you know the
frequency with which inputs are executed per unit of time. Mean-time-to-failure (MTTF) is
the average interval of time between failures; this is also sometimes referred to as Mean-time-
before-failure.
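Under the constant failure rate (exponential) assumption noted in Section 5.3.1, these quantities are related by R(t) = e^(–λt) and MTTF = 1/λ, where λ is the failure rate. The following minimal Python sketch uses an assumed, purely illustrative failure rate:

    import math

    def reliability(t, failure_rate):
        # probability of failure-free operation for an interval of length t,
        # assuming exponentially distributed inter-failure times
        return math.exp(-failure_rate * t)

    failure_rate = 0.002                     # assumed: 0.002 failures per hour
    print(1 / failure_rate)                  # MTTF = 500 hours
    print(reliability(100, failure_rate))    # ~0.82 probability of no failure in 100 hours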
