
Chapter 10

VERIFICATION & VALIDATION


Outline
 Verification & Validation
 Traceability
 Reviews
 Inspections
 Walkthroughs
 Unit Testing
 Integration Testing
 System Testing
 Validation Testing
Verification & Validation
Verification: "Are we building the product right?"
The software should conform to its specification.
Means that each stage of development follows defined processes and standards.
Quality is the goal.
Validation: "Are we building the right product?"
The software should do what the user really requires.
Means that the overall product and process meet user needs.
User satisfaction is the goal.
Traceability
IEEE [1994] provides the following compound definition of traceability:
"The degree to which a relationship can be established between two or more products of the
development process; for example, the degree to which the requirements and design of a given
software component match." (IEEE 610.12-1990)
Requirement Traceability
Refers to the ability to describe and follow the life of a requirement in both the forward and
backward directions.
That is, from its origins, through its development and specification, to its subsequent
deployment and use, and through all periods of ongoing refinement and iteration in any of
these phases.
Some requirements are volatile.
The importance of traceability increases where the rate of change is high.
Classification of Requirement Traceability
Forward traceability
requires that each input to a phase be traceable to an output of that phase.
Example: a requirement traces forward to its test cases.
Backward traceability
requires that each output of a phase be traceable to an input of that phase. Outputs
that cannot be traced to inputs are extra, unless it is acknowledged that the inputs
themselves were incomplete.
Example: a test case traces back to its requirements.
Traceability Table
Requirements traceability information can be kept in traceability tables, each table relating
requirements to one or more aspects of the system or its environment
Tracing Requirements in the System Definition Domain
Tracing User Needs to Features
Defining the features of a system that meets user needs is the next step after defining needs,
and it can be helpful to continually relate how the user needs are addressed by the features
of your proposed solution.
We can do so via a simple table, or traceability matrix, similar to the one shown in Table-1
Table -1. Traceability Matrix: User Needs versus Features
In Table-1, we've listed all the user needs we identified down the left column. In the row across
the top, we've listed all the application features.
Once the rows (needs) and columns (features) are defined, we simply put an X in the
appropriate cell(s) to record the fact that a specific feature has been defined to support a user
need.
After you've recorded all known need–feature relationships, examine the traceability matrix for
potential indications of error.
If inspection of a row fails to detect any Xs, it is possible that no feature has yet been defined to
address that user need. These potential red flags should be checked carefully. Modern
requirements management tools can automate this type of inspection.
If inspection of a column fails to detect any Xs, it is possible that a feature has been
included for which there is no defined product need.
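The two row/column inspections above can be sketched in a few lines of code. This is a minimal illustration, not a real tool; the needs, features, and matrix entries are invented for the example.

```python
# Hypothetical need-feature traceability matrix: rows are user needs,
# columns are features; an entry of True marks an "X" in the cell.
needs = ["N1: import data", "N2: export reports", "N3: audit log"]
features = ["F1: CSV importer", "F2: PDF export"]
matrix = {
    ("N1: import data", "F1: CSV importer"): True,
    ("N2: export reports", "F2: PDF export"): True,
}

def uncovered_needs(needs, features, matrix):
    """Rows with no X: user needs that no feature addresses yet."""
    return [n for n in needs
            if not any(matrix.get((n, f)) for f in features)]

def unjustified_features(needs, features, matrix):
    """Columns with no X: features with no defined product need."""
    return [f for f in features
            if not any(matrix.get((n, f)) for n in needs)]

print(uncovered_needs(needs, features, matrix))      # ["N3: audit log"]
print(unjustified_features(needs, features, matrix)) # []
```

Here the empty row for "N3: audit log" is exactly the red flag the text describes: a need with no supporting feature.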
Tracing Features to Use Cases
It is equally important to ensure that the features can be related to the use cases proposed for
the system.
Again, we can consider a simple matrix similar to the one shown in Table-2
Table -2. Traceability Matrix: Features versus Use Cases
Tracing Features to Supplementary (Non-Functional) Requirements
While the use cases carry the majority of the functional behaviour, keep in mind that the
supplementary requirements also hold valuable system requirements.
These often include the non-functional requirements of the system such as usability,
reliability, supportability, and so on.
Table -3. Traceability Matrix: Features versus Supplementary Requirements
In the same way, tracing continues from one phase of the software development lifecycle to
the next.
Tracing from Requirements to Testing
Finally, we approach the last system boundary we must cross to implement a complete
traceability strategy: the bridge from the requirements domain to the testing domain.
One specific approach to comprehensive testing is to ensure that every use case is "tested by"
one or more test cases.
We first identify all the scenarios described in the use case itself. This is a one-to-many
relationship, since an elaborated use case will typically have a variety of possible scenarios that
can be tested. From a traceability viewpoint, each use case traces to each scenario of the use
case, as shown in the Figure.
Finally, test cases should be listed for each scenario, as shown in Table-4.
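The use-case-to-scenario-to-test-case chain can also be represented as nested data, which makes the "every scenario is tested" check mechanical. The use case, scenarios, and test-case IDs below are invented for illustration.

```python
# Hypothetical trace: one use case fans out to scenarios (one-to-many),
# and each scenario should be covered by one or more test cases.
trace = {
    "UC1: Withdraw cash": {
        "basic flow": ["TC-01"],
        "insufficient funds": ["TC-02", "TC-03"],
        "card retained": [],   # red flag: no test case traces here yet
    },
}

def untested_scenarios(trace):
    """Return (use case, scenario) pairs with no covering test case."""
    return [(uc, sc) for uc, scenarios in trace.items()
            for sc, tcs in scenarios.items() if not tcs]

print(untested_scenarios(trace))  # [("UC1: Withdraw cash", "card retained")]
```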
Requirement Reviews
A group of people analyzes the requirements, looks for problems, meets to discuss the problems,
and agrees on actions to address them.
A review is a formal meeting headed by a leader.
The purpose is not only to look for errors but also to check conformance to standards.
Requirement Review Process
Inspections
An inspection is a set of procedures and error-detection techniques for group reading.
An inspection team usually consists of four people. One of the four people plays the role of
moderator. The moderator is expected to be a competent programmer, but he or she is not the
author of the program.
The duties of the moderator include
Distributing materials for, and scheduling, the inspection session
Leading the session
Recording all errors found
Ensuring that the errors are subsequently corrected
The second team member is the programmer.
The remaining team members usually are the program’s designer (if different from the
programmer) and a test specialist.
The moderator distributes the program’s listing and design specification to the other
participants several days in advance of the inspection session. The participants are expected to
familiarize themselves with the material prior to the session
Inspection Session Activities
During the session, two activities occur:
1. The programmer narrates, statement by statement, the logic of the program. During the
discourse, other participants should raise questions, and they should be pursued to determine
whether errors exist.
2. The program is analysed with respect to a checklist of historically common programming
errors.
Walkthroughs
The walkthrough, like the inspection, is a set of procedures and error-detection techniques for
group reading.
It shares much in common with the inspection process, but the procedures are slightly
different, and a different error-detection technique is employed.
Rather than simply reading the program, the participants "play computer."
The person designated as the tester comes to the meeting armed with a small set of paper test
cases for the program or module.
During the meeting, each test case is mentally executed. That is, the test data are walked
through the logic of the program. The state of the program (i.e., the values of the variables) is
monitored on paper or a whiteboard.
Of course, the test cases must be simple in nature and few in number, because people execute
programs at a rate that is many orders of magnitude slower than a machine.
The walkthrough should have a follow-up process similar to that described for the inspection
process.
Unit Testing
Unit testing focuses verification effort on the smallest unit of software design (the software
component or module).
Using the component-level design description as a guide, important control paths are tested to
uncover errors within the boundary of the module.
The unit test is white-box oriented.
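A minimal sketch of what "white-box oriented" means in practice: the tests are chosen so that every control path through the unit is exercised. The function and its branches are invented for the example.

```python
# A small unit under test: three control paths through one function.
def classify_balance(balance):
    if balance < 0:
        return "overdrawn"
    elif balance == 0:
        return "empty"
    return "ok"

# White-box unit tests: one test per control path, guided by the
# component-level design rather than by the external specification.
assert classify_balance(-1) == "overdrawn"   # path 1: negative branch
assert classify_balance(0) == "empty"        # path 2: zero branch
assert classify_balance(10) == "ok"          # path 3: default branch
```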
Integration Testing
Integration testing is a systematic technique for constructing the program structure while at the
same time conducting tests to uncover errors associated with interfacing.
There are two main approaches to integration testing:
BIG BANG INTEGRATION
INCREMENTAL INTEGRATION
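A toy sketch of the incremental approach: a higher-level module is first tested against a stub that stands in for a lower-level component, and then the real component is integrated and the same test is re-run. All names and numbers here are invented for illustration.

```python
# Step 1: a stub stands in for the not-yet-integrated tax module.
def tax_stub(amount):
    return 0.0   # canned answer; lets us test invoice_total in isolation

# Step 2: the real lower-level component, integrated incrementally.
def real_tax(amount):
    return round(amount * 0.15, 2)

def invoice_total(amount, tax_fn):
    # Higher-level module under integration; tax_fn is the interface
    # to the lower-level component (stub first, real module later).
    return amount + tax_fn(amount)

assert invoice_total(100.0, tax_stub) == 100.0   # tested with the stub
assert invoice_total(100.0, real_tax) == 115.0   # real module integrated
```

In big bang integration, by contrast, all modules are combined at once and the whole program is tested together, which makes interface errors harder to localize.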
System Testing
Recovery Testing
Security Testing
Load Testing
Stress Testing
Performance Testing
Recovery Testing
 Recovery testing is the forced failure of the software in a variety of ways to verify that
recovery is properly performed.
 For example: When an application is receiving data from a network, unplug the connecting
cable. After some time, plug the cable back in and analyze the application’s ability to continue
receiving data from the point at which the network connection was broken.
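The network example above can be modeled in a few lines: the recovery test checks that after the interruption the receiver resumes from the byte offset where it stopped, rather than restarting from zero. The class and data are invented for illustration.

```python
# Toy model of the recovery scenario: a receiver that remembers how far
# it got, so it can resume after the connection is restored.
class Receiver:
    def __init__(self, source: bytes):
        self.source = source
        self.offset = 0        # how many bytes were received before the drop
        self.received = b""

    def receive(self, n):
        chunk = self.source[self.offset:self.offset + n]
        self.offset += len(chunk)
        self.received += chunk

r = Receiver(b"abcdefgh")
r.receive(3)                   # connection drops after 3 bytes
# ...cable plugged back in; the recovery test verifies we resume,
# not restart, from the recorded offset.
r.receive(5)
assert r.received == b"abcdefgh"
```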
Security Testing
 Security testing attempts to verify that the protection mechanisms built into a system will, in
fact, protect it from improper penetration.
 For example: attempt to access the application's data or functions without valid credentials,
or try to defeat its password protection, and verify that the system resists the unauthorized
access.
Validation Testing

Acceptance Testing
Alpha Testing
Beta Testing
