
The Stroud number S is the number of elementary mental discriminations (Stroud's "moments") that a person can make per second; it ranges from 5 to 20.

Generally S is set to 18.

9. Estimated Program Level / Difficulty: Halstead offered an alternative formula that estimates the program level:

L^ = 2n2 / (n1 N2) and D^ = 1 / L^ = (n1 N2) / (2n2)
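For instance, with hypothetical counts n1 = 10 (distinct operators), n2 = 20 (distinct operands) and N2 = 40 (total operand occurrences), chosen purely for illustration, the estimates work out as:

L^ = (2 × 20) / (10 × 40) = 40 / 400 = 0.1
D^ = 1 / L^ = (10 × 40) / (2 × 20) = 400 / 40 = 10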

Advantages of Halstead Metrics:

• Do not require in-depth analysis of program structure.
• Predict the rate of errors.
• Predict the maintenance effort.
• Useful in scheduling and reporting projects.
• Measure the overall quality of programs.
• Simple to calculate.
• Can be used for any programming language.

Drawbacks of Halstead Metrics:

• They depend on the completed code.
• They have no use as a predictive estimating model.

Cyclomatic Complexity Metric

Cyclomatic complexity is also called the structural complexity of a program. It is a software metric that provides a quantitative measure of the logical complexity of a program. The value computed for the cyclomatic complexity gives the number of independent paths in the basis set of a program, and provides an upper bound on the number of tests that must be conducted to ensure that all statements have been executed at least once.

Cyclomatic complexity can be computed in any one of the following three ways:

1. Number of bounded regions + 1
2. Edges - Nodes + 2
3. Number of predicate nodes + 1
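For instance, for a hypothetical control flow graph with 6 nodes, 7 edges, 2 predicate nodes and 2 bounded regions (the counts are chosen only for illustration), all three formulas agree:

1. Bounded regions + 1 = 2 + 1 = 3
2. Edges - Nodes + 2 = 7 - 6 + 2 = 3
3. Predicate nodes + 1 = 2 + 1 = 3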

Decision nodes are known as predicate nodes. To calculate the Cyclomatic complexity, we
have to first draw the control flow graph from the code.

Control Flow Graph

A control flow graph describes the sequences in which the different instructions of a program
get executed. In other words, it describes how the control flows through the program. In order
to draw the control flow graph of a program, we need to first number all the statements of a
program. The different numbered statements serve as the nodes of the control flow graph.
Each circle represents one or more procedural statements and is called a flow graph node. An edge must terminate at a node.
UNIT 4 : Testing

Software testing is the process of testing a software product. Effective software testing contributes to the delivery of higher quality software products, more satisfied users, lower maintenance costs, and more accurate and reliable results. Insufficient testing, however, leads to the opposite. Software testing is therefore a necessary and important activity of the software development process, even though it is a very expensive and time-consuming one.

Terminology

There are some commonly used terminologies which are associated with software testing.

1. Errors: People make errors, which are also known as mistakes. An error may be a syntax error, a misunderstanding of the specification, or a logical error.
2. Bugs: When developers make mistakes while coding, we call these mistakes bugs.
3. Faults: An error may lead to one or more faults. We can also say that a fault is the representation of an error, where representation is the mode of expression, such as data flow diagrams, ER diagrams, source code, etc.
4. Failure: A failure occurs when a fault executes. It is the departure of the output of a program from the expected output.
5. Test Case: It is the triplet [I, S, O], where I is the data input to the system, S is the state of the system at which the data is input, and O is the expected output of the system (an example follows this list).
6. Test Suite: It is the set of all test cases with which a given software product is to be tested.
7. Verification: It is the process of confirming that the software meets its specification. It checks the software with respect to the specification for specific functions.
8. Validation: It is the process of confirming that the software meets the customer's requirements. It checks the software with respect to the customer's expectations.
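For example, a test case for a hypothetical bank withdrawal function could be written as [I = withdraw 500, S = account balance is 1000, O = new balance is 500]; the function and values are invented purely for illustration.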

Objective of Testing

The objectives of testing are as follows:

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding a yet-undiscovered error.
3. A successful test is one that uncovers a yet-undiscovered error.

Testing Principles

To design good test cases, a software engineer must understand the principles of testing.
Some principles are as follows:

1. All tests should be traceable to customer requirements.
2. Tests should be planned long before testing begins.
3. The Pareto principle applies to software testing: 80 percent of all errors uncovered during testing will likely be traceable to 20 percent of all program components.
4. Testing should begin "in the small" and progress toward testing "in the large".
5. Exhaustive testing is not possible: it is impossible to execute every combination of paths during testing.
6. To be effective, testing should be conducted by an independent third party.

Testability

Software testability is simply how easily a computer program can be tested. There are certain metrics that can be used to measure testability in most of its aspects. Software that exhibits the following characteristics is testable:

1. Operability: The better it works, the more efficiently it can be tested.
2. Observability: What you see is what you test.
3. Controllability: The better we can control the software, the more testing can be
automated and optimized.
4. Decomposability: By controlling the scope of testing, we can more quickly isolate
problems and perform smarter retesting.
5. Simplicity: The less there is to test, the more quickly we can test it.
6. Stability: The fewer the changes, the fewer the disruptions to testing.
7. Understandability: The more information we have, the smarter we will test.

Attributes of Good Test

The following attributes are suggested for good test:

1. A good test has a high probability of finding an error.
2. A good test is not redundant.
3. A good test should be best of breed.
4. A good test should be neither too simple nor too complex.

Testing in the Large vs. Testing in the Small

Software products are normally tested first at the individual component or unit level. This is
referred to as testing in the small. After testing all the components individually, the
components are slowly integrated and tested at each level of integration (integration testing).
Finally, the fully integrated system is tested (called system testing). Integration and system
testing are known as testing in the large.

Thus, a software product goes through three levels of testing:

1. Unit Testing
2. Integration Testing
3. System Testing

Unit Testing

Unit testing is undertaken after a module has been coded and successfully reviewed. Unit testing (or module testing) is the testing of different units (or modules) of a system in isolation. In order to test a single module, a complete environment is needed to provide everything necessary for execution of the module. That is, besides the module under test itself, the following are needed in order to test the module:

• The procedures belonging to other modules that the module under test calls.
• Nonlocal data structures that the module accesses.
• A procedure to call the functions of the module under test with appropriate parameters.

The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once. And finally, all error handling paths are tested.

What Errors are Commonly Found during Unit Testing?

Among the more common errors in computation are:

1. Misunderstood or incorrect arithmetic precedence.
2. Mixed-mode operations.
3. Incorrect initialization.
4. Precision inaccuracy.
5. Incorrect symbolic representation of an expression.

Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test; together they form the unit test environment.

In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component to be tested, and prints the relevant results. Stubs serve to replace modules that are subordinate to (called by) the component being tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
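As a minimal sketch in C of this arrangement, the following program tests a hypothetical component process_record whose subordinate module fetch_limit has been replaced by a stub; all names and test values are invented for illustration:

#include <stdio.h>

/* Stub: replaces the real subordinate module fetch_limit().
   It prints verification of entry and returns a fixed value. */
int fetch_limit(void)
{
    printf("stub: fetch_limit() called\n");
    return 100;                        /* minimal data manipulation */
}

/* Component under test: flags values that exceed the limit. */
int process_record(int value)
{
    return value > fetch_limit();
}

/* Driver: a "main program" that accepts test case data, passes it
   to the component under test, and prints the relevant results. */
int main(void)
{
    int tests[] = { 50, 100, 150 };
    for (int i = 0; i < 3; i++)
        printf("process_record(%d) = %d\n",
               tests[i], process_record(tests[i]));
    return 0;
}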

Integration Testing

The primary objective of integration testing is to test the module interfaces in order to ensure that there are no errors in parameter passing when one module invokes another module.
During integration testing, different modules of a system are integrated in a planned manner
using an integration plan. The integration plan specifies the steps and the order in which
modules are combined to realize the full system. After each integration step, the partially
integrated system is tested. The structure chart or module dependency graph denotes the order
in which different modules call each other. By examining the structure chart the integration
plan can be developed.

There are four types of integration testing approaches, any one of which can be used to develop the integration test plan:

1. Big-Bang Integration
2. Top-Down Integration
3. Bottom-Up Integration
4. Mixed Integration
Big-Bang Integration Testing

It is the simplest integration testing approach, where all the modules making up a system are
integrated in a single step. In simple words, all the modules of the system are simply put
together and tested. However, this technique is practicable only for very small systems.

The main problem with this approach is that once an error is found during integration testing, it is very difficult to localize, as it may potentially belong to any of the modules being integrated. Therefore, errors reported during big-bang integration testing are very expensive to fix.

Top-Down Integration Testing

Top-down integration testing starts with the main routine and one or two subordinate routines in the system. After the top-level module has been tested, the immediate subordinate routines of the module are combined with it and tested. The top-down integration testing approach requires the use of program stubs to simulate the effect of lower-level routines that are called by the routines under test. A pure top-down integration does not require any driver routines.

A disadvantage of the top-down integration testing approach is that, in the absence of lower-level modules, it may often be difficult to exercise the top-level modules in the desired manner, since the lower-level routines perform several low-level functions such as I/O.

Steps of Top-Down Integration Approach

The integration process is performed in a series of 5 steps:

1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main module.
2. Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.

Problems Associated with Top-Down Integration Approach:

The top-down strategy sounds relatively uncomplicated, but in practice logistical problems can arise. The most common of these occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure.

The tester is left with three choices:

1. Delay many tests until stubs are replaced with actual modules. This causes us to lose some control over the correspondence between specific tests and the incorporation of specific modules.
2. Develop stubs that perform limited functions that simulate the actual module. This approach can lead to significant overhead as stubs become more and more complex.
3. Integrate the software from the bottom of the hierarchy upward. This approach is known as the bottom-up approach.

Bottom-up Integration Testing

In bottom-up testing, each subsystem is tested separately and then the full system is tested. A
subsystem might consist of many modules which communicate among each other through
well-defined interfaces. The primary purpose of testing each subsystem is to test the
interfaces among various modules making up the subsystem. Both control and data interfaces
are tested.

Lower-level subsystems are successively combined to form higher-level subsystems. A principal advantage of bottom-up integration testing is that several disjoint subsystems can be tested simultaneously. In pure bottom-up testing no stubs are required; only test drivers are needed. A disadvantage of bottom-up testing is the complexity that occurs when the system is made up of a large number of small subsystems.

Steps of Bottom-Up Integration:

A bottom-up integration strategy may be implemented with the following steps:

1. Low-level components are combined into clusters that perform a specific software sub-function.
2. A driver is written to coordinate test case input and output (a sketch of such a driver follows these steps).
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program
structure.
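A minimal sketch in C of step 2: a driver that coordinates test case input and output for a low-level cluster. The cluster functions parse_amount and apply_tax, and the tax rate, are hypothetical and invented for illustration:

#include <stdio.h>

/* Two low-level components forming a hypothetical cluster. */
double parse_amount(const char *text)
{
    double amount = 0.0;
    sscanf(text, "%lf", &amount);      /* convert text to a number */
    return amount;
}

double apply_tax(double amount)
{
    return amount * 1.18;              /* assumed tax rate of 18% */
}

/* Driver: feeds test case input to the cluster and prints the
   output so the tester can compare it against expected results. */
int main(void)
{
    const char *inputs[] = { "100", "0", "-5" };
    for (int i = 0; i < 3; i++)
        printf("input=%s taxed=%.2f\n",
               inputs[i], apply_tax(parse_amount(inputs[i])));
    return 0;
}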

Mixed Integration Testing

A mixed (also called sandwich) integration testing approach follows a combination of the top-down and bottom-up testing approaches. In the top-down approach, testing can start only after the top-level modules have been coded and unit tested. Similarly, bottom-up testing can start only after the bottom-level modules are ready. The mixed approach overcomes this shortcoming of the top-down and bottom-up approaches: testing can start as and when modules become available. Therefore, this is one of the most commonly used integration testing approaches.

Phased vs. Incremental Testing

The different integration testing strategies are either phased or incremental. A comparison of
these two strategies is as follows:

• In incremental integration testing, only one new module is added to the partial system
each time.
• In phased integration, a group of related modules is added to the partial system each time.

Phased integration requires fewer integration steps than the incremental integration approach. However, when failures are detected, it is easier to debug the system in the incremental testing approach, since it is known that the error is caused by the addition of a single module. In fact, big-bang testing is a degenerate case of the phased integration testing approach.

System Testing

System tests are designed to validate a fully developed system to assure that it meets its
requirements. There are essentially three main kinds of system testing:

• Alpha Testing: Alpha testing refers to system testing conducted at the developer’s site by the customer, in a controlled environment.
• Beta Testing: Beta testing is system testing conducted at one or more customer sites by the end users of the software, in an uncontrolled environment. Generally, developers are not present.
• Acceptance Testing: Acceptance testing is system testing performed by the customer to determine whether to accept the delivery of the system.

Functional Testing or Black Box Testing

It is also known as behavioral testing. Functional testing refers to testing that involves only observation of the output for certain input values; there is no attempt to analyze the code that produces the output, and the internal structure of the code is ignored. Black-box testing is not an alternative to white-box testing. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods.

Black-box testing attempts to find errors in the following categories:

1. Incorrect or missing functions.
2. Interface errors.
3. Errors in data structures or external database access.
4. Behavior or performance errors.
5. Initialization and termination errors.

The following are the two main approaches to designing black-box test cases:

1. Equivalence class partitioning
2. Boundary value analysis

Equivalence Class Partitioning

It divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning is used to define test cases that uncover classes of errors, so that the total number of test cases can be reduced. Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition.

An equivalence class represents a set of valid and invalid states for input conditions. Generally, an input condition is either a specific numeric value, a range of values, a set of related values, or a Boolean condition. The equivalence classes may be defined according to the following guidelines:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
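As a hypothetical illustration of guideline 1: if an input field accepts ages in the range 18 to 60, the range guideline yields one valid equivalence class (18 to 60) and two invalid classes (below 18, and above 60). One representative value from each class, say 30, 10 and 70, then suffices instead of testing every possible age.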

Boundary Value Analysis

Programming errors frequently occur at the boundaries of the different equivalence classes of input. The reason behind such errors might purely be psychological: programmers often fail to see the special processing required by input values that lie at the boundaries of the different equivalence classes. For example, programmers may improperly use < instead of <=, or conversely <= for <. Boundary value analysis leads to the selection of test cases at the boundaries of the different equivalence classes.

Boundary value analysis is a test case design technique that complements equivalence partitioning. Rather than selecting an arbitrary element of an equivalence class, BVA leads to the selection of test cases at the boundaries of the class. The basic idea of boundary value analysis is to use input variable values at their minimum, just above the minimum, just below the minimum, at a nominal value, just below the maximum, at the maximum, and just above the maximum.
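The following is a minimal sketch in C of the test values this rule produces, assuming a hypothetical function accepts_quantity whose valid input range is 1 to 100:

#include <stdio.h>

/* Hypothetical function under test: valid quantities are 1..100. */
int accepts_quantity(int q)
{
    return q >= 1 && q <= 100;
}

int main(void)
{
    /* Boundary values for the range [1, 100]: just below the
       minimum, the minimum, just above the minimum, a nominal
       value, just below the maximum, the maximum, and just
       above the maximum. */
    int bva[] = { 0, 1, 2, 50, 99, 100, 101 };
    for (int i = 0; i < 7; i++)
        printf("accepts_quantity(%d) = %d\n",
               bva[i], accepts_quantity(bva[i]));
    return 0;
}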

Structural Testing or White-Box Testing

White-box testing is sometimes known as glass-box testing. It uses the control structure of the procedural design to derive test cases. Using white-box testing methods, the software engineer can derive test cases that:

1. Guarantee that all independent paths within a module have been exercised at least
once.
2. Exercise all logical decisions on their true and false sides.
3. Execute all loops at their boundaries and within their operational bounds.
4. Exercise internal data structures to ensure their validity.

Statement Coverage

The statement coverage strategy aims to design test cases so that every statement in a program is executed at least once. The principle behind the statement coverage strategy is that unless a statement is executed, it is very hard to determine whether an error exists in that statement. However, executing a statement once and observing that it behaves properly for one input value is no guarantee that it will behave correctly for all input values. The design of test cases using the statement coverage strategy is shown below.
Example:

Consider Euclid’s GCD computation algorithm:

int compute_gcd(int x, int y)
{
1    while (x != y) {
2        if (x > y)
3            x = x - y;
4        else y = y - x;
5    }
6    return x;
}

By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)}, we can exercise the program so that all statements are executed at least once: (x=3, y=3) exercises the loop test and the return, (x=4, y=3) executes the statement x = x - y, and (x=3, y=4) executes the statement y = y - x.

Branch Coverage

In the branch coverage-based testing strategy, test cases are designed to make each branch condition assume true and false values in turn. Branch testing is also known as edge testing, as in this testing scheme each edge of a program’s control flow graph is traversed at least once.

It is obvious that branch testing guarantees statement coverage and thus is a stronger testing
strategy compared to the statement coverage-based testing. For Euclid’s GCD computation
algorithm, the test cases for branch coverage can be {(x=3, y=3), (x=3, y=2), (x=4, y=3),
(x=3, y=4)}.

Condition Coverage

In this structural testing strategy, test cases are designed to make each component of a composite conditional expression assume both true and false values. For example, in the conditional expression ((c1 .and. c2) .or. c3), the components c1, c2 and c3 are each made to assume both true and false values.

Branch testing is probably the simplest condition testing strategy, where only the compound conditions appearing in the different branch statements are made to assume the true and false values. Thus, condition testing is a stronger testing strategy than branch testing, and branch testing is a stronger testing strategy than statement coverage-based testing. However, for condition coverage the number of test cases increases exponentially with the number of component conditions: an expression with n components requires 2^n true/false combinations, so the expression above with three components requires 2^3 = 8 combinations. Therefore, a condition coverage-based testing technique is practical only if n (the number of conditions) is small.

Path Coverage

The path coverage-based testing strategy requires us to design test cases such that all linearly
independent paths in the program are executed at least once. A linearly independent path can
be defined in terms of the control flow graph (CFG) of a program.
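For example, the control flow graph of the GCD program given earlier has 6 nodes and 7 edges, so it contains 7 - 6 + 2 = 3 linearly independent paths, and path coverage requires test cases that execute all three.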
Data Flow Based Testing

Data flow-based testing method selects test paths of a program according to the locations of
the definitions and uses of different variables in a program.

Mutation Testing

In mutation testing, the software is first tested using an initial test suite built up from the different white-box testing strategies. After the initial testing is complete, mutation testing is taken up. The idea behind mutation testing is to make a few arbitrary small changes to a program at a time. Each time the program is changed it is called a mutated program, and the change made is called a mutant.

A mutated program is tested against the full test suite of the program. If there exists at least one test case in the test suite for which a mutated program gives an incorrect result, then the mutant is said to be dead. If a mutant remains alive even after all the test cases have been exhausted, the test data is enhanced to kill the mutant.

Since mutation testing generates a large number of mutants and requires us to check each mutant against the full test suite, it is not suitable for manual testing. Mutation testing should be used in conjunction with a testing tool that can run all the test cases automatically.
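As a small hypothetical illustration in C, consider a program containing a function max_original; a mutant changes a single relational operator, and a test case is found that kills it (all names and values are invented for this example):

#include <stdio.h>

/* Original function under test. */
int max_original(int x, int y)
{
    return (x > y) ? x : y;
}

/* Mutated program: the operator > has been changed to <. */
int max_mutant(int x, int y)
{
    return (x < y) ? x : y;
}

int main(void)
{
    /* The test case (3, 2) kills this mutant: the original
       returns 3 but the mutant returns 2, an incorrect result. */
    printf("original: %d, mutant: %d\n",
           max_original(3, 2), max_mutant(3, 2));
    return 0;
}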

Functional Testing vs. Structural Testing

In the black-box testing approach, test cases are designed using only the functional
specification of the software, i.e. without any knowledge of the internal structure of the
software. For this reason, black-box testing is known as functional testing.

On the other hand, in the white-box testing approach, designing test cases requires thorough
knowledge about the internal structure of software, and therefore the white-box testing is
called structural testing.

Performance Testing

Performance testing is carried out to check whether the system meets the non-functional requirements identified in the SRS document. There are several types of performance testing; the types to be carried out on a system depend on the different non-functional requirements of the system documented in the SRS document. All performance tests can be considered black-box tests.

• Stress Testing: Stress tests are black box tests which are designed to impose a range
of abnormal and even illegal input conditions so as to stress the capabilities of the
software.
• Volume Testing: It is especially important to check whether the data structures have been designed to successfully handle extraordinary volumes of data.
• Configuration Testing: This is used to analyze system behavior in various hardware
and software configurations specified in the requirements.
• Regression Testing: Regression testing is the practice of running an old test suite
after each change to the system or after each bug fix to ensure that no new bug has
been introduced due to the change or the bug fix.
• Recovery Testing: Recovery testing tests the response of the system to the presence of faults or to the loss of power, devices, services, data, etc., and checks whether the system recovers satisfactorily.
• Maintenance Testing: This testing addresses the diagnostic programs and other procedures that must be developed to help maintain the system.
• Documentation Testing: It is checked that the required user manual, maintenance
manuals, and technical manuals exist and are consistent.
• Compatibility Testing: This type of testing is required when the system interfaces with other systems. Compatibility testing aims to check whether the interface functions perform as required.
• Usability Testing: Usability testing concerns checking the user interface to see if it
meets all user requirements concerning the user interface. During usability testing, the
display screens, report formats, and other aspects relating to the user interface
requirements are tested.

Path:

A path through a program is a node and edge sequence from the starting node to a terminal
node of the control flow graph of a program. There can be more than one terminal node in a
program. Writing test cases to cover all the paths of a typical program is impractical. For this
reason, the path-coverage testing does not require coverage of all paths but only coverage of
linearly independent paths.

Linearly Independent Path:

A linearly independent path is any path through the program that introduces at least one new
edge that is not included in any other linearly independent paths. If a path has one new node
compared to all other linearly independent paths, then the path is also linearly independent.
This is because, any path having a new node automatically implies that it has a new edge.
Thus, a path that is a subpath of another path is not considered a linearly independent path.
Formal Technical Reviews:

FTR is done for quality purposes without executing the code. It involves the analysis of artifacts by a group of technically skilled persons following a specified and documented process.

A formal technical review is a software quality assurance activity performed by software engineers. The objectives of the FTR are:

1. To uncover errors in function, logic or implementation for any representation of the software.
2. To verify that the software under review meets its requirements.
3. To ensure that the software has been represented according to predefined standards.
4. To achieve software that is developed in a uniform manner.
5. To make projects more manageable.

Stages of FTR

Preparation:

The FTR is done by a group of at least 4 to 5 persons. One of them is the team leader and the others are reviewers. Each reviewer spends some time reviewing the product and making notes on it. Meanwhile, the leader also reviews the product, sets the agenda for the review meeting, and schedules the meeting.

Review Meeting:

Every review meeting should observe the following constraints:

1. There must be 3 to 5 persons involved in the review.
2. Advance preparation should occur.
3. The duration of the review meeting should be less than two hours.

FTR focuses on a specific part of the overall software. For example, rather than attempting to
review an entire design, reviews are conducted for each component or small group of
components. At the end of the review, all attendees of the FTR must decide whether to:

1. Accept the product without further modification,
2. Reject the product due to severe errors, or
3. Accept the product provisionally.

Review Reporting and Record Keeping

During the FTR, all the issues that have been raised are recorded. These are summarized at the end of the review meeting and a review issues list is produced. A review report answers three questions:

1. What was reviewed?
2. Who reviewed it?
3. What were the findings and conclusions?
The review issues list serves two purposes:

1. To identify problem areas within the product.
2. To serve as an action item checklist that guides the producer as corrections are made.

Review Guidelines

Guidelines for the conduct of FTR must be established in advance, distributed to all reviewers, and followed. Some guidelines for review are as follows:

1. Review the product, not the producer.
2. Set an agenda and maintain it.
3. Limit the debate.
4. Raise problem areas, but don’t attempt to solve every problem noted.
5. Take written notes.
6. Limit the number of participants and each must be well prepared in advance.
7. Develop a checklist for each product that is likely to be reviewed.
8. Allocate resources and schedule time for FTRs.
9. Conduct meaningful training for all reviewers.
10. Review your early reviews.

Code Review

Code review is carried out after a module has been successfully compiled and all the syntax errors have been eliminated. Code reviews are extremely cost-effective strategies for reducing coding errors and producing high-quality code. Normally, two types of reviews are carried out on the code of a module: code inspection and code walkthrough.

1. Code Walkthrough

Code walkthrough is an informal code analysis technique. It is used after a module has been coded, successfully compiled, and all syntax errors eliminated. A few members of the development team are given the code a few days before the walkthrough meeting to read and understand it. Each member selects some test cases and simulates execution of the code by hand, i.e. traces execution through each statement and function. The main objective of the walkthrough is to discover the algorithmic and logical errors in the code. The members note down their findings and discuss them in a walkthrough meeting where the coder of the module is present.

2. Code Inspection

The aim of code inspection is to discover common types of errors caused by improper programming. In other words, during code inspection the code is examined for the presence of certain kinds of errors, in contrast to the hand simulation of code execution done in code walkthroughs. In addition to the commonly made errors, adherence to coding standards is also checked during code inspection.

The following are some classical programming errors which can be checked during code inspection:

1. Use of uninitialized variables.
2. Jumps into loops.
3. Non-terminating loops.
4. Array indices out of bounds.
5. Improper storage allocation and de-allocation.
6. Mismatches between actual and formal parameters in procedure calls.
7. Use of incorrect logical operators or incorrect precedence among operators.
8. Improper modification of loop variables.
9. Comparison of equality of floating point variables, etc.
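The following small C fragment is a hypothetical illustration of two of these classical errors (numbers 1 and 9); the comments mark the defects an inspection would flag:

#include <stdio.h>

int main(void)
{
    int sum;                  /* error 1: sum is never initialized */
    double a = 0.1 + 0.2;

    for (int i = 1; i <= 3; i++)
        sum += i;             /* reads the uninitialized variable,
                                 so the result is undefined */

    if (a == 0.3)             /* error 9: equality comparison of
                                 floating point values; this test
                                 typically fails due to rounding */
        printf("equal\n");
    else
        printf("not equal\n");

    printf("sum = %d\n", sum);
    return 0;
}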

FINISH OF UNIT 4
