15 Module 5 - Software Testing Methodologies 26-03-2024

CSI1007 Software Engineering Principles

MODULE 5
Software Testing

1
Module:5 Software Testing

TESTING: Introduction; Software Testing Fundamental; Testing


Principles; Testing Levels; Verification and Validation: Validation
Testing, Validation Test Criteria; Test Plan: Test Documentation;
Test Strategies: Top-Down Testing, Bottom-Up Testing, Thread
testing, Stress testing, Back-to-back testing; Testing methods and
tools: Testing through reviews, Black-box testing (Functional
testing), White box testing (glass-box testing), Testing software
changes; Additional requirements in testing OO Systems; Metrics
Collection, Computation, and Evaluation; Test and QA plan;
Managing Testing Functions.

2
Testing

• Testing is an important, compulsory


part of software development;

• It is a technique for evaluating product


quality and also for indirectly
improving it, by identifying defects and
problems.

3
Software Testing
• One of the practical methods commonly used to detect the
presence of errors (failures) in a computer program is
to test it for a set of inputs.

[Diagram: a set of inputs I1, I2, I3, …, In is fed to our program; the obtained results are compared (=?) against the expected results to decide whether the output is correct.]

4
Software Testing . . .

 Testing is a formal activity. It involves a strategy and a systematic approach. The different stages of testing supplement each other. Tests are always specified and recorded.
 Testing is a planned activity. The workflow and the expected results are specified. Therefore, the duration of the activities can be estimated. The point in time when tests are executed is defined.
 Testing is the formal proof of software quality.
5
SOFTWARE TESTING DEFINITIONS

 Definition 1:-Software testing can be stated as a process of


validating and verifying that a software program or
application or product,
1. Meets the business and technical requirements that
guided its design and development, and
2. Works as expected.
3. Can be implemented with the same characteristics.
 Definition 2:- It is a process used to identify the correctness, completeness and quality of developed computer software. It includes a set of activities conducted with the intent of finding errors in software so that they can be corrected before the product is released to the end users.
6
Correctness of a software

• How do we decide the correctness of software?
To answer this question, we need to first understand how software can fail.
• Failure may be due to:
 Error or incompleteness in the requirements
 Difficulty in implementing the specification in the target
environment
 Faulty system or program design
 Defects in the code

7
WHAT IS SOFTWARE TESTING?

Myers’ Definition of Software Testing

The process of executing a program or a system with the intent of finding errors.
♦ Testing is an activity which requires skilled people.

♦ Sound testing process is essential for good testing.

♦ It is an era of software testing tools.

♦ Need for skilled test manager to look after testing.

8
It is clear from Myers’ definition of testing that

Testing cannot prove the correctness of a program – it is just a series of experiments to find errors.

04/24/2024 9
Quality Software Should be

 Bug free
 Delivered on time
 Complete
 Within budget
 Meets the requirements of the customer
 Easy to maintain and upgrade whenever needed
 Can be web enabled
 Adaptable to various OSs like Unix, Linux, etc.
 Can run on notepads, PCs, mainframes as per customer requirements
 Documented
 Consistent
 Secure
10
Why is testing important?

 To find and correct defects
 To check whether the client’s/user’s needs are satisfied
 To avoid users detecting problems
 To provide a quality product
 To provide confidence in the system
 To provide sufficient information to allow an objective decision on whether to deploy
11
Why testing?
• Practise “egoless programming” and keep a goal of eliminating as many faults as possible.

• We also need static testing in place to capture an error before it becomes a defect in the software.

04/24/2024 12
Why does software have bugs?
 Miscommunication or no communication
 Time pressure (tight scheduling)
 Changing requirements
 Software complexity
 Programming mistakes

13
Software Testing Principles

14
Principle 1: Testing shows the presence of defects
Testing can show that defects are present, but it cannot prove that there are no defects.
Suppose you have 15 input fields to test, each having 5 possible values; the number of combinations to be tested would be 5^15 = 30,517,578,125!

15
Principle 2: Exhaustive testing is impossible
 If you were to test all the possible combinations, project execution time and costs would rise exponentially.
We have learned that we cannot test everything (i.e., all combinations of inputs and pre-conditions).
That is, we must prioritise our testing effort using a risk-based approach.

16
Principle 3: Early Testing

 Testing should start as early as possible


in the software development life
cycle.

17
Principle 4: Defect Clustering

Defects are not evenly distributed in a system – they are ‘clustered’.
“Pareto law”:
In other words, most defects found during testing are usually confined to a small number of modules (80% of the uncovered errors are concentrated in 20% of the modules of the whole application).
Similarly, most operational failures of a system are usually confined to a small number of modules.
An important consideration in testing is prioritisation.
18
Principle 5: Pesticide Paradox
Testing identifies bugs, and programmers respond to fix them.
As bugs are eliminated by the programmers, the software improves.
As the software improves, the effectiveness of previous tests erodes.
Therefore we must learn, create and use new tests based on new techniques to catch new bugs (i.e., it is not a matter of repetition; it is a matter of learning and improving).

19
Principle 6: Testing is Context Dependent

 Testing is done differently in different contexts.
 For example, safety-critical software is tested differently from an e-commerce site.
 Testing can account for 50% of development costs; in NASA’s Apollo program it was about 80%.
 3 to 10 failures per thousand lines of code (KLOC) is typical for commercial software.
 1 to 3 failures per KLOC is typical for industrial software.
 0.01 failures per KLOC for NASA Shuttle code!
 Different industries also impose different testing standards.

20
Principle 7: Absence of Errors is a Fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfil the users’ needs and expectations.

21
Software Testing - Levels

22
Software Testing - Levels

• Unit Testing: This is the first level of testing where


individual components or units of the software are
tested to ensure they function as expected. It’s
performed by developers and is the smallest testable
part of the software.
• Integration Testing: This level of testing involves testing
the interaction between different units or modules of
the software. It’s done after unit testing and helps detect
interface errors.
• System Testing: At this level, the complete and integrated
software is tested to ensure it meets the system
requirements. It’s typically performed by a dedicated
testing team.
• Acceptance Testing: This is the final level of testing, conducted to ensure that the software fulfills the user’s requirements and works correctly in the user’s environment. It’s usually performed by the customer or an independent testing team.
24
Verification and Validation
• Verification asks if you're building the product right.
• It's about checking if what you've built matches the
blueprint or design specifications.
• This typically involves internal activities like code
reviews or inspections against set requirements.
• The process of checking that the software achieves its goal without any bugs.
• It is the process of ensuring that the product is being built right.
• It is Static Testing.
Verification and Validation

• Validation asks if you're building the right product.

• It ensures the final product actually meets the needs of the users and stakeholders.
• Validation often involves external testing or user feedback to see if the system fulfills its intended purpose in the real world.
• It is the process of checking whether the software product is up to the mark or, in other words, whether the product meets the high-level requirements.
• It is Dynamic Testing.
Validation Testing

• It is the process of ensuring that the tested and developed software satisfies the client’s/user’s needs.
• The business requirement logic or scenarios have to
be tested in detail.
• All the critical functionalities of an application must be
tested.
• The process of conducting tests to determine if a system
meets the needs of its users.
• This can involve various methods like user acceptance
testing, usability testing, or performance testing
Validation Test Criteria
• Functional Requirements: Do the features
of the system work as intended according to the
specifications?
• Usability: Is the system user-friendly and easy to interact
with?
• Performance: Does the system meet the desired
performance benchmarks for speed, responsiveness, and
scalability?
• Compatibility: Can the system run on the intended
platforms and integrate with other systems seamlessly?
• Reliability: Is the system stable and
dependable with minimal downtime or errors?
• Security: Does the system adequately protect sensitive
data and user privacy?
Testing Strategies

Testing strategies define the overall approach to


software testing, outlining how you'll ensure the
quality of your product.
§ Top-Down
§ Bottom – Up
§ Thread Testing
§ Stress Testing
§ Back to Back Testing
Testing Strategies – Top-Down

 Testing starts from the highest level of the


system, focusing on overall functionality before
delving into individual components
 This breaks down testing into stages, starting
from the overall system level and progressing to
individual components.
Testing Strategies – Top-Down

 Process:
 Begin by testing the main module and its interactions with lower-level modules (not yet implemented).
 Use stubs (simulated modules) to represent the missing lower levels and provide expected responses for testing the higher-level module’s logic.
 Gradually integrate lower-level modules one by one, replacing stubs with actual components and refining tests as the system becomes more complete.
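The stub-driven process above can be sketched in Python. This is a minimal, hypothetical example – the `checkout` and `payment_stub` names are invented for illustration, not taken from any real system:

```python
def payment_stub(amount):
    """Stub for the not-yet-implemented payment module: always approves."""
    return {"status": "approved", "amount": amount}

def checkout(cart, pay=payment_stub):
    """Higher-level module under test; the `pay` dependency is injected
    so a stub can stand in for the real payment service."""
    total = sum(item["price"] * item["qty"] for item in cart)
    receipt = pay(total)
    return receipt["status"], total

# Exercise the high-level checkout logic against the stub.
status, total = checkout([{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}])
```

Once the real payment module exists, it replaces `payment_stub` through the same `pay` parameter and the tests are refined.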
Testing Strategies – Top-Down

 Advantages:
 Early detection of high-level design flaws.
 Verifies system-level functionality early in development.
 Facilitates a user-centric approach, testing features as the user would experience them.
 Disadvantages:
 Requires creation and maintenance of stubs, which can be time-consuming.
 May be difficult to isolate issues that arise from interactions between lower-level components.
Testing Strategies – Top-Down

Example:
Imagine testing a new online store. Top-down testing
would start with simulating user actions like browsing
products, adding items to the cart, and checking out.
You'd use stubs to represent payment processing
and inventory management until those modules are
developed
Testing Strategies – Bottom-Up

Testing begins with the lowest levels (individual modules or units)


and progresses towards integrating and testing the entire
system.
• Testing is done by integrating or joining two or more modules
by moving upward from bottom to top through the control flow
of the architecture structure.
• Low-level modules are tested first, and then high-level
modules are tested.
• For example, in a software application, you might start by testing individual functions (low-level modules), then move on to testing how these functions interact within a class (a higher-level module).
34
Testing Strategies – Bottom-Up

 Process:
 Start by testing individual modules or units in isolation, ensuring they function as per specifications.
 Gradually integrate tested modules into small groups (clusters) and test their interactions.
 Continue integrating clusters to form larger subsystems and eventually the entire system, testing functionality at each stage.
Testing Strategies – Bottom-Up

 Advantages:
 Promotes reusability of unit tests for individual modules.
 Easier to isolate and pinpoint bugs originating from specific modules.
 Disadvantages:
 System-level functionality might not be tested until later stages.
 Requires careful design of interfaces between modules to enable modular testing.

Testing Strategies – Bottom-Up

 Example:
 Testing the online store from the bottom up might
involve first testing individual modules like product
search, shopping cart logic, and user
authentication. Then, you'd combine them to test
adding items to the cart and user checkout.
Testing Strategies – Thread Testing

 It verifies the key functional capabilities of a specific task


(thread). It is usually conducted at the early stage of Integration
Testing phase.
 For example, in a banking application, you might have a
thread that handles the process of transferring money
from one account to another.
 Thread testing would involve testing this process from start to
finish to ensure it works correctly.
 Focuses on testing software designed to handle multiple
concurrent threads (lightweight processes within a program).
Testing Strategies – Thread Testing

 Process:
 Identify potential thread-related issues like race conditions
(unexpected outcomes due to timing) or deadlocks (threads
waiting on each other indefinitely).
 Design test cases to simulate how multiple threads interact
with shared resources like memory or files.
 Use tools or techniques like thread synchronization testing
to ensure proper coordination between threads.
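A minimal sketch of the idea, assuming a shared counter incremented by several threads; the lock serializes the read-modify-write that would otherwise be a race condition:

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    """Each thread increments the shared counter; the lock prevents
    lost updates from the read-modify-write race on `counter`."""
    global counter
    for _ in range(times):
        with lock:
            counter += 1

threads = [threading.Thread(target=deposit, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With the lock held around each increment, the final count is deterministic.
```

A thread test for this code would assert that the final count equals the total number of increments; without the lock, that assertion can fail intermittently.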
Testing Strategies – Thread Testing

 Advantages:
 Helps to identify and prevent thread-related bugs that might not be apparent in single-threaded testing.
 Disadvantages:
 Can be complex to design and execute test cases for multi-threaded scenarios.
 Debugging thread-related issues can be challenging.
Testing Strategies – Thread Testing

Example:
 Testing a multi-user chat application
would involve thread testing to ensure
messages are sent and received properly
when multiple users chat
simultaneously. You'd want to avoid race
conditions where a message might be
incomplete or out of order.
Testing Strategies – Stress Testing

 Pushes the software beyond its normal operating limits to identify


weaknesses and breaking points.
 Used to determine the robustness of software by testing beyond
the limits of normal operation.
 It emphasizes robustness, availability, and error handling under
a heavy load rather than what is correct behavior under normal
situations.
 For example, you might test how a website handles a large
number of simultaneous users or how a database system handles
a large number of simultaneous queries.
Testing Strategies – Stress Testing

 Process:
 Simulate high user loads, large data volumes, or resource exhaustion scenarios.
 Monitor system performance metrics like response times, resource utilization (CPU, memory), and error rates.
 Analyze the results to identify potential bottlenecks or areas where performance degrades under stress.
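The process above might look like the following sketch; `handle_request` is a hypothetical endpoint that only simulates work, so the numbers are purely illustrative:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    """Hypothetical service endpoint: simulate a little processing."""
    time.sleep(0.001)
    return n * 2

def stress(n_requests, n_workers):
    """Fire n_requests concurrently and record per-request latencies."""
    latencies = []

    def timed_call(i):
        t0 = time.perf_counter()
        result = handle_request(i)
        latencies.append(time.perf_counter() - t0)  # list.append is thread-safe in CPython
        return result

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(timed_call, range(n_requests)))
    return results, max(latencies)

results, worst_latency = stress(100, 20)
```

In a real stress test the load would be pushed up until response times or error rates degrade, with CPU and memory monitored alongside latency.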
Testing Strategies – Stress Testing

 Advantages:
 Increases confidence in the system’s ability to handle peak loads or unexpected surges in activity.
 Helps improve system scalability and resilience.
 Disadvantages:
 Can be resource-intensive to set up and execute stress tests.
 Might require specialized tools or expertise to generate realistic load conditions.
Testing Strategies – Stress Testing

 Example:
 Stress testing the online store might involve simulating a

high number of concurrent users browsing products,


completing purchases, and placing orders. This helps
identify if the system can handle peak traffic during a sale
or promotional event without crashing or experiencing
significant performance slowdowns.
Testing Strategies – Back-to-Back Testing

 It is known as comparison testing.
 It is a method of evaluating the performance of two or more systems by running them on the same input and comparing the results.
 This type of testing is typically used in engineering and scientific applications, such as comparing the performance of different designs or algorithms.
 Compares the functionality of two functionally identical systems, typically a production system and a test or development version.
 For instance, if two different algorithms are being tested for image recognition, back-to-back testing would involve running both algorithms on the same set of images and comparing the accuracy of the results.
Testing Strategies – Back-to-Back Testing

 Process:
 Execute the same set of test cases on both systems.
 Verify that the results on both systems are identical, ensuring the changes made in the test or development version haven’t introduced regressions (unintended side effects).
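A minimal sketch of the comparison step, using two hypothetical run-length-encoding implementations as the pair of systems; any mismatch on the shared inputs flags a difference to investigate:

```python
def compress_rle(text):
    """Reference implementation: run-length encode a string."""
    out = []
    i = 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        out.append((text[i], j - i))
        i = j
    return out

def compress_rle_v2(text):
    """Candidate implementation under test (different algorithm)."""
    out = []
    for ch in text:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

# Back-to-back: run both versions on the same inputs and compare results.
shared_inputs = ["", "aaa", "aabbbc", "abc"]
mismatches = [s for s in shared_inputs if compress_rle(s) != compress_rle_v2(s)]
```

An empty `mismatches` list means the two systems agree on every shared input.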
Testing Strategies – Back-to-Back Testing

 Example
 An example of back-to-back testing would be comparing

the performance of two different compression algorithms


on a set of digital images. The goal of the test would be to
identify any differences in performance and to determine
which algorithm is the most effective
Alpha Test & Beta Test
• Sometimes the system is piloted in-house before the customer runs the real pilot test. The in-house test, in such a case, is called an alpha test;
• the customer’s pilot is a beta test. This approach is common in the case of commercial software where the system has to be released to a wide variety of customers.

04/24/2024 49
Parallel Testing
• This approach is done when a new system is replacing
an existing one or is part of a phased development.

• The new system is put to use in parallel with previous


version and will facilitate gradual transition of users,
and to compare and contrast the new system with the
old.

04/24/2024 50
Testing Strategies – Regression Testing

• Regression testing ensures changes


haven't introduced unintended bugs
or broken existing functionalities.

• It's like a safety net, catching regressions


(reintroductions of bugs) before they
reach users
51
Testing Strategies – Regression Testing

Purpose:
• Verify that code modifications haven't
negatively impacted previously working
features.
• Catch regressions early in the
development cycle to minimize rework
and delays.
• Maintain the overall quality and stability of the software.
52
Testing Strategies – Regression Testing

When to Perform:
• After any code changes, bug fixes, or new feature additions.

• Following code integrations from different development


teams.

• As part of the testing process before a new software release.

• Can be automated for efficiency, especially for frequently


modified parts of the codebase.
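An automated regression check of this kind might look like the sketch below; the `discount` function and its recorded cases are invented for illustration:

```python
def discount(price, customer_years):
    """Frequently modified function: the regression target."""
    rate = 0.10 if customer_years >= 5 else 0.05
    return round(price * (1 - rate), 2)

# (input, expected output) pairs recorded from earlier, known-good releases.
REGRESSION_CASES = [
    ((100.0, 1), 95.0),
    ((100.0, 5), 90.0),
    ((19.99, 10), 17.99),
]

def run_regression():
    """Re-run every recorded case; any mismatch is a regression."""
    return [(args, expected, discount(*args))
            for args, expected in REGRESSION_CASES
            if discount(*args) != expected]

failures = run_regression()  # an empty list means no regressions were detected
```

Running this suite after every change to `discount` catches reintroduced bugs before release.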
53
Testing Strategies – Regression Testing

Types of Regression Testing:


• Selective Testing: Focuses on retesting
functionalities directly impacted by the code
changes.
• Full Regression Testing: Re-runs a comprehensive suite of tests covering all major functionalities. This is often used for critical releases or when significant changes have been made.
54
Testing Strategies – Regression Testing

Benefits:
• Improves software quality and reliability.
• Reduces the risk of regressions impacting users.
• Increases developer confidence in the stability of the codebase.
• Saves time and resources by catching regressions early.
Challenges:
• Can be time-consuming, especially for full regression testing.
• Maintaining a large and up-to-date test suite can be a burden.
• Deciding on the appropriate level of regression testing for
different scenarios

55
Unit Testing
• Initially, each program component (module) is tested independently, verifying that the component functions correctly with the types of input identified by studying the component’s design.

• Unit testing is done in a controlled environment with a


predetermined set of data fed into the component to observe
what output actions and data are produced.
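A minimal unit-testing sketch with Python's standard `unittest` module; the `classify_triangle` component and its predetermined inputs are hypothetical:

```python
import io
import unittest

def classify_triangle(a, b, c):
    """Component (module) under test: classify a triangle by side lengths."""
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TestClassifyTriangle(unittest.TestCase):
    """Predetermined input data with known expected outputs."""
    def test_equilateral(self):
        self.assertEqual(classify_triangle(2, 2, 2), "equilateral")
    def test_isosceles(self):
        self.assertEqual(classify_triangle(2, 2, 3), "isosceles")
    def test_invalid(self):
        self.assertEqual(classify_triangle(1, 2, 3), "not a triangle")

# Run the suite programmatically, capturing the runner's output.
suite = unittest.TestLoader().loadTestsFromTestCase(TestClassifyTriangle)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

Each test case feeds a predetermined input into the component and checks the produced output, exactly as the slide describes.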

04/24/2024 56
04/24/2024 57
Unit Testing
• Examining the code: Typically the static testing methods like:

Reviews, Walkthroughs and Inspections are used.


• Proving code correct
• Use formal methods.
• One way to investigate program correctness is to view the
code as a statement of logical flow. Using mathematical
logic, if we can formulate the program as a set of assertions
and theorems, we can show that the truth of the theorems
implies the correctness of the code.
04/24/2024 58
Unit Testing

• Use of this approach forces us to be more


rigorous and precise in specification.

• Much work is involved in setting up and carrying


out the proof.

04/24/2024 59
Testing program components
• To test a component (module), input data and conditions are
chosen to demonstrate an observable behavior of the code.

• A test case is a particular choice of input data to be used in


testing a program.

• Test cases are generated by using either black-box or white-box approaches.

04/24/2024 60
Integration Testing

• When collections of components have been unit-tested, the next


step is ensuring that the interfaces among the components are
defined and handled properly.

• This process of verifying the synergy of system components


against the program Design Specification is called Integration
Testing.

04/24/2024 61
Integration Testing
• Integration is the process of assembling unit-tested modules.
• We need to test the following aspects:
Interfaces: To ensure “interface integrity,” the transfer of data between modules is tested. When data is passed to another module by way of a call, there should not be any loss or corruption of data. The loss or corruption of data can happen due to a mismatch or differences in the number or order of calling and receiving parameters.
Module combinations may produce a different behavior due to combinations
of data that are not exercised during unit testing.
Global data structures, if used, may reveal errors due to unintended usage in
some module.
04/24/2024 62
Integration Strategies

• Depending on design approach, one of the


following integration strategies can be
adopted:
• Big Bang approach
• Incremental approach
 Top-down testing
 Bottom-up testing
 Sandwich testing

04/24/2024 63
consider the following arrangement of
modules:

04/24/2024 64
Big Bang approach

04/24/2024 65
• Though the Big Bang approach seems to be advantageous when we construct independent modules concurrently,

• this approach is quite challenging and risky as we integrate all modules in a single step and test the resulting system.

• Locating interface errors, if any, becomes difficult here.

04/24/2024 66
Incremental approach
• The alternative strategy is an incremental approach, wherein
modules of a system are consolidated with already tested
components of the system.
• In this way, the software is gradually built up, spreading the
integration testing load more evenly through the construction
phase.
• Incremental approach can be implemented in two distinct
ways:
Top-down
Bottom-up.
04/24/2024 67
Top-down Testing

04/24/2024 68
Bottom-up Testing

04/24/2024 69
Top Down Approach

Advantages
• Advantageous if major defects occur toward the top of the program
• An early skeletal program allows demonstrations and boosts confidence
Disadvantages
• Stub modules must be produced
• Test conditions may be impossible, or very difficult, to create
• Observation of test output is more difficult, as only simulated values will be used initially. For the same reason, program correctness can be misleading.

04/24/2024 70
Bottom Up Approach
Advantages
• Advantageous if major defects occur toward the bottom of the program
• Test conditions are easier to create
• Observation of test results is easier (as “live” data is used from the beginning)
Disadvantages
• Driver modules must be produced
• The program as an entity does not exist until the last module is added

04/24/2024 71
System Testing
• Once the system is integrated, the overall functionality is tested
against the Software Requirements Specification (SRS).

• Then, the other non-functional requirements like performance


testing are done to ensure readiness of the system to work
successfully in a customer’s actual working environment. This
step is called System Testing.

04/24/2024 72
Acceptance Testing
• The next step is customer’s validation of the system against
User Requirements Specification .

• The customer, in their working environment, does this exercise of Acceptance Testing, usually with assistance from the developers. Once the system is accepted, it will be installed and will be put to use.

04/24/2024 73
Acceptance Testing
• Acceptance testing is the customer (and user) evaluation
of the system, primarily to determine whether the system
meets their needs and expectations.
• Usually acceptance test is done by customer with
assistance from developers.
• Customers can evaluate the system either by conducting a
benchmark test or by a pilot test.

04/24/2024 74
• In benchmark test, the system performance is evaluated
against test cases that represent typical conditions under
which the system will operate when actually installed.

• A pilot test installs the system on an experimental basis,


and the system is evaluated against everyday working.

04/24/2024 75
Sandwich Testing
• To overcome the limitations and to exploit the advantages of Top-down and Bottom-up testing, sandwich testing is used.
• This system is viewed as three layers – the target layer
in the middle, the levels above the target, and the levels
below the target.
• A top-down approach is used in the top layer and a
bottom-up one in the lower layer.
04/24/2024 76
• Testing converges on the target layer, chosen on the basis
of system characteristics and the structure of the code.

• For example, if the bottom layer contains many general-


purpose utility programs, the target layer (the one above)
will be components using the utilities.

• This approach allows bottom-up testing to verify the


utilities’ correctness at the beginning of testing.

04/24/2024 77
System Testing

• The objective of unit and integration testing was to ensure


that the code implemented the design properly.
• In system testing, we need to ensure that the system does
what the customer wants it to do. Initially the functions
(functional requirements) performed by the system are
tested.
• A function test checks whether the integrated system
performs its functions as specified in the requirements.
04/24/2024 78
 After ensuring that the system performs the intended functions, the performance test is done.
 These non-functional requirements include security, accuracy, speed, and reliability.

04/24/2024 79
Testing Strategies – Others

Performance Testing:
• This strategy tests the software to determine its
performance characteristics such as speed,
scalability, and stability.

• For example, you might test how a website


handles a large number of simultaneous users.

80
Testing Strategies – Others

Security Testing:
• This strategy tests the software to identify
vulnerabilities and ensure it meets security
requirements.

• For instance, you might test a web application


for common security vulnerabilities like SQL
injection or cross-site scripting attacks.

81
White Box Testing

04/24/2024 82
Testing Methods – White Box Testing

• It is also known as clear box testing, glass box testing, transparent box testing, and structural testing.
• It is a method of software testing that tests the
internal structures or workings of an application.
• The tester has access to the source code and uses
this knowledge to design test cases that can verify the
correctness of the software at the code level.
• It involves testing the software’s internal logic, flow,
and structure.
• The tester creates test cases to examine the code
paths and logic flows to ensure they meet the specified
requirements.
83
Testing Methods – White Box Testing

Key Characteristics:
• Testers' Skills: Requires programming
knowledge and understanding of the specific
programming language used in the application
being tested.

• Focus: Examines the internal logic, code


paths, and data structures of the software.

84
Testing Methods – White Box Testing

Benefits:
• Identifies logic errors, code inefficiencies,
and potential security vulnerabilities.
• Ensures all code paths are exercised and
tested.
• More targeted testing based on the code
structure.

85
Testing Methods – White Box Testing

When to Use White-Box Testing:


• When the source code and design documents are readily
available.
• For critical applications where high reliability is
essential.
• When the focus is on testing the internal logic and
structure of the software.
• Often used in conjunction with other testing techniques
like black-box testing for a more comprehensive
approach.
86
Testing Methods – White Box Testing
Techniques:

• Code Coverage: Analyzes how much of the code is


actually executed during testing.
• Path Testing: Tests all possible execution paths through
the code, including conditional branches.
• Data Flow Testing: Focuses on how data flows through
the code and validates outputs for various inputs.
• Mutation Testing: Deliberately introduces errors in the
code to see if the tests can detect them

87
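Mutation testing can be illustrated with a hand-made mutant (a sketch only; real mutation-testing tools generate such mutants automatically):

```python
def is_adult(age):
    """Original code under test."""
    return age >= 18

def is_adult_mutant(age):
    """Hand-made mutant: the boundary operator >= changed to >."""
    return age > 18

def suite_passes(fn):
    """The test suite, parameterised over the implementation it checks."""
    return fn(18) is True and fn(17) is False and fn(30) is True

original_passes = suite_passes(is_adult)           # suite accepts the original
mutant_killed = not suite_passes(is_adult_mutant)  # suite detects (kills) the mutant
```

If the suite had no test at the boundary (age 18), the mutant would survive, revealing a gap in the tests.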
Testing Methods – White Box Testing
Techniques:

• Statement Coverage: In this technique, the aim is to traverse all


statements at least once. Hence, each line of code is tested.
• Branch Coverage: In this technique, test cases are designed so that
each branch from all decision points is traversed at least once.
• Condition Coverage: In this technique, all individual conditions
must be covered.
• Multiple Condition Coverage: In this technique, all the possible
combinations of the possible outcomes of conditions are tested at
least once.
• Basis Path Testing: In this technique, control flow graphs are made from code or a flowchart, and then the cyclomatic complexity is calculated, which defines the number of independent paths, so that a minimal number of test cases can be designed for each independent path.
88
Testing Methods – White Box Testing
Limitations:
• Requires significant time and effort due to the
complexity of analyzing code.

• Relies heavily on the tester's programming skills and


knowledge of the specific codebase.

• May not effectively identify usability issues or user experience problems, which are better suited for black-box testing.

89
Testing Methods – White Box Testing

Statement Coverage
 Aims to ensure that every single line of
code (executable statement) is executed at
least once during testing.
Testing Methods – White Box Testing

 Statement Coverage
def calculate_area(length, width):
    if length < 0 or width < 0:
        return "Invalid dimensions. Length and width must be non-negative."
    else:
        area = length * width
        return area
Testing Methods – White Box Testing

 Statement Coverage
 Test Case 1: calculate_area(2, 3) (executes the else branch, returns 6)
 Test Case 2: calculate_area(-1, 4) (executes the if branch, returns "Invalid dimensions...")
 Together, the two test cases execute every statement at least once.
Testing Methods – White Box Testing

 Branch Coverage
 Focuses on ensuring that each possible outcome
of a conditional statement (branch) is executed at least
once.
 Example: Using the same calculate_area function :
 Test Case 1: (from Statement Coverage) covers the
"valid dimension" branch.
 Test Case 3: calculate_area(2, -5) (Executes the
"invalid dimension" branch, returns "Invalid dimensions...")
Testing Methods – White Box Testing

 Condition Coverage
 Like branch coverage but focuses on
individual conditions within a decision
statement.

 Ensures each condition evaluates to both True and False at least once.
Testing Methods – White Box Testing

 Condition Coverage
 Example - Imagine a function that checks if a number is even or odd:
def is_even(number):
    return number % 2 == 0
• Test Case 1: is_even(4) (condition number % 2 == 0 evaluates to True, returns True)
• Test Case 2: is_even(3) (condition evaluates to False, returns False)
Multiple Condition Coverage
• Extends condition coverage by testing all possible combinations of conditions (when there are multiple conditions) to ensure they produce the expected results.
• Example: let's modify the is_even function to treat zero as a special case:
def is_even(number):
    if number == 0:
        return "Zero is considered even."
    elif number % 2 == 0:
        return True
    else:
        return False
• Test Case 1: is_even(4) covers number % 2 == 0.
• Test Case 3: is_even(0) (covers the number == 0 condition, returns "Zero is considered even.")
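A sketch exercising all outcomes of the extended function; Test Case 2 for an odd input is added here to complete the set (it is implied by the earlier Condition Coverage slide but not listed above):

```python
def is_even(number):
    # Special-case zero, then test the parity condition.
    if number == 0:
        return "Zero is considered even."
    elif number % 2 == 0:
        return True
    else:
        return False

assert is_even(4) is True                        # number % 2 == 0 is True
assert is_even(3) is False                       # number % 2 == 0 is False
assert is_even(0) == "Zero is considered even."  # number == 0 is True
```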
White Box Approach
Basis Path Testing
• Basis Path Testing is a white box testing method where we design test cases to cover every statement, every branch and every predicate (condition) in the code which has been written.
• Thus the method attempts statement coverage, decision coverage and condition coverage.
To perform Basis Path Testing
• Derive a logical complexity measure of the procedural design
 Break the module into blocks delimited by statements that affect the control flow (e.g. statements like return, exit, jump, and conditions)
 Mark these out as nodes in a control flow graph
 Draw connectors (arcs) with arrow heads to mark the flow of logic
 Identify the number of regions (the Cyclomatic Number), which is equivalent to McCabe’s number
Contd.,
• Define a basis set of execution paths
 Determine independent paths
• Derive test cases to exercise (cover) the basis set
Flow graph notations
McCabe’s Number (Cyclomatic Complexity)
• Gives a quantitative measure of the logical complexity of the module
• Defines the number of independent paths
• Provides an upper bound to the number of tests that must be conducted to ensure that all the statements are executed at least once.
The complexity of a flow graph G, V(G), is computed in one of three ways:
i. V(G) = number of regions of G
ii. V(G) = E - N + 2P
(E: number of edges, N: number of nodes, P: number of disconnected parts; for a single connected graph this is E - N + 2)
iii. V(G) = P + 1
(P: number of predicate nodes in G, i.e. the number of conditions in the code)
Calculating the McCabe Number
Cyclomatic complexity is derived from the control flow graph of a program as follows:
Cyclomatic complexity V(G) = E - N + 2P
Where:
E = number of edges (transfers of control)
N = number of nodes (a sequential group of statements containing only one transfer of control)
P = number of disconnected parts of the flow graph (e.g. a calling program and a subroutine)
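The formula V(G) = E - N + 2P is simple enough to check mechanically; a minimal sketch (the function name is ours), using edge/node counts from this module's worked examples:

```python
def cyclomatic_complexity(edges, nodes, parts=1):
    # V(G) = E - N + 2P
    return edges - nodes + 2 * parts

# A flow graph with 6 edges, 6 nodes and one connected part:
assert cyclomatic_complexity(6, 6) == 2
# A flow graph with 9 edges and 7 nodes (one part):
assert cyclomatic_complexity(9, 7) == 4
```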
• McCabe’s Number = number of regions (count the mutually exclusive closed regions, plus the whole outer space as one region) = 2 in the above graph
• Two other formulae as given below also define the above measure:
• McCabe’s Number = E - N + 2 ( = 6 - 6 + 2 = 2 for the above graph)
• McCabe’s Number = P + 1 ( = 1 + 1 = 2 for the above graph)
• Please note that if the number of conditions is more than one in a single control structure, each condition needs to be separately marked as a node.
• When McCabe’s number is 2, it indicates that there are two linearly independent paths in the code, i.e., two different ways in which the graph can be traversed from the first node to the last node.
• The independent paths in the above graph are:
i) 1-2-3-5-6
ii) 1-2-4-5-6
• The last step is to write test cases corresponding to the listed paths.
• This would mean giving the input conditions in such a way that the above paths are traced by the control of execution.

Input Actual
Path Condition Expected Result Result Remarks

value of ‘a’ >


i) value of ‘b’ Increment ‘a’ by 1

value of ‘a’ <=


ii) value of ‘b’ Increment ‘b’ by 1
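The program behind this flow graph is not shown on the slide; assuming it is the usual single if/else comparing ‘a’ and ‘b’, the two path tests can be sketched as:

```python
def adjust(a, b):
    # Single decision: one path increments a, the other increments b.
    if a > b:
        a = a + 1
    else:
        b = b + 1
    return a, b

# Path i): a > b, so 'a' is incremented (1-2-3-5-6).
assert adjust(5, 3) == (6, 3)
# Path ii): a <= b, so 'b' is incremented (1-2-4-5-6).
assert adjust(2, 4) == (2, 5)
```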
Example 2
Prepare a Test Case Table

Path | Input Condition | Expected Result | Actual Result | Remarks
i)   |                 |                 |               |
ii)  |                 |                 |               |
Ex 1 : Compute the Cyclomatic Complexity of the given program

Cyclomatic Complexity = E - N + 2P = 9 - 7 + 2*1 = 4

OR

V(G) = P + 1 (P: number of predicate nodes in G, i.e. the number of conditions in the code)
Cyclomatic Complexity = 3 + 1 (the predicate nodes are nodes 1, 2 and 3) = 4
Ex 2 : Compute the Cyclomatic Complexity of the given program

begin
    int a, b, p, temp;
    float c;
    input(a, b);
    if (b < 0)
        temp = -b;
    else
        temp = b;
    c = 1;
    while (temp != 0)
    {
        c = c * p;
        temp = temp - 1;
    }
    if (b < 0)
        c = 1 / c;
    output(c);
end

Procedure:
• First draw the control flow graph for the given code
• Compute the CC
Control flow graph

Cyclomatic Complexity = E - N + 2P = 16 - 14 + 2*1 = 4
Black Box Testing

Black Box Approaches:
 Equivalence Partitioning
 Boundary Value Analysis
 Cause Effect Analysis
 Cause Effect Graphing
 Error Guessing
Equivalence Partitioning
• Equivalence partitioning is partitioning the input domain of a system into a finite number of equivalence classes.
• To put this in simpler words, since it is practically infeasible to do complete testing, the next best alternative is to check whether the program extends similar behavior or treatment to a certain group of inputs.
• If such a group of values can be found in the input domain, treat them together as one equivalence class and test one representative from it.
Example
Consider a program which takes “Salary” as input with
values 12000...37000 in the valid range.
The program calculates tax as follows:
- Salary up to Rs. 15000 – No Tax
- Salary between 15001 and 25000 – Tax is 18 % of Salary
- Salary above 25000 – Tax is 20% of Salary
Accordingly, the valid input domain can be
divided into three valid equivalent classes as
below:
c1 : values in the range 12000...15000
c2: values in the range 15001...25000
c3: values > 25000
• However, it is not sufficient that we test only valid test
cases.
• We need to test the program with invalid data also as the
users of the program may give invalid inputs,
intentionally or unintentionally.
• It is easy to identify an invalid class
“c4: values < 12000”.
Equivalent classes identified
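One representative value per class is enough to exercise all four classes; a minimal sketch of the tax rule above (the function name is ours, and handling of salaries above 37000 is left out, as on the slide):

```python
def compute_tax(salary):
    # Valid input range is 12000...37000.
    if salary < 12000:
        return "Invalid salary"    # class c4: below the valid range
    if salary <= 15000:
        return 0                   # class c1: no tax
    if salary <= 25000:
        return salary * 0.18       # class c2: 18% of salary
    return salary * 0.20           # class c3: 20% of salary

assert compute_tax(13000) == 0                  # c1 representative
assert compute_tax(20000) == 3600.0             # c2 representative
assert compute_tax(30000) == 6000.0             # c3 representative
assert compute_tax(11000) == "Invalid salary"   # c4 representative
```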
Summary
To design test cases using equivalence
partitioning, for a range of valid input values
identify
• one valid value within the range
• one invalid value below the range and
• one invalid value above the range
Example
• Similarly, to design test cases for a specific set of values, identify
• one valid case for each value belonging to the set
• one invalid value
• Test Cases for Types of Account (Savings, Current) will be
• Savings, Current (valid cases)
• Overdraft (invalid case)
• It may be noted that we need fewer test cases if some test
cases can cover more than one equivalent class.
II. Boundary Value Analysis
• Even though the definition of equivalence
partitioning states that testing one value from a class
is equivalent to testing any other value from that
class, we need to look at the boundaries of
equivalent classes more closely. This is so since
boundaries are more error prone.
II. Boundary Value Analysis
To design test cases using boundary value analysis, for a
range of values,
• Two valid cases at both the ends
• Two invalid cases just beyond the range limits
For the salary example, the test cases using boundary value analysis are 12000 and 37000 (valid, at the range limits) and 11999 and 37001 (invalid, just beyond the limits).
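These four cases can be generated mechanically for any numeric range; a minimal sketch (the helper name is ours):

```python
def boundary_cases(low, high):
    # Two valid cases at the ends of the range,
    # two invalid cases just beyond the limits.
    return {
        "valid": [low, high],
        "invalid": [low - 1, high + 1],
    }

# For the salary range 12000...37000:
cases = boundary_cases(12000, 37000)
assert cases["valid"] == [12000, 37000]
assert cases["invalid"] == [11999, 37001]
```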
III. Cause Effect Analysis
• The main drawback of the previous two techniques is that they do not explore combinations of input conditions.
• Cause effect analysis is an approach for studying the specifications carefully, identifying the combinations of input conditions (causes) and their effects in the form of a table, and designing test cases accordingly.
• It is suitable for applications in which combinations of input conditions are few and readily visible.
IV. Cause Effect Graphing
• This is a rigorous approach, recommended for complex systems only.
• In such systems the number of inputs and the number of equivalence classes for each input could be many, and hence the number of input combinations is usually astronomical.
• Hence we need a systematic approach to select a subset of these input conditions.
Guidelines for Graphing
• Divide specifications into workable pieces as it may be practically
difficult to work on large specifications.
• Identify the causes and their effects. A cause is an input condition or an
equivalence class of input conditions. An effect is an output condition or
a system transformation.
• Link causes and effects in a Boolean graph which is the cause-effect
graph.
• Make decision tables based on the graph. This is done by having one
row each for a node in the graph. The number of columns will depend on
the number of different combinations of input conditions which can be
made.
• Convert the columns in the decision table into test cases.
Cause Effect Graphing Example
Consider the following specification:
A program accepts a Transaction Code of 3 characters as input.
For a valid input the following must be true:
1st character (denoting issue or receipt): + for issue, - for receipt
2nd character: a digit
3rd character: a digit
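The three causes in this specification can each be checked independently; a minimal sketch of a validator (the function name is ours, not from the slide):

```python
def valid_transaction_code(code):
    # Cause 1: exactly 3 characters, first is '+' (issue) or '-' (receipt).
    # Causes 2 and 3: second and third characters are digits.
    return (len(code) == 3
            and code[0] in "+-"
            and code[1].isdigit()
            and code[2].isdigit())

assert valid_transaction_code("+12") is True
assert valid_transaction_code("-05") is True
assert valid_transaction_code("*12") is False   # invalid first character
assert valid_transaction_code("+a2") is False   # second character not a digit
```

Each assert corresponds to one cause (or its violation), which is exactly the mapping a cause-effect decision table makes explicit.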
V. Error Guessing
• Error guessing is a supplementary technique where test case design is based on the tester's intuition and experience.
• There is no formal procedure. However, a checklist of common errors could be helpful here.
When To Stop Testing?
The question arises as testing is never complete and we cannot scientifically prove that a software system does not contain any more errors.
Common Criteria Practiced
• Stop when the scheduled time for testing expires
• Stop when all the test cases execute without detecting errors
Both criteria are meaningless and counterproductive, as the first can be satisfied by doing absolutely nothing.