Software Testing Unit 1

SOFTWARE TESTING

UNIT 1
Basics of Software Testing and Testing
Methods

Presented by
M S RATHOD
LIF, GPA
COURSE OUTCOMES (COs)
 At the end of this course, student will be able to:
 1) Apply various software testing methods.
 2) Prepare test cases for different types and levels
of testing.
 3) Prepare test plan for an application.
 4) Identify bugs to create defect report of given
application.
 5) Test software for performance measures using
automated testing tools.
 6) Apply different testing tools for a given
application.
Basics of Software Testing and
Testing Methods
Identify errors and bugs in the given
program.
Prepare test cases for the given application.
Describe the Entry and Exit Criteria for
the given test application.
Validate the given application using the V-model in relation to quality assurance.
Describe features of the given testing
method.
What is Software Testing?

 Software testing is the process of identifying the correctness of software by considering all of its attributes (reliability, scalability, portability, usability, re-usability) and evaluating the execution of the software components to find bugs, errors or defects.
 It provides an objective view of the software and gives assurance of its fitness for use.
 It involves testing all components under the required conditions to confirm whether they satisfy the specified requirements. The process also provides the client with information about the quality of the software.
Objectives of Testing
 Verification:- Allows testers to confirm that the software meets the various business and technical requirements stated by the client before the inception of the whole project.
 Validation:- Confirms that the software performs as expected and as per the requirements of the clients.
 Defects:- Finding defects in the software prevents its failure or crash during implementation or go-live of the project.
 Providing Information:- During the process of software testing, testers can accumulate a variety of information related to the software and the steps taken to prevent its failure.
Objectives of Testing
 Quality Analysis:- Testing helps improve the
quality of the software by constantly measuring and
verifying its design and coding.
 Compatibility:- It helps validate application’s
compatibility with the implementation environment,
various devices, Operating Systems, user requirements,
among other things.
 Verifying Performance & Functionality:- It ensures
that the software has superior performance and
functionality.
Software Development Life Cycle
(SDLC)
Some Terminologies
 Bug: An informal name given to a defect. The test engineer submits (raises) the bug.
 Defect: The difference between the actual outcome and the expected output. Testers identify the defect, and it is resolved by the developer in the development phase.
 Error: A mistake made in the code, because of which the code cannot be executed or compiled. Developers and automation test engineers raise the error.
 Fault: A state that causes the software to fail to accomplish its essential function. Faults are caused by human mistakes.
 Failure: When the software has many defects, it leads to (or causes) failure. Failures are found by the manual test engineer during the development cycle.
Test, Test Case and Test Suite
 Test and test case terms are synonyms and may be used
interchangeably.
 A test case consists of inputs given to the program and
its expected outputs.
 Every test case will have a unique identification number.
 The set of test cases is called a test suite.
 All test suites should be preserved as we preserve source
code and other documents.
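
As an illustration of these terms, the sketch below (hypothetical Python, with made-up test case IDs and an assumed add() function under test) represents each test case as a unique ID, its inputs, and its expected output, and groups the test cases into a test suite.

def add(a, b):
    # The (assumed) unit under test.
    return a + b

# A test suite is simply a collection of test cases.
test_suite = [
    {"id": "TC_001", "inputs": (2, 3),  "expected": 5},
    {"id": "TC_002", "inputs": (-1, 1), "expected": 0},
]

for tc in test_suite:
    actual = add(*tc["inputs"])
    status = "PASS" if actual == tc["expected"] else "FAIL"
    print(tc["id"], status)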
Software Testing Life cycle

What is Entry and Exit Criteria in STLC?
Entry Criteria: Entry criteria specify the prerequisite items that must be completed before testing can begin.
Exit Criteria: Exit criteria define the items that must be completed before testing can be concluded.
Software Testing Life cycle
 Requirement Analysis:- The tester analyses the requirement document of the SDLC to examine the requirements stated by the client. After examining the requirements, the tester prepares a test plan to check whether the software meets the requirements.

Entry Criteria: For the planning of the test plan, the requirement specification, application architecture document and well-defined acceptance criteria should be available.
Activities: Prepare the list of all requirements and queries, and get them resolved by the Technical Manager/Lead, System Architect, Business Analyst and Client. Make a list of all types of tests (performance, functional and security) to be performed. Make a list of test environment details, which should contain all the necessary tools to execute the test cases.
Exit Criteria: A list of all the necessary tests for the testable requirements and the test environment details.
Software Testing Life cycle
 Test Plan Creation:- The tester determines the estimated effort and cost of the entire project. Test case execution can be started only after the successful completion of test plan creation.

Entry Criteria: Requirement document.
Activities: Define the objective and the scope of the software. List down the methods involved in testing. Give an overview of the testing process. Settle the testing environment. Prepare the test schedules and control procedures. Determine roles and responsibilities. List down the testing deliverables and define risks, if any.
Exit Criteria: The test strategy document and the testing effort estimation documents are the deliverables of this phase.
Software Testing Life cycle
 Environment Setup:- Setup of the testing environment is an independent activity and can be started along with test case development.

Entry Criteria: Test strategy and test plan document, test case document, testing data.
Activities: Prepare the list of software and hardware by analyzing the requirement specification. After the setup of the test environment, execute the smoke test cases to check the readiness of the test environment.
Exit Criteria: Successful smoke test; the environment setup must work as per plan and checklist.
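
For instance, a smoke test that checks environment readiness might simply verify that the application under test responds at its base URL. The sketch below is a minimal, hypothetical Python example; the URL and the HTTP 200 expectation are assumptions, not part of the slides.

import urllib.request

BASE_URL = "http://testenv.example.com/health"   # hypothetical test-environment URL

def smoke_test():
    # The environment is considered ready only if the application answers with HTTP 200.
    with urllib.request.urlopen(BASE_URL, timeout=5) as response:
        assert response.status == 200, "Test environment is not ready"

if __name__ == "__main__":
    smoke_test()
    print("Smoke test passed: environment is ready")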
Software Testing Life cycle
 Test Case Execution:- The testing team starts the test case development and execution activity. The testing team writes down the detailed test cases and also prepares the test data, if required.

Entry Criteria: Requirement document.
Activities: Creation of test cases. Execution of test cases. Mapping of test cases to requirements.
Exit Criteria: Test execution result; a list of functions with a detailed explanation of defects.
Software Testing Life cycle
 Defect Logging:- Testers and developers evaluate the completion criteria of the software based on test coverage, quality, time consumption, cost, and critical business objectives. This phase determines the characteristics and drawbacks of the software. Test cases and bug reports are analyzed in depth to detect the type of each defect and its severity.

Entry Criteria: Test case execution report, defect report.
Activities: Evaluate the completion criteria of the software based on test coverage, quality, time consumption, cost, and critical business objectives. Defect logging analysis finds the defect distribution by categorizing defects by type and severity.
Exit Criteria: Closure report, test metrics.
Software Testing Life cycle
 Test Cycle Closure:- The test cycle closure report includes all the documentation related to software design, development, testing results, and defect reports.

Entry Criteria: All documents and reports related to the software.
Activities: Prepare test metrics based on the parameters of the last phase. Document the learning from the project. Prepare the test closure report. Report the quality of the work product to the customer qualitatively and quantitatively. Analyze the test results to find the defect distribution by type and severity.
Exit Criteria: Test closure report.
V-Model
 The V-Model is also referred to as the Verification and Validation Model.
 In this model, each phase of the SDLC must be completed before the next phase starts.
 It follows a sequential design process, the same as the waterfall model.
 Testing of the product is planned in parallel with the corresponding stage of development.
V-Model
 Verification
 It involves a static analysis method (review) done without executing code.
 It is the process of evaluating the product development process to find out whether the specified requirements are met.
 Validation
 It involves dynamic analysis methods (functional, non-functional); testing is done by executing the code.
 Validation is the process of determining whether the software meets the customer's expectations and requirements.
 So the V-Model contains the Verification phases on one side and the Validation phases on the other side. The Verification and Validation processes are joined by the coding phase in a V shape; thus it is known as the V-Model.
V-Model
(Diagram: the V-shaped arrangement of verification phases, the coding phase, and validation phases)
V-Model
 The various phases of the Verification side of the V-Model are:
 Business Requirement Analysis: This is the first step, where the product requirements are understood from the customer's point of view.
 System Design: In this stage, system engineers analyze and interpret the business logic of the proposed system by studying the user requirements document.
 Architecture Design: The baseline for selecting the architecture is that it should cover everything required, which typically consists of the list of modules, a brief functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. Integration test planning is carried out in this phase.
 Module Design: In the module design phase, the system is broken down into small modules. The detailed design of the modules is specified, which is known as Low-Level Design.
 Coding Phase: After designing, the coding phase is started. Based on the requirements, a suitable programming language is decided.
V-Model
 The various phases of the Validation side of the V-Model are:
 Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the module design phase.
 These UTPs are executed to eliminate errors at the code or unit level.
 A unit is the smallest entity which can exist independently, e.g., a program module.
 Unit testing verifies that the smallest entity can function correctly when isolated from the rest of the code/units.
 Integration Testing: Integration Test Plans are developed during the Architecture Design phase. These tests verify that groups created and tested independently can coexist and communicate among themselves.
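
To make the idea of a unit test concrete, here is a minimal sketch using Python's built-in unittest framework; the is_even() function is a made-up unit, not something from the slides.

import unittest

def is_even(n):
    # The smallest unit under test (an assumed example function).
    return n % 2 == 0

class TestIsEven(unittest.TestCase):
    def test_even_number(self):
        self.assertTrue(is_even(4))

    def test_odd_number(self):
        self.assertFalse(is_even(7))

if __name__ == "__main__":
    unittest.main()   # the unit is verified in isolation from the rest of the code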
V-Model
 System Testing: System Test Plans are developed during the System Design phase.
 Unlike Unit and Integration Test Plans, System Test Plans are composed by the client's business team.
 System testing ensures that the expectations from the application developed are met.
 Acceptance Testing: Acceptance testing is related to the business requirement analysis part.
 It includes testing the software product in the user environment. Acceptance tests reveal compatibility problems with the other systems available within the user environment.
 It also discovers non-functional problems, such as load and performance defects, in the real user environment.
V-Model
When to use the V-Model?
 When the requirements are well defined and unambiguous.
 The V-shaped model should be used for small to medium-sized projects where requirements are clearly defined and fixed.
 The V-shaped model should be chosen when ample technical resources with essential technical expertise are available.
V-Model
Advantages (Pros) of the V-Model:
 Easy to understand.
 Testing activities like planning and test design happen well before coding.
 This saves a lot of time, hence a higher chance of success over the waterfall model.
 Avoids the downward flow of defects.
 Works well for small projects where requirements are easily understood.
V-Model
Disadvantages (Cons) of the V-Model:
 Very rigid and the least flexible.
 Not good for complex projects.
 Software is developed during the implementation stage, so no early prototypes of the software are produced.
 If any changes happen midway, then the test documents, along with the requirement documents, have to be updated.
Quality Assurance and Quality Control

 Quality Assurance, popularly known as QA Testing, is defined as an activity to ensure that an organization is providing the best possible product or service to customers.
 Quality Control in software testing is a systematic set of processes used to ensure the quality of software products or services. The main purpose of the quality control process is to ensure that the software product meets the actual requirements by testing and reviewing its functional and non-functional requirements. Quality Control is popularly abbreviated as QC.
Methods of Testing: Static and
Dynamic Testing

 Static testing checks the application without executing the code. It is a verification process. Some of the essential activities done under static testing are business requirement review, design review, code walkthroughs, and test documentation review.
 Static testing is performed in the white box testing phase, where the programmer checks every line of the code before handing it over to the test engineer.
 Static testing can be done manually or with the help of tools to improve the quality of the application by finding errors at an early stage of development; that is why it is also called the verification process.
 Document reviews, high- and low-level design reviews and code walkthroughs take place in the verification process.
Methods of Testing: Static and
Dynamic Testing

 Dynamic testing is done when the code is executed in the run-time environment. It is a validation process where functional testing (unit, integration, and system testing) and non-functional testing (user acceptance testing) are performed.
 We perform dynamic testing to check whether the application or software works correctly during and after the installation of the application, without any error.
Difference
 Static testing is performed in the early stages of software development, while dynamic testing is performed at a later stage of software development.
 In static testing the code is not executed; in dynamic testing the code is executed.
 Static testing prevents defects; dynamic testing finds and fixes defects.
 Static testing is performed before code deployment; dynamic testing is performed after code deployment.
 Static testing is less costly; dynamic testing is highly costly.
 Static testing involves a checklist for the testing process; dynamic testing involves test cases for the testing process.
 Static testing includes walkthroughs, code reviews, inspections, etc.; dynamic testing involves functional and non-functional testing.
 Static testing generally takes a shorter time; dynamic testing usually takes a longer time as it involves running several test cases.
 Static testing can discover a variety of bugs; dynamic testing exposes only the bugs that are explorable through execution, and hence discovers a limited type of bugs.
 Static testing may achieve 100% statement coverage in comparably less time, while dynamic testing typically achieves less than 50% statement coverage.
Black Box Testing
 A method in which the functionalities of software applications are tested without knowledge of the internal code structure, implementation details and internal paths.
 It mainly focuses on the input and output of software applications.
 It is entirely based on software requirements and specifications.
 Also known as Behavioral Testing.
Black Box Testing
 Initially, the requirements and specifications of the system
are examined.
 Tester chooses valid inputs (positive test scenario) to check whether the SUT (Software Under Test) processes them correctly. Also, some invalid inputs (negative test scenario) are chosen to verify that the SUT is able to detect them.
 Tester determines expected outputs for all those inputs.
 Software tester constructs test cases with the selected inputs.
 The test cases are executed.
 Software tester compares the actual outputs with the
expected outputs.
 Defects if any are fixed and re-tested.
Black Box Testing/Methods
Equivalence Partitioning and Boundary Value Analysis
Boundary Value Analysis
 Boundary value analysis is the process of testing at the extreme ends, or boundaries, of the partitions of the input values.
 These extreme ends, like Start-End, Lower-Upper, Maximum-Minimum and Just Inside-Just Outside values, are called boundary values, and the testing is called "boundary testing".
Boundary Analysis
 The basic idea in normal boundary value testing is to
select input variable values at their:
 Minimum
 Just above the minimum
 A nominal value
 Just below the maximum
 Maximum
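
These five values can be generated mechanically for any closed integer range; the helper below is a small sketch of that idea (the function name and the example range are illustrative, not from the slides).

def normal_boundary_values(minimum, maximum):
    # Return the five classic boundary-value test inputs for a closed integer range.
    nominal = (minimum + maximum) // 2
    return [minimum, minimum + 1, nominal, maximum - 1, maximum]

print(normal_boundary_values(1, 10))   # -> [1, 2, 5, 9, 10]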
Equivalence Partitioning
 Equivalence Class Partitioning is a type of black box testing technique which can be applied to all levels of software testing, like unit, integration, system, etc.
 In this technique, input data units are divided into equivalent partitions that can be used to derive test cases, which reduces the time required for testing because of the small number of test cases.
 It divides the input data of the software into different equivalence data classes.
 You can apply this technique where there is a range in the input field.
Example 1: Equivalence and
Boundary Value
 Let's consider the behavior of the Order Pizza text box below.
 Pizza values 1 to 10 are considered valid. A success message is shown.
 Values 11 to 99 are considered invalid for the order, and an error message will appear: "Only 10 Pizza can be ordered".
Here are the test conditions:
 Any number greater than 10 entered in the Order Pizza field (say 11) is considered invalid.
 Any number less than 1, that is 0 or below, is considered invalid.
 Numbers 1 to 10 are considered valid.
 Any 3-digit number, say -100, is invalid.
Example 1: Equivalence and
Boundary Value
 We divide the possible values of the pizza order into groups or sets, as shown below, where the system behavior can be considered the same.
Example 1: Equivalence and
Boundary Value
 The divided sets are called Equivalence Partitions or Equivalence Classes.
 Then we pick only one value from each partition for testing.
 The hypothesis behind this technique is that if one condition/value in a partition passes, all others will also pass. Likewise, if one condition in a partition fails, all other conditions in that partition will fail.
Example 1: Equivalence and
Boundary Value
 Boundary Value Analysis:- In boundary value analysis, you test the boundaries between equivalence partitions.
 In our earlier equivalence partitioning example, instead of checking one value for each partition, you check the values at the partition boundaries, like 0, 1, 10, 11 and so on. As you may observe, you test values at both valid and invalid boundaries. Boundary value analysis is also called range checking.
 Equivalence partitioning and boundary value analysis (BVA) are closely related and can be used together at all levels of testing.
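
Putting the two techniques together for the pizza-order field, a sketch like the one below picks one representative value per partition and then adds the boundary values 0, 1, 10 and 11; validate_order() is an assumed implementation of the rule described above.

def validate_order(quantity):
    # Assumed business rule from the example: 1 to 10 pizzas is a valid order.
    return 1 <= quantity <= 10

# Equivalence partitioning: one representative value per partition.
assert validate_order(5) is True       # valid partition 1-10
assert validate_order(50) is False     # invalid partition 11-99
assert validate_order(-100) is False   # invalid partition below 1

# Boundary value analysis: values at the edges of the valid partition.
for value, expected in [(0, False), (1, True), (10, True), (11, False)]:
    assert validate_order(value) is expected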
Example 2: Equivalence Partitioning
 The following password field accepts a minimum of 6 characters and a maximum of 10 characters.
 That means results for values in the partitions 0-5, 6-10 and 11-14 should be equivalent.

Test Scenarios:
 Test Scenario 1: Enter 0 to 5 characters in the password field. Expected outcome: the system should not accept.
 Test Scenario 2: Enter 6 to 10 characters in the password field. Expected outcome: the system should accept.
 Test Scenario 3: Enter 11 to 14 characters in the password field. Expected outcome: the system should not accept.
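
The three scenarios above translate directly into a parametrized test; the sketch below assumes a simple accept_password() function that enforces the 6-10 character rule.

import pytest

def accept_password(password):
    # Assumed rule from the example: 6 to 10 characters are accepted.
    return 6 <= len(password) <= 10

@pytest.mark.parametrize("password, expected", [
    ("abc",          False),   # partition 0-5 characters -> rejected
    ("abcdefgh",     True),    # partition 6-10 characters -> accepted
    ("abcdefghijkl", False),   # partition 11-14 characters -> rejected
])
def test_password_partitions(password, expected):
    assert accept_password(password) is expected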
Example 3: Input box should accept
the numbers 1 to 10
 Here we will see the boundary value test cases.

 Boundary value = 0: the system should NOT accept.
 Boundary value = 1: the system should accept.
 Boundary value = 2: the system should accept.
 Boundary value = 9: the system should accept.
 Boundary value = 10: the system should accept.
 Boundary value = 11: the system should NOT accept.
Why Equivalence & Boundary
Analysis Testing
 This testing is used to reduce a very large number of test cases to manageable chunks.
 It gives very clear guidelines for determining test cases without compromising the effectiveness of testing.
 It is appropriate for calculation-intensive applications with a large number of variables/inputs.
Summary
 Boundary analysis testing is used when it is practically impossible to test a large pool of test cases individually.
 Two techniques are used: boundary value analysis and equivalence partitioning.
 In equivalence partitioning, you first divide the set of test conditions into partitions that can be considered the same.
 In boundary value analysis, you then test the boundaries between the equivalence partitions.
 It is appropriate for calculation-intensive applications with variables that represent physical quantities.
White Box Testing
 A software testing technique in which the internal structure, design and coding of the software are tested to verify the flow from input to output and to improve design, usability and security.
 In white box testing, the code is visible to testers, so it is also called clear box testing, open box testing, transparent box testing, code-based testing and glass box testing.
 The clear box or white box name symbolizes the ability to see through the software's outer shell (or "box") into its inner workings.
What do you verify in White Box
Testing?
Internal security holes
Broken or poorly structured paths in the
coding processes
The flow of specific inputs through the
code
Expected output
The functionality of conditional loops
Testing of each statement, object, and
function on an individual basis.
White Box Testing
One of the basic goals of white box testing
is to verify a working flow for an
application.
It involves testing a series of predefined
inputs against expected or desired outputs
so that when a specific input does not result
in the expected output, you have
encountered a bug.
Example
Consider the following piece of code:

#include <stdio.h>

void printme(int a, int b) {
    int result = a + b;                   /* add the two inputs */
    if (result > 0)
        printf("Positive %d\n", result);  /* branch taken when the sum is positive */
    else
        printf("Negative %d\n", result);  /* branch taken when the sum is zero or negative */
}
Example
 The goal of white box testing in software engineering is to verify all the decision branches, loops and statements in the code.
 To exercise the statements in the white box testing example above, the white box test cases would be:
 A = 1, B = 1 (the sum is positive, so the "Positive" branch is executed)
 A = -1, B = -3 (the sum is negative, so the "Negative" branch is executed)
White Box Testing Techniques
Technical Review
 A static white-box testing technique which is conducted to spot defects early in the life cycle that cannot be detected by black box testing techniques.
 Technical reviews are documented and use a defect detection process that has peers and technical specialists as part of the review process.
 The review process doesn't involve management participation.
 It is usually led by a trained moderator who is NOT the author.
 A report is prepared with the list of issues that need to be addressed.
White Box Testing Techniques
Walkthrough
 A walkthrough is a method of conducting an informal group/individual review. In a walkthrough, the author describes and explains the work product in an informal meeting to peers or a supervisor to get feedback. Here, the validity of the proposed solution for the work product is checked.
 It is cheaper to make changes when the design is on paper rather than at the time of conversion.
 A walkthrough is a static method of quality assurance. Walkthroughs are informal meetings, but with a purpose.
White Box Testing Techniques
Inspection
 An inspection is a formal, rigorous, in-depth group review designed to identify problems as close to their point of origin as possible. Inspections improve the reliability, availability, and maintainability of a software product.
 Anything readable that is produced during software development can be inspected. Inspections can be combined with structured, systematic testing to provide a powerful tool for creating defect-free programs.
 Inspection activity follows a specified process, and participants play well-defined roles.
 An inspection team consists of three to eight members who play the roles of moderator, author, reader, recorder and inspector.
White Box Testing Techniques
A major White box testing technique is Code
Coverage analysis.
Code Coverage analysis eliminates gaps in a
Test Case suite.
It identifies areas of a program that are not
exercised by a set of test cases.
 Once gaps are identified, you create test
cases to verify untested parts of the code,
thereby increasing the quality of the software
product
White Box Testing Techniques
 Below are a few coverage analysis techniques a white box tester can use:
 Statement Coverage:- This technique requires every possible statement in the code to be tested at least once during the testing process.
 Branch Coverage:- This technique checks every possible path (if-else and other conditional branches) of a software application.
 Apart from the above, there are numerous coverage types, such as condition coverage, multiple condition coverage, path coverage, function coverage, etc.
 Each technique has its own merits and attempts to test (cover) all parts of the software code.
 Using statement and branch coverage you generally attain 80-90% code coverage, which is sufficient.
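
The difference between statement and branch coverage can be seen on a tiny function: a single test can execute every statement yet still miss a branch. The discount() function and the test values below are illustrative assumptions, not taken from the slides.

def discount(price, is_member):
    total = price
    if is_member:                  # decision point: True and False are separate branches
        total = price * 0.9
    return total

# Statement coverage: this single call executes every statement in the function ...
assert discount(100, True) == 90.0

# ... but full branch coverage also needs the False outcome of the decision:
assert discount(100, False) == 100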
Cyclomatic Complexity
 The cyclomatic complexity of a program is the quantitative measure of the number of linearly independent paths in it.
 It is a software metric used to indicate the complexity of a program.
 It is computed using the Control Flow Graph (CFG) of the program.
 The nodes in the graph indicate the smallest groups of commands of a program, and a directed edge connects two nodes if the second command might immediately follow the first command.
Cyclomatic Complexity
 For example, if the source code contains no control flow statements, then its cyclomatic complexity will be 1 and the source code contains a single path. Similarly, if the source code contains a single if condition, the cyclomatic complexity will be 2, because there will be two paths: one for true and the other for false.
Example
A = 10
IF B > C THEN
    A = B
ELSE
    A = C
ENDIF
Print A
Print B
Print C
Example
 Mathematically, for a structured program, the control flow graph is a directed graph in which an edge joins two basic blocks of the program if control may pass from the first to the second.
 The cyclomatic complexity M is then defined as:

M = E - N + 2P

where
E = the number of edges in the control flow graph,
N = the number of nodes in the control flow graph,
P = the number of connected components.
Example
 The cyclomatic complexity of the above code is calculated from its control flow graph. The graph has seven nodes and seven edges; hence the cyclomatic complexity is 7 - 7 + 2 = 2.
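
As a small sketch, the control flow graph of the IF/ELSE example above can be written down as node and edge lists and the formula applied directly; the node labels are names I have assumed for the statements.

# One node per statement/decision in the example program.
nodes = ["A=10", "IF B>C", "A=B", "A=C", "Print A", "Print B", "Print C"]

# Directed edges of the control flow graph.
edges = [
    ("A=10", "IF B>C"),
    ("IF B>C", "A=B"),      # true branch
    ("IF B>C", "A=C"),      # false branch
    ("A=B", "Print A"),
    ("A=C", "Print A"),
    ("Print A", "Print B"),
    ("Print B", "Print C"),
]

E, N, P = len(edges), len(nodes), 1                  # one connected component
print("Cyclomatic complexity M =", E - N + 2 * P)    # 7 - 7 + 2 = 2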
Use of Cyclomatic Complexity
 Determining the independent path executions has proven to be very helpful for developers and testers.
 It can ensure that every path has been tested at least once.
 It thus helps to focus more on the uncovered paths.
 Code coverage can be improved.
 The risk associated with the program can be evaluated.
 Using these metrics early in the program helps in reducing the risks.
