
Chapter 01

Basics Of Software
Testing
Overview
• Software Quality, Definition of Software Testing,
Role of Testing
• Failure, Error, Fault, Defect, Bug Terminology
• Objectives of Testing
• Test Case
• When to Start and Stop Testing of Software (Entry
and Exit Criteria)
• Skills for Software Tester
• Quality Assurance, Quality Control, Verification
and Validation, V Model
Quality
• The concept and vocabulary of quality are elusive. Different people interpret quality differently, and few can define quality in measurable terms that can be operationalized.
• The degree of excellence of something, often a high degree of it.
• Quality often refers to how good or bad something is.
• A characteristic or feature of someone or something.
• When asked what differentiates their product or service, a banker will answer “service,” a health care worker “quality health care,” a hotel restaurant employee “customer satisfaction,” and a manufacturer simply “quality product.” When pressed to provide a specific definition and measurement, few can do so.
• Harvard professor David Garvin, in his
book Managing Quality, summarized five
principal approaches to defining quality:
transcendent, product based, user based,
manufacturing based, and value based. Let’s
discuss each one of them:
1. Transcendental View of Quality: Those who hold the transcendental view would say, “I can’t define it, but I know it when I see it.”
2. Product-Based View: Product based definitions are
different. Quality is viewed as quantifiable and
measurable characteristics or attributes.
3. User-Based View: User based definitions are based
on the idea that quality is an individual matter, and
products that best satisfy their preferences (i.e.
perceived quality) are those with the highest quality.
4. Manufacturing-Based View: Manufacturing-based
definitions are concerned primarily with engineering
and manufacturing practices and use the universal
definition of “conformance to requirements.”
5. Value-Based View: Value-based quality is defined in
terms of costs and prices as well as a number of other
attributes.
Software Quality
• SOFTWARE QUALITY is the degree of
conformance to explicit or implicit requirements
and expectations.
• Explanation
– Explicit: clearly defined and documented
– Implicit: not clearly defined and documented but
indirectly suggested
– Requirements: business/product/software
requirements
– Expectations: mainly end-user expectations
Software Testing
• Software testing is defined as an activity to check
whether the actual results match the expected
results and to ensure that the software system
is Defect free.
• It involves execution of a software component or
system component to evaluate one or more
properties of interest.
• Software testing also helps to identify errors, gaps, or missing requirements relative to the actual requirements.
Why is Software Testing Important?
• Testing is important because software bugs could be
expensive or even dangerous. Software bugs can
potentially cause monetary and human loss, and history is
full of such examples.
– In April 2015, the Bloomberg terminal in London crashed due to a software glitch, affecting more than 300,000 traders on financial markets. It forced the government to postpone a 3bn pound debt sale.
– Nissan had to recall over 1 million cars from the market due to a software failure in the airbag sensor detectors. Two accidents were reported due to this software failure.
– Starbucks was forced to close about 60 percent of its stores in the U.S. and Canada due to a software failure in its POS system. At one point, stores served coffee for free as they were unable to process transactions.
Role of Software Testing
• Testing plays a vital role in software
development. In every company, testing is an
important and valuable stage in the Software
Development Life Cycle. Techniques used for
software testing differ from one company to
another. Software testing is not an easy task; each and every day brings new challenges in both writing and understanding code.
• Improve Your Software
The role of testing in software development
begins with improved reliability, quality and
performance of the software. It helps a developer check whether the software is behaving the right way and ensure that the software is not doing what it is not supposed to do.
• Benefits Of Software Testing
When testing is introduced in the early stage, the
cost of fixing the bug can be reduced. Cost of fixing
the bug becomes larger when a bug is not found at
the right time.
• Quality assurance
Quality plays a vital role in today’s competitive
world. Hence a software tester plays key role in
Software Development Life Cycle for producing
quality products.
Failure, Error, Fault, Defect, Bug
• Testing is the process of identifying defects,
where a defect is any variance between actual
and expected results.
• A mistake in coding is called an Error; an error found by a tester is called a Defect; a defect accepted by the development team is called a Bug; and when the build does not meet the requirements, it is a Failure.
• DEFECT: It can be simply defined as a variance between expected and actual results. A defect is an error found AFTER the application goes into production.
• Defect can be categorized into the following:
• Wrong: A requirement implemented incorrectly. This defect is a variance from the given specification.
• Missing: A requirement of the customer that was not fulfilled. This
is a variance from the specifications, an indication that a
specification was not implemented, or a requirement of the
customer was not noted correctly.
• Extra: A requirement incorporated into the product that was not
given by the end customer. This is always a variance from the
specification, but may be an attribute desired by the user of the
product. However, it is considered a defect because it’s a variance
from the existing requirements.
• ERROR: An error is a mistake, misconception, or misunderstanding on the part of a software developer. In the category of developer we include software engineers, programmers, analysts, and testers. For example, a developer may misunderstand a design notation, or a programmer might type a variable name incorrectly – this leads to an Error. Errors arise from wrong logic, loops, or syntax; an error in software can change the functionality of the program.
• BUG: A bug is the result of a coding error. An Error found in
the development environment before the product is
shipped to the customer. A programming error that causes
a program to work poorly, produce incorrect results or
crash. An error in software or hardware that causes a
program to malfunction. Bug is terminology of Tester.
• FAILURE: A failure is the inability of a software system or
component to perform its required functions within
specified performance requirements. When a defect
reaches the end customer it is called a Failure. During
development Failures are usually observed by testers.
• FAULT: An incorrect step, process or data definition in a
computer program which causes the program to perform in
an unintended or unanticipated manner. A fault is
introduced into the software as the result of an error. It is
an anomaly in the software that may cause it to behave
incorrectly, and not according to its specification. It is the
result of the error.
Objectives of Testing
• To ensure that application works as specified in requirements
document.
• To provide a bug-free application
• To achieve customer satisfaction
• To ensure that error handling has been done gracefully in the application: when a user enters incorrect data, the application should display user-friendly messages
• To establish confidence in software
• To evaluate properties of software
• To discuss the distinctions between validation testing and defect
testing
• To describe strategies for generating system test cases
• To understand the essential characteristics of tool used for test
automation
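The error-handling objective above can itself be tested. The following sketch uses a hypothetical parse_age input handler (all names and messages are illustrative, not from any real application) to show a test asserting that bad input yields a friendly message rather than a crash:

```python
# Hypothetical input handler: returns (ok, value_or_message) so the UI
# can show a friendly message instead of crashing on bad input.
def parse_age(raw):
    try:
        age = int(raw)
    except ValueError:
        return (False, "Please enter your age as a whole number.")
    if not 0 < age < 130:
        return (False, "Please enter an age between 1 and 129.")
    return (True, age)

# Graceful error handling: incorrect data produces a user-friendly message.
assert parse_age("abc") == (False, "Please enter your age as a whole number.")
# Valid data passes through unchanged.
assert parse_age("42") == (True, 42)
```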
Test Case
• A TEST CASE is a set of conditions or variables under
which a tester will determine whether a system under
test satisfies requirements or works correctly.
• A test case is a document, which has a set of test data,
preconditions, expected results and postconditions,
developed for a particular test scenario in order to
verify compliance against a specific requirement.
• A Test Case acts as the starting point for test execution; after applying a set of input values, the application has a definitive outcome and leaves the system at some end point, also known as the execution postcondition.
Typical Test Case Parameters:
• Test Case ID-The ID of the test case
• Test Scenario-
• Test Case Description-The summary / objective of the test case.
• Test Steps
• Prerequisite-Any prerequisites or preconditions that must be fulfilled prior
to executing the test
• Test Data-The test data, or links to the test data, that are to be used while
conducting the test.
• Expected Result-The expected result of the test.
• Test Parameters
• Actual Result-The actual result of the test; to be filled after executing the
test.
• Environment Information-The environment (Hardware/Software/Network)
in which the test was executed.
• Comments
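The parameters above map naturally onto a simple record type. A minimal sketch in Python follows; the field names simply mirror the list above, and nothing here is a standard test-management library:

```python
from dataclasses import dataclass

@dataclass
class TestCaseRecord:
    test_case_id: str
    description: str
    prerequisites: list
    test_steps: list
    test_data: dict
    expected_result: str
    actual_result: str = ""   # filled in after executing the test
    environment: str = ""
    comments: str = ""

# Example record for a 10-character input field check.
tc = TestCaseRecord(
    test_case_id="TC-001",
    description="Verify the input field accepts a maximum of 10 characters",
    prerequisites=["User is logged in"],
    test_steps=["Open the form", "Key in 10 characters", "Submit"],
    test_data={"input": "a" * 10},
    expected_result="All 10 characters are accepted",
)
```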
Writing Good Test Cases
• As far as possible, write test cases in such a way that you test only
one thing at a time. Do not overlap or complicate test cases.
Attempt to make your test cases ‘atomic’.
• Ensure that all positive scenarios AND negative scenarios are
covered.
• Language:
– Write in simple and easy-to-understand language.
– Use active voice instead of passive voice: Do this, do that.
– Use exact and consistent names (of forms, fields, etc).
• Characteristics of a good test case:
– Accurate: Tests exactly what it sets out to test.
– Economical: No unnecessary steps or words.
– Traceable: Capable of being traced to requirements.
– Repeatable: Can be used to perform the test over and over.
– Reusable: Can be reused if necessary.
Example: Let us say that we need to check an input field that can accept a maximum of 10 characters.

Scenario 1: Verify that the input field can accept a maximum of 10 characters
  Test Step: Log in to the application and key in 10 characters
  Expected Result: Application should accept all 10 characters
  Actual Outcome: Application accepts all 10 characters

Scenario 2: Verify that the input field does not accept more than 10 characters
  Test Step: Log in to the application and key in 11 characters
  Expected Result: Application should NOT accept all 11 characters
  Actual Outcome: Application accepts all 10 characters
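The positive and negative scenarios above translate directly into an automated boundary test. A minimal sketch, assuming a hypothetical accept_input handler that enforces the 10-character limit:

```python
MAX_LEN = 10  # the field's documented maximum

def accept_input(text):
    """Hypothetical field handler: accept input only up to MAX_LEN characters."""
    return len(text) <= MAX_LEN

# Positive scenario: exactly 10 characters must be accepted.
assert accept_input("a" * 10) is True
# Negative scenario: an 11th character must be rejected.
assert accept_input("a" * 11) is False
```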
When to Start Testing?
• An early start to testing reduces the cost and time needed to rework and produce error-free software for delivery to the client. In the Software Development Life Cycle (SDLC), testing can start from the Requirements Gathering phase and continue till the deployment of the software.
• It also depends on the development model that is being used. For
example, in the Waterfall model, formal testing is conducted in the
testing phase; but in the incremental model, testing is performed at
the end of every increment/iteration and the whole application is
tested at the end.
• Testing is done in different forms at every phase of SDLC −
– During the requirement gathering phase, the analysis and verification
of requirements are also considered as testing.
– Reviewing the design in the design phase with the intent to improve
the design is also considered as testing.
– Testing performed by a developer on completion of the code is also
categorized as testing.
When to Stop Testing?
• It is difficult to determine when to stop testing, as testing is a never-ending process and no one can claim that software is 100% tested. The following aspects are to be considered when stopping the testing process −
– Testing Deadlines
– Completion of test case execution
– Completion of functional and code coverage to a
certain point
– Bug rate falls below a certain level and no high-
priority bugs are identified
– Management decision
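Several of the criteria above are quantifiable, so an exit-criteria check can be sketched as a simple predicate. The thresholds below (90% coverage, 2% bug rate) are illustrative assumptions, not fixed rules:

```python
def may_stop_testing(executed, planned, coverage, bug_rate, open_high_priority):
    """Sketch of an exit-criteria check; every threshold here is an assumption."""
    return (
        executed == planned           # all planned test cases executed
        and coverage >= 0.90          # functional/code coverage target reached
        and bug_rate < 0.02           # bug rate has fallen below the agreed level
        and open_high_priority == 0   # no high-priority bugs remain open
    )

assert may_stop_testing(500, 500, 0.95, 0.01, 0) is True   # all criteria met
assert may_stop_testing(500, 500, 0.95, 0.01, 3) is False  # high-priority bugs open
```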
Skills for Software Tester
• Communication skill
• Domain Knowledge
• Desire to learn
• Technical Skills
• Analytical Skills
• Planning
• Integrity
• Curiosity
• Think from User Perspective
• Be a good judge of your product
Quality Assurance
• Quality Assurance is defined as the auditing and
reporting procedures used to provide the
stakeholders with data needed to make well-
informed decisions.
• It is the degree to which a system meets specified requirements and customer expectations. QA also involves monitoring the processes and products throughout the SDLC.
• Quality assurance (QA) is any systematic process
of determining whether a product or service
meets specified requirements.
Quality Assurance Criteria:
• Below are the quality assurance criteria against which the software would be evaluated:
– correctness
– efficiency
– flexibility
– integrity
– interoperability
– maintainability
– portability
– reliability
– reusability
– testability
– usability
Quality Control
• Quality control is a set of methods used by
organizations to achieve quality parameters or
quality goals and continually improve the
organization's ability to ensure that a software
product will meet quality goals.
• The three class parameters that control software
quality are:
– Products
– Processes
– Resources
Quality Control Process:

• The total quality control process consists of:


– Plan - It is the stage where the Quality control processes are
planned
– Do - Use a defined parameter to develop the quality
– Check - Verify whether the quality parameters are met
– Act - Take corrective action if needed and repeat the work
Quality Control characteristics:
• A process adopted to deliver a quality product to the clients at the best cost.
• The goal is to learn from other organizations so that quality improves each time.
• To avoid errors through proper planning and execution with a correct review process.
QA Vs QC
• In effect, QA provides the overall guidelines used
anywhere, and QC is a production-focused
process – for things such as inspections.
• QA is any systematic process for making sure a
product meets specified requirements, whereas
QC addresses other issues, such as individual
inspections or defects.
• In terms of software development, QA practices
seek to prevent malfunctioning code or products,
while QC implements testing and troubleshooting
and fixes code.
Verification & Validation
1. Verification addresses the concern: "Are you building it right?"
   Validation addresses the concern: "Are you building the right thing?"
2. Verification ensures that the software system meets all the functionality.
   Validation ensures that the functionalities meet the intended behavior.
3. Verification takes place first and includes the checking of documentation, code, etc.
   Validation occurs after verification and mainly involves the checking of the overall product.
4. Verification is done by developers.
   Validation is done by testers.
5. Verification has static activities, as it includes reviews, walkthroughs, and inspections to verify the software.
   Validation has dynamic activities, as it includes executing the software against the requirements.
6. Verification is an objective process; no subjective decision should be needed to verify the software.
   Validation is a subjective process and involves subjective decisions on how well the software works.
Verification: Are we building the system right?
Validation: Are we building the right system?

Verification is the process of evaluating the products of a development phase to find out whether they meet the specified requirements.
Validation is the process of evaluating software at the end of the development process to determine whether it meets customer expectations and requirements.

The objective of Verification is to make sure that the product being developed is as per the requirements and design specifications.
The objective of Validation is to make sure that the product actually meets the user's requirements, and to check whether the specifications were correct in the first place.

Activities involved in Verification: reviews, meetings, and inspections.
Activities involved in Validation: testing such as black box testing, white box testing, gray box testing, etc.

Verification is carried out by the QA team to check whether the implemented software conforms to the specification document.
Validation is carried out by the testing team.

Execution of code does not come under Verification.
Execution of code comes under Validation.

The Verification process explains whether the outputs are according to the inputs or not.
The Validation process describes whether the software is accepted by the user or not.

Verification is carried out before Validation.
Validation is carried out just after Verification.

Items evaluated during Verification: plans, requirement specifications, design specifications, code, test cases, etc.
Item evaluated during Validation: the actual product or software under test.

The cost of errors caught in Verification is less than that of errors found in Validation.
The cost of errors caught in Validation is more than that of errors found in Verification.

Verification is basically manual checking of documents and files such as requirement specifications.
Validation is basically checking of the developed program based on the requirement specification documents and files.
Conclusion on difference
of Verification and Validation in
software testing:
• Both Verification and Validation are essential and complementary to each other.
• Each provides a different error filter.
• Each finds defects in a different way: Verification is used to identify errors in the requirement specifications, while Validation is used to find defects in the implemented software application.
V Model
• The V-model is an SDLC model where execution of
processes happens in a sequential manner in a V-
shape. It is also known as Verification and Validation
model.
• The V-Model is an extension of the waterfall model and
is based on the association of a testing phase for each
corresponding development stage. This means that for
every single phase in the development cycle, there is a
directly associated testing phase. This is a highly-
disciplined model and the next phase starts only after
completion of the previous phase.
V-Model - Design
• Under the V-Model, the corresponding testing
phase of the development phase is planned in
parallel.
• So, there are Verification phases on one side
of the ‘V’ and Validation phases on the other
side. The Coding Phase joins the two sides of
the V-Model.
• The following illustration depicts the different
phases in a V-Model of the SDLC
V-Model - Verification Phases
• Business Requirement Analysis
• This is the first phase in the development cycle where
the product requirements are understood from the
customer’s perspective. This phase involves detailed
communication with the customer to understand his
expectations and exact requirement. This is a very
important activity and needs to be managed well, as
most of the customers are not sure about what exactly
they need. The acceptance test design planning is
done at this stage as business requirements can be
used as an input for acceptance testing.
• System Design
– Once you have clear and detailed product requirements, it is time to design the complete system. The system design involves understanding and detailing the complete hardware and communication setup for the product under development. The system test plan is developed based on the system design. Doing this at an earlier stage leaves more time for the actual test execution later.
• Architectural Design
– Architectural specifications are understood and designed in this
phase. Usually more than one technical approach is proposed
and based on the technical and financial feasibility the final
decision is taken. The system design is broken down further into
modules taking up different functionality. This is also referred to
as High Level Design (HLD).
– The data transfer and communication between the internal
modules and with the outside world (other systems) is clearly
understood and defined in this stage. With this information,
integration tests can be designed and documented during this
stage.
• Module Design
– In this phase, the detailed internal design for all the system
modules is specified, referred to as Low Level Design
(LLD). It is important that the design is compatible with the
other modules in the system architecture and the other
external systems. The unit tests are an essential part of any development process and help eliminate the maximum number of faults and errors at a very early stage. These unit tests can be designed at this stage based on the internal module designs.
• Coding Phase
– The actual coding of the system modules designed in the
design phase is taken up in the Coding phase. The best
suitable programming language is decided based on the
system and architectural requirements.
– The coding is performed based on the coding guidelines
and standards. The code goes through numerous code
reviews and is optimized for best performance before the
final build is checked into the repository.
• Validation Phases
– The different Validation Phases in a V-Model are
explained in detail below.
• Unit Testing
– Unit tests designed in the module design phase are
executed on the code during this validation phase.
Unit testing is the testing at code level and helps
eliminate bugs at an early stage, though all defects
cannot be uncovered by unit testing.
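As a concrete illustration, a unit test designed against a module's low-level design might look like the following sketch. The discount function and its expected behaviour are hypothetical stand-ins for a real module:

```python
import unittest

# Hypothetical module-level function whose behaviour the LLD would specify.
def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountUnitTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(discount(200, 25), 150)

    def test_invalid_percent_is_rejected(self):
        # Unit tests catch boundary faults at code level, before integration.
        with self.assertRaises(ValueError):
            discount(200, 150)
```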
• Integration Testing
– Integration testing is associated with the architectural
design phase. Integration tests are performed to test
the coexistence and communication of the internal
modules within the system.
• System Testing
– System testing is directly associated with the system
design phase. System tests check the entire system
functionality and the communication of the system
under development with external systems. Most of
the software and hardware compatibility issues can be
uncovered during this system test execution.
• Acceptance Testing
– Acceptance testing is associated with the business
requirement analysis phase and involves testing the
product in user environment. Acceptance tests
uncover the compatibility issues with the other
systems available in the user environment. It also
discovers the non-functional issues such as load and
performance defects in the actual user environment.
V- Model ─ Application
• V- Model application is almost the same as the
waterfall model, as both the models are of sequential
type. Requirements have to be very clear before the
project starts, because it is usually expensive to go
back and make changes. This model is used in the
medical development field, as it is a strictly disciplined domain.
• The following pointers are some of the most suitable
scenarios to use the V-Model application.
– Requirements are well defined, clearly documented and
fixed.
– Product definition is stable.
– Technology is not dynamic and is well understood by the
project team.
– There are no ambiguous or undefined requirements.
– The project is short
V-Model - Pros and Cons
• The advantage of the V-Model method is that it is very easy
to understand and apply. The simplicity of this model also
makes it easier to manage. The disadvantage is that the
model is not flexible to changes and just in case there is a
requirement change, which is very common in today’s
dynamic world, it becomes very expensive to make the
change.
• The advantages of the V-Model method are as follows −
– This is a highly-disciplined model and Phases are completed one
at a time.
– Works well for smaller projects where requirements are very
well understood.
– Simple and easy to understand and use.
– Easy to manage due to the rigidity of the model. Each phase has
specific deliverables and a review process.
• The disadvantages of the V-Model method are as
follows −
– High risk and uncertainty.
– Not a good model for complex and object-oriented
projects.
– Poor model for long and ongoing projects.
– Not suitable for the projects where requirements are
at a moderate to high risk of changing.
– Once an application is in the testing stage, it is difficult
to go back and change a functionality.
– No working software is produced until late during the
life cycle.
Thank You
