MBM-317

SOFTWARE TESTING

PRESENTED BY: 12/07/21

ANUPAMA MUKHERJEE
RATNANJALI ARORA
GURPREETI SHARMA
BABITA DHURVEY
History of Software Testing
The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979. Although his attention was on breakage testing ("a successful test is one that finds a bug"), his work illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from verification.
In 1988, Dave Gelperin and William C. Hetzel classified the phases and goals of software testing into the following stages:
• Until 1956 - Debugging oriented
• 1957–1978 - Demonstration oriented
• 1979–1982 - Destruction oriented
• 1983–1987 - Evaluation oriented
• 1988–2000 - Prevention oriented
What Is Software Testing?
• Software testing is an investigation conducted to
provide stakeholders with information about the
quality of the product or service under test.

• Software testing also provides an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs.
Software testing can also be stated as the process of validating and verifying that a software program/application/product:
• meets the business and technical requirements that guided its design and development;
• works as expected; and
• can be implemented with the same characteristics.
Purpose of Software Testing
• A primary purpose of testing is to detect
software failures so that defects may be
discovered and corrected. This is a non-trivial
pursuit. Testing cannot establish that a product
functions properly under all conditions but can
only establish that it does not function properly
under specific conditions.

• The scope of software testing often includes examination of code as well as execution of that code in various environments and conditions, as well as examining aspects of the code: does it do what it is supposed to do, and does it do what it needs to do?
• In the current culture of software development, a
testing organization may be separate from the
development team. There are various roles for
testing team members. Information derived
from software testing may be used to correct the
process by which software is developed.
Testing methods

The box approach

• Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.

White box testing

• White box testing is when the tester has access to the internal data structures and algorithms, including the code that implements them.
Types of white box testing

The following types of white box testing exist:

• API testing (application programming interface) - testing of the application using public and private APIs
• Code coverage - creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once; see the sketch after this list)
• Fault injection methods - improving the coverage of a
test by introducing faults to test code paths
• Mutation testing methods
• Static testing - White box testing includes all static
testing
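As a sketch of the code coverage item above (hypothetical Python code, not from the original deck): the function under test has two branches, so no single test executes every statement, but the two tests together achieve full statement coverage.

# statement_coverage_demo.py - designing tests for statement coverage.
# The function and tests are hypothetical illustrations.

def classify_temperature(celsius):
    # Branch 1: executed only for inputs below zero.
    if celsius < 0:
        return "freezing"
    # Branch 2: executed for all other inputs.
    return "not freezing"

def test_freezing():
    # Covers the statements inside the 'if' branch.
    assert classify_temperature(-5) == "freezing"

def test_not_freezing():
    # Covers the remaining statements; together the two tests
    # execute every statement at least once.
    assert classify_temperature(20) == "not freezing"

if __name__ == "__main__":
    test_freezing()
    test_not_freezing()
    print("every statement executed at least once")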
Black box testing
• Black box testing treats the software as a "black box", without any knowledge of internal implementation. Black box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, traceability matrix, exploratory testing and specification-based testing.
• Specification-based testing: Specification-based testing
aims to test the functionality of software according to the
applicable requirements. Thus, the tester inputs data into, and
only sees the output from, the test object. This level of testing
usually requires thorough test cases to be provided to the
tester, who then can simply verify that for a given input, the
output value (or behaviour), either "is" or "is not" the same as
the expected value specified in the test case.
• Specification-based testing is necessary, but it is insufficient to guard against certain risks.
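A minimal sketch of two of the black box methods listed above, equivalence partitioning and boundary value analysis (the "accept ages 18 to 65" rule and the function are hypothetical): inputs are chosen from the specification alone, not from the code.

# black_box_demo.py - equivalence partitioning and boundary value
# analysis for a hypothetical rule: accept ages from 18 to 65.

def is_eligible(age):
    # The implementation under test; a black-box tester would not see this.
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition
# (below range, inside range, above range).
partition_cases = [(10, False), (40, True), (70, False)]

# Boundary value analysis: values at and adjacent to each boundary.
boundary_cases = [(17, False), (18, True), (65, True), (66, False)]

for age, expected in partition_cases + boundary_cases:
    actual = is_eligible(age)
    assert actual == expected, f"age {age}: expected {expected}, got {actual}"
print("all black-box cases passed")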
Advantages and disadvantages:
The black box tester has no "bonds" with the code, and a tester's perception is very simple: the code must have bugs.
Using the principle, "Ask and you shall receive," black box
testers find bugs where programmers do not. On the other
hand, black box testing has been said to be "like a walk in a
dark labyrinth without a flashlight," because the tester doesn't
know how the software being tested was actually constructed.
As a result, there are situations when (1) a tester writes many
test cases to check something that could have been tested by
only one test case, and/or (2) some parts of the back-end are
not tested at all.
Therefore, black box testing has the advantage of "an unaffiliated opinion", on the one hand, and the disadvantage of "blind exploring", on the other.
Grey box testing
• Grey box testing involves having knowledge of internal
data structures and algorithms for purposes of designing the
test cases, but testing at the user, or black-box level.
Manipulating input data and formatting output do not qualify
as grey box, because the input and output are clearly outside
of the "black-box" that we are calling the system under test.
• This distinction is particularly important when conducting
integration testing between two modules of code written by
two different developers, where only the interfaces are
exposed for test.
• However, modifying a data repository does qualify as grey
box, as the user would not normally be able to change the data
outside of the system under test. Grey box testing may also
include reverse engineering to determine, for instance,
boundary values or error messages.
A sample testing cycle
Although variations exist between organizations, there is a
typical cycle for testing. The sample below is common among
organizations employing the Waterfall development model.

• Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers to determine what aspects of a design are testable and with what parameters those tests will work.
• Test planning: Test strategy, test plan, testbed creation.
Since many activities will be carried out during testing, a plan
is needed.
• Test development: Test procedures, test scenarios, test
cases, test datasets, test scripts to use in testing software.
• Test execution: Testers execute the software based on the
plans and test documents then report any errors found to the
development team.
• Test reporting: Once testing is completed, testers generate
metrics and make final reports on their test effort and whether or
not the software tested is ready for release.
• Test result analysis: Also called defect analysis; done by the development team, usually along with the client, in order to decide which defects should be treated, fixed, rejected (i.e. the software is found to be working properly) or deferred to be dealt with later.
• Defect Retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. Also known as resolution testing.
• Regression testing: It is common to have a small test program
built of a subset of tests, for each integration of new, modified, or
fixed software, in order to ensure that the latest delivery has not
ruined anything, and that the software product as a whole is still
working correctly.
• Test Closure: Once testing meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs and documents related to the project are archived and used as a reference for future projects.
15

SOFTWARE TESTING TOOLS


16

12/07/21

Are they really useful?


• There's no easy answer to this question.
• Software testing is a crucial part of the
development process, and tools definitely help
with this oftentimes overwhelming task.
• But to be useful, software testing tools must
support the testing process.
17

12/07/21

What does this mean?


• It means you need to start by understanding the
different phases of software testing.
• For example, do you understand the difference
between black box and white box testing?
• Black box confirms only that the software meets
its stated requirements and functions accordingly.
• White box testing looks at the actual software
code to ensure paths, conditions, code statements
and branches are written properly.
18

12/07/21

WHICH TOOL TO USE?


• Do you know which software testing tools work
best for unit testing, integration testing or
system testing?
• Each of these testing processes addresses a
different aspect or view of the software.
• Starting to understand that software testing
tools are not a "one size fits all" solution…
19

12/07/21

• Some rounds of software testing are better accomplished using humans, not software testing tools.
• Do you know which ones? Functional testing, alpha testing, acceptance testing and usability testing fall into this category.
• To confuse matters a bit further, you've got to make sure you're using the right type of human for the different "human" software testing processes.
20

12/07/21

SOFTWARE TESTERS
• Software testers work closely with developers
throughout all stages of development and use
software testing tools.
• End users are those individuals for whom the
software has been created.
• Beta testers are humans with more technical
backgrounds (generally) who get involved in
software testing just before the software is ready
to be released into production.
21

12/07/21

• They look for last-minute bugs and functionality issues.
• User acceptance testing ensures the resulting
software is "user friendly" and satisfies the end
users' needs; also important before the software
goes into production.
22

12/07/21

• There are even more phases of software testing that go on during the software development lifecycle.
• Determining which of the phases are better
accomplished using software testing tools and which
are better left to human intervention takes effort.
• In larger development houses, the IT department
has a pretty good grasp on this.
• And they're the ones who ultimately decide which of
the hundreds of software testing tools on the market
are best for their Development/testing teams;
definitely not an easy task.
23
maintenance and upgrade cycles that other types of software do. So before making purchasing decisions, it's best to try out these products, talk with existing u

12/07/21

WHAT ELSE TO KNOW…


• Software testing tools are themselves software.
• As such, each of these testing tools undergoes the
whole development, testing, maintenance and
upgrade cycles that other types of software do.
• So before making purchasing decisions, it's best
to try out these products, talk with existing users,
research the product's track record, and know
how much configuration you'll need to do to get
the product up and running.
24

12/07/21

AND REMEMBER…
• Software testing tools can't work miracles.
• They cannot make poorly designed software
better.
• They can't do anything about unrealistic
development schedules.
• And most importantly, software testing tools will
never work properly if management or others
are allowed to continually change software
specifications after development has begun.
25

Types of Software Testing

12/07/21
26

12/07/21


• In the testing phase, software undergoes various types of testing before it is shipped to the customer.
• About 50 types of testing are available.
28

12/07/21

Automation Testing
• Determines how well a product functions
through a series of automated tasks, using a
variety of tools to simulate complex test data.
29

12/07/21

Acceptance Testing
• Formal testing conducted to determine whether or not a system satisfies its acceptance criteria; it enables a customer to determine whether or not to accept the system.
30

12/07/21

Alpha Testing
• Testing of a software product or system conducted at the developer's site by the customer.
31

12/07/21

Automated Testing
• That part of software testing that is assisted with
software tool(s) that does not require operator
input, analysis, or evaluation.
32

12/07/21

Beta Testing
• Testing conducted at one or more customer sites by the end users of a delivered software product or system.
33

12/07/21

Black-Box Testing
• Functional Testing based on the requirements
with no knowledge of the internal program
structure or data. Also known as closed box
testing.
34

12/07/21

Bottom-up Testing
• An integration testing technique that tests the low-level components first, using test drivers to call the low-level components in place of the higher-level components that have not yet been developed.
35

12/07/21

Clear-Box Testing
• Another term for White-Box Testing. Structural testing is sometimes referred to as clear-box testing, since "white boxes" are considered opaque and do not really permit visibility into the code. This is also known as glass-box or open-box testing.
36

12/07/21

Compatibility Testing
• Determines how well a product works in
conjunction with a variety of other products, on
certain operating systems, across a broad range
of hardware and component configurations and
when exposed to earlier versions of the product.
37

12/07/21

Database Testing
• Most web sites of any complexity store and
retrieve information from some type of database.
Clients often want us to test the connection
between their web site and database in order to
verify data and display integrity.
38

12/07/21

Dynamic Testing
• Verification or validation performed by executing the system's code.
39

12/07/21

Error-based Testing
• Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of detecting faults, either a specified class of faults or all possible faults.
40

12/07/21

Exhaustive Testing
• Executing the program with all possible
combinations of values for program variables.
41

12/07/21

Failure-directed Testing
• Testing based on the knowledge of the types of
errors made in the past that are likely for the
system under test.
42

12/07/21

Fault-based Testing


• Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults, typically frequently occurring faults.
43

12/07/21

Functionality Testing
• Determines the extent to which a product meets
expected functional requirements through
validation of product features. This process can
be as simple as a smoke test to ensure primary
functional operation, or as detailed as checking a
variety of scenarios and validating that all output
meets specified expectations.
44

12/07/21

Functional Localization Testing


• Determines how well a product functions across a range of languages; localized versions are checked to determine whether particular language translations create failures specific to those language versions.
45

12/07/21

Heuristics Testing
• Another term for failure-directed testing.
46

12/07/21

Hybrid Testing
• A combination of top-down and bottom-up testing of prioritized or available components.
47

12/07/21

Integration Testing
• An orderly progression of testing in which the software components or hardware components, or both, are combined and tested until the entire system has been integrated.
48

12/07/21

Interoperability Testing
• Determines, to a deeper extent than
compatibility testing, how well a product works
with a specific cross section of external
components such as hardware, device drivers,
second-party software and even specific
operating systems and factory delivered
computer systems.
49

12/07/21

Intrusive Testing
• Testing that collects timing and processing
information during program execution that may
change the behavior of the software from its
behavior in a real environment.
50

12/07/21

Install Testing
• Determines how well and how easily a product installs on a variety of platform configurations.
51

12/07/21

Load Testing
• Determines how well a product functions when
it is in competition for system resources. The
competition most commonly comes from active
processes, CPU utilization, I/O activity, network
traffic or memory allocation.
52

12/07/21

Manual Testing
• That part of software testing that requires
operator input, analysis, or evaluation.
53

12/07/21

Mutation Testing
• A method to determine test set thoroughness by
measuring the extent to which a test set can
discriminate the program from slight variants of
the program.
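A minimal sketch of that idea in Python (the function, the mutant, and the tests are hypothetical illustrations): a thorough test set should "kill" the mutant, i.e. fail when run against it, showing it can discriminate the program from a slight variant.

# mutation_demo.py - measuring test thoroughness against a mutant.

def max_original(a, b):
    # The program under test.
    return a if a > b else b

def max_mutant(a, b):
    # A slight variant of the program: '>' mutated to '<'.
    return a if a < b else b

def run_tests(max_fn):
    # Returns True if every test passes for the given implementation.
    try:
        assert max_fn(3, 5) == 5
        assert max_fn(9, 2) == 9
        return True
    except AssertionError:
        return False

assert run_tests(max_original) is True   # tests pass on the original
assert run_tests(max_mutant) is False    # tests 'kill' the mutant
print("test set distinguishes the program from its mutant")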
54

12/07/21

Mundane Testing
• A test that includes many simple and repetitive steps; it can also be called manual testing.
55

12/07/21

Operational Testing
• Testing performed by the end user on software
in its normal operating environment.
56

12/07/21

Path Coverage Testing


• A test method satisfying the coverage criterion that each logical path through the program is tested. Paths through the program are often grouped into a finite set of classes; one path from each class is tested.
57

12/07/21

Performance Testing
• Determines how quickly a product executes a variety of events. This type of testing sometimes includes reports on response time to a user's command, system throughput or latency. The word performance itself has various meanings, e.g. speed.
58

12/07/21

Qualification Testing
• Formal Testing usually conducted by the
developer for the customer, to demonstrate that
the software meets its specified requirements.
59

12/07/21

Random Testing
• An essentially black-box testing approach in
which a program is tested by randomly choosing
a subset of all possible input values. The
distribution may be arbitrary or may attempt to
accurately reflect the distribution of inputs in the
application environment.
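A minimal sketch of this approach (the function and the checked properties are hypothetical; the distribution and input range are arbitrary choices): random inputs are drawn and checked against properties derived from the requirements rather than the code.

# random_testing_demo.py - black-box random testing sketch.
import random

def absolute_value(x):
    # The implementation under test.
    return x if x >= 0 else -x

random.seed(42)  # fixed seed so any failure is reproducible
for _ in range(1000):
    # An arbitrary distribution over a subset of all possible inputs.
    x = random.uniform(-1e6, 1e6)
    result = absolute_value(x)
    # Properties implied by the requirements, checked for every input.
    assert result >= 0
    assert result == x or result == -x
print("1000 random inputs checked")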
60

12/07/21

Regression Testing
• Selective re-testing to detect faults introduced
during modification of a system or system
component to verify that modifications have not
caused unintended adverse effects, or to verify
that a modified system or system component
still meets its requirements.
61

12/07/21

Smoke Testing
• It is performed only when the build is ready. Every file is compiled, linked, and combined into an executable program every day, and the program is then put through a "smoke test", a relatively simple check to see whether the product "smokes" when it runs.
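A minimal sketch of such a daily check (the ./app executable and its --version flag are hypothetical assumptions): it only verifies that the freshly built program starts and exits cleanly, not that it is correct.

# smoke_test.py - smoke test for a hypothetical daily build.
import subprocess

def smoke_test(executable="./app"):
    # Run the freshly built program with a trivial argument and make
    # sure it starts and exits cleanly within a time limit.
    completed = subprocess.run(
        [executable, "--version"],
        capture_output=True,
        timeout=30,
    )
    assert completed.returncode == 0, "build failed the smoke test"
    print("smoke test passed:", completed.stdout.decode().strip())

if __name__ == "__main__":
    smoke_test()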
62

12/07/21

Statement Coverage Testing


• A test method satisfying coverage criteria that
requires each statement be executed at least
once.
63

12/07/21

Static Testing
• Verification performed without executing the
system’s code. Also called static analysis.
64

12/07/21

Stress Testing
• Determines, to a deeper extent than load testing, how well a product functions when a load is placed on the system resources that exceeds their capacity. Stress testing can also determine the capacity of a system by increasing the load placed on the resources until a failure or other unacceptable product behaviour occurs, and can involve placing loads on the system for extended periods.
65

12/07/21

System Testing
• The process of testing an integrated hardware
and software system to verify that the system
meets its specified requirements.
66

12/07/21

System Integration Testing


• Determines, through isolation, which component of a product is the roadblock in the development process.
This testing is beneficial to products that come together
through a series of builds where each step in the
development process has the potential to introduce a
problem. System integration testing is also used in
systems composed of hardware and software. In
essence, system integration testing is intended to
exercise the whole system in real-world scenarios and,
again through isolation, determine which component is
responsible for a certain defect.
67

12/07/21

Top-down Testing
• An integration testing technique that tests the high-level components first, using stubs for lower-level called components that have not yet been integrated and that simulate the required actions of those components.
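A minimal sketch of this technique (the report generator and order data are hypothetical): the high-level component is tested first, with a stub simulating the required action of a lower-level component that has not yet been integrated.

# top_down_demo.py - top-down integration testing with a stub.

def generate_report(fetch_orders):
    # High-level component under test; it depends on a lower-level
    # data-access component passed in as 'fetch_orders'.
    orders = fetch_orders()
    total = sum(amount for _, amount in orders)
    return f"{len(orders)} orders, total {total}"

def fetch_orders_stub():
    # Stub standing in for the not-yet-integrated lower-level
    # component; it simulates the required action with canned data.
    return [("A-1", 100), ("A-2", 250)]

assert generate_report(fetch_orders_stub) == "2 orders, total 350"
print("high-level component verified against the stub")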
68

12/07/21

Unit Testing
• The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification or whether its implemented structure matches the intended design structure.
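A minimal sketch of a unit test, using Python's standard unittest module (the word_count unit and its specification are hypothetical): the smallest independently testable piece is checked against its functional specification.

# unit_test_demo.py - unit test for a hypothetical unit.
import unittest

def word_count(text):
    # The unit under test: counts whitespace-separated words.
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_counts_words(self):
        self.assertEqual(word_count("software testing matters"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()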
69

12/07/21

White Box Testing


• Testing approaches that examine the program
structure and derive test data from the program
logic.
70

12/07/21

Web site Testing


• Compatibility Testing
▫ Compatibility testing tests your web site across a wide variety of browser/operating system combinations. This testing typically exposes problems with plug-ins, ActiveX controls, Java applets, JavaScript, forms and frames. Currently there are over 100 possible combinations of different Windows operating systems and various versions of Netscape and IE browsers. It is important to test across a large number of these to ensure that users with diverse configurations don't experience problems when using the web site or application.
71

12/07/21

Web Site Testing


• Content Testing
▫ Content Testing verifies a web site’s content such
as images, clip art and factual text.
72

12/07/21

Web site Testing


• Database Testing
▫ Most web sites of any complexity store and
retrieve information from some type of database.
Clients often want us to test the connection
between their web site and database in order to
verify data and display integrity.
73

12/07/21

Web site Testing


• Functionality Testing
▫ Functionality testing ensures that the web site performs as expected. The details of this testing will vary depending on the nature of your web site. Typical examples of this type of testing include link checking, form testing, transaction verification for e-commerce and databases, testing Java applets, file upload testing and SSL verification. For testing which is repetitive in nature, an automated test tool such as Rational's Visual Test can be used to decrease the overall duration of a test project.
74

12/07/21

Web site Testing


• Performance Testing
▫ Performance testing measures web site performance under various conditions. When the conditions include different numbers of concurrent users, we can run performance tests at the same time as stress and load tests.
• Eight Second Rule
▫ Every page within a web site must load in eight
seconds or less, even for users on slow modem
connections, or they risk losing their user to a
competitor site that serves pages more quickly.
75

12/07/21

Web site Testing


• Server Side Testing
▫ Server side testing tests the server side of the site,
rather than the client side. Examples of server side
testing include testing the interaction between a web
and an application server, checking database integrity
on the database server itself, verifying that ASP scripts
are being executed correctly on the server and
determining how well a web site functions when run
on different kinds of web servers.
76

12/07/21

Web site Testing


• Stress and Load Testing
▫ Load testing, a subset of stress testing, verifies that a web site can handle a particular number of concurrent users while maintaining acceptable response times. To perform this type of testing, use sophisticated automated testing tools, such as Segue's SilkPerformer, to generate accurate metrics based on overall system load and server configuration.
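A minimal sketch of the load-testing idea, using only the Python standard library (the URL, the user count, and the eight-second threshold taken from the rule above are illustrative assumptions; commercial tools such as SilkPerformer report far richer metrics):

# load_test_demo.py - concurrent-user load test sketch.
from concurrent.futures import ThreadPoolExecutor
import time
import urllib.request

URL = "http://localhost:8000/"   # hypothetical site under test
CONCURRENT_USERS = 20
MAX_ACCEPTABLE_SECONDS = 8.0     # the 'eight second rule' above

def one_user():
    # Simulate a single user requesting a page and time the response.
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    timings = list(pool.map(lambda _: one_user(), range(CONCURRENT_USERS)))

worst = max(timings)
print(f"{CONCURRENT_USERS} concurrent users, worst response {worst:.2f}s")
assert worst <= MAX_ACCEPTABLE_SECONDS, "response time exceeds threshold"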
77

THANK YOU

12/07/21

You might also like