
SOFTWARE TESTING

(ITS 64704)
Testing Throughout the Software Life Cycle
(Part 1)
Topics

Testing in Software Development Models

Testing Levels

Testing of New Product Version

Overview of Test Types


General V Model
(sequential development model)

Construction phase:

Requirement Definition

• Specifications and requirements from customers are collected, specified and adopted
• The purpose and desired features of the system to be developed are determined

Functional System Design

• Requirements are represented by the functions and dialog sequences of the new system
General V Model
(sequential development model)

Construction phase:
Technical System Design

• Technical implementation of the system is designed
• Includes:
  • Definition of interfaces to the system environment
  • Decomposition of the system into sub-systems that can be developed independently (if possible)

Component Specification

• For each sub-system, the task, behaviour, internal structure and interfaces to other sub-systems are defined

Programming

• Implementation of every specified component (module, unit, class, etc.) using a programming language
General V Model
(sequential development model)

Test Level phase:

Component Testing

• Checks whether each component satisfies the requirements of its specification

Integration testing

• Checks whether groups of components interact as intended by the technical system design

System testing

• Checks whether the system as a whole meets the specified requirements

Acceptance testing

• Checks whether the system shows the contractually approved performance from a customer perspective
Validation in V Model

Definition: Confirmation, by examination and through provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled (ISO 9000)

• Checking the development result against the original requirements
• The task at each test level is to prove whether the development results meet the requirements that are specified for, or relevant to, the respective level
• When performing validation, the testers evaluate whether a (partial) product can actually solve a specified task and is suitable for its intended purpose
• Examines whether the product is useful in the context of its intended use
Validation in V Model

[Diagram: V-model in which each test level on the right branch is validated against the corresponding requirements on the left branch]
Verification in V Model

Definition: Confirmation, by examination and through provision of objective evidence, that the specified requirements have been fulfilled (ISO 9000)

• In addition to validation testing, using the V model also requires verification
• Verification is based on a single development phase and proves the correctness and completeness of the result of that phase relative to its direct specification
• Examines whether the specifications have been implemented correctly, regardless of the intended purpose or use of the product
• In practice, every test contains both verification and validation aspects
Verification in V Model

[Diagram: V-model with verification arrows, each checking a phase result against the specification of the directly preceding phase]
V Model: Summary

• Construction activities and test activities are separated but of equal importance (left side/right side)
• “V” illustrates the testing aspects of verification and validation.
• V-Model gives the impression that testing starts relatively late after implementation.
THIS IS WRONG.
• The test steps in the right branch of the model are to be understood as phases of test
execution
• The associated test preparation (test planning, test specification) starts earlier and is
performed in parallel with the development steps (in the left branch)
Iterative-Incremental Development Models

• Development takes place in small steps
• The system is not created "in one go", but in a planned series of versions and intermediate deliveries (increments)
• Enhancements are made at each iteration
• Each iteration results in an expanded, yet still incomplete, system
Testing in Iterative-Incremental Development
Models

• Testing needs to be adjusted to the development sequence.


• For each increment (intermediate delivery) reusable tests need to be planned.
• In each iteration the existing tests can be reused.
• For new functionality, additional tests are needed.
• For each increment within an iteration the different testing levels can be
performed.
• Continuous integration testing and regression testing are necessary.
• Verification and validation can be conducted for each increment.
Examples of Iterative-Incremental
Development Models

• Prototyping
• Rapid Application Development (RAD)
• Rational Unified Process (RUP)
• Agile Development Method
Tips for Good Testing

Regardless of the chosen software development model, the following aspects of testing have proven useful:
• For every development activity there is a corresponding test activity.
• Testing activities should start early in the development cycle. Test
analysis and design should begin parallel to the corresponding stage of
development.
• Include testers early in the review process of the development
documents.
• Software development models should not be used "out of the box". They
must be adapted to project and product characteristics (e.g. number of
applicable test levels, the number and length of iterations, etc. need to
be adapted per project context)
Topics

Testing in Software Development Models

Testing Levels

Testing of New Product Version

Overview of Test Types


Testing Levels

• The distinction between different levels of testing is more than just a time-
based subdivision of testing activities.
• Test levels are distinguished from each other by:
✓ different test objects
✓ different test basis
✓ different testing strategy
✓ application of different test methods
✓ using different testing tools
✓ different goals and results
✓ different responsibilities
✓ different specialized test personnel required
• If configuration data is part of the system, the test of these data should also
be taken into account in the test planning.
Testing Levels

[Diagram: test levels overview: Component Test (drivers and stubs, test first), Integration Test (top down, bottom up, ad hoc, big bang), System Test]
Component Testing

• Component testing (unit testing) is the first test level, following directly after
the programming phase
• The created software modules are subjected to a systematic test for the first
time.
• Depending on the programming language used, these smallest units of software go by different names
• modules, units or classes (in the case of object-oriented programming)
• the corresponding tests are called module, unit, and class testing
• Abstracted from the programming language, the terms component or software module are used.
Component Testing Test Harness

• At this test level, the tasks are closely related to development.


• To create the test environment, developer know-how is required.
• The program code of the test object - at least the code of the interface - must
be available and understood, so that the driver can be programmed to call the
test object correctly.
• Therefore, component tests are often carried out by developers.
• For these reasons, component testing is often referred to as development
testing.
Component Testing: Test Object

• Every single software component is examined in isolation from the other software components of the system (isolation is meant to exclude external component influences during the test)

• If the testing exposes a defect, the cause can be assigned clearly to the tested
component.

• The component that is tested can also be made up of several modules which
are assembled into a unit. The objective of component testing is to test the
internal component aspects but not the interaction with neighbouring
components.

• The test basis for component testing is primarily the specification of the component (created at the design stage) and its program code. In addition, all other documents related to the tested component can also be considered part of the test basis.
Component Testing: Test Harness

Test harness contains:
• Drivers
• Stubs

✓ Stubs and drivers are software replacement simulators, required for modules that are not available when performing a unit or an integration test.
Component Testing Test Harness: Stub & Driver

Driver

1. A driver is a piece of code that emulates a calling function. The driver acts as a main function that calls other modules to form a complete application.
2. Drivers are created in integration testing following the bottom-up approach: a dummy main function is created which calls the other sub-modules.
3. A driver is a piece of code that passes test cases to another piece of code; drivers invoke the modules under test.
Component Testing Test Harness: Stub & Driver

Driver

• Drivers are used in the BOTTOM-UP integration testing approach.
• In the bottom-up approach the bottom-level modules are ready, but the top-level modules are not yet prepared.
• Testing of the bottom-level modules is therefore not possible via the main program.
• Hence, we prepare a dummy program (driver) to call the bottom-level modules and perform their testing.

[Diagram: a driver standing in for M9 calls module M8 (module on test); M8 in turn calls modules M1 and M2, which were tested at an earlier stage]
Driver: Example

Suppose we have an application with three modules: Login, Add Student and Cancel Admission.

Now we want to test the Add Student module. The Add Student module cannot run standalone; first we have to pass through the login page, and only then is the Add Student module executed. So the Add Student module is called by the Login module.

Let us assume that the Login module has not yet been developed. In this case we have to create a dummy Login module which calls the Add Student module, so that the functionality of the Add Student module can be tested.

Here the dummy Login module acts as the driver, and Add Student is the module under test, as the sketch below shows.

[Diagram: Login (driver) calls Add Student (module under test)]
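A minimal sketch of such a driver in JavaScript (all file, module and function names here are invented for illustration; they are not from the slides):

  // The real Login module does not exist yet, so this dummy driver
  // calls the module under test directly with prepared test data.
  const addStudent = require("./addStudent"); // module under test (hypothetical path)

  function loginDriver() {
    // Simulate a successful login, then drive the module under test.
    const result = addStudent({ id: 1, name: "Hidayah" });
    console.log(result === true ? "PASS: student added" : "FAIL: student not added");
  }

  loginDriver();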
Component Testing Test Harness: Stub & Driver

Stubs
1. Stubs are created in integration testing following the top-down approach.
2. A stub is a piece of code that emulates a called function. Stubs are created by the tester when the high-level modules are being tested and the modules they call have not yet been created.
3. Overall testing is only possible when all modules are present, so dummy modules have to be created to replicate the basic functionality of the modules still under construction.
4. A stub is basically a piece of code that simulates the activity of a missing module: it accepts values from the calling module and returns a canned (often null) value.
Component Testing Test Harness: Stub & Driver

Stubs
• Stubs are used in the TOP-DOWN integration testing approach.
• The upper modules are prepared first and are ready for testing, while the bottom modules have not yet been prepared by the developers.
• In order to form the complete application, we create dummy programs (stubs) for the lower modules, so that all the functionalities can be tested.

[Diagram: module M9, tested at an earlier stage, calls module M8 (module on test); M8 in turn calls stubs of M1 and M2]
Component Testing Test Harness: Stub & Driver

4 basic types of stubs, illustrated in the sketch after this list:
1. Display a trace message when called by the module being tested
2. Display the values of the parameters passed by the module
3. Return a fixed value to the module
4. Return a value selected according to the parameters passed by the module being tested
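A sketch combining the four behaviours in one hypothetical stub (the module name is invented):

  // Stub for a missing "cancelAdmission" module.
  function cancelAdmissionStub(studentId) {
    console.log("TRACE: cancelAdmissionStub was called");  // type 1: trace message
    console.log("PARAMS: studentId =", studentId);         // type 2: show parameter values
    if (studentId === undefined) {
      return false;                                        // type 3: fixed return value
    }
    return studentId > 0;                                  // type 4: value selected by the parameter
  }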
Stubs: Example

Suppose we have an application with three modules: Login, Add Student and Cancel Admission.

Now we are doing unit testing of the Login module, but Add Student and Cancel Admission have not yet been prepared.

We therefore create dummy modules for Add Student and Cancel Admission in order to carry out the testing of the Login module. These dummy modules are known as stubs.

The stubs receive calls from the Login module and report the success or failure of the login functionality, as in the sketch below.

[Diagram: Login (module under test) calls Add Student (stub) and Cancel Admission (stub)]
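A minimal sketch in JavaScript (all names are invented; the stubs only acknowledge calls with canned values):

  function addStudentStub(student) {
    console.log("addStudent stub called with:", student);
    return true; // canned success value
  }

  function cancelAdmissionStub(studentId) {
    console.log("cancelAdmission stub called with:", studentId);
    return true; // canned success value
  }

  // The module under test takes its dependencies as a parameter,
  // so the stubs can be injected during the unit test.
  function login(user, deps) {
    if (!user) {
      return false;
    }
    deps.addStudent({ name: user }); // would be the real module in production
    return true;
  }

  console.log(login("Hidayah", {
    addStudent: addStudentStub,
    cancelAdmission: cancelAdmissionStub,
  })); // => true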
Component testing: Test Objectives
• The most important task of component testing is to ensure that the test object implements the specified functionality correctly and completely.
• In addition, various tests are conducted to check:

Test of robustness
• How the component reacts to invalid inputs or misuse: each piece of software must work with a large number of neighbouring components and exchange data with them.

Efficiency
• How economically the component uses the available computing resources, e.g. memory consumption in kilobytes, response time in milliseconds, etc.

Maintainability
• Properties of a program that determine how easy or difficult it will be to change or enhance the program
• e.g. code structure, modularity, code comments, understandability, up-to-dateness of documents, etc.
Component testing: Test Strategy

The tester usually has access to the program code.

• The component test can therefore be carried out using white-box test techniques.
• The tester can create test cases by taking advantage of his/her knowledge of
component-internal program structures, methods and variables.

In practice, in many cases the component test is "only" carried out as a black-box test i.e.,
the inner structure is not used to design the test cases.

• Real software systems consist of hundreds or thousands of components. An examination of the code is probably only feasible for selected components.
• Due to integration, the elementary program components are often composed to make bigger units.
Testing Levels

[Diagram: test levels overview: Component Test (drivers and stubs, test first), Integration Test (top down, bottom up, ad hoc, big bang), System Test]
Test-Driven Development (TDD) or
"Test First" Approach: component level

Principle:

• An approach to development that combines test-first development, where you write a test before writing just enough production code to fulfil that test, with refactoring
• Relies on the repetition of a very short development cycle

Primary goals:

1. Specification, not validation
2. A way to think through your requirements or design before you write your functional code
3. To write clean code that works
TDD Example

• Let’s say that you want to write a program that will say,
“Hello, [name]!”, where name is whatever name you give it.
• (Ex. If your name was Hidayah, and you wanted your program
to say hello to you using your name: “Hello, Hidayah!”)
• If you don’t give your program a name, then you want it to
say, “Hello, world!” Write test first
before write
code
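The original slide shows the test as a screenshot; a rough reconstruction of what such a Jasmine spec might look like (the exact code on the slide may differ):

  describe("hello", function () {
    it("says hello", function () {
      // Calling hello() with no argument should greet the world.
      expect(hello()).toEqual("Hello, world!");
    });

    it("says hello to someone", function () {
      // Calling hello() with a name should greet that person.
      expect(hello("Fred")).toEqual("Hello, Fred!");
    });
  });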
TDD Example

Basically, what this test is saying is:

• There is a function called "hello"
• When you call hello(), you should get the string "Hello, world!"
• When you call hello() with a parameter, you should get the string "Hello, [parameter]!"

Run the test


TDD Example

• Both of our tests are currently failing.
• That's good; that's expected!

Fail

This example uses the testing framework Jasmine and the test runner Test'em.
TDD Example

• It’s our job now to go through


our tests, one by one, and
write the simplest code we
can to make each test pass.
• Our two tests are, “Hello says
hello,” and “Hello says hello
to someone.”
• Pay attention to light blue
boxes — these tell us why our
tests aren’t passing, and by
extension, what we can do to
fix it.

Fail

This example uses the testing framework Jasmine and the test runner Test'em.
TDD Example

• Let’s focus on the first test for


now.
• Jasmine’s reason for why this
specific test didn’t pass is that
hello is not defined.
• So, let’s try defining hello!

Run the test

This example uses the testing framework Jasmine and the test runner Test'em.
TDD Example

• We now have a different error, which means something different is happening.
• Now, our first test’s feedback
says, “Expected undefined to
equal ‘Hello, world!’.”
• Jasmine is telling us that we’re
expecting the output of hello( ) to
equal “Hello, world!”, but instead,
we’re getting undefined.

Fail

This example uses the testing framework Jasmine and the test runner Test'em.
TDD Example

• Let’s try putting something in


our function that will output
“Hello, world!” when the
function is run.
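A sketch of the simplest code that can do this (reconstructed; the screenshot's exact code may differ):

  function hello() {
    return "Hello, world!";
  }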

Run the test

This example uses the testing framework Jasmine and the test runner Test'em.
TDD Example

• We can see in the spec list that "says hello" is now green, which means it passed. Congratulations! Now onto the second one.

Pass

This example uses the testing framework Jasmine and the test runner Test'em.
TDD Example

• If you also notice, our error message has changed to "Expected 'Hello, world!' to equal 'Hello, Fred!'".
• So this lets us know that even when we specify a name, our program is still giving us "Hello, world!" (which isn't what we want it to do!)

This example uses the testing framework Jasmine and the test runner Test'em.
TDD Example

• We want a way to input a name, and have it returned back to us.
• So, let’s add name as a
parameter.
• And it seems like we only
want “Hello, world!” to be
returned if we don’t specify a
name — so, let’s try using an
if statement!
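The resulting code (reconstructed from the description; the screenshot may differ slightly):

  function hello(name) {
    if (name === undefined) {
      return "Hello, world!";
    }
    return "Hello, " + name + "!";
  }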

Run the test


This example uses the testing framework Jasmine and the test runner Test'em.
TDD Example

• And if we go back to our test


specs they’re both passing!
• Our program works exactly as it
should.
• Development for this function
should stop here.

Pass
This example uses the testing framework Jasmine and the test runner Test'em.
Test-Driven Development (TDD) or
"Test First" Approach: component level

1. Quickly add a test, just enough for the code to fail.
2. Run your tests and make sure that the new test fails.
3. Write or update the functional code, just enough for it to pass the new test.
4. Run your tests one more time. In case of any failure, update the code and follow the procedure again.
Testing Levels

[Diagram: test levels overview: Component Test (drivers and stubs, test first), Integration Test (top down, bottom up, ad hoc, big bang), System Test]
Integration testing

• Second test level after component testing

Assumption
• The assigned test objects (i.e. individual components) have already been tested
• Defects found have been corrected

Integration
• Developers, testers and special integration teams build groups of components into larger modules and sub-systems

Integration Testing
• Tests whether the interaction of all components works properly

Objective
• To find defects at the interfaces and in the interaction between integrated components
Integration Testing: Test Object

• Individual components are assembled progressively into larger units (integration) and undergo an integration test.

• Each sub-system can be the basis for further integration of larger units.

• In practice
• a software system is rarely developed from scratch ("greenfield" development); usually an existing system is modified, extended or linked with other systems
• many system components are standard products purchased on the market
• such legacy or standard components often go untested in component testing; in integration testing, these system parts need to be taken into account and their interaction with other parts needs to be checked
Integration Testing Test Harness

• During integration testing, drivers are required that provide the test objects with test data and that accept and record the test results.

• Any existing drivers from component testing can normally be reused.

• As the interface calls and the data traffic across the driver interfaces need to be examined, integration testing often requires additional diagnostic tools, called monitors, to read and record the data passed between the components (a sketch follows).
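As a sketch, a very simple monitor can be a wrapper that records every call and return value crossing a component interface (all names here are invented):

  function monitor(name, fn) {
    return function (...args) {
      console.log("CALL  ", name, JSON.stringify(args));
      const result = fn(...args);               // forward the call unchanged
      console.log("RETURN", name, JSON.stringify(result));
      return result;
    };
  }

  // Usage: record the traffic flowing from component A into component B.
  // componentB.process = monitor("B.process", componentB.process);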
Integration testing: Test Objectives
The objectives of integration testing are clear:
• detect defects in the interfaces.
• detect defects in the interaction between components.
• test non-functional characteristics (e.g. performance) if possible
Problems can occur even when trying to integrate only two components, when a common build cannot be made
• because their interface formats are incompatible
• because some data is missing
• because the developers have split the system into different components than specified.
Other issues:
• It can be difficult to find problems affecting the performance of the interacting program parts.
• Defects in the data exchange between the components can only be detected by a dynamic test.
Integration testing: Test Objectives

The following defects can be broadly distinguished:

• A component transmits nothing, or syntactically incorrect data, to the receiving component, which causes it to malfunction or crash (functional defect of a component; incompatible formats, interface or protocol errors).

• The communication works correctly, but the components involved interpret the data passed differently (functional defect of a component; conflicting or misinterpreted specifications); a small illustration follows this list.

• The data is transferred correctly, but at the wrong time (timing problem) or in too-short intervals of time (throughput or load problem).

None of these types of defects may be found in component testing, because the failures only appear in the interaction between two software units.
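A tiny illustration of the second defect type (both components work on their own but interpret the exchanged data differently; all names and values are invented):

  // Component A sends a date as DD-MM-YYYY...
  function sendEnrolmentDate() {
    return "01-02-2025"; // A means: 1 February 2025
  }

  // ...but component B parses it as MM-DD-YYYY.
  function parseEnrolmentDate(s) {
    const [month, day, year] = s.split("-").map(Number);
    return { day: day, month: month, year: year };
  }

  // Each component passes its own component test; only an integration test
  // that checks the values actually exchanged reveals the misinterpretation:
  console.log(parseEnrolmentDate(sendEnrolmentDate())); // { day: 2, month: 1, year: 2025 }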
Integration testing without
component testing?
• Is it possible not to perform component testing, so that all test cases are
executed after the integration?
• Of course this is possible and, unfortunately, this is often an approach
encountered in practice.
• Some serious disadvantages are connected with this approach:
✓ Most failures that occur in such tests are caused by functional defects of individual components. An implicit component test then has to be executed in an unsuitable test environment, which complicates access to the single component.

✓ As no suitable access to the single component is possible, some failures cannot be provoked and therefore some defects cannot be found.

✓ If a failure occurs in the test, it may be difficult or even impossible to narrow down its cause.
Integration strategy: Top down

• The test sequence begins with the component that calls other components, but (except
the operating system) is not called itself.
• The lower unavailable components are replaced by stubs.
• Successively, the components of lower system layers are integrated.
• Each tested higher layer serves as a driver.

[Diagram: example of a component hierarchy]
Integration strategy: Top down

Advantage

• Only simple drivers are needed, as the higher, pre-tested components form the major part of the test environment.

Disadvantage

• Subordinate, not yet integrated components must be replaced by stubs, which can be very costly.
Integration strategy: Bottom up

• The test begins with the elementary components of the system that do not call other
components (except the functions of the operating system).
• Successively larger subsystems are assembled from tested components, followed by a
test of this integration.

[Diagram: example of a component hierarchy]
Integration strategy: Bottom up

Advantage

• No stubs are needed.

Disadvantage

• Higher components must be simulated by drivers.


Integration strategy: Remarks

• Both strategies can only be used in their pure form if the system to be tested
is programmed in a strictly hierarchical manner. This is rarely found in practice.

• Therefore, in reality, we always choose a more or less individual blend of the two integration strategies.

• The larger the number of components introduced in an individual integration, the more difficult it becomes to isolate defects and the greater the time required for debugging.
Ad-hoc Integration testing

• The components are integrated in the (random) order in which they are completed.
• Once a component has passed its component test, it is checked whether it can be attached to an already tested component or whether it fits into a partially integrated subsystem.
• If so, the two are integrated and the integration test between them is executed.

Advantage
• Time is saved, as each component is integrated into its proper environment as early as possible.

Disadvantage
• Stubs as well as drivers are needed.
Big – bang Integration testing

• The integration waits until all software components have been developed.
• Then, everything is thrown together at once.
• In the worst case, upstream component tests are abandoned

[Diagram: example of a component hierarchy]
Big – bang Integration testing

Disadvantage

• The time spent waiting for the big bang is lost test execution time.
• Since testing often suffers from a lack of time anyway, not one day of testing should be given away.
• All failures are concentrated in one build; it may be difficult or impossible to get the system to run at all.
• The localization and correction of defects is difficult and time-consuming.
Integration testing: Choosing a strategy

An optimal (time- and cost-saving) integration strategy depends on conditions that must be analysed individually in each project:

The system architecture
• determines which and how many components the whole system consists of, and how they depend on one another.

The project plan
• defines when the single components are to be developed and when they are ready for testing.

The test plan/master test plan
• determines how intensively the system aspects must be tested and at which test level.

The test manager
• must establish, from these conditions, an integration strategy suitable for the project.
• This is done in consultation with the project manager.
To be continued…
