Chapter 17 - Software Testing Strategies


Slide Set to accompany
Software Engineering: A Practitioner’s Approach,
7/e
by Roger S. Pressman

Slides copyright © 1996, 2001, 2005, 2009 by Roger S. Pressman

For non-profit educational use only


May be reproduced ONLY for student use at the university level when used in conjunction with
Software Engineering: A Practitioner's Approach, 7/e. Any other reproduction or use is
prohibited without the express written permission of the author.

All copyright information MUST appear if these slides are posted on a website for student use.
Software Lifecycle Activities

[Figure: the lifecycle activities Requirements Elicitation, Analysis, System Design, Implementation, and Testing, with their work products: requirements are expressed in terms of the Use Case Model, structured by Application Domain Objects (analysis) and Subsystems (system design), implemented by Source Code, and verified by the Test Case Model.]
System/Software Testing
• Error detection and removal (debugging)
• Determine level of reliability
• Done by independent quality assurance group (except for
unit/component testing)
• The members of the SQA group must ensure that the developers are
doing high-quality work
– At the end of each workflow
– When the product is complete

3
Program testing
• Testing is intended to show that a program does
what it is intended to do and to discover program
defects before it is put into use.
• When you test software, you execute a program
using artificial data.
• Testing can discover the presence of errors, NOT their
absence.
• Testing is part of a more general verification and
validation (V&V) process, which also includes
static validation techniques.

4
Verification vs. Validation
• Verification: "Are we building the software right?"
– The software should conform to its specification.
• Validation: "Are we building the right software?"
– The software should do what the user really requires.
• Aim of V & V is to establish confidence that
the system is ‘fit for purpose’.
5
Terminology
• Failure: Any deviation of the observed
behavior from the specified behavior
• Error: The system is in a state such that
further processing by the system can lead
to a failure
• Fault: The mechanical or algorithmic
cause of an error
• Validation: Activity of checking for
deviations between the observed behavior
of a system and its specification.
6
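The fault/error/failure distinction can be made concrete with a small example. The class, method, and values below are hypothetical, chosen only to make each term visible:

```java
// Hypothetical illustration of fault -> error -> failure.
public class FaultErrorFailure {

    // Specification: return the largest element of a non-empty array.
    static int max(int[] a) {
        int m = a[0];
        // Fault: the algorithmic defect -- the bound should be i < a.length,
        // so the last element is never examined.
        for (int i = 1; i < a.length - 1; i++) {
            if (a[i] > m) m = a[i];
        }
        // Error: at this point m may hold a wrong value -- the system is in
        // a state from which further processing can lead to a failure.
        return m;
    }

    public static void main(String[] args) {
        // Failure: the observed behavior (8) deviates from the
        // specified behavior (9).
        System.out.println(max(new int[]{3, 8, 9}));
    }
}
```

Note that the same fault stays hidden for inputs like `{3, 9, 8}`, which is exactly why testing can show the presence of errors but not their absence.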
Static Analysis vs Dynamic Analysis
• Static Analysis
– Without execution: Reading the source code
– E.g. Code Inspection
• Dynamic Analysis
– With execution: e.g. Testing;
– Black-box testing
– White-box testing

7
Test Strategy
• Unit Testing
– Individual components (class or subsystem) are tested
– Carried out by developers
– Goal: Confirm that the component or subsystem is correctly coded and carries out the intended functionality
• Integration Testing
– Groups of subsystems (collections of subsystems) and eventually the entire system are tested
– Carried out by developers/SQA
– Goal: Test the interfaces among the subsystems
• System Testing
– The entire system is tested
– Carried out by developers/SQA
– Goal: Determine if the system meets the requirements (functional and nonfunctional)
• Acceptance Testing
– Evaluates the system delivered by developers
– Carried out by the client; may involve executing typical transactions on site on a trial basis
– Goal: Demonstrate that the system meets the requirements and is ready to use
8
Testing Activities and Models

[Figure: each testing activity checks a corresponding model: Unit Testing the Object Design, Integration Testing the System Design, System Testing the Requirements Analysis, and Acceptance Testing the Client Expectations. Unit, integration, and system testing are performed by the developer; acceptance testing by the client.]
9
Software Testing

Part 1: Unit Testing

Unit Testing

[Figure: the software engineer applies test cases to the module to be tested and examines the results.]
Software Testing Techniques

• Black-box testing
– Test the input/output behavior
• White-box testing
– Test the internal logic of the subsystem or
class

12
Black-box Testing
• Focus: I/O behavior. For any given input (test
case), check the output to determine whether the
unit passes the test.
• Test cases designed from user requirements
– Almost always impossible to generate all possible inputs
("test cases")
• Goal: Reduce number of test cases by
equivalence partitioning:
– Divide inputs into equivalence classes
– Choose test cases for each equivalence class
• Example: If an object is supposed to accept a negative
number, testing one negative number is enough.

13
Black box testing: An example

public class MyCalendar {
    public int getNumDaysInMonth(int month, int year)
    {…}
}

Assume the following representations:
Month: (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
where 1 = Jan, 2 = Feb, …, 12 = Dec
Year: (1904, …, 1999, 2000, …, 2011)

How many test cases do we need to do a full black box unit test of getNumDaysInMonth()?
14
Black box testing: An example
• Equivalence classes for the month parameter
– Months with 30 days, months with 31 days, February, illegal months: 0, 13, -1
• Equivalence classes for the year parameter
– A normal year
– Leap years
– Illegal years: before 1904, after 2011

How many test cases do we need to do a full black box unit test of getNumDaysInMonth()?

4 month classes × 3 year classes = 12 test cases
15
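The 12 test cases can be sketched as one representative input per class combination. Since the method body was elided above, the implementation here is a hypothetical stand-in, written only so the sketch is runnable:

```java
// Sketch: one representative test input per equivalence-class pair.
// The body of getNumDaysInMonth() is a hypothetical stand-in.
public class MyCalendar {

    public int getNumDaysInMonth(int month, int year) {
        if (year < 1904 || year > 2011) return -1;       // illegal year
        if (month < 1 || month > 12) return -1;          // illegal month
        switch (month) {
            case 4: case 6: case 9: case 11: return 30;  // 30-day months
            case 2:                                      // February
                boolean leap = year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);
                return leap ? 29 : 28;
            default: return 31;                          // 31-day months
        }
    }

    public static void main(String[] args) {
        MyCalendar c = new MyCalendar();
        // Month classes: 30-day, 31-day, February, illegal.
        // Year classes: normal, leap, illegal.
        System.out.println(c.getNumDaysInMonth(4, 2005));   // 30-day month, normal year
        System.out.println(c.getNumDaysInMonth(1, 2005));   // 31-day month, normal year
        System.out.println(c.getNumDaysInMonth(2, 2005));   // February, normal year
        System.out.println(c.getNumDaysInMonth(2, 2004));   // February, leap year
        System.out.println(c.getNumDaysInMonth(0, 2005));   // illegal month
        System.out.println(c.getNumDaysInMonth(2, 1903));   // illegal year
        // ...the remaining combinations follow the same pattern
    }
}
```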
White-box Testing
• Focus: internal coding structure
– Every statement in the component is
executed at least once
• Four types of white-box testing
– Statement Testing
– Loop Testing
– Path Testing
– Branch Testing.

16
White-box Testing (Cont.)
• Statement coverage
– Tests each statement
• Loop coverage
– Loop to be executed exactly once
– Loop to be executed more than once
– Cause the execution of the loop to be skipped
completely
• Path coverage :
– Makes sure all paths in the program are executed
• Branch coverage
– Ensure that each outcome in a condition is tested at
least once
17
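The difference between branch and path coverage shows up already in a tiny method with two independent conditions (a hypothetical example, not from the slides):

```java
// Hypothetical method with two independent branches.
public class CoverageDemo {

    static int classify(int x, int y) {
        int r = 0;
        if (x > 0) r += 1;   // branch 1: true or false
        if (y > 0) r += 2;   // branch 2: true or false
        return r;
    }

    public static void main(String[] args) {
        // Branch coverage: two tests suffice, because together they
        // exercise both outcomes of each condition.
        System.out.println(classify(1, 1));    // both conditions true
        System.out.println(classify(-1, -1));  // both conditions false
        // Path coverage: four tests are needed, one per path
        // (true/true, true/false, false/true, false/false).
        System.out.println(classify(1, -1));
        System.out.println(classify(-1, 1));
    }
}
```

In general the number of paths grows exponentially with the number of independent branches, which is why full path coverage is rarely practical.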
White-box Vs Black-box Testing
• White-box Testing
– Test cases are based on the code structure
– White-box testing often tests what is done, instead of
what should be done
– Cannot detect missing use cases
• Black-box Testing
– Test cases are based on the system specification
– Often not clear whether the selected test cases uncover a
particular error

18
Software Testing

Part 2: Integration and System Testing
Integration Testing
• The entire system is viewed as a collection of subsystems (sets of classes) determined during the design
• Goal: Test all interfaces between subsystems and the interaction of subsystems
• The integration testing strategy determines the order in which the subsystems are selected for testing and integration.
20
Why do we do integration testing?
• Unit tests only test the unit in isolation
• Many failures result from faults in the interaction of subsystems
• Without integration testing the system test will be very time consuming
21
Stubs and drivers
• Driver:
– A component that calls the TestedUnit
– Controls the test cases
• Stub:
– A component the TestedUnit depends on
– Partial implementation
– Returns fake values.

[Figure: the Driver calls the TestedUnit, which in turn calls the Stub.]
22
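A minimal sketch of both roles, using made-up class names: a driver that feeds test cases to the tested unit, and a stub that stands in for a component the unit depends on.

```java
// Hypothetical names throughout; only the driver/stub pattern matters.

// The component the tested unit depends on.
interface CurrencyDataBase {
    double rate(String currency);
}

// Stub: partial implementation that returns fake values.
class CurrencyDataBaseStub implements CurrencyDataBase {
    public double rate(String currency) {
        return 2.0;  // fixed fake exchange rate
    }
}

// The unit under test.
class CurrencyConverter {
    private final CurrencyDataBase db;
    CurrencyConverter(CurrencyDataBase db) { this.db = db; }
    double toLocal(double amount, String currency) {
        return amount * db.rate(currency);
    }
}

// Driver: calls the tested unit and controls the test cases.
public class CurrencyConverterDriver {
    public static void main(String[] args) {
        CurrencyConverter unit = new CurrencyConverter(new CurrencyDataBaseStub());
        System.out.println(unit.toLocal(10.0, "EUR"));  // 20.0 with the stubbed rate
        System.out.println(unit.toLocal(0.0, "EUR"));
    }
}
```

Because the stub's return value is fixed, the driver can check the unit's arithmetic without a real database being available.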
Example: A 3-Layer-Design

[Figure: Layer I contains A (SpreadSheetView); Layer II contains B (EntityDataModel), C (Calculator), and D (CurrencyConverter), all called by A; Layer III contains E (BinaryFileStorage), F (XMLFileStorage), and G (CurrencyDataBase), called from Layer II.]
23
Big-Bang Approach

[Figure: each subsystem A–G is unit-tested on its own (Test A, …, Test G); then all seven are integrated at once and tested together (Test A, B, C, D, E, F, G).]
24
Bottom-up Testing Strategy
• The subsystems in the lowest layer of the
call hierarchy are tested individually
• Then the subsystems above this layer are
tested that call the previously tested
subsystems
• This is repeated until all subsystems are
included.

25
Bottom-up Integration Test

[Figure: for the 3-layer design: Test E and Test F, then Test B, E, F; Test G, then Test D, G; Test C; finally Test A, B, C, D, E, F, G.]
26
Top-down Testing Strategy
• Test the subsystems in the top layer first
• Then combine all the subsystems that are
called by the tested subsystems and test
the resulting collection of subsystems
• Do this until all subsystems are
incorporated into the tests.

27
Top-down Integration Test

[Figure: Test A (Layer I), then Test A, B, C, D (Layers I + II), then Test A, B, C, D, E, F, G (all layers).]
28
Pros and Cons: Top-Down
Integration Testing
Pros:
– No drivers needed
Cons:
– Stubs are needed
– Writing stubs is difficult: Stubs must
allow all possible conditions to be tested
– Large number of stubs may be required,
especially if the lowest level of the
system contains many methods

29
Pros and Cons: Bottom-Up
Integration Testing
• Pro
– No stubs needed
• Con:
– Drivers are needed.
– Tests an important subsystem (the
user interface) last

30
System Testing

• Functional Testing
– Validates functional requirements
• Performance Testing
– Validates non-functional requirements
• Acceptance Testing
– Validates the client's expectations

31
Functional Testing

Goal: Test functionality of system
• Test cases are designed from the requirements analysis document (better: user manual) and centered around requirements and key functions (use cases)
• The system is treated as black box
32
Performance Testing
Goal: Try to violate non-functional requirements
• Test how the system behaves when overloaded
– Can bottlenecks be identified?
• Try unusual orders of execution
– Call a receive() before send()
• Check the system's response to large volumes of data
– If the system is supposed to handle 1000 items, try it with 1001 items.
33
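The 1000-item example can be sketched as a boundary test. `ItemStore` and its capacity-limiting behavior are hypothetical, invented only to make the idea concrete:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical component specified to handle 1000 items.
class ItemStore {
    static final int CAPACITY = 1000;
    private final List<String> items = new ArrayList<>();

    boolean add(String item) {
        if (items.size() >= CAPACITY) return false;  // reject overload gracefully
        return items.add(item);
    }
}

public class VolumeTest {
    public static void main(String[] args) {
        ItemStore store = new ItemStore();
        // Fill to the specified limit: all 1000 inserts must succeed.
        boolean allAccepted = true;
        for (int i = 0; i < 1000; i++) allAccepted &= store.add("item" + i);
        System.out.println(allAccepted);
        // Item 1001 is the interesting case: how does the system behave
        // just beyond its specified volume? Here: rejected, no crash.
        System.out.println(store.add("item1001"));
    }
}
```

The point of the test is not the rejection itself but that the behavior at 1001 is defined at all, rather than an unhandled exception or data corruption.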
Types of Performance Testing
• Stress testing
– Stress limits of system (e.g., max # of users)
• Volume testing
– Test what happens if large amounts of data are handled
• Configuration testing
– Test the various software and hardware configurations
• Compatibility testing
– Test backward compatibility with existing systems
• Timing testing
– Evaluate response times and time to perform a function
• Security testing
– Try to violate security requirements
• Environmental testing
– Test tolerances for heat, humidity, motion
• Quality testing
– Test reliability, maintainability & availability
• Recovery testing
– Test system's response to presence of errors or loss of data
• Human factors testing
– Test with end users.
Acceptance Testing
• Goal: Demonstrate system is ready for operational use
– Choice of tests is made by client
– Many tests can be taken from integration testing
– Acceptance test is performed by the client, not by the developer.
• Alpha test:
– Client uses the software at the developer's environment.
• Beta test:
– Conducted at client's environment (developer is not present).

35
Testing has many activities
Establish the test objectives
Design the test cases
Write the test cases
Test the test cases
Execute the tests
Evaluate the test results
Change the system

36
Test Team

[Figure: the test team draws on a professional tester, analyst, system designer, configuration management specialist, and user; the programmer is left out as too familiar with the code.]
37
Summary
• Testing is still a black art, but many rules and heuristics are available
• Testing consists of
– Unit testing
– Integration testing
– System testing
– Acceptance testing
• Testing has its own lifecycle
38
