
Software Testing Fundamentals

Software Testing & Debugging
Lecture 6(b), BSIT 6th
Thanks to M8034 @ Peter Lo 2006

■ Software Testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding.
■ It is concerned with actively identifying errors in software.
■ Testing is a dynamic assessment of the software:
◆ Sample input
◆ Actual outcome compared with expected outcome

Testing Objectives
■ Software Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.
■ It is the process of checking whether the software matches its specification for specific cases, called Test Cases.
■ Testing consumes 40-50% of the development effort.
■ A Good Test Case is one that has a high probability of finding an undiscovered error.

Testing vs. Debugging
■ Testing is different from debugging.
■ Debugging is the removal of defects in the software; it is a correction process.
■ Testing is an assessment process.
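To make the idea of a test case concrete, here is a minimal sketch (not part of the original slides): a sample input, an expected outcome, and a comparison of actual against expected. The function compute_discount and its discount rule are hypothetical.

```python
# Hypothetical module under test: a discount calculator.
def compute_discount(order_total):
    """Return the discount rate for a given order total."""
    if order_total >= 100:
        return 0.10   # 10% discount for large orders
    return 0.0        # no discount otherwise

# A test case = sample input + expected outcome, compared with the actual outcome.
def test_large_order_gets_ten_percent_discount():
    expected = 0.10
    actual = compute_discount(150)
    assert actual == expected, f"expected {expected}, got {actual}"

if __name__ == "__main__":
    test_large_order_gets_ten_percent_discount()
    print("test passed")
```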
What can Testing Show?
■ Errors
■ Requirements consistency
■ Performance
■ An indication of quality

Who Tests the Software?
■ Developer: understands the system, but will test "gently", and is driven by "delivery".
■ Independent Tester: must learn about the system, but will attempt to break it, and is driven by quality.

Testing Paradox
■ To gain confidence, a successful test is one that shows the software behaves as described in the functional specs.
■ To reveal errors, a successful test is one that finds an error.
■ In practice, a mixture of defect-revealing and correct-operation tests is used.
■ The developer performs constructive actions; the tester performs destructive actions.
■ Two classes of input are provided to the test process:
◆ Software Configuration: includes the Software Requirements Specification, Design Specification, and source code.
◆ Test Configuration: includes the Test Plan and Procedure, any testing tools that are to be used, and test cases and their expected results.
Necessary Conditions for Testing
■ A controlled/observed environment, because tests must be exactly reproducible.
◆ Sample Input – a test uses only a small sample of inputs (a limitation).
◆ Predicted Result – the results of a test ideally should be predictable.
■ The actual output must be comparable with the expected output.

Attributes of a "Good Test"
■ A good test has a high probability of finding an error.
◆ The tester must understand the software and attempt to develop a mental picture of how the software might fail.
■ A good test is not redundant.
◆ Testing time and resources are limited.
◆ There is no point in conducting a test that has the same purpose as another test.
■ A good test should be neither too simple nor too complex.
◆ A side effect of attempting to combine a series of tests into one test case is that this approach may mask errors.

Attributes of Testability
■ Operability — it operates cleanly.
■ Observability — the results of each test case are readily observed.
■ Controllability — the degree to which testing can be automated and made effective.
■ Decomposability — testing can be carried out at different stages.
■ Simplicity — reduce complex architecture and logic to simplify tests.
■ Stability — few changes are requested during testing.
■ Understandability — the purpose of the system is clear to the evaluator.

Software Testing Techniques
■ White Box Testing, or Structural Testing, is derived directly from the implementation of a module and is able to test all the implemented code.
■ Black Box Testing, or Functional Testing, is able to test whether any functionality is missing from the implementation.
■ Both white-box and black-box testing are applied through specific methods and strategies, covered in the following slides.
White Box Testing Technique
■ White box testing is a test case design method that uses the control structure of the procedural design to derive test cases.
■ The goal is to ensure that all statements and conditions have been executed at least once.
■ White box testing of software is predicated on close examination of procedural detail.
■ Logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops.
■ The status of the program may be examined at various points to determine if the expected or asserted status corresponds to the actual status.

Process of White Box Testing
■ Tests are derived from an examination of the source code for the modules of the program.
■ These are fed as input to the implementation, and the execution traces are used to determine if there is sufficient coverage of the program source code.

Benefit of White Box Testing
■ Using white box testing methods, the software engineer can derive test cases that:
◆ Guarantee that all independent paths within a module have been exercised at least once;
◆ Exercise all logical decisions on their true and false sides;
◆ Execute all loops at their boundaries and within their operational bounds; and
◆ Exercise internal data structures to ensure their validity.
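As a hedged illustration of these coverage goals, the hypothetical sketch below derives two test cases from the control structure of a small function so that its single decision is exercised on both its true and false sides and every statement runs at least once.

```python
# Module under test (hypothetical): one decision, two statements to cover.
def classify(value, threshold):
    """Label a value relative to a threshold."""
    if value > threshold:      # decision to exercise on both sides
        return "high"          # executed only when the decision is true
    return "low"               # executed only when the decision is false

# White-box test cases chosen from the code structure, not the spec:
def test_decision_true_side():
    assert classify(10, 5) == "high"   # drives the decision to True

def test_decision_false_side():
    assert classify(3, 5) == "low"     # drives the decision to False

if __name__ == "__main__":
    test_decision_true_side()
    test_decision_false_side()
    print("both sides of the decision executed")
```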
Exhaustive Testing
■ There are 10^14 possible paths through the example program (a flow graph containing a loop of up to 20 iterations). If we execute one test per millisecond, it would take 3,170 years to test this program!

Selective Testing
■ Rather than attempting to exercise every path, a representative selection of important logical paths is chosen and exercised (the figure highlights one selected path through the same flow graph with its loop of up to 20 iterations).
Condition Testing
■ A simple condition is a Boolean variable or a relational expression (e.g. >, <).
■ Condition testing is a test case design method that exercises the logical conditions contained in a program module, and therefore focuses on testing each condition in the program.
■ It is a type of white box testing.
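A hypothetical sketch of condition testing: the compound condition inside can_withdraw is built from two simple conditions (a relational expression and a Boolean variable), and the test cases are chosen so that each simple condition is evaluated to both true and false.

```python
# Module under test (hypothetical): a compound condition made of two simple conditions.
def can_withdraw(balance, amount, account_open):
    # simple condition 1: amount <= balance   (relational expression)
    # simple condition 2: account_open        (Boolean variable)
    return amount <= balance and account_open

# Condition-testing cases: each simple condition takes both truth values.
def test_conditions():
    assert can_withdraw(100, 50, True) is True     # cond1=True,  cond2=True
    assert can_withdraw(100, 150, True) is False   # cond1=False, cond2=True
    assert can_withdraw(100, 50, False) is False   # cond1=True,  cond2=False
    assert can_withdraw(100, 150, False) is False  # cond1=False, cond2=False

if __name__ == "__main__":
    test_conditions()
    print("every simple condition exercised as True and as False")
```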

Data Flow Testing
■ The data flow testing method selects test paths of a program according to the locations of definitions and uses of variables in the program.

Loop Testing
■ Loop testing is a white-box testing technique that focuses exclusively on the validity of loop constructs.
■ Four classes of loops:
1. Simple loops
2. Concatenated loops
3. Nested loops
4. Unstructured loops
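To illustrate the data-flow idea, the hypothetical sketch below picks a test input that covers a definition-use path for the variable total, from the statements where it is defined to the statement where its value is used.

```python
# Hypothetical module: the variable `total` is defined and later used.
def average(values):
    total = 0                   # definition of `total`
    for v in values:
        total = total + v       # use and redefinition of `total`
    return total / len(values)  # use of `total`

# Data-flow test: select an input that exercises the definition-use path
# from the assignments of `total` to its use in the return statement.
def test_def_use_path_for_total():
    assert average([2, 4, 6]) == 4

if __name__ == "__main__":
    test_def_use_path_for_total()
    print("def-use path for `total` exercised")
```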
Test Cases for Simple Loops
■ Where n is the maximum number of allowable passes through the loop:
◆ Skip the loop entirely
◆ Only one pass through the loop
◆ Two passes through the loop
◆ m passes through the loop, where m < n
◆ n-1, n, n+1 passes through the loop

Test Cases for Nested Loops
■ Start at the innermost loop. Set all other loops to minimum values.
■ Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.
■ Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops to "typical" values.
■ Continue until all loops have been tested.
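Applying the simple-loop schedule to a hypothetical function whose loop allows at most n = 10 passes might look like the sketch below: skip the loop, one pass, two passes, m < n passes, and n-1, n and n+1 passes.

```python
# Hypothetical module: sums at most `limit` items, so `limit` bounds the loop passes.
def sum_first(items, limit=10):
    total = 0
    for count, value in enumerate(items, start=1):
        if count > limit:          # the loop never makes more than `limit` useful passes
            break
        total += value
    return total

def test_simple_loop_schedule():
    n = 10
    assert sum_first([]) == 0                  # skip the loop entirely (0 passes)
    assert sum_first([1]) == 1                 # exactly one pass
    assert sum_first([1, 1]) == 2              # exactly two passes
    assert sum_first([1] * 5) == 5             # m passes, m < n
    assert sum_first([1] * (n - 1)) == n - 1   # n-1 passes
    assert sum_first([1] * n) == n             # n passes
    assert sum_first([1] * (n + 1)) == n       # n+1 items: the extra pass must be rejected

if __name__ == "__main__":
    test_simple_loop_schedule()
    print("simple-loop test cases passed")
```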
Black Box Testing
■ Black box testing focuses on the functional requirements of the software, i.e. it derives sets of input conditions that will fully exercise all functional requirements for a program.
■ (Figure: requirements drive the selection of inputs and events, and the resulting output is checked.)
■ Black box testing attempts to find errors in the following categories:
◆ Incorrect or missing functions
◆ Interface errors
◆ Errors in data structures or external database access
◆ Performance errors
◆ Initialization and termination errors
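A hedged sketch of the black box view: the test cases below are derived purely from a hypothetical stated requirement ("a password is valid if it is 8 to 16 characters long"), without reference to how is_valid_password is implemented.

```python
# The implementation is irrelevant to the black box tester; only the spec matters.
def is_valid_password(password):
    """Requirement (hypothetical): valid if 8 to 16 characters long, inclusive."""
    return 8 <= len(password) <= 16

# Black box test cases derived from the requirement, not from the code:
def test_functional_requirement():
    assert is_valid_password("a" * 7) is False    # too short
    assert is_valid_password("a" * 8) is True     # shortest valid length
    assert is_valid_password("a" * 12) is True    # typical valid length
    assert is_valid_password("a" * 16) is True    # longest valid length
    assert is_valid_password("a" * 17) is False   # too long

if __name__ == "__main__":
    test_functional_requirement()
    print("functional requirement exercised without reading the implementation")
```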

Process of Black Box Testing
■ (Figure: the process of black box testing.)

Random Testing
■ Input is given at random and submitted to the program, and the corresponding output is then compared with the expected output.

Comparison Testing
■ All versions are executed in parallel, with a real-time comparison of results to ensure consistency.
◆ If the output from each version is the same, then it is assumed that all implementations are correct.
◆ If the output is different, each of the applications is investigated to determine if a defect in one or more versions is responsible for the difference.

Automated Testing Tools
■ Code auditors
■ Assertion processors
■ Test file generators
■ Test data generators
■ Test verifiers
■ Output comparators
■ Also known as test automation tools; they may be static or dynamic — explore them yourself.
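A small, hypothetical sketch of comparison testing with an output comparator: two independently written versions of the same sorting function are run on the same inputs, and any input on which they disagree is reported.

```python
# Two independently developed versions of the same specification (hypothetical).
def sort_version_a(values):
    return sorted(values)                 # version A: built-in sort

def sort_version_b(values):
    result = list(values)                 # version B: insertion sort
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

def compare_versions(test_inputs):
    """Output comparator: report every input on which the two versions disagree."""
    mismatches = []
    for data in test_inputs:
        a, b = sort_version_a(data), sort_version_b(data)
        if a != b:
            mismatches.append((data, a, b))
    return mismatches

if __name__ == "__main__":
    inputs = [[], [1], [3, 1, 2], [5, 5, 1], [9, -2, 0, 4]]
    print(compare_versions(inputs) or "all versions agree on all inputs")
```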


Testing Strategy
■ A testing strategy must always incorporate test planning, test case design, test execution, and the resultant data collection and evaluation.
■ (Figure: the strategy spans Unit Test, Integration Test, Validation Test and System Test.)

Test Case Design
"Bugs lurk in corners and congregate at boundaries ..." — Boris Beizer
■ OBJECTIVE: to uncover errors
■ CRITERIA: in a complete manner
■ CONSTRAINT: with a minimum of effort and time
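Echoing Beizer's remark, the hypothetical test case below deliberately targets the edges of the input domain of a small grading function, where off-by-one mistakes tend to congregate.

```python
# Hypothetical module under test: pass/fail decision with a boundary at 50.
def grade(score):
    """Requirement (hypothetical): scores of 50 or above pass; below 50 fail."""
    return "pass" if score >= 50 else "fail"

# Test cases designed to sit on and around the boundary:
def test_boundary_values():
    assert grade(49) == "fail"   # just below the boundary
    assert grade(50) == "pass"   # exactly on the boundary
    assert grade(51) == "pass"   # just above the boundary
    assert grade(0) == "fail"    # lower corner of the domain
    assert grade(100) == "pass"  # upper corner of the domain

if __name__ == "__main__":
    test_boundary_values()
    print("boundary test cases passed")
```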

Verification and Validation
■ Verification – confirms that the software conforms to its specification.
◆ Are we building the product right?
■ Validation – confirms that the software meets the customer's expectations.
◆ Are we building the right product?

Generic Characteristics of Software Testing Strategies
■ Testing begins at the module level and works outward toward the integration of the entire system.
■ Different testing techniques are appropriate at different points in time.
■ Testing is conducted by the software developer and an Independent Test Group (ITG).
■ Debugging must be accommodated in any testing strategy.


Software Testing Strategy
■ A strategy for software testing moves outward along the spiral:
◆ Unit Testing: concentrates on each unit of the software as implemented in the source code.
◆ Integration Testing: focuses on the design and the construction of the software architecture.
◆ Validation Testing: requirements established as part of software requirements analysis are validated against the software that has been constructed.
◆ System Testing: the software and other system elements are tested as a whole.

Software Testing Direction
■ Unit Tests
◆ Focus on each module and make heavy use of white box testing.
■ Integration Tests
◆ Focus on the design and construction of the software architecture; black box testing is most prevalent, with limited white box testing.
■ High-order Tests
◆ Conduct validation and system tests; make use of black box testing exclusively (Validation Test, System Test, Alpha and Beta Test, and other specialized testing).
Unit Testing
■ Unit testing focuses on the module that results from coding.
■ Each module is tested in turn and in isolation from the others.
■ Unit testing uses white-box techniques.
■ For the module to be tested, the test cases examine:
◆ Interface
◆ Local data structures
◆ Boundary conditions
◆ Independent paths
◆ Error handling paths

Unit Testing Procedures
■ Because a module is not a stand-alone program, driver and/or stub software must be developed for each unit test.
■ A driver is a program that accepts test case data, passes such data to the module, and prints the relevant results.
■ Stubs serve to replace modules that are subordinate to the module being tested.
■ A stub, or "dummy subprogram", uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns.
■ Drivers and stubs also represent testing overhead.
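A compact, hypothetical sketch of a driver and a stub: compute_invoice is the module under test, its subordinate module is replaced by a stub that returns a fixed tax rate and prints verification of entry, and the driver feeds in test case data and prints the results.

```python
# Module under test (hypothetical): depends on a subordinate module for the tax rate.
def compute_invoice(amount, fetch_tax_rate):
    rate = fetch_tax_rate()           # call into the subordinate module
    return round(amount * (1 + rate), 2)

# Stub: replaces the real subordinate module, does minimal work, confirms it was entered.
def tax_rate_stub():
    print("stub: fetch_tax_rate entered")
    return 0.05                       # fixed, predictable value for the test

# Driver: accepts test case data, passes it to the module, prints the relevant results.
def driver():
    test_cases = [(100.0, 105.0), (0.0, 0.0), (19.99, 20.99)]
    for amount, expected in test_cases:
        actual = compute_invoice(amount, tax_rate_stub)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: invoice({amount}) -> {actual}, expected {expected}")

if __name__ == "__main__":
    driver()
```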


Integration Testing
■ A technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing.
■ The objective is to take unit-tested modules and build a program structure that has been dictated by the design.
■ Integration testing should be done incrementally.
■ It can be done top-down, bottom-up, or in both directions.

Example
■ For the program structure, the following test cases may be derived if top-down integration is conducted:
◆ Test case 1: Modules A and B are integrated.
◆ Test case 2: Modules A, B and C are integrated.
◆ Test case 3: Modules A, B, C and D are integrated (etc.)

Validation Testing
■ Ensures that the software functions in a manner that can reasonably be expected by the customer.
■ Achieved through a series of black box tests that demonstrate conformity with requirements.
■ A test plan outlines the classes of tests to be conducted, and a test procedure defines specific test cases that will be used in an attempt to uncover errors in conformity with requirements.
■ Validation testing is driven by the validation criteria that were elicited during requirements capture.
■ A series of acceptance tests are conducted with the end users.

System Testing
■ System testing is a series of different tests whose primary purpose is to fully exercise the computer-based system.
■ System testing focuses on issues arising from system engineering.
■ It tests the entire computer-based system.
■ One main concern is the interfaces between software, hardware and human components.
■ Kinds of system testing:
◆ Recovery
◆ Security
◆ Stress
◆ Performance


Recovery Testing Security Testing

■ A system test that forces software to fail in a ■ Security testing attempts to verify that protection
variety of ways and verifies that recovery is mechanisms built into a system will in fact protect it
properly performed. from improper penetration.
■ If recovery is automatic, re-initialization, check- ■ Particularly important to a computer-based system that
pointing mechanisms, data recovery, and restart are manages sensitive information or is capable of causing
each evaluated for correctness. actions that can improperly harm individuals when
■ If recovery is manual, the mean time to repair is targeted.
evaluated to determine whether it is within acceptable
limits.

85 86
Thanks to M8034 @ Peter Lo 2006 Thanks to M8034 @ Peter Lo 2006

Stress Testing
■ Stress testing is designed to confront programs with abnormal situations in which an unusual quantity, frequency, or volume of resources is demanded.
■ A variation is called sensitivity testing:
◆ It attempts to uncover data combinations within valid input classes that may cause instability or improper processing.

Performance Testing
■ Tests the run-time performance of software within the context of an integrated system.
■ Extra instrumentation can monitor execution intervals, log events as they occur, and sample machine states on a regular basis.
■ Use of instrumentation can uncover situations that lead to degradation and possible system failure.
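A minimal sketch of the instrumentation idea behind performance testing, assuming Python's standard time and logging modules: a timing wrapper measures execution intervals, logs each event, and flags runs that exceed a hypothetical performance budget.

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Hypothetical operation whose run-time performance is being measured.
def process_batch(size):
    return sum(i * i for i in range(size))

def timed(label, func, *args):
    """Instrumentation: measure the execution interval and log the event."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    logging.info("%s took %.4f s", label, elapsed)
    return result, elapsed

if __name__ == "__main__":
    # Run with increasing load and flag any interval above a (hypothetical) budget.
    budget_seconds = 0.5
    for size in (10_000, 100_000, 1_000_000):
        _, elapsed = timed(f"process_batch({size})", process_batch, size)
        if elapsed > budget_seconds:
            logging.warning("performance budget of %.1f s exceeded", budget_seconds)
```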
Debugging
■ Debugging is the process that results in the removal of an error after the execution of a test case.
■ Its objective is to remove the defects uncovered by tests.
■ Because the cause may not be directly linked to the symptom, it may be necessary to enumerate hypotheses explicitly and then design new test cases to allow confirmation or rejection of the hypothesized cause.

The Debugging Process
■ (Figure: test cases produce results; debugging of the results yields suspected causes; new test cases narrow these to identified causes; corrections are applied and regression tests are run.)

What is a Bug?
■ A bug is a part of a program that, if executed in the right state, will cause the system to deviate from its specification (or cause the system to deviate from the behavior desired by the user).

Characteristics of Bugs
■ The symptom and the cause may be geographically remote.
■ The symptom may disappear when another error is corrected.
■ The symptom may actually be caused by non-errors.
■ The symptom may be caused by a human error that is not easily traced.
■ It may be difficult to accurately reproduce the input conditions.
■ The symptom may be intermittent. This is particularly common in embedded systems that couple hardware and software inextricably.
■ The symptom may be due to causes that are distributed across a number of tasks running on different processors.


Debugging Techniques
■ Brute Force / Testing
■ Backtracking
■ Cause Elimination

Debugging Approaches – Brute Force
■ Probably the most common and least efficient method for isolating the cause of a software error.
■ The program is loaded with run-time traces and WRITE statements, in the hope that some of the information produced will indicate a clue to the cause of an error.
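The slides mention run-time traces and WRITE statements; a rough Python equivalent (a hypothetical sketch, not an example from the slides) is to scatter logging calls through the suspect code and read the trail of output for a clue to the cause.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="TRACE %(message)s")

# Hypothetical buggy function: fails for an empty list.
def mean(values):
    logging.debug("mean() entered with values=%r", values)   # run-time trace
    total = sum(values)
    logging.debug("total=%r, count=%r", total, len(values))  # the modern WRITE statement
    return total / len(values)                                # ZeroDivisionError when empty

if __name__ == "__main__":
    try:
        mean([])   # run the failing test case and read the trace for a clue
    except ZeroDivisionError:
        logging.debug("symptom reproduced: division by zero when the list is empty")
```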

Debugging Approaches – Backtracking
■ Fairly common in small programs.
■ Starting from where the symptom has been uncovered, backtrack manually until the site of the cause is found.
■ Unfortunately, as the number of source code lines increases, the number of potential backward paths may become unmanageably large.

Debugging Approaches – Cause Elimination
■ Data related to the error occurrence are organized to isolate potential causes.
■ A "cause hypothesis" is devised, and the above data are used to prove or disprove the hypothesis.
■ Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each.
■ If the initial tests indicate that a particular cause hypothesis shows promise, the data are refined in an attempt to isolate the bug.
Debugging Effort
■ (Figure: debugging effort splits into the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests.)

Consequences of Bugs
■ (Figure: damage ranges from mild, annoying, disturbing, serious, extreme and catastrophic up to infectious, depending on the bug type.)
■ Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.

Debugging Tools
■ Debugging compilers
■ Dynamic debugging aids ("tracers")
■ Automatic test case generators
■ Memory dumps

Debugging: Final Thoughts
■ Don't run off half-cocked; think about the symptom you're seeing.
■ Use tools (e.g., a dynamic debugger) to gain more insight.
■ If at an impasse, get help from someone else.
■ Be absolutely sure to conduct regression tests when you do "fix" the bug.
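As a closing, hypothetical sketch of the last point: after "fixing" a defect (here, a mean function that crashed on an empty list), the regression suite re-runs the previously passing test cases alongside the new one to confirm the fix broke nothing.

```python
# Hypothetical fix: `mean` now rejects an empty list instead of crashing.
def mean(values):
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)

# Regression suite: the new test for the fixed defect plus the tests that already passed.
def test_fix_for_empty_list():
    try:
        mean([])
    except ValueError:
        return                      # expected behaviour after the fix
    raise AssertionError("empty list should raise ValueError")

def test_existing_behaviour_still_works():
    assert mean([2, 4, 6]) == 4     # previously passing case must still pass
    assert mean([5]) == 5

if __name__ == "__main__":
    test_fix_for_empty_list()
    test_existing_behaviour_still_works()
    print("regression tests passed: the fix did not break existing behaviour")
```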
