
Testing Types

Ruchi K, NMIMS, Mumbai


Types of Testing
• Unit Testing:
• Individual subsystem
• Carried out by developers
• Goal: Confirm that the subsystem is correctly coded and carries out the intended functionality (a minimal sketch follows below)
• Integration Testing:
• Groups of subsystems (collections of classes) and eventually the entire system
• Carried out by developers
• Goal: Test the interfaces among the subsystems

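A minimal sketch of what such a developer-written unit test can look like in C, using assert from the standard library. The clamp function and its expected values are made up for illustration, not part of these slides.

#include <assert.h>

/* Hypothetical unit under test: keeps a value within [low, high]. */
static int clamp(int value, int low, int high)
{
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}

/* Unit test: exercises one subsystem (here a single function) in
 * isolation and confirms the intended functionality. */
int main(void)
{
    assert(clamp(5, 0, 10) == 5);    /* value already inside the range */
    assert(clamp(-3, 0, 10) == 0);   /* below the lower bound */
    assert(clamp(42, 0, 10) == 10);  /* above the upper bound */
    return 0;
}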


System Testing

• System Testing:
• The entire system
• Carried out by developers
• Goal: Determine if the system meets the requirements (functional and global)
• Acceptance Testing:
• Evaluates the system delivered by developers
• Carried out by the client. May involve executing typical transactions on site
on a trial basis
• Goal: Demonstrate that the system meets customer requirements and is ready
to use



Unit Testing
• Informal:
• Incremental coding
• Static Analysis:
• Hand execution: Reading the source code
• Walk-Through (informal presentation to others)
• Code Inspection (formal presentation to others)
• Automated Tools checking for
• syntactic and semantic errors
• departure from coding standards
• Dynamic Analysis:
• Black-box testing (Test the input/output behavior)
• White-box testing (Test the internal logic of the subsystem or
object)
• Data-structure based testing (Data types determine test cases)



Black-box Testing
• Focus: I/O behavior. If, for any given input, we can predict the output, then the module passes the test.
• It is almost always impossible to generate all possible inputs ("test cases"); a small illustrative sketch follows below.

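A small sketch of black-box test cases in C: inputs are chosen from the specification's equivalence classes and boundaries only, without looking at the implementation. The is_leap_year function is a made-up example, not part of these slides.

#include <assert.h>

/* Example unit; the tests below treat it purely as a black box. */
static int is_leap_year(int year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

int main(void)
{
    /* One representative input per class of the specification. */
    assert(is_leap_year(2004) == 1);  /* divisible by 4 only        */
    assert(is_leap_year(1900) == 0);  /* divisible by 100, not 400  */
    assert(is_leap_year(2000) == 1);  /* divisible by 400           */
    assert(is_leap_year(2023) == 0);  /* ordinary year              */
    return 0;
}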


White-box Testing
• Focus: Thoroughness (Coverage). Every statement in the component is
executed at least once.
• Four types of white-box testing (statement and branch coverage are illustrated below):
• Statement Testing
• Loop Testing
• Path Testing
• Branch Testing

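A hedged sketch of the difference between statement and branch coverage, using a made-up rebate function that is not from these slides.

#include <assert.h>

/* Hypothetical unit under test. */
static int rebate(int amount)
{
    int r = 0;
    if (amount > 100)
        r = amount / 10;
    return r;
}

int main(void)
{
    /* Statement coverage: amount = 200 alone executes every statement,
     * because the body of the if is entered. */
    assert(rebate(200) == 20);

    /* Branch coverage additionally requires the false outcome of the
     * condition, so a second test case is needed. */
    assert(rebate(50) == 0);

    /* Loop and path testing extend the same idea to loop iteration
     * counts and to complete paths through the control-flow graph. */
    return 0;
}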


Testing Activities
[Figure: testing activities carried out by the developer. Each subsystem's code goes through a unit test to produce a tested subsystem; the tested subsystems are combined in an integration test, guided by the system design document, to produce integrated subsystems; these then undergo a functional test, guided by the requirements analysis document and the user manual, to yield a functioning system. All tests by developer. Cf. levels of testing.]


Testing Activities continued

[Figure: testing activities continued. The functioning system is checked against the global requirements in a performance test (by the developer), giving a validated system; the validated system is checked against the client's understanding of the requirements in an acceptance test (by the client), giving an accepted system; the accepted system is checked against the user environment in an installation test (by the client), giving a usable system; the usable system finally becomes the system in use, tested (?) by the user against the user's understanding.]
Some Observations
• It is impossible to completely test any nontrivial
module or any system
• Theoretical limitations: Halting problem
• Practical limitations: Prohibitive in time and cost
• Testing can only show the presence of bugs, not their
absence (Dijkstra)

[Figure: control-flow graph with a branch inside a loop that executes 200 times, asking for the total number of execution paths; the sketch below works the number out.]

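A rough illustration of why the number of paths explodes, assuming one two-way branch inside a loop that runs 200 times; the figures are back-of-the-envelope estimates, not from the slides.

#include <stdio.h>

/* Each of the 200 iterations can take either branch independently,
 * so this fragment alone has 2^200 (about 1.6e60) distinct execution
 * paths -- far too many to enumerate, let alone test. */
int main(void)
{
    float score[200] = { 0.0f };   /* placeholder data */
    float sum = 0.0f;
    int skipped = 0, i;

    for (i = 0; i < 200; i++) {
        if (score[i] > 0.0f)
            sum += score[i];
        else
            skipped++;
    }
    printf("sum = %f, skipped = %d\n", sum, skipped);
    return 0;
}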


Levels of Testing in V Model

[Figure: the V model, plotting level of abstraction against time. Each development activity on the left leg is paired with the test activity on the right leg that verifies it: system requirements with system integration, software requirements with the acceptance test, preliminary design with software integration, detailed design with the component test, and code & debug with the unit test.]

N.B.: component test vs. unit test; acceptance test vs. system integration.
Test Planning
• A test plan:
• covers all types and phases of testing
• guides the entire testing process: who, why, when, what
• is developed as the requirements, functional specification, and high-level design are developed
• should be done before implementation starts
• A test plan includes:
• test objectives
• schedule and logistics
• test strategies
• test cases: procedure, data, expected result (an illustrative record follows below)
• procedures for handling problems

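An illustrative sketch, in C, of what one test-case entry of such a plan might record; the field names and entries are invented for this example, and the expected results refer to the FindMean example solved later in these slides.

#include <stdio.h>

/* One entry of a test plan's test-case list (illustrative fields). */
struct test_case {
    const char *id;         /* e.g. "TC-01"                      */
    const char *objective;  /* what the test is meant to show    */
    const char *procedure;  /* how to execute it                 */
    const char *data;       /* input data                        */
    const char *expected;   /* expected result                   */
};

static const struct test_case plan[] = {
    { "TC-01", "Mean of valid scores",
      "Run FindMean on a file of positive scores",
      "10 20 30", "The mean score is 20.000000" },
    { "TC-02", "Empty input handled",
      "Run FindMean on an empty file",
      "", "No scores found in file" },
};

int main(void)
{
    size_t i;
    for (i = 0; i < sizeof plan / sizeof plan[0]; i++)
        printf("%s: %s -> expect \"%s\"\n",
               plan[i].id, plan[i].objective, plan[i].expected);
    return 0;
}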


Fault Handling Techniques
• Fault Avoidance:
• Design Methodology
• Configuration Management
• Verification
• Fault Detection:
• Reviews
• Testing: Unit Testing, Integration Testing, System Testing
• Debugging: Correctness Debugging, Performance Debugging
• Fault Tolerance:
• Atomic Transactions
• Modular Redundancy


Solved Example: White-box Testing

void FindMean(FILE ScoreFile)
{   float SumOfScores = 0.0;
    int NumberOfScores = 0;
    float Mean = 0.0;
    float Score;
    Read(ScoreFile, Score);            /* Read in and sum the scores */
    while (!EOF(ScoreFile)) {
        if (Score > 0.0) {
            SumOfScores = SumOfScores + Score;
            NumberOfScores++;
        }
        Read(ScoreFile, Score);
    }
    /* Compute the mean and print the result */
    if (NumberOfScores > 0) {
        Mean = SumOfScores / NumberOfScores;
        printf("The mean score is %f\n", Mean);
    } else
        printf("No scores found in file\n");
}
White-box Testing Example: Determining the Paths

void FindMean(FILE ScoreFile)
{   float SumOfScores = 0.0;
    int NumberOfScores = 0;                       /* 1 */
    float Mean = 0.0;
    float Score;
    Read(ScoreFile, Score);
    while (!EOF(ScoreFile)) {                     /* 2 */
        if (Score > 0.0) {                        /* 3 */
            SumOfScores = SumOfScores + Score;
            NumberOfScores++;                     /* 4 */
        }                                         /* 5 */
        Read(ScoreFile, Score);                   /* 6 */
    }
    /* Compute the mean and print the result */
    if (NumberOfScores > 0) {                     /* 7 */
        Mean = SumOfScores / NumberOfScores;
        printf("The mean score is %f\n", Mean);   /* 8 */
    } else
        printf("No scores found in file\n");      /* 9 */
}
Constructing the Logic Flow Diagram

[Figure: logic flow graph for FindMean. From Start, node 1 (initialization) leads to node 2 (the while test). The True branch of 2 goes to node 3 (the if test on Score); its True branch goes to node 4 (accumulate the score) and its False branch to node 5 (join), both continuing to node 6 (read the next score) and back to node 2. The False branch of node 2 goes to node 7 (the if test on NumberOfScores); its True branch is node 8 (print the mean) and its False branch node 9 (print "No scores found"), both leading to Exit.]



Finding the Test Cases

[Figure: the flow graph with its edges labeled a through l. Concrete test data exercising these paths follows below.]
• a: Start → 1 → 2 (covered by any data)
• b: 2 → 3 (data set must contain at least one value)
• c: 2 → 7 (data set must be empty)
• d: 3 → 4 (positive score)
• e: 3 → 5 (negative score)
• f: 4 → 6
• g: 5 → 6
• h: 6 → 2 (reached if either f or e is reached)
• i: 7 → 9 (total score < 0.0)
• j: 7 → 8 (total score > 0.0)
• k: 8 → Exit
• l: 9 → Exit

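A hedged sketch of concrete test data for these paths. Since Read and EOF in the slides are pseudocode, this version substitutes fscanf with the same intended behaviour; the file names and values are chosen only to force each path class and are not from the slides.

#include <stdio.h>

/* fscanf-based variant of FindMean; Read() and EOF() in the slides are
 * pseudocode, so this sketch uses standard C I/O instead. */
static void FindMean(FILE *ScoreFile)
{
    float SumOfScores = 0.0f, Score, Mean;
    int NumberOfScores = 0;

    while (fscanf(ScoreFile, "%f", &Score) == 1) {
        if (Score > 0.0f) {
            SumOfScores += Score;
            NumberOfScores++;
        }
    }
    if (NumberOfScores > 0) {
        Mean = SumOfScores / NumberOfScores;
        printf("The mean score is %f\n", Mean);
    } else {
        printf("No scores found in file\n");
    }
}

int main(void)
{
    /* One test file per path class:
     * empty file          -> loop body never entered, "No scores found"
     * positive scores     -> if-branch taken, mean is printed
     * non-positive scores -> if-branch skipped, "No scores found"      */
    const char *names[]    = { "empty.txt", "positive.txt", "negative.txt" };
    const char *contents[] = { "",          "10 20 30",     "-5 -3" };
    int i;

    for (i = 0; i < 3; i++) {
        FILE *f = fopen(names[i], "w");
        if (f == NULL) return 1;
        fputs(contents[i], f);
        fclose(f);

        f = fopen(names[i], "r");
        if (f == NULL) return 1;
        printf("%s: ", names[i]);
        FindMean(f);
        fclose(f);
    }
    return 0;
}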


Comparison of White-box & Black-box Testing
• White-box Testing:
• A potentially infinite number of paths has to be tested
• Often tests what is done, instead of what should be done
• Cannot detect missing use cases
• Black-box Testing:
• Potential combinatorial explosion of test cases (valid & invalid data)
• Often not clear whether the selected test cases uncover a particular error
• Does not discover extraneous use cases ("features")
• Both types of testing are needed
• White-box testing and black-box testing are the extreme ends of a testing continuum
• Any choice of test case lies in between and depends on the following:
• Number of possible logical paths
• Nature of input data
• Amount of computation
• Complexity of algorithms and data structures


The 4 Testing Steps
1. Select what has to be 3. Develop test cases
measured • A test case is a set of test data
• Analysis: Completeness of or situations that will be used
requirements to exercise the unit (code,
• Design: tested for cohesion module, system) being tested
or about the attribute being
• Implementation: Code tests measured
2. Decide how the testing is 4. Create the test oracle
done • An oracle contains of the
• Code inspection predicted results for a set of
• Proofs (Design by Contract) test cases
• Black-box, white box, • The test oracle has to be
• Select integration testing written down before the actual
strategy (big bang, bottom up, testing takes place
top down, sandwich)

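A minimal sketch of a test oracle as data: the predicted results are written down up front and compared against the actual outputs when the tests run. The triple function is a stand-in chosen for this example, not from the slides.

#include <assert.h>
#include <stdio.h>

/* Stand-in unit under test. */
static int triple(int x) { return 3 * x; }

/* Test oracle: inputs paired with their predicted results, written
 * down before the tests are executed. */
struct oracle_entry { int input; int expected; };

int main(void)
{
    static const struct oracle_entry oracle[] = {
        { 0, 0 }, { 1, 3 }, { -2, -6 }, { 100, 300 }
    };
    size_t i;

    for (i = 0; i < sizeof oracle / sizeof oracle[0]; i++)
        assert(triple(oracle[i].input) == oracle[i].expected);

    printf("all oracle entries matched\n");
    return 0;
}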


Unit-testing Heuristics
1. Create unit tests as soon as the object design is completed:
• Black-box test: test the use cases & functional model
• White-box test: test the dynamic model
• Data-structure test: test the object model
2. Develop the test cases
• Goal: Find the minimal number of test cases to cover as many paths as possible
3. Cross-check the test cases to eliminate duplicates
• Don't waste your time!
4. Desk check your source code
• Reduces testing time
5. Create a test harness (a sketch follows below)
• Test drivers and test stubs are needed for integration testing
6. Describe the test oracle
• Often the result of the first successfully executed test
7. Execute the test cases
• Don't forget regression testing: re-execute test cases every time a change is made. Big cost -> what should be done? Automate as much as possible.
8. Compare the results of the test with the test oracle
Dealing with Errors
• Verification:
• Assumes a hypothetical environment that does not match the real environment
• The proof might be buggy (omits important constraints; or is simply wrong)
• Modular redundancy:
• Expensive
• Declaring a bug to be a “feature”
• Bad practice
• Patching
• Slows down performance
Another View on How to Deal with Errors
• Error prevention (before the system is released):
• Use a good programming methodology to reduce complexity
• Use version control to prevent an inconsistent system
• Apply verification to prevent algorithmic bugs
• Error detection (while the system is running):
• Testing: Create failures in a planned way
• Debugging: Start with an unplanned failure
• Monitoring: Deliver information about the system state; find performance bugs
• Error recovery (recover from failure once the system is released):
• Database systems (atomic transactions)
• Modular redundancy
• Recovery blocks (a minimal sketch follows below)

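A minimal sketch of the recovery-block idea in C: run a primary routine, check the result with an acceptance test, and if the test fails restore the saved state and run an alternate routine. The sorting routines are placeholders invented for this sketch.

#include <stdio.h>
#include <string.h>

#define N 3

/* Acceptance test: is the array sorted? */
static int acceptable(const int *a, int n)
{
    int i;
    for (i = 1; i < n; i++)
        if (a[i - 1] > a[i]) return 0;
    return 1;
}

/* Primary routine: imagine a fast but possibly faulty sort (here it
 * deliberately does nothing, to trigger the recovery path). */
static void primary_sort(int *a, int n) { (void)a; (void)n; }

/* Alternate routine: a simple, trusted insertion sort. */
static void alternate_sort(int *a, int n)
{
    int i, j, key;
    for (i = 1; i < n; i++) {
        key = a[i];
        for (j = i - 1; j >= 0 && a[j] > key; j--)
            a[j + 1] = a[j];
        a[j + 1] = key;
    }
}

int main(void)
{
    int data[N]  = { 3, 1, 2 };
    int saved[N];

    memcpy(saved, data, sizeof data);     /* checkpoint the state   */
    primary_sort(data, N);
    if (!acceptable(data, N)) {           /* acceptance test failed */
        memcpy(data, saved, sizeof data); /* restore the checkpoint */
        alternate_sort(data, N);          /* try the alternate      */
    }
    printf("%d %d %d\n", data[0], data[1], data[2]);
    return 0;
}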
