Testing Strategies and Techniques
Session 2
What’s Wrong?
[Flowchart: A = 0 → A = A + 1 → decision "A = 2?" → both the T branch and the F branch lead to "Print A"]
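The flowchart's logic can be sketched in Python (the exact diagram is an assumption reconstructed from the slide); the point of the exercise is that the decision box changes nothing, because both branches do the same thing:

```python
# Hedged reconstruction of the slide's flowchart: A starts at 0, is
# incremented, tested against 2, and printed on BOTH the true and the
# false branch -- so the decision has no effect on behaviour, which is
# the defect the slide asks you to spot.
A = 0
while True:
    A = A + 1
    if A == 2:      # T branch
        print(A)
        break       # assumed exit; without it the loop never ends
    else:           # F branch performs exactly the same action
        print(A)
```

A reviewer reading this as code sees immediately what the flowchart obscures: the condition is tested but never used to select different behaviour.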
What Testing Is
1) Common definition: testing is executing a program with the purpose of finding defects.
2) Wider definition: "Testing is a technical investigation of a product, done to expose quality-related information."
Testing in the Development Process
• Testing activities take place in all parts of software development, from requirements elicitation to final shipment
• Testing is part of the development process
• Testing is part of the company's business process
Testing in the Development Process
• Testing during implementation: verify that the software behaves as the designer intended.
• Testing after implementation: test for conformance with the requirements, for reliability, and for other non-functional requirements.
Most Common Software Problems
• Incorrect calculations
• Incorrect or ineffective data edits
• Incorrect matching and merging of data
• Data searches that yield incorrect results
• Incorrect processing of data relationships
• Incorrect coding/implementation of business rules
• Inadequate software performance
• Confusing or misleading data
• Poor usability for end users
• Obsolete software
• Inconsistent processing
• Unreliable results or performance
• Inadequate support of business needs
• Incorrect or inadequate interfaces with other systems
• Inadequate performance and security controls
• Incorrect file handling
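The first problem class, incorrect calculations, is easy to demonstrate: naive binary floating-point arithmetic on currency gives a subtly wrong subtotal that a calculation-focused test case would expose.

```python
# Sketch of an "incorrect calculation" defect: binary floats cannot
# represent 0.1 and 0.2 exactly, so a currency subtotal drifts.
from decimal import Decimal

subtotal = 0.1 + 0.2                      # binary float: 0.30000000000000004
print(subtotal == 0.3)                    # False -- the defect

fixed = Decimal("0.1") + Decimal("0.2")   # exact decimal arithmetic
print(fixed == Decimal("0.3"))            # True -- the fix
```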
Types of Testing and Definitions
• Validation and Verification
– Validation
• correctness or suitability for the intended use
• domain (vertical) experts confirm the master results
– Verification
• confirm the software operates as it is required to
• double-check that results match those previously validated, and re-validate them if not
Phases of the Rational Unified Process (RUP)
[Table: the workflows Requirements, Design, Development, Testing, and Maintenance mapped against the RUP phases Inception, Elaboration, Construction, and Transition]
Verification & Validation
• Software V & V is defined as a systems engineering methodology to ensure that quality is built into the software during development.
• Debugging
– the process that locates and corrects the defects that testing reveals
Verification versus Validation
• Verification:
– "Are we building the system in the right way?"
– The system should conform to the specification: it does what you specified it should do
• Validation:
– "Are we building the right system?"
– The system should do what the users really require
The V & V Process
• V & V is a whole life-cycle process
Static and Dynamic V&V
[Diagram: static verification checks the correspondence between a program and its specification — "Are we building the system in the right way?" — while dynamic validation, through prototypes and execution-based testing, asks "Are we building the right system?"]
Static and Dynamic V&V
• Static verification: concerned with analysis of the static system representation to discover problems
– Analysis of all documents produced that represent the system
– Can be applied during all stages of the software process
V&V
[Diagram: V&V divides into static techniques and dynamic techniques ("testing"); dynamic testing comprises unit tests, integration tests, system tests, and acceptance tests]
Static Verification
• Review (desk checking)
– Code reading done by a single person
– Informal
– Less effective than a walkthrough or an inspection
• Walkthrough
– The programmer "walks through" (mentally executes) the code while invited participants ask questions and make comments
– Relatively informal
• Inspection
– Usually the code is compared against a checklist of common errors
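A checklist-driven inspection can even be partly mechanized. The toy script below illustrates the idea: the code is never executed; its text is compared against a checklist of common errors (the checklist items here are illustrative, not a standard list).

```python
# Toy inspection aid: scan source TEXT against a checklist of common
# errors, without ever running the code (i.e., static verification).
import re

CHECKLIST = {
    "empty exception handler": re.compile(r"except\s*:\s*pass"),
    "TODO left in shipped code": re.compile(r"\bTODO\b"),
}

def inspect(source: str) -> list[str]:
    """Return the checklist items that the source text matches."""
    return [item for item, pat in CHECKLIST.items() if pat.search(source)]

sample = "try:\n    risky()\nexcept: pass  # handle later\n"
print(inspect(sample))   # ['empty exception handler']
```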
Purpose and goal of testing
are situation dependent
1. Find defects
2. Maximize bug count
3. Block premature product releases
4. Help managers make ship/no-ship
decisions
5. Assess quality
6. Minimize technical support costs
Purpose and goal of testing
are situation dependent
7. Conform to regulations
8. Minimize safety-related lawsuit risk
9. Assess conformance to specification
10. Find safe scenarios for use of the
product (find ways to get it to work, in
spite of the bugs)
11. Verify correctness of the product
12. Assure quality
Testing Levels
• Unit testing
• Integration testing
• System testing
• Acceptance testing
Unit Testing
• Tools: debuggers, restructuring tools, code analyzers, path/statement coverage tools
• Education: testing methodology, effective use of tools
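A minimal unit test for one isolated function, using Python's built-in unittest framework (the discount() function is a hypothetical module under test; a coverage tool could wrap this run to report the path/statement coverage mentioned above):

```python
# Unit-test sketch: the unit (discount) is tested in isolation with
# typical, boundary, and invalid inputs.
import unittest

def discount(price, percent):
    """Return price reduced by percent; percent must be within 0..100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (100 - percent) / 100

class DiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(discount(200, 25), 150.0)

    def test_boundaries(self):
        self.assertEqual(discount(80, 0), 80.0)
        self.assertEqual(discount(80, 100), 0.0)

    def test_invalid_input_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(80, 150)

# Run the tests programmatically instead of from the command line.
prog = unittest.main(argv=["discount_test"], exit=False, verbosity=0)
```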
Incremental Integration Testing
– 'Parts' can be:
• code modules
• individual applications
• client/server applications on a network
Test Methods
• White-box testing
• Black-box testing
Black-box testing tests what the program is supposed to do. Test sets are developed and evaluated solely from the specification; there is no knowledge of the algorithms, data structures, or control statements.
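A small black-box sketch: the test cases below are derived purely from a specification ("median() returns the middle value of a non-empty list"), without looking inside the implementation (the function name and spec are illustrative).

```python
# Black-box (specification-based) testing: cases come from the spec,
# not from the code's structure.
def median(values):
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

# Specification-derived cases: typical, unsorted, even-length, single item
spec_cases = [
    ([1, 2, 3], 2),
    ([3, 1, 2], 2),
    ([1, 2, 3, 4], 2.5),
    ([7], 7),
]
for given, expected in spec_cases:
    assert median(given) == expected
print("all specification-based cases pass")
```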
[Diagram: the black-box view — inputs and events, derived from the requirements, go into the unit under test, and only its outputs are observed]
White-box testing, by contrast, tests how the program works internally; test sets are derived from knowledge of the code's structure.
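A white-box sketch to contrast with the black-box example: here the test inputs are chosen by reading the code so that every branch of classify() executes at least once, i.e. branch coverage (the function is illustrative).

```python
# White-box (structure-based) testing: one input per branch, derived
# by reading the implementation rather than the specification.
def classify(score):
    if score < 0 or score > 100:
        return "invalid"
    if score >= 60:
        return "pass"
    return "fail"

assert classify(-1) == "invalid"   # first condition true
assert classify(75) == "pass"      # second condition true
assert classify(30) == "fail"      # both conditions false
print("all branches covered")
```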
[Diagram: the testing progression — unit tests of individual code modules, followed by integration, function, performance, and acceptance tests, and finally the installation test; the early tests are run by the developer at the developer site, the later tests by the customer at the customer site]
• Beta testing is a type of acceptance testing for a software product to be marketed for use by many users
• Selected users receive the system first and report problems back to the developer
• Developers like it: it exposes their product to real use and often reveals unanticipated errors
Testing Strategy
[Diagram: integration strategies range from non-incremental ("Big Bang!") to incremental — top-down, bottom-up, and the sandwich compromise]
Testing Strategy
• Bottom-up integration (integrates from the lower levels upward; no test stubs necessary)
• Sandwich testing (a combination of bottom-up and top-down; reduces the need for test stubs and drivers)
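The stub/driver distinction behind these strategies can be sketched in a few lines (module names are illustrative): in bottom-up integration the low-level module is real and a throwaway driver calls it; in top-down integration the high-level module is real and a stub fakes the missing low-level call.

```python
# Bottom-up: the low-level module under test is real code...
def tax(amount):
    return round(amount * 0.1, 2)

# ...and a throwaway DRIVER stands in for the not-yet-built caller.
def driver():
    assert tax(100) == 10.0
    assert tax(19.99) == 2.0
    return "driver: low-level module OK"

# Top-down is the reverse: the caller is real, and a STUB fakes the
# missing low-level call with a canned response.
def tax_stub(amount):
    return 0.0   # just enough to let the high-level module run

print(driver())
```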
2. What Should You Test?
• Functionality - A set of attributes that bear on the existence of functions that meet stated or implied needs.
– Accuracy
– Interoperability
– Compliance
– Security
• Reliability - A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
– Maturity
– Recoverability
– Fault tolerance
• Usability - A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
– Learnability
– Understandability
– Operability
• Efficiency - A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
– Time behaviour
– Resource behaviour
• Maintainability - A set of attributes that bear on the effort needed to make specified modifications.
– Stability
– Analyzability
– Changeability
– Testability
• Portability - A set of attributes that bear on the ability of software to be transferred from one environment to another.
– Installability
– Replaceability
– Adaptability
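Quality attributes like these only become testable once stated as concrete checks. As a sketch, the "Efficiency / time behaviour" attribute can be turned into an executable test; the 0.1-second budget is an assumed requirement and respond() is a stand-in for the operation under test.

```python
# Time-behaviour check: measure elapsed wall-clock time for one
# operation and compare it against a stated performance budget.
import time

def respond():
    return sum(range(10_000))

start = time.perf_counter()
result = respond()
elapsed = time.perf_counter() - start

assert elapsed < 0.1, f"time-behaviour budget exceeded: {elapsed:.4f}s"
print(f"responded in {elapsed:.4f}s")
```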
Who Tests Software?
• user
• developer
• independent tester
Who Tests Software?
• User
– Tests while using the software
– Testing is not the purpose of use, so the testing is indirect
Who Tests Software?
• Software developer
– Understands the system
– Tests gently
– Driven by delivery
• Independent tester
– Does not know the system beforehand
– Will try to break it
– Driven by quality