Chapter 4 - Testing Throughout The Software Life Cycle Part 2 (Full) v2
(ITS 64704)
Testing Throughout the Software Life Cycle
(Part 2)
Topics
Testing Levels
• After integration testing, system testing (the third test level) may be required
before the system is delivered.
• System testing verifies whether the specified requirements are met.
• Reasons for system testing:
Discrepancy of viewpoints
• Low-level tests (unit and integration) are performed against technical specifications, mostly from
the developer's viewpoint.
• System testing examines the system from the customer's perspective.
• Some functions and system properties result from the interaction of system components
and are only observable and testable at the system test level.
System Testing: Test Object
To save cost, time, and effort, system testing is often performed in the
customer's production environment instead of in a separate test environment.
This should be avoided, because:
• It brings the risk that the customer's production environment will be affected;
expensive failures and loss of data in the customer's production system can be the result.
• The testers have little or no control over the parameters and configuration of the production
environment.
• Through the simultaneous operation of other customer systems, the test conditions might be
subtly modified.
• System tests that have already been conducted are hard or even impossible to reproduce.
System testing: Test Objectives
Functional requirements
• Specify the behavior that the system or its parts have to perform.
• They describe "what" the system (or system part) has to perform. Their implementation
is required for the system to be suitable for use.
• Characteristics of the quality attribute "functional suitability" according to [ISO
25010]
Non-functional requirements
• Describe "how well" certain attributes of the system (or system part) behavior should
be performed.
• Quality attributes according to [ISO 25010] such as reliability, performance
efficiency, usability, etc. Indirectly, maintainability and portability may also
influence the customer's satisfaction.
System testing: Test Strategy
Functional requirements
System testing: Test Strategy
• System test cases can be derived which describe how the user handles the
system and which actions are typically carried out.
• Different user groups each possess their own user profiles. Typical
interactions or use cases with a realistic frequency can be identified for these
groups.
• From these typical interactions, test scenarios can be derived.
• On the basis of the frequency that various actions are performed in using the
software, the tester determines how important the corresponding test
scenario is and with which priority it therefore has to be included in the test
plan.
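The frequency-based prioritization described above can be sketched in Python. The user profile below and the priority thresholds are hypothetical illustrations, not values from the course material:

```python
# Hypothetical usage frequencies per use case (relative share of all
# interactions for one user group).
user_profile = {
    "search product": 0.45,
    "place order": 0.30,
    "track shipment": 0.15,
    "edit account": 0.07,
    "export invoice history": 0.03,
}

def prioritize(profile, high=0.25, medium=0.10):
    """Map each use case to a test-scenario priority based on frequency."""
    priorities = {}
    for use_case, freq in profile.items():
        if freq >= high:
            priorities[use_case] = "high"
        elif freq >= medium:
            priorities[use_case] = "medium"
        else:
            priorities[use_case] = "low"
    return priorities

for use_case, prio in sorted(prioritize(user_profile).items()):
    print(f"{use_case}: {prio}")
```

In practice the frequencies would come from usage statistics or interviews with the user groups, and the thresholds would be agreed upon in the test plan.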
System testing: Test Strategy
Non-functional requirements
System testing: Test Strategy
Load Testing
Performance Testing
• Measurements of processing speed and response time for specific use cases, normally
in relation to increasing load.
Volume testing
Stress testing
Testing reliability
• During non-stop operation (e.g., number of failures per hour with a given user
profile).
Testing robustness
Testing usability
Testing documentation
Testing portability/maintainability
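Of the non-functional tests listed above, the performance measurement (response time in relation to increasing load) can be sketched as follows. The measured operation is a stand-in; in a real system test it would be a use case executed against the deployed system:

```python
import time

def operation(n):
    # Stand-in for a real use case, e.g. a query over n records.
    return sum(i * i for i in range(n))

def measure(load_levels):
    """Return the measured response time in seconds for each load level."""
    results = {}
    for load in load_levels:
        start = time.perf_counter()
        operation(load)
        results[load] = time.perf_counter() - start
    return results

for load, seconds in measure([10_000, 100_000, 1_000_000]).items():
    print(f"load={load:>9}: {seconds:.4f} s")
```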
System Testing: Problems
Indistinct customer requirements: If there are no requirements, any system behavior is initially acceptable or cannot
be assessed.
The user or customer will obviously have a certain idea of what they expect of
"their" software system, so requirements do exist.
However, they are not to be found in written form but exist only "in the minds"
of some of the persons involved in the project.
The testers then face the challenging task of retrospectively gathering all this
information about the desired expected behavior.
System Testing: Problems
Neglected decisions: If the testers try to identify the original requirements, they will realize that
the various persons involved hold very different opinions and expectations
about one and the same matter.
No wonder: during the project, recording the requirements in written form,
agreeing upon them, and approving them was neglected.
Therefore the system test not only has to gather the requirements, but also needs
to seek clarification and obtain decisions that have been omitted for many
months, at a point when it is actually too late.
This gathering of information is very time- and cost-intensive. Test and acceptance
of the system are delayed.
System Testing: Problems
Projects fail: If requirements are not documented, the developers lack clear objectives.
The probability that the implemented system fulfills the implicit customer
requirements is therefore extremely low.
Nobody can sincerely hope that even a partially suitable system can be produced
under these project conditions.
In these kinds of projects, the system test can often only "officially" confirm the
failure of the project.
Testing Levels
[Diagram: test levels. Integration test strategies: top down, bottom up, ad hoc, big bang. System test. Acceptance test types: regulatory acceptance test, contractual acceptance test, user acceptance test, operational test.]
Acceptance Test: Definition
• Previous test levels are performed by the producer or the designing project
group, before the software is passed to the customers/users.
• The main focus here is the view and opinion of the client/user. Finding defects
is not the main focus in acceptance testing. The goal is to establish confidence
in the product and assess its fitness for the intended use.
• Performed against any regulations and standards that the system must adhere to
• e.g., quality models, ISO standards, etc.
Acceptance Test: Types
Maintenance Testing
• The initial delivery of a software product is merely the beginning of its life
cycle. Once deployed, the software is often in service for years or even
decades. During this period, the software product, its environment, its data,
and its configuration will be corrected, modified, and extended many times.
Defect fixing
• Defects found in operation
• Emergency changes ("hot fixes")
Changes of environment
• Updating the operating system
• Updating the database management system
• Upgrades of commercial software
• Patches of external components
Quality improvement
• Improving quality factors, e.g., maintainability, performance, or usability, without
changing the functional scope
Maintenance Testing: Regression Testing
DEFINITION:
Regression testing is the repeated testing of an already tested program, after
modification.
OBJECTIVE
• to discover any defects introduced or uncovered as a result of the changes.
Extent of regression testing
1. Repeating all tests that detected the fixed defect (re-testing of defects)
2. Testing all program statements that have been fixed or modified (testing
modified functionality)
3. Testing all program parts that have been added (testing new functionality)
• Both the pure re-testing of defects (1) and testing only the modified or added
program parts (2 and 3) are not enough.
• Simple local modifications can have unexpected side effects on other (also
distant) parts of the software system.
Complete regression testing
• In addition to testing fixed defects and added functions, all existing test cases
actually need to be repeated.
• Only then would the test have the same value as the same test performed
on the original version of the program.
• Such complete regression testing would also have to be performed when the
system environment has been changed, since this can potentially have an
effect on each part of the system.
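A complete regression suite in this sense keeps every existing test case runnable alongside the tests added for fixes and new functions. A minimal sketch with a hypothetical `discount` function:

```python
import unittest

def discount(price, percent):
    """Hypothetical function under test; assume an earlier version
    mishandled percent=0 and that defect has been fixed."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    # (1) Re-test of the fixed defect.
    def test_fixed_defect(self):
        self.assertEqual(discount(10.0, 0), 10.0)

    # (2)/(3) Tests covering modified and new functionality.
    def test_modified_functionality(self):
        self.assertEqual(discount(200.0, 15), 170.0)

    # Unchanged behavior stays in the suite, so side effects of
    # the fix on other parts of the program are caught as well.
    def test_unchanged_behaviour(self):
        with self.assertRaises(ValueError):
            discount(10.0, 101)
```

Re-running the whole suite (e.g. with `python -m unittest`) after every modification, rather than only the test that exposed the defect, is what makes the regression testing complete.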
Topics
Testing Levels
Functional testing
Non-functional testing
Structure-based testing
Change-oriented testing
Functional testing
THANK YOU