Test Management Reader

Consultancy, Outsourcing and Training

Testing - Quality Management

Maslak Mah. Ahi Evran Cad. Maslak 42 Plaza, A Blok,


No: 9, Kat:11, Sarıyer, İstanbul
Telephone: +90 212 276 06 40
Fax: +90 212 276 06 47
Email: info@keytorc.com

http://www.keytorc.com
https://keytorc.com/blog/

ISTQB ADVANCED CERTIFICATE IN SOFTWARE TESTING


MODULE TEST MANAGER

TEST MANAGER READER

© 2022 Keytorc Software Testing Services ISTQB AL TM – 1


Table of contents
1. Course Introduction .................................................................................. 4
1.1 Course Objectives .................................................................................. 4
1.2 Homework ........................................................................................... 4
1.3 The book ............................................................................................. 4
2. Revision ISTQB Foundation ........................................................................ 5
2.1 Risk based testing .................................................................................. 5
2.2 Testing Tools ........................................................................................ 5
2.3 Other revision areas ................................................................................ 6
2.4 The V-model ......................................................................................... 6
3. Basic Aspects of Software Testing ................................................................ 7
3.1 Life cycle models ................................................................................... 7
3.2 Types of life cycles ................................................................................. 8
3.3 Planning using the V-model ..................................................................... 12
3.4 Development and testing ........................................................................ 15
3.5 Project Management (PM) ...................................................................... 16
3.6 Configuration Management (CM) .............................................................. 19
3.7 Key learning points ............................................................................... 22
3.8 Systems of Systems ............................................................................. 22
3.9 Safety Critical Systems .......................................................................... 23
4. Testing Processes.................................................................................. 24
4.1 Introduction ........................................................................................ 24
4.2 Test planning, monitoring and control ......................................................... 24
4.3 Test analysis ...................................................................................... 26
4.4 Test design ........................................................................................ 26
4.5 Test implementation .............................................................................. 26
4.6 Test execution .................................................................................... 27
4.7 Evaluating exit criteria and reporting........................................................... 27
4.8 Test closure ....................................................................................... 28
5. Test Planning ....................................................................................... 29
5.1 Introduction ........................................................................................ 29
5.2 Test Policy Document ............................................................................ 29
5.3 Test strategy document .......................................................................... 31
6. Testing and risk ..................................................................................... 37
6.1 Introduction ........................................................................................ 37
6.2 Risk Management ................................................................................ 38
6.3 Risk and the Test Strategy ...................................................................... 40
6.4 Risk Identification Techniques .................................................................. 41
6.5 Risk Analysis ...................................................................................... 44
6.6 Categorization & Classification ................................................................. 45
6.8 Risk Mitigation .................................................................................... 46
6.9 Test Management Issues........................................................................ 46
6.10 Distributed, Outsourced & Insourced Testing ............................................... 50
7. Test Estimation and Scheduling.................................................................. 51
7.1 Introduction ........................................................................................ 51
7.2 Scheduling Test Planning ....................................................................... 53
8. Test Progress Monitoring and Control ............................................................. 54
8.1 Introduction ........................................................................................ 54
8.2 Test Execution Monitoring....................................................................... 54
8.3 Test Progress Reporting......................................................................... 54
8.4 Controlling Test Progress ....................................................................... 54
9. Defect Management .................................................................................. 56
9.1 Introduction ........................................................................................ 56
9.2 Definitions.......................................................................................... 56
9.3 Classification scheme ............................................................................ 58



9.4 Practical example ................................................................................. 64
9.5 IEEE 1044.1 Guide ............................................................................... 64
9.6 Data analysis ...................................................................................... 65
10. People skills ...................................................................................... 67
10.1 Experience versus training .................................................................... 67
10.2 Individual Skills .................................................................................. 67
10.3 Test Team Dynamics ........................................................................... 68
10.4 Fitting testing within an organization ......................................................... 72
10.5 Motivation ........................................................................................ 73
10.6 Interview questionnaire production guide ................................................... 74
10.7 The Play .......................................................................................... 76
11. Reviews ........................................................................................... 82
11.1 Why reviews and what can be reviewed..................................................... 82
11.2 Types of product reviews ...................................................................... 82
11.3 Review process overview ...................................................................... 84
11.4 Implementation steps ........................................................................... 84
11.4 Metrics for reviews .............................................................................. 85
11.5 Final remarks .................................................................................... 85
12. Test Process Improvement ..................................................................... 87
12.1 Implementation - Change process............................................................ 87
12.2 Deployment ...................................................................................... 90
12.3 Metrics & Measurement ........................................................................ 91
12.4 Business Value of Testing ..................................................................... 91
12.5 Successes and Failures ........................................................................ 92
13. Test Tools ......................................................................................... 93
13.1 Test Tool Types ................................................................................. 93
13.2 Tool Selection and Implementation........................................................... 93
13.3 Tool Lifecycle .................................................................................... 97



1. Course Introduction
Welcome to the ISTQB Advanced Certificate in Software Testing as administered by the
ISTQB. Background information on ISTQB and the first level qualification, the Foundation
Certificate, may be found at the ISTQB web site (www.istqb.org).

1.1 Course Objectives


An ISTQB Advanced Test Manager should be able to:

• Manage a testing project by implementing the mission, goals and testing processes
established for the testing organization
• Organize and lead risk identification and risk analysis sessions and use the results of
such sessions for test estimation, planning, monitoring and control
• Create and implement test plans consistent with organizational policies and test
strategies
• Continuously monitor and control the test activities to achieve project objectives
• Assess and report relevant and timely test status to project stakeholders
• Identify skills and resource gaps in their test team and participate in sourcing adequate
resources
• Identify and plan necessary skills development within their test team
• Propose a business case for test activities which outlines the costs and benefits expected
• Ensure proper communication within the test team and with other project stakeholders
• Participate in and lead test process improvement initiatives

1.2 Homework
There will be written, practical, and background-reading exercises set as homework during
the course. This will ensure that the materials have been understood and will enable the
student to practice the application of the theory and techniques learnt in a safe environment. It
is recommended that students do at least the minimum level of homework indicated during
the course.

1.3 The book


A book has been produced to accompany this training course. This book contains a wide
variety of subject matter and covers a number of topics in depth. You will be asked to read
certain sections of the book for homework at the end of the various modules but it is
recommended that the book be read in its entirety.



2. Revision ISTQB Foundation
Revision of the ISTQB Foundation Certificate in Software Testing. Let’s quickly look back at
what we covered on the ISTQB Foundation Certificate in Software Testing. What do you
remember?

2.1 Risk based testing


Testing is Risk Management. Evaluate where the areas of high risk are:
- What is the likelihood?
- What is the impact?

Spend the available time and resources testing in order to mitigate the highest business risks.
Risk Assessment
We cannot test everything, so we spend the available time and resources focusing testing
on the high-priority business risk areas. “No Risk, No Test”. We need to consider relative
risk based on:
- Criticality
- Use
- Visibility.

Risk and what to test


Establish the highest business risk, i.e.
- How critical is the function/system? (a washing machine or a life support system?)
- How visible is the function/system? (in-house, or on the web, visible to the whole world?)
- What impact will be caused by errors or poor performance of that function/system? (slight
annoyance, or unable to continue our day-to-day business?)
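The likelihood/impact weighting above can be sketched numerically. A minimal sketch; the areas, scales, and scores below are illustrative assumptions, not part of any standard:

```python
# Rank candidate test areas by relative risk exposure (likelihood x impact).
# Areas and 1-5 scales are invented for illustration.
risk_items = [
    {"area": "payment processing", "likelihood": 4, "impact": 5},
    {"area": "report layout",      "likelihood": 3, "impact": 1},
    {"area": "user login",         "likelihood": 2, "impact": 4},
]

def prioritize(items):
    """Return items ordered by descending risk exposure."""
    return sorted(items, key=lambda i: i["likelihood"] * i["impact"], reverse=True)

for item in prioritize(risk_items):
    print(item["area"], item["likelihood"] * item["impact"])
```

Testing effort is then spent top-down through this ordering until time or budget runs out.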

2.2 Testing Tools


Static Analysis Tools (CAST)
- Review support tools (help to manage and control the review process)
- Requirements management tools (allow V&V, traceability etc.)
- Code Syntax Checkers (check the language is formed correctly)
- Data flow tools and data modelers (look at code structure)
- Complexity Measuring tools (McCabe, LDRA)

Dynamic testing tools


- Test Preparation and Design tools (Test cases and Scripts)
- Test data production tools (produce test data)
- Test harnesses, stubs, drivers, simulators
- Capture replay tools
- Coverage measuring tools
- Debuggers and performance measuring tools
- Results checkers and comparators

Wise Words regarding tools!


Where there is a need the tool vendors will aim to fill the gap! But only buy what you need, not
what they want to sell you.

Run tool trials before deciding whether or not to purchase them and consider the license
costs. Do you need to buy? Could you rent the tool? Do you really have an ongoing need?
Plan your tool implementation as a project in its own right, and trial it on a small project first.
Implement the tool during a quiet period when you are not testing.



Test Management Tools (CAST)
- Defect management tools
- Change Management Tools
- Metrics tools
- Reporting tools
- Configuration Management Tools
- Project Management tools (MS project, PMW)

2.3 Other revision areas


Terminology (ISTQB Glossary)

Test Life Cycle (incl. life cycle models)


– Fundamental Test Process, test levels vs. test types,
– Waterfall, V-Model, Incremental development

Reviews
– Walkthrough, Inspection, Technical Review

Choosing techniques
– Risk, Test Basis, Knowledge of Testers, etc.

Defect management
– Incident template, Defect management process

For further reading and ISTQB Foundation revision, the book “Foundations of
Software Testing” by Dorothy Graham, Erik van Veenendaal, Isabel Evans and
Rex Black is highly recommended.

2.4 The V-model


The V model is a software development methodology
- It encourages early involvement of the testing team
- It adds value at requirements definition/specification stages
- It encourages early detection of faults
- It encourages reviews and static testing before code delivery
- It minimizes testing impact on the development critical path.

For more details on the V-model, see ‘Testing Practitioner’



3. Basic Aspects of Software Testing

For background reading on “Basic Aspects of Software Testing” refer to chapter 1, “Test
Principles”, of The Testing Practitioner.
3.1 Life cycle models
What are our business drivers?
What are our project drivers?
What are our goals?
Which development lifecycle model?

The development process adopted for a project will depend on the project aims and goals.
There are numerous development lifecycles that have been developed in order to achieve the
required objectives. These lifecycles range from lightweight and fast methodologies (Scrum,
DSDM, XP), where time to market is of the essence, through to the fully controlled and
documented methodologies (V-model) where quality and reliability are key drivers. Each of
these methodologies has its place in modern software development and the most appropriate
development process should be applied to each project.

The lifecycle model that is adopted for the project will have a big impact on the testing that is
carried out. It will help to define the what, where, and when of our planned testing, regression
testing, integration testing, and our testing techniques. Testing must fit in around the lifecycle
or it will fail to deliver maximum benefit.

Verification and Validation


Verification – Are we building the product right?
Validation – Are we building the right product?

Verification and validation of baseline documentation using matrices


We can use a simple spreadsheet to confirm that no information has been lost between the
layers of documentation. We list the clauses of the source (parent) document across the top
and the clauses of the document being verified/validated (child) down the left hand side. We
then cross-reference the related clauses. By using the spreadsheet data sort tool we can
easily see requirements clauses that have been omitted and also any clauses that have not
been sourced from the parent document.

All documents in the baseline should be verified against their source documentation and
validated against the requirements. This ensures that no progressive distortion is occurring as
the documentation grows during the design phase.



             Requirement
Function     1.0.1  1.0.2  1.0.3  1.1.0  1.1.1  1.1.2  1.2   1.2.1  1.2.2
1.11           X      X                    X
1.12
1.13                         X
1.14                                              X     X
1.15                                                           X
1.2                                                     X             X

By using the above technique we can clearly see that:


- Requirement 1.1.0 is not covered in the Functional specification (fault)
- Clause 1.12 of the functional specification has no corresponding requirement. This
should be checked to ensure that we are not designing in unwanted functions (scope
creep).
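The same cross-reference check can be automated. A minimal sketch, assuming each functional-spec clause records the requirement clauses it traces to; the traced pairs are illustrative, and only the two findings above come from the example:

```python
# Traceability check between a parent (requirements) document and a
# child (functional specification) document. Clause identifiers follow
# the example matrix above; the individual traces are illustrative.
requirements = {"1.0.1", "1.0.2", "1.0.3", "1.1.0", "1.1.1",
                "1.1.2", "1.2", "1.2.1", "1.2.2"}

# Which requirement clauses each functional-spec clause traces back to.
trace = {
    "1.11": {"1.0.1", "1.0.2", "1.1.1"},
    "1.12": set(),                 # no parent clause -> possible scope creep
    "1.13": {"1.0.3"},
    "1.14": {"1.1.2", "1.2"},
    "1.15": {"1.2.1"},
    "1.2":  {"1.2", "1.2.2"},
}

covered = set().union(*trace.values())
uncovered_requirements = requirements - covered            # omitted in the spec
unsourced_clauses = {c for c, reqs in trace.items() if not reqs}

print(sorted(uncovered_requirements))   # the fault
print(sorted(unsourced_clauses))        # candidate scope creep
```

Run against the example matrix this reports requirement 1.1.0 as uncovered and clause 1.12 as unsourced, matching the two findings above.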

3.2 Types of life cycles


- Sequential Lifecycle models
• Waterfall
• V-model
- Iterative Lifecycle models
• Pre-planned incremental delivery (Phased)
• Spiral
- Agile Development models
• RAD/DSDM
• Extreme Programming
• Scrum
- Evolutionary development model

There are various lifecycle models that have been produced in order to meet specific
development needs. The models specify the various stages of the process and the order in
which they are carried out.

Sequential lifecycle models

The Waterfall Model


This is one of the earliest models to be designed; it has a natural timeline where tasks are
executed in a linear fashion. This model is so called because of the way in which the phases
of development naturally flow into one another. We start at the top of the waterfall with a
Feasibility study and flow down through the various project tasks finishing with
implementation into the live environment.

Design flows through into development, which in turn flows into build, and finally on into test.
Testing tends to happen towards the end of the project lifecycle so faults are detected close
to the live implementation date. With this model it is difficult to pass feedback back up the
waterfall, and there are difficulties if we need to carry out numerous iterations of a particular
phase.



V-model
The V model was developed to address some of the problems being experienced using the
traditional waterfall approach. Faults were being found late in the lifecycle, as the testing
function was not getting involved until the end of the project. Testing also added unnecessary
overhead on the critical path due to its late involvement. The V model bends the testing
activity around the point of code delivery. All testing activities that can be completed prior to
code delivery are carried out as early as possible. The emphasis is on early testing of project
deliverables, via reviews and static testing techniques. All test planning and preparation can
be completed prior to code delivery.

Iterative Lifecycles models

Pre-planned incremental delivery (Phased)


Not all lifecycles are sequential; there are also iterative lifecycles where, instead of one large
development time line from end to end, we cycle through a number of smaller self-contained
lifecycle phases for the same project.

The delivery is divided into builds with each build adding new functionality. The initial build will
contain the entire infrastructure required to support the initial build functionality. Subsequent
builds will need testing of the new functionality, regression testing of the existing
functionality, and integration testing of both new and existing parts. This means that more
testing will be required at each subsequent delivery phase, which must be allowed for in the
project plans. This lifecycle can give early market presence with critical functionality, can be
simpler to manage, and reduces initial investment, although it may cost more in the long run.

Spiral



This model can help to minimize the development risk where the full set of system
requirements may not yet be known. By building prototypes and simulations we can reduce
the impact of this risk. At the start of each phase we analyze the alternatives available to
achieve our goals and carry out a phase risk assessment. Following each phase we carry out
an assessment of the phase deliverables and the delivery process and then plan the next
phase.

Agile Development models

Agile development models are a group of software development methods based on iterative
and incremental development, where requirements and solutions evolve through collaboration
between self-organizing, cross-functional teams. Agile promotes adaptive planning,
evolutionary development and delivery, a time-boxed iterative approach, and encourages
rapid and flexible response to change. It is a conceptual framework that promotes close
interaction throughout the development cycle. In February 2001, 17 software developers
published the Manifesto for Agile Software Development to define the approach now known
as agile software development:

Individuals and interactions over processes and tools


Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

Rapid Application Development (RAD)


Some time before the Agile manifesto was published, Rapid Application Development (RAD)
and the Dynamic Systems Development Method (DSDM) were already being used to improve
development. In these models the development of different functions is done in parallel,
followed by integration.

[Figure: four functions developed in parallel as time-boxed mini projects, each passing
through Define, Develop, Build, and Test]

Components/functions are developed in parallel as if they were mini projects; the
developments are time-boxed, delivered, and then assembled into a working prototype. This
can very quickly give the customer something to see and use and provide feedback regarding
can very quickly give the customer something to see and use and provide feedback regarding
the delivery and their requirements. Rapid change and development of the product is possible
using this methodology, however the product specification will need to be developed from the
product at some point, and the project will need to be placed under more formal controls prior
to going into production. This methodology allows early validation of technology risks and a
rapid response to changing customer requirements.

DSDM is a refined RAD process that allows controls to be put in place in order to stop the
RAD process from getting out of control. Remember we still need to have the essentials of
good development practice in place in order for these methodologies to work. We need to
maintain strict configuration management of the rapid changes that we are making in a
number of parallel development cycles. From the testing perspective we need to plan this out
very carefully and update our plans regularly as things will be changing very rapidly.



Extreme Programming (XP)
Extreme programming is one of the more recent lightweight development lifecycle models to
be identified. The methodology claims to be more human-friendly than traditional development
methods:
- Promotes the generation of business stories to define the functionality
- Demands an onsite customer for continual feedback and to define and carry out
functional/acceptance testing
- Promotes pair programming and shared code ownership amongst the developers
- States that Unit (component) test scripts shall be written before the code is written
- States that integration and test of the code shall happen several times a day
- States that we always implement the simplest solution to meet today’s problems.

From Extreme Programming Explained by Kent Beck: “Testing Strategy - oh yuck”. “Nobody
wants to talk about testing. Testing is the ugly stepchild of software development. The
problem is, everybody knows that testing is important. Everybody knows that we don’t do
enough testing”.

Kent Beck says that the developers write every test case they can think of and automate
them. Every time a change is made in the code it is component tested then integrated with the
existing code which is then fully integration tested using the full set of test cases. This gives
continuous integration and all test cases must be running at 100%.

XP is not about doing extreme activities during the development process; it is about doing the
known value-add activities in an extreme manner.
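A minimal test-first sketch in this spirit; the component and its tests are invented for illustration, and XP itself prescribes no particular language or framework:

```python
# XP style: the component test is written first and automated, then run
# on every change. The "component" here is a trivial price calculator.
def test_discount():
    assert discount(100, 0.1) == 90
    assert discount(80, 0.5) == 40

# The simplest implementation that makes today's tests pass:
def discount(price, rate):
    """Return the price after applying a fractional discount rate."""
    return price * (1 - rate)

test_discount()   # re-run on every integration; must stay at 100% passing
print("all component tests pass")
```

The test existed before the code did, and the implementation is deliberately the simplest thing that satisfies it, both of which are core XP practices.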

Scrum
Scrum is an iterative and incremental agile software development framework for managing
software projects and product or application development. Its focus is on "a flexible, holistic
product development strategy where a development team works as a unit to reach a common
goal" as opposed to a "traditional, sequential approach".

Roles
Product Owner
The Product Owner represents the stakeholders.

Development Team
The Development Team is responsible for delivering potentially shippable product increments
at the end of each Sprint. A Development Team is made up of 3–9 people.

Scrum Master
Scrum is facilitated by a Scrum Master, who is accountable for removing impediments to the
ability of the team to deliver the sprint goal/deliverables. The Scrum Master is not the team
leader, but acts as a buffer between the team and any distracting influences.

Process



A sprint is the basic unit of development in Scrum. The sprint is a "timeboxed" effort, i.e. it is
restricted to a specific duration. The duration is fixed in advance for each sprint and is
normally between one week and one month.
Each sprint is preceded by a planning meeting, where the tasks for the sprint are identified
and an estimated commitment for the sprint goal is made, and followed by a review or
retrospective meeting, where the progress is reviewed and lessons for the next sprint are
identified.
During each sprint, the team creates finished portions of a product. The set of features that go
into a sprint come from the product backlog, which is an ordered list of requirements. Which
backlog items go into the sprint (the sprint goals) is determined during the sprint planning
meeting. During this meeting, the Product Owner informs the team of the items in the product
backlog that he or she wants completed (the ones with the highest priority). The team then
determines how much of this they can commit to complete during the next sprint, and records
this in the sprint backlog. Each day during the sprint, a project team communication meeting
occurs. This is called a daily scrum, or the daily standup.
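Sprint planning as described above can be sketched as selecting top-priority backlog items that fit within the team's capacity; the items, story points, and capacity figure are illustrative assumptions:

```python
# Sprint planning sketch: walk the ordered product backlog (highest
# priority first) and commit to items until capacity is used up.
product_backlog = [              # (item, story points) - illustrative
    ("user login", 5),
    ("password reset", 3),
    ("audit log", 8),
    ("export to CSV", 5),
]

def plan_sprint(backlog, capacity):
    """Return the sprint backlog: top-priority items fitting the capacity."""
    sprint, used = [], 0
    for item, points in backlog:
        if used + points <= capacity:
            sprint.append(item)
            used += points
    return sprint

print(plan_sprint(product_backlog, 13))
```

Note that a large item ("audit log") may be skipped in favour of a smaller, lower-priority one that still fits; in practice the team, not an algorithm, makes that call during the planning meeting.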

Evolutionary Lifecycle

This lifecycle is more delivery focused than development focused. Each evolutionary cycle
delivers a live, working system, with functional increments over the previous release.

The evolutionary process encourages active customer feedback. The customer gets early
visibility of the product, can feedback into the design, and can decide based on the existing
functionality whether to proceed with the development, decide what functionality to include in
the next delivery cycle, or even to halt the project if it is not delivering the expected value. An
early business focused solution in the market place gives an early return on investment (ROI)
and can provide valuable marketing information for the business.

3.3 Planning using the V-model


We identify the aims of each phase
We establish entry and exit criteria for each phase
We identify the techniques, tools, and metrics for each phase
We identify what is required for each phase of testing
We identify the deliverables for each phase
We review all documentation/items as soon as possible
We carry out V&V exercises on the documentation
We produce our test cases as early as possible
We plan for static testing and dynamic testing
We produce a test execution schedule

WE DECIDE IN ADVANCE HOW WE ARE GOING TO DO THIS AND CALL IT A TEST


PLAN! WHEN WE KNOW THE SPECIFIC CHARACTERISTICS FOR EACH PHASE WE
CAN PRODUCE OUR PHASE TEST PLAN.



The V model test execution phases
The test execution phases for the V model development lifecycle:
- Component Test
- Component Integration
- System Testing
- System Integration Testing
- Acceptance Testing
The characteristics of each phase
Each of the test phases will need to be focused and controlled. In order for our phased
approach to be successful we need to identify the following attributes for each phase:
- Objective
- Scope
- Who does it
- An owner who is accountable
- Entry Criteria
- Exit Criteria
- Test Deliverables
- Typical test techniques
- Metrics
- Test Tools
- Applicable testing Standards

Here is an example of how a breakdown of the phase characteristics might look (maybe!). It is
up to you to decide the most effective approach, aims, and best techniques to apply at each
stage of any specific project.
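One way to keep such phase characteristics in a checkable form is sketched below; the fields and criteria shown are illustrative:

```python
# A test phase's characteristics captured as data, with a simple gate
# check on the entry criteria. Field values are illustrative.
from dataclasses import dataclass, field

@dataclass
class PhasePlan:
    name: str
    objective: str
    entry_criteria: list = field(default_factory=list)
    exit_criteria: list = field(default_factory=list)

    def may_enter(self, satisfied):
        """True only when every entry criterion has been met."""
        return all(c in satisfied for c in self.entry_criteria)

component = PhasePlan(
    name="Component Test",
    objective="Show each component functions as intended",
    entry_criteria=["code compiles", "code reviewed", "design approved"],
    exit_criteria=["all tests passed", "coverage target met"],
)

print(component.may_enter({"code compiles", "code reviewed"}))
print(component.may_enter({"code compiles", "code reviewed", "design approved"}))
```

Recording the criteria explicitly, in whatever form, is what makes the entry/exit gates enforceable rather than aspirational.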

Component Testing Characteristics


Component testing – The testing of individual software components.

Objective – To show each component functions as intended (according to the component


specification) and to provide visibility into the quality of the component.
Responsibility – Development
Scope – Test input fields, GUIs, code, calculations etc.
Who does it – Developer typically, development owned.
Entry Criteria – Component is complete and ready to test (compiles, has been reviewed,
software design has been approved, complies with static criteria).
Exit Criteria – Component passes all tests and is signed off, level of coverage achieved, test
report written, cyclomatic complexity within criteria.
Test Deliverables – Component test report/results, test log, test code & test data, component
sign off from developer, stubs & drivers.
Typical test techniques – White box coverage techniques, Syntax testing, BVA, Ad Hoc.
Metrics – Number of defects found/fixed, priority, statistical analysis.
Test Tools – Static analysis tools, debuggers, harnesses, dynamic analysis, coverage,
comparator.
Applicable testing Standards - BS7925-2 Component Testing Standard, TMap Next, MISRA
coding standards.
Typical non-functional test types – resources usage (e.g. memory), time-behavior, portability,
maintainability
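A minimal component-test sketch in this spirit, using a stub in place of a real dependency and boundary value analysis (BVA) around zero; all names and values are invented for illustration:

```python
# Component test sketch: the component under test depends on a rate
# lookup, replaced here by a stub so it can be tested in isolation.
def lookup_rate_stub(account_type):
    """Stub standing in for a real rate service."""
    return {"standard": 0.02, "premium": 0.05}[account_type]

def interest(balance, account_type, lookup=lookup_rate_stub):
    """Component under test: yearly interest, never negative."""
    if balance <= 0:
        return 0.0
    return balance * lookup(account_type)

# BVA-driven inputs around the balance boundary at 0
assert interest(-1, "standard") == 0.0
assert interest(0, "standard") == 0.0
assert interest(1, "standard") == 0.02
print("component tests pass")
```

The stub keeps the test deterministic and independent of any external service, which is exactly why stubs and drivers appear in the test deliverables above.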

Component Integration Testing


Testing performed to expose faults in the interfaces and in the interaction between integrated
components (BS7925-1).

Objective – To confirm functionality of modules when components are combined together.


Verification against the global design, with focus on interaction and interfacing between
modules
Responsibility – Integration Team
Scope – Test all interfaces, data exchange, and variable passing between components
Who does it – Integration tester, integrator / development responsibility



Entry Criteria – Components passed the Component test phase (comply with its exit criteria),
integration test specification reviewed and approved, global design reviewed and approved,
passed ‘confidence test’
Exit Criteria – All Integration test cases executed, all critical tests passed, number of defects
within set limits, test results documented, test report written
Test Deliverables – Integration test report/result, problem reports, sign off, integration test
specification
Typical test techniques - White box techniques, equivalence partitioning, state transition
testing, syntax, cause/effect graphing
Metrics - Number of faults found, priority, statistical analysis
Test Tools - Test harnesses (stubs, drivers), simulators, record & playback for regression
testing, defect management, test management
Applicable testing Standards – BS7925-2 Component Testing Standard, TMap Next, IEEE
829 for documentation.
Typical non-functional test types – resource usage (e.g. memory), performance
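The stubs and drivers listed under test tools above can be sketched as follows; the component and service names (`PricingComponent`, `stub_tax_service`) are hypothetical, chosen only to show the roles each plays:

```python
# Minimal sketch of a stub and a driver for component integration testing.
# All names are hypothetical examples.

def stub_tax_service(net_price: float) -> float:
    """Stub: replaces a not-yet-integrated tax component with a fixed,
    predictable response (a canned 20% tax rate)."""
    return round(net_price * 0.20, 2)

class PricingComponent:
    """Component under test; depends on a tax service interface."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def gross_price(self, net_price: float) -> float:
        return round(net_price + self.tax_service(net_price), 2)

# Driver: exercises the component's interface and observes the interaction.
component = PricingComponent(tax_service=stub_tax_service)
result = component.gross_price(100.0)
```

The driver calls the interface from above; the stub stands in for the missing component below, so the interaction can be tested before full integration.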

System Testing Characteristics


System Testing - The process of testing an integrated system to verify that it meets specified
requirements [Hetzel]

Objective – To show the system meets the functional specification, and all the non-functional
requirements.
Responsibility – Test team (within development)
Scope – Functionality, and non-functional attributes such as performance, security, installation,
error handling, recovery, etc.
Who does it – Independent Test Team & technical experts / development responsibility
Entry Criteria – System passed integration test phase, development sign off, test team
acceptance (intake / confidence test), release notes available
Exit Criteria – All system test cases run and complete, no high priority defects outstanding,
Mean Time Between Failures (MTBF) target met, number of defects per test hour under
threshold, requirements coverage achieved
Test Deliverables – System test plan, test report, test results, test specifications and
procedures, test evaluation report (recommendations for product and project)
Typical test techniques – Black box techniques (e.g. equivalence partitioning, state transition
testing, cause/effect graphing), specialist non-functional test techniques (e.g. error-guessing).
Metrics - Tests passed/failed/run, number of faults and priority, environment log, test logs,
progress reports, time & effort planned v spent, requirements coverage, test effectiveness.
Test Tools – Performance monitoring, data generators, capture/replay, test management
tools, incident management tools.
Applicable testing Standards – BS7925-2 Component Testing Standard, TMap Next, IEEE
829 for test documents, ISO 9126 for non-functional exit criteria (superseded by the ISO/IEC
25000 SQuaRE series)
Typical non-functional test types – reliability, performance, usability, portability
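The MTBF exit criterion listed above can be computed directly from test execution data. A minimal sketch; the figures and the 40-hour threshold are illustrative assumptions:

```python
# Sketch: computing Mean Time Between Failures (MTBF) from test data.
# MTBF = total operating time / number of failures observed.

def mtbf(total_operating_hours: float, failures: int) -> float:
    if failures == 0:
        raise ValueError("MTBF is undefined with zero observed failures")
    return total_operating_hours / failures

# Example: 400 hours of system test execution with 8 observed failures.
observed_mtbf = mtbf(400.0, 8)
meets_exit_criterion = observed_mtbf >= 40.0  # illustrative threshold
```

As an exit criterion, the observed MTBF is compared against a threshold agreed in the test plan; here 400/8 = 50 hours would pass the assumed 40-hour bar.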

System Integration Testing


Objective – To confirm system and network functionality when the system is integrated into an
existing network of systems.
Scope – System interfaces, files, data, operational profiles; usually end-to-end testing based
on business processes.
Who does it – Independent test team.
Entry Criteria – System test sign off for each system being integrated
Exit Criteria – All integration test cases complete, no category A or B bugs outstanding,
system meets reliability requirements.
Test Deliverables – Integration Test Report, test results, configuration guide, integrated test
model.
Typical test techniques – Black box
Metrics - Tests passed/failed/run, number of faults and priority, test coverage, test
effectiveness.
Test Tools – Harnesses, stubs, drivers, data generators, simulators, capture replay,
comparators



Applicable testing Standards – see useful standards for software testing

Acceptance testing
Formal testing conducted to enable a user, customer, or other authorized entity to determine
whether to accept a system or component (BS7925-1).

Objective – To show the delivered system meets the business need (validation), formal
acceptance of the product.
Responsibility – User / Customer or representative
Scope – Requirements based testing. The whole system, test cases based on requirements
document, covers functionality, usability, help, user guides etc.
Who does it – Users, business representatives, live support, or the test team (user/customer
responsibility).
Entry Criteria – Sign off from System Test/Integration test phase (system test report
available), user requirements reviewed and approved, test plan reviewed and approved
Exit Criteria – All acceptance test cases completed, no category A or B business priority
defects outstanding, list with known defects, business sign off for live implementation, MTBF,
acceptance test report approved
Test Deliverables – Acceptance test report, results, test logs, problem reports, change
requests, test specifications and test procedures
Typical test techniques – Equivalence partitioning, exploratory testing, use case testing, error
guessing, process cycle test.
Metrics – Number of faults, business priority, tests passed/failed/run
Test Tools – Capture Replay, comparators, performance, defect management, test
management
Applicable testing Standards – BS7925-2 Component Testing Standard, TMap Next, IEEE
829 for test documents, ISO 9126 for non-functional exit criteria (superseded by the ISO/IEC
25000 SQuaRE series)
Typical non-functional test types – usability, performance, security

3.4 Development and testing


- Lifecycle has a big impact on testing.
- Phased releases have cumulative test cases and more regression
- A small change can mean a lot of testing, retesting, and regression testing

Development and testing are intrinsically linked regarding timescales and effort. But this does
not mean that the effort and elapsed time for development and testing have a linear
relationship.

The bigger the project, the more time will be required to develop and test the system. True,
but the development approach adopted will also have a large impact on the time and effort
taken to test.

An example of this would be a phased release where each phase contains increased
functionality over the previous release. The effort, and time taken, for system and integration
testing would be increased due to the “extra” regression testing required to ensure that the
new functionality has not impacted the existing functionality.

A small change that may take a developer one hour to code may take two weeks to test if the
change is in a module that is business critical, highly visible and often used (Hans Schaefer's
priority spreadsheet).

Be aware of this effort anomaly if asked to supply testing timescales based purely on a
development estimate to fix/change/develop.

Project Communication
Historically, testing was carried out at the end of the development phase. Testers were
grey characters who sat in a room somewhere, and the perception was that they halted the
project's progress by finding problems just before the scheduled implementation date.



Now that we work in a more enlightened era we know that testers can add value at every
stage of the development process. We do not wait until the code is ready before we find the
faults; we get involved from the very beginning.

In order to do this successfully we need to know, be known by, and be trusted by, the key
stakeholders on the project. We need to sell them testing by showing them the value of
carrying out the activities that we KNOW will add value. We need to educate them and prove
that we can remove errors, and the causes of errors, early in the project.

Team Interaction
The testing team will need to interact with the following project areas:
- Customer / Project sponsor/Manager
- System Users
- Project Managers
- Design Team
- Development Team
- Technical Support team
- Technical authors
- Configuration Management Team
- Change Control Board
- Fault Management Area
- Etc. etc.

3.5 Project Management (PM)


Lifecycle terminology
Critical path – The sequence of dependent tasks that determines the minimum time between
a project's inception and its implementation into the live environment
Task – Any scheduled item of work
Milestone – A point in the plan that has been allocated as an indicator of completion of a task
or set of tasks that significantly contributes towards the goals of the project.
Dependencies – Any task or list of tasks that is directly dependent on another action/task
completing before it can commence.
Workflow – The list of tasks planned out for an individual or team.
Project management is concerned with:
- Identification of all project deliverables/goals
- High level control of the following:
- The plans, methods, processes and controls involved in meeting the project goals.
- Identification of the tasks, resource, timeframes, and ownership.
- Progress monitoring and reporting
- Project risks and risk management

The project plan will identify:


- All major project activities
- High level project tasks
- Major Project Milestones
- Project deliverables
- Parallel activities across teams
- Dependencies between team tasks
- Entry/exit criteria between project phases
- Project resource requirements
The project plan is a high level plan made up from the lower level plans.
Lower level plans would detail the sub-tasks required in order to achieve a milestone on the
project plan. Lower level plans will be used to control the day-to-day work of the teams
- Development Plan
- Test Plan
- Integration Plan



Key planning benefits
The plans must be maintained by the plan owner/manager
- Progress is monitored by checking actual progress against planned progress.
- Milestones are used to help focus on the completion of critical pieces of work (tasks).
- Dependencies allow the impact of slippage to be easily identified (slipping one day can
delay the project by a week!).
- Individual resource allocation and management is possible by assigning individuals to
tasks.

Phase entrance/exit criteria (Quality gates)


The project Plan should contain a master schedule of activities. This schedule will distinguish
between different phases of activity in the project lifecycle, for example development, test and
implementation stages. When planning we record the entrance and exit criteria between
these stages, which set the rules governing the delivery's readiness to move on to the next
phase. Exit criteria from one stage should closely match the entry criteria to the next. Both
should detail specifically what is required.

We have entrance and exit criteria for the following reasons:


- They provide a measurable state of product readiness to progress to the next stage.
- Exit criteria provide focus on the delivery requirements from the delivering team.
- Entrance criteria provide focus on acceptance requirements from the receiving team.
- The quality gate is shown as a milestone on the project plan and is on the critical path.
- It prevents the false economy of progressing the product before it is ready in response
to external pressures.
- It confirms that all planned tasks have been completed prior to product promotion to the
next stage.

A typical set of entry criteria to the system test phase may look something like this:
- Sign off document from development owner authorizing product release to system test
team.
- Component test report.
- Integration in the small test report.
- Release note detailing release content, details of any new functionality, and the status of
any known bugs.
- Build statement detailing the components and versions comprising the release.
- The product itself (code, GUIs, etc.).
- Installation and backup instructions.
- Support documentation.
- User documentation.
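A checklist like the one above can be evaluated mechanically as a quality gate. The sketch below is illustrative; the criterion names and the delivered set are assumptions, not project artifacts:

```python
# Sketch: evaluating phase entry criteria as a quality gate.
# Criterion names mirror the checklist above; all data is illustrative.

ENTRY_CRITERIA = [
    "development sign-off",
    "component test report",
    "integration-in-the-small test report",
    "release note",
    "build statement",
    "product delivered",
    "installation and backup instructions",
    "support documentation",
    "user documentation",
]

def gate_check(delivered: set) -> tuple:
    """Return (gate_open, missing_items) for the system test entry gate."""
    missing = [item for item in ENTRY_CRITERIA if item not in delivered]
    return (len(missing) == 0, missing)

ok, missing = gate_check({"component test report", "release note"})
```

The gate stays closed until every listed deliverable is present, which is exactly the "false economy" protection described earlier: the product cannot be promoted under external pressure while items are outstanding.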



Development task list

High level milestones


Although 31 tasks are shown on the Development task list for the screen development
process, the project plan may only need to show the two specific milestones:
- Screen layout sign off
- Screen sign off

The project is not necessarily concerned with the actions required to achieve the goal, but
needs to know that the task is on track, and when it has been completed.

Development Gantt chart

Managing dependencies
Dependencies must be shown on the task list. In order to work productively some tasks must
be completed before others can start. Specialist skills and resources are often required for
specialist tasks. When delays occur it is essential to manage the delay to the specific task
and the potential delays to the dependent tasks.

Resource Planning
Allocate the required resources to the specific tasks. The planning tool shows:
- Task Usage – how much resource is allocated to each task (spend)
- How much time is required to complete the task (effort required)
- How long each task will take to complete (time elapsed)
- Who is doing what and when (workflows/stacks)

This allows visibility of over-commitment at individual or team level, and of where spare
capacity is available to carry out any further work.

3.6 Configuration Management (CM)


At the start of the project:
- Appoint a CM manager and define ownership and responsibilities
- Define your CM strategy (CM rules, naming conventions, Document Control,
documentation library etc.)
- Define your Build and Release Management strategy
- Define your Change Management Process and procedures
- Define your Fault Management Process and Procedures
- Decide if a CM tool is required

CM in action. At any point in time you must know:


- What is the current software configuration?
- What is its status?
- Any changes to software configurations
- Any changes to the software
- Any changes to the environment
- Any changes to the documentation baseline (requirements, functional specification etc.)
- Any changes to the test ware (test conditions, test cases, scripts etc.)
- Any other changes that may impact development/test

Symptoms of bad CM
- Faults cleared earlier re-appear.
- Expected fixes are not included in the release.
- Unexpected changes are included.
- Features and functions disappear between releases.
- Variations in code functionality on different environments.
- Wrong code versions delivered.
- Different versions of baseline documentation in use.
- Code delivered does not match release note.
- Can’t find the latest version of the build.
- No idea which user has which version.
- Wrong functionality tested.
- Wrong functionality shipped.
- Etc. Etc.

How do we improve our CM? In order to improve the process we must employ:
- Status Accounting (data collection, analysis, and reporting)
- Configuration Auditing (procedural conformance)
- Post Implementation Reviews (what went well, what can we do better?)

This is how we improve our own quality of life!

Wise words
CM is control over change.
It can be a full time job and is difficult to do well.



We need to know at all times where we are, where we came from, and how we got here!
Testing is only repeatable if we are in control of all of these variables.

Build Management
This is a very important part of the development process. A build statement should always
accompany each release from development.
The build statement identifies:
- The name and version of the build (unique identifier).
- A comprehensive list of the components that make up the release, including individual
component version numbers.
- Technical information such as component file sizes, to allow simple verification of the
component files.
- Any other specific details of the build that would be useful to the test team, such as
modules excluded for any reason.
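The "technical information ... to allow simple verification of the component files" can be illustrated with checksums standing in for file sizes. A minimal sketch; the manifest, component names and contents are all hypothetical:

```python
# Sketch: verifying a delivered build against its build statement.
# Checksums stand in for the "technical information such as component
# file sizes"; all names and contents are hypothetical.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Build statement: component name -> (version, expected checksum).
manifest = {
    "billing.dll": ("1.2", checksum(b"billing v1.2 contents")),
    "report.dll":  ("2.0", checksum(b"report v2.0 contents")),
}

# Delivered files (component name -> raw contents).
delivered = {
    "billing.dll": b"billing v1.2 contents",
    "report.dll":  b"report v2.1 contents",   # does not match the statement
}

mismatches = [
    name for name, (version, expected) in manifest.items()
    if checksum(delivered.get(name, b"")) != expected
]
```

Any mismatch means the release does not correspond to its build statement, one of the classic symptoms of bad CM listed in this chapter.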

Release management
Release management handles the transmittal of items between teams or phases of the
development lifecycle.
It establishes and controls the roll-out:
- When are releases scheduled?
- Is there any overlap?
- What test environments will I need?
- What resource will I need?
- What needs to be in each release?

Release notes
A release note identifies:
- The name and version of the release (unique identifier).
- Installation and back out instructions
- Any environment changes required including database changes.
- Any configuration changes required.
- Any fault fixes contained in that release.
- Any change requests incorporated in that release.
- Details of any known faults.
- Details of the Component and Integration (link) test results.
- Sign off authority for the release from the “owner”.
- Any other information that may be useful to the test or implementation teams.

Change Management
We must distinguish between faults and changes. Faults are the responsibility of the
development team and need to be corrected as a development cost. Changes were either not
specified or incorrectly specified and are a project cost.

Changes need to be managed through a change control process to capture the change
details and business benefits, the impact on the design; development; test; and
implementation teams, costs and timeframes. Once this information has been gathered
informed decisions can be made regarding proceeding with a change, deferring it, or rejecting
it.

Configuration Items
When planning your testing decide on your documentation control processes and naming
conventions.

Documentation Control
Decide the level of control required and define the minimum details that will be included in all
documentation. You may decide that in your documentation you will have the following
configuration details:
- Item name
- Document Author details
- Document Owner details
- Status: Draft/Issued
- Version
- Date of production
- Copy number
- Controlled copy yes/no
- Details of master copy: paper/electronic, location, file reference, etc.
- Source format (Word 2000, etc.)
- Change history
- Distribution list
- Sign off list

An initial draft of a document may be called draft 0.1 with subsequent drafts called 0.2, 0.3
etc. When the document is issued it may be called version 1.0 and when re-issued it may be
version 1.1, 1.2 etc.

Configuration Control
- Code: Decide on your naming convention for releases of code, numerical designations
etc.
- Environments: Define a naming convention for your test environments. Decide how you
are going to record the environment hardware, software, configurations, and the code
version for each environment

Code
The decision could be made that:
- A full release of code might be called version 1.0, 2.0 etc.
- A fault fix release or minor enhancement might be 1.1, 1.2 etc.
- A patch to existing functionality might be 1.0.A, 1.0.B, etc.

Every time we make a change to the code the version must change in order that we can
identify the version and will know what has changed.
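The numbering scheme described above can be sketched as a small helper; the function and its behavior are illustrative assumptions about how such a convention might be automated:

```python
# Sketch of the release numbering convention described above:
#   full release -> 1.0, 2.0 ...    fix/minor -> 1.1, 1.2 ...
#   patch        -> 1.0.A, 1.0.B ...

def next_version(current: str, kind: str) -> str:
    parts = current.split(".")
    major, minor = int(parts[0]), int(parts[1])
    if kind == "full":
        return f"{major + 1}.0"
    if kind == "fix":
        return f"{major}.{minor + 1}"
    if kind == "patch":
        # first patch of 1.0 is 1.0.A, then 1.0.B, and so on
        letter = "A" if len(parts) == 2 else chr(ord(parts[2]) + 1)
        return f"{major}.{minor}.{letter}"
    raise ValueError(f"unknown release kind: {kind}")
```

Encoding the convention in one place ensures every change to the code produces a distinct, predictable identifier, which is the whole point of the rule above.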

Environments
For our environments we may produce the following table to help us with our Environment
CM.
Environment    Name    Owner   Hardware      Software           Config   Code
System Test    SYS01   T Man   Server xyz    Oracle xx          123      V1.2
                               Console       NT abc
                               Workstation   Win2000 standard
                               Printer

Integration    INT01   I Man   etc.          etc.               etc.

Acceptance     ACC01   A Man
Tools are available to help us with CM


We need to know all of the above in order to be able to reproduce our test results. It is
essential that we record the following:
• Test Scripts executed
- Versions of test cases
- Test database used
- Versions of test data used
• Version of code tested
• Environment
- Hardware versions
- Software versions
- Configurations (hard-set and user configurable)

It is essential that we maintain control of these details in order to give us controlled repeatable
test results.



3.7 Key learning points
We need to ensure we adopt the right testing approach to fit with the project development
lifecycle.

We need to ensure we have the contacts in place between the test team, the other project
teams and project roles, and the business representatives.

We need to ensure that the project has the correct processes and controls in place such as:
- Release Management
- Build Management
- Configuration Management
- Incident Management

We need to plan our approach, document it, and get it agreed by all key stakeholders.

Without quality development we will fail.


Without quality testing we will fail
Without communication we will fail
Without teamwork we will fail

So we all need to work together.

3.8 Systems of Systems


A system of systems is a set of collaborating components (including hardware, individual
software applications and communications), interconnected to achieve a common purpose,
without a unique management structure. Characteristics and risks associated with systems of
systems include:
• Progressive merging of the independent collaborating systems to avoid creating the entire
system from scratch. This may be achieved, for example, by integrating COTS systems
with only limited additional development.
• Technical and organizational complexity (e.g. among the different stakeholders) represents
a risk to effective management. Different development lifecycle approaches may be
adopted for contributing systems which may lead to communication problems among the
different teams involved (development, testing, manufacturing, assembly line, users, etc.).
Overall management of the systems of systems must be able to cope with the inherent
technical complexity of combining the different contributing systems, and be able to
handle various organizational issues such as outsourcing and offshoring.
• Confidentiality and protection of specific know-how, interfaces among different
organizations (e.g. governmental and private sector) or regulatory decisions (e.g.
prohibition of monopolistic behavior) may mean that a complex system must be
considered as a system of systems.
• Systems of systems are intrinsically less reliable than individual systems, as any limitation
from one (sub)system automatically applies to the whole system of systems.
• The high level of technical and functional interoperability required from the individual
components in a system of systems makes integration testing critically important and
requires well-specified and agreed interfaces.

Management & Testing of Systems of Systems


A higher level of complexity in project management and in component configuration
management is a common issue associated with systems of systems. Strong involvement of
Quality Assurance and defined processes is usually associated with complex systems and
systems of systems. A formal development lifecycle, milestones and reviews are often
associated with systems of systems.

Life cycle Characteristics for Systems of Systems


Each testing level for a system of systems has the following additional characteristics beyond
those described earlier:



• Multiple levels of integration and version management
• Long duration of project
• Formal transfer of information among project members
• Non-concurrent evolution of the components, and requirement for regression tests at
system of systems level
• Maintenance testing due to replacement of individual components resulting from
obsolescence or upgrade

Within systems of systems, a testing level must be considered at that level of detail and at
higher levels of integration. For example “system testing level” for one element can be
considered as “component testing level” for a higher level component. Usually each individual
system (within a system of systems) will go through each level of testing, and then be
integrated into a system of systems with the associated extra testing required.

3.9 Safety Critical Systems


“Safety critical systems” are those which, if their operation is lost or degraded (e.g. as a result
of incorrect or inadvertent operation), can result in catastrophic or critical consequences. The
supplier of the safety critical system may be liable for damage or compensation, and testing
activities are thus used to reduce that liability. The testing activities provide evidence that the
system was adequately tested to avoid catastrophic or critical consequences. Examples of
safety critical systems include aircraft flight control systems, automatic trading systems,
nuclear power plant core regulation systems, medical systems, etc.

The following aspects should be implemented in safety critical systems:


• Traceability to regulatory requirements and means of compliance
• Rigorous approach to development and testing
• Safety analysis
• Redundant architectures and their qualification
• Focus on quality
• High level of documentation (depth and breadth of documentation)
• Higher degree of auditability.

Compliance to Regulations
Safety critical systems are frequently subject to governmental, international or sector specific
regulations or standards. Those may apply to the development process and organizational
structure, or to the product being developed. To demonstrate compliance of the organizational
structure and of the development process, audits and organizational charts may suffice. To
demonstrate compliance to the specific regulations of the developed system (product), it is
necessary to show that each of the requirements in these regulations has been covered
adequately. In these cases, full traceability from requirement to evidence is necessary to
demonstrate compliance. This impacts management, development lifecycle, testing activities
and qualification /certification (by a recognized authority) throughout the development
process.

Safety Critical Systems & Complexity


Many complex systems and systems of systems have safety critical components. Sometimes
the safety aspect is not evident at the level of the system (or sub-system) but only at the
higher level, where complex systems are implemented (for example mission avionics for
aircraft, air traffic control systems). Example: a router is not a critical system by itself, but may
become safety critical when critical information is routed through it, such as in telemedicine
services. Risk management, which reduces the likelihood and/or impact of a risk, is essential
in a safety critical development and testing context. In addition, Failure Mode and Effect
Analysis (FMEA) and Software Common Cause Failure Analysis are commonly used in such
contexts.



4. Testing Processes

For background reading on “Testing Process” refer to chapter 2, “TMap Test Process”, of The
Testing Practitioner.

4.1 Introduction
Although executing tests is important, we also need a plan of action and a report on the
outcome of testing. Project and test plans should include time to be spent on planning the
tests, designing test cases, preparing for execution and evaluating status. The idea of a
fundamental test process for all levels of test has developed over the years. Whatever the
level of testing, we see the same types of main activities happening, although there may be a
different amount of formality at the different levels; for example, in most organizations
component testing is carried out less formally than system testing, with a less documented
test process. The decision about the level of formality of the processes will depend on the
system and software context and on the level of risk associated with the software.

So we can divide the activities within the fundamental test process into the following basic
steps:
o Test planning, monitoring and control;
o Test analysis;
o Test design;
o Test implementation;
o Test execution;
o Evaluating exit criteria and reporting;
o Test closure.

These activities are logically sequential, but, in a particular project, may overlap, take place
concurrently and even be repeated. This process is particularly used for dynamic testing, but
the main headings of the process can be applied to reviews as well. For example, we need to
plan and prepare for reviews, carry out the reviews, and evaluate the outcomes of the
reviews. For some reviews, such as inspections, we will have exit criteria and will go through
closure activities. However, the detail and naming of the activities will be different for static
testing.

4.2 Test planning, monitoring and control


During test planning, we make sure we understand the goals and objectives of the customers,
stakeholders, and the project, and the risks which testing is intended to address. This will give
us what is sometimes called the mission of testing or the test assignment. Based on this
understanding, we set the goals and objectives for the testing itself, and derive an approach
and plan for the tests, including specification of test activities. To help us we may have
organization or program test policies and a test strategy. Test policy gives rules for testing;
e.g. “we always review the design documents”. Test strategy is the overall high level
approach; e.g., “system testing is carried out by an independent team reporting to the
program quality manager. It will be risk-based and proceeds from a product (quality) risk
analysis”. If the policy and strategy are already defined, they drive our planning; if not, we
should ask for them to be stated and defined.

Test planning has the following major tasks, given approximately in order, which help us build
a test plan:
o Determine the scope and risks, and identify the objectives of testing; we consider what
software, components, systems or other products are in scope for testing, the business,

product, project and technical risks which need to be addressed, and whether we are
testing primarily to uncover defects, show that the software meets requirements,
demonstrate that the system is fit for purpose, or to measure the qualities and attributes
of the software.
o Determine the test approach (techniques, test items, coverage, identifying and interfacing
with the teams involved in testing, test ware); we consider how we will carry out the
testing, the techniques to use, what needs testing and how extensively (i.e. what extent of
coverage). We’ll look at who needs to get involved, and when (this could include
developers, users, IT infrastructure teams); we’ll decide what we are going to produce as
part of the testing (e.g., test ware such as test procedures and test data). This will be
related to the requirements of the test strategy.
o Implement the test policy and/or the test strategy; we mentioned above that there may be
an organization or program policy and strategy for testing. If this is the case, during our
planning we must ensure that what we plan to do adheres to the policy and strategy, or
have agreed with stakeholders, and documented, a good reason for diverging from it.
o Determine the required test resources (e.g. people, test environment, PCs); from the
planning we have already done we can now go into detail; we decide on our team make-
up, and we also set up all the supporting hardware and software we require for the test
environment.
o Schedule test analysis and design tasks, test implementation, execution and evaluation;
we will need a schedule of all the tasks and activities, so that we can track them and
make sure we can complete the testing on time.
o Determine the exit criteria; we need to set criteria such as coverage criteria (for example
the percentage of statements through the software that must be executed during testing)
which will help us track whether we are completing the test activities correctly. They will
show us which tasks and checks we must complete for a particular level of testing before
we can say that testing is finished.
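The statement-coverage exit criterion described in the last task above can be computed as a simple percentage. A minimal sketch; the figures and the 85% threshold are illustrative assumptions:

```python
# Sketch: evaluating a statement-coverage exit criterion.
# The executed/total counts and the threshold are illustrative.

def statement_coverage(executed: int, total: int) -> float:
    """Percentage of statements executed at least once during testing."""
    return 100.0 * executed / total

coverage = statement_coverage(executed=180, total=200)
exit_criterion_met = coverage >= 85.0  # assumed threshold from the plan
```

The measured percentage is then tracked during monitoring and compared against the threshold set at planning time before the level can be declared complete.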

Management of any activity does not stop with planning it. We need to control and measure
progress against the plan. So, test control is an ongoing activity. We need to compare actual
progress against the planned progress, and report to the project manager and customer on
the current status of testing, including any changes or deviations from the plan. We’ll need to
take actions where necessary to meet the objectives of the project. Such actions may entail
changing our original plan, which often happens. When different groups perform different
review and test activities within the project, the planning and control needs to happen within
each of those groups but also across the groups to coordinate between them, allowing
smooth hand-offs between each stage of testing. Test planning takes into account the
feedback from monitoring and control activities which take place throughout the project.

Test monitoring and control has the following major tasks:


o Measure and analyze the results of reviews and testing; we need to know how many
reviews and tests we have done. We need to track how many tests have passed and how
many failed, along with the number, type and importance of the defects reported.
o Monitor and document progress, test coverage and exit criteria; it is important that we
inform the project team how much testing has been done, what the results are, and what
conclusions and risk assessment we have made. We must make the test outcome visible
and useful to the whole team.
o Provide information on testing; we should expect to make regular and exceptional reports
to the project manager, project sponsor, customer and other key stakeholders to help
them make informed decisions about project status. We should also use the information
we have to analyze the testing itself.
o Initiate corrective actions; for example, tightening exit criteria for fixed defects, asking
for more effort to be put into debugging, or prioritizing the fixing of defects that block testing.
o Make decisions; based on the measures and information gathered during testing, and
any changes to business and project risks, or our increased understanding of technical
and product risks, we’ll make decisions or enable others to make decisions: to continue
testing, to stop testing, to release the software or to retain it for further work for example.
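To make the measurement tasks above concrete, here is a minimal, illustrative sketch of summarizing test outcomes and open defects for a progress report. All field names and data shapes are assumptions made for the example.

```python
from collections import Counter

def summarize(results, defects):
    """Summarize execution status and open defects for a progress report.

    results: list of outcome strings ("pass", "fail", "blocked").
    defects: list of (severity, status) tuples.
    """
    outcome_counts = Counter(results)
    open_by_severity = Counter(sev for sev, status in defects if status == "open")
    executed = len(results)
    pass_rate = 100.0 * outcome_counts["pass"] / executed if executed else 0.0
    return {"executed": executed, "pass_rate": round(pass_rate, 1),
            "open_defects": dict(open_by_severity)}

report = summarize(["pass", "pass", "fail", "blocked"],
                   [("high", "open"), ("low", "closed"), ("high", "open")])
print(report)  # {'executed': 4, 'pass_rate': 50.0, 'open_defects': {'high': 2}}
```

A summary in this form supports the reporting and decision-making tasks: the same figures feed the regular status report and the release decision.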



4.3 Test analysis
Test analysis is the activity in which general testing objectives are transformed into tangible test
conditions, i.e. in which we define "what" is to be tested. During test analysis we take the general
testing objectives identified during planning and refine them into concrete test conditions, which
later feed the test designs and test procedures (scripts).
Test analysis has the following major tasks, in approximately the following order:
o Review the test basis (such as the product risk analysis, requirements, architecture,
design specifications, and interfaces); examining the specifications for the software we
are testing. We use the test basis to help us build our tests. We can start designing
certain kinds of tests before the code exists, as we can use the test basis documents to
understand what the system should do once built. As we study the test basis, we often
identify gaps and ambiguities in the specifications, because we are trying to identify
precisely what happens at each point in the system; this also helps prevent defects
from appearing in the code.
o Identify test conditions based on analysis of test items, their specifications, and what we
know about their behavior and structure; this gives us a high level list of what we are
interested in testing. If we return to our driving examiner example, she might have a list
of test conditions including “behavior at road junctions”, “use of indicators”, “ability to
maneuver the car” and so on. In testing, we use the test techniques to help us define the
test conditions. From this we can start to identify the type of generic test data we might
need.
o Evaluate testability of the requirements and system. The requirements may be written in a
way that allows a tester to design tests; for example if the performance of the software is
important, that should be specified in a testable way. If the requirements just say “the
software needs to respond quickly enough" that is not testable, because "quickly enough"
may mean different things to different people. A more testable requirement would be “the
software needs to respond in 5 seconds with 20 people logged on”. The testability of the
system depends on aspects such as whether it is possible to set up the system in an
environment that matches the operational environment, whether all the ways the system
can be configured or used can be understood and tested. For example, if we test a web
site, it may not be possible to identify and recreate all the configurations of hardware,
operating system, browser, connection, firewall and other factors that the website might
encounter.
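The testable performance requirement above ("respond in 5 seconds with 20 people logged on") can be expressed directly as a check. The sketch below uses made-up measurement samples and invented parameter names; it is not a real load-testing harness, only an illustration of why a measurable requirement is testable and "quickly enough" is not.

```python
def requirement_met(response_times_s, max_response_s=5.0,
                    required_users=20, users_logged_on=20):
    """Check the illustrative requirement: every sampled response completes
    within max_response_s while at least required_users are logged on."""
    if users_logged_on < required_users:
        raise ValueError("measurement taken under lighter load than specified")
    return all(t <= max_response_s for t in response_times_s)

# Response-time samples (seconds) gathered while 20 users were logged on:
print(requirement_met([1.2, 3.4, 4.9], users_logged_on=20))  # True
print(requirement_met([1.2, 6.1], users_logged_on=20))       # False
```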

4.4 Test design


As test analysis is about "what" to test, test design is about "how" it is to be tested.
The test conditions are used to identify test cases, applying the test techniques
identified in the test strategy or test plan. The test cases must be traceable back to
the requirements or user stories. At lower test levels the design of test cases is
sometimes done in parallel with test analysis; at higher test levels it is normally a
separate phase. In an agile development project it may be performed during test
execution.

Test design has the following major tasks, in approximately the following order:
o Design the tests using techniques to help select representative tests that relate to
particular aspects of the software which carry risks or which are of particular interest,
based on the test conditions and going into more detail. For example, our driving
examiner might look at her list of test conditions and decide that junctions need to include
T junctions, cross roads and so on. In testing, we’ll define the test case and test
procedures.
o Design the test environment set-up and identify any required infrastructure and tools; this
includes testing tools and support tools such as spreadsheets, word processors, project
planning tools, and non-IT tools and equipment – everything we need to carry out our
work.

4.5 Test implementation



During test implementation, we take the test conditions and turn them into test cases and
testware, and we set up the test environment. Having put together a high-level design for
our tests, we now start to build them. We transform our test conditions into test cases and
procedures, and other testware such as scripts for automation. We also need to set up an
environment where we will run the tests, and build our test data. Setting up environments
and data often involves significant time and effort, so you should plan and monitor this
work carefully.

Test implementation has the following major tasks, in approximately the following order:
o Develop and prioritize our test cases, using the techniques, and create test data for those
tests. We will also write instructions for carrying out the tests (test procedures). For the
driving examiner this might mean changing the test condition “junctions” to “take the route
down Mayfield Road to the junction with Summer Road and ask the driver to turn left into
Summer Road and then right into Green Road, expecting that the driver checks mirrors,
signals and maneuvers correctly, while remaining aware of other road users.” We may
need to automate some tests using test harnesses and automated test scripts.
o Create test suites from the test cases for efficient test execution. A test suite is a logical
collection of test cases which naturally work together. Test suites often share data and a
common high-level set of objectives. We’ll also set up a test execution schedule.
o Implement and verify the environment; we make sure the test environment has been set
up correctly, possibly even running specific tests on it.
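One simple way to picture a test suite as "a logical collection of test cases which naturally work together" is as a small data structure. The grouping key ("feature") and case names below are invented for illustration; projects group suites by whatever shared data and objectives make execution efficient.

```python
from collections import defaultdict

def build_suites(test_cases):
    """Group test cases into suites by the feature they exercise,
    so that cases sharing data and objectives run together."""
    suites = defaultdict(list)
    for case in test_cases:
        suites[case["feature"]].append(case["name"])
    return dict(suites)

cases = [
    {"name": "TC-01 login ok", "feature": "login"},
    {"name": "TC-02 bad password", "feature": "login"},
    {"name": "TC-03 add to basket", "feature": "basket"},
]
print(build_suites(cases))
# {'login': ['TC-01 login ok', 'TC-02 bad password'], 'basket': ['TC-03 add to basket']}
```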

4.6 Test execution


Test execution consists of running the test cases that were written in the previous phases.
Every step of the execution should be logged, so that it can be analyzed if a defect is
found. In some cases test cases must be executed again to verify a fix.

Test execution has the following major tasks:


o Execute the test suites and individual test cases, following our test procedures. We might
do this manually or by using test execution tools, according to the planned sequence.
o Log the outcome of test execution and record the identities and versions of the software
under test, test tools and testware. We must know exactly which tests we ran against
which version of the software, and we must report defects against specific versions;
the test log we keep provides an audit trail.
o Compare actual results (what happened when we ran the tests) with expected results
(what we anticipated would happen).
o Where there are differences between actual and expected results, report discrepancies
as incidents. We analyze them to gather further details about the defect, report
additional information on the problem, identify the causes of the defect, and differentiate
between problems in the software or other products under test and defects in the test
data, in the test documents, or mistakes in the way we executed the test. We log the
latter in order to improve the testing itself.
o Repeat test activities as a result of the action taken for each discrepancy. We need to
re-execute tests that previously failed in order to confirm a fix (confirmation testing or
re-testing). We execute corrected tests and suites if we had defects in our own tests.
We also test the corrected software again to ensure that the defect was indeed fixed
correctly (confirmation testing), that the programmers did not introduce defects in
unchanged areas of the software, and that fixing one defect did not uncover other
defects (regression testing).
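The execute/compare/log cycle described above can be sketched as follows. The test-case structure and the way "actual" results are obtained are illustrative assumptions; real execution would drive the software under test rather than call a lambda.

```python
def run_test_cases(test_cases, software_version):
    """Execute each case, compare actual with expected, and log the outcome
    together with the software version, giving an audit trail."""
    log = []
    for case in test_cases:
        actual = case["action"]()  # run the test step (stand-in for real execution)
        outcome = "pass" if actual == case["expected"] else "fail"
        log.append({"id": case["id"], "version": software_version,
                    "expected": case["expected"], "actual": actual,
                    "outcome": outcome})
    return log

cases = [
    {"id": "TC-01", "action": lambda: 2 + 2, "expected": 4},
    {"id": "TC-02", "action": lambda: "OK".lower(), "expected": "OK"},
]
for entry in run_test_cases(cases, software_version="1.3.0"):
    print(entry["id"], entry["outcome"])
```

Recording the software version with each outcome is what makes the log usable as an audit trail when a defect is reported against a specific build.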

4.7 Evaluating exit criteria and reporting


Evaluating exit criteria is the activity where test execution is assessed against the defined
objectives. This should be done for each test level, as for each we need to know whether we
have done enough testing. Based on our risk assessment, we’ll have set criteria against
which we’ll measure “enough”. These criteria vary for each project and are known as exit
criteria. They tell us whether we can declare a given testing activity or level complete. We



may have a mix of coverage or completion criteria (which tell us about test cases that must be
included e.g. “the driving test must include an emergency stop” or “the software test must
include a response measurement”), acceptance criteria (which tell us how we know whether
the software has passed or failed overall e.g. “only pass the driver if they have completed the
emergency stop correctly” or “only pass the software for release if it meets the priority 1
requirements list”) and process exit criteria (which tell us whether we have completed all the
tasks we need to do e.g. “the examiner/tester has not finished until she has written and filed
her end of test report”). Exit criteria should be set and evaluated for each test level.

Evaluating exit criteria has the following major tasks:


o Check test logs against the exit criteria specified in test planning; we look to see what
evidence we have for which tests have been executed and checked, and what defects
have been raised, fixed, confirmation tested, or are outstanding.
o Assess if more tests are needed or if the exit criteria specified should be changed; we
may need to run more tests if we have not run all the tests we designed, or if we realized
we have not reached the coverage we expected, or if the risks have increased for the
project. We may need to change the exit criteria to lower them, if the business and project
risks rise in importance and the product or technical risks drop in importance. Note – this
is not easy to do, and must be agreed with stakeholders.
o Write a test summary report for stakeholders; it is not enough that the testers know the
outcome of the test. All the stakeholders need to know what testing has been done and
the outcome of the testing, in order to make informed decisions about the software.
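Checking test logs against the planned exit criteria, as described above, can be sketched as a simple rule evaluation. The particular criteria and field names here are illustrative assumptions; returning the reasons alongside the verdict supports the test summary report, which must explain any criterion still outstanding.

```python
def exit_criteria_met(log_summary, criteria):
    """Evaluate illustrative exit criteria against a test-log summary.

    Returns (met, reasons) so a summary report can state which
    criteria are outstanding, not just a bare yes/no.
    """
    reasons = []
    if log_summary["tests_run"] < criteria["min_tests_run"]:
        reasons.append("not all planned tests executed")
    if log_summary["open_high_defects"] > criteria["max_open_high_defects"]:
        reasons.append("too many open high-severity defects")
    return (not reasons, reasons)

summary = {"tests_run": 180, "open_high_defects": 1}
rules = {"min_tests_run": 200, "max_open_high_defects": 0}
met, why = exit_criteria_met(summary, rules)
print(met, why)
```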

4.8 Test closure


During test closure the test results and outputs are consolidated, handed over and archived.

Test closure has the following major tasks:


o Test completion check to ensure that all test work is done. This consists of checking the
status of all test cases: did they all run successfully, or were they skipped for a
documented reason? The status of the defects should also be checked to see whether
they are in a final state: closed, deferred or cancelled.
o Handover of testware to the relevant person(s). This consists of handing over the
test documents (test plans, test cases) and test results (logs, test reports); all
deferred or accepted defects should also be communicated to their owners. The test
environment should be documented with regard to versions, databases and technical
information (such as IP addresses) and handed over to the environment administrator. In
some cases documents and other artifacts are stored in a configuration management
system; a clear linkage to the system and version should be made.
o A project retrospective meeting is held to gather lessons learned: good practices to
apply as well as bad practices to avoid. In some cases the resulting actions are
incorporated into future project plans. Areas to consider include:
- Was the user representation in the risk analysis sessions sufficient?
- Were the estimates accurate?
- What are the trends of the defects?
- What are the results of the cause analysis of the defects?
- Are there any improvement opportunities?



5. Test Planning
5.1 Introduction
Testing is a difficult thing to do well and therefore it requires detailed planning. All testing
activities take time. You must allow sufficient time during the planning stage for all the tasks to
be carried out correctly and to the required standard.

The aim is to produce and execute tests that are:


• Measurable
• Objective
• Repeatable

Test cases will need to be


• Risk based
• Prioritized

In order to identify and produce the required tests, execute them, and manage the process we
produce a detailed plan set. These plans vary in strategic aim, use, level of detail, and
content.

During the planning stage we produce our test documentation set:


• Test Policy
• Test Strategy
• Project Test Plan
• Phase Test Plan

5.2 Test Policy Document


A document describing the organization's philosophy towards software testing.

Test Policy statements will reflect the nature of the business, the risks associated with the
products and market place, and the business attitude regarding the required quality of
products and deliverables. The test policy will dictate the overall approach to testing and the
strategies employed.

The test policy is the first document produced in the test documentation tree. It is a
short, concise, high-level document, usually created and owned by the organization's
IT department (or equivalent). This document defines the organizational approach to
testing and the aims and objectives of the testing.

A test policy provides the direction for the lower level test documentation, helping to keep the
testing focused on the test objectives stated. During the early test planning stages of the
project, when the lower level documentation is being produced (strategy, test plans etc.), the
policy is the guiding factor that provides the aims, objectives, and targets that the testing will
be expected to achieve. It provides:
- The philosophy for testing within the organization
- The definitions of what testing means
- A guide to what will need to be covered in the lower level documents
- The framework under which the testing will be carried out.

Example Test Policy Statements


For example a company manufacturing a circuit board component for use in the space shuttle
may include test policy statements such as:
- Build and test processes will be automated and amalgamated wherever possible
- All individual components will be tested for accuracy of tolerance prior to insertion into
a build.



- Each complete circuit will be tested for functionality, accuracy, resilience, and
reliability prior to being shipped.
- All circuits will have functional tolerance testing executed whilst exposed to
temperatures ranging between –100 and +300 degrees Celsius.
- The company will plan to achieve CMMI level 5 within the existing 3 year plan.
- Etc.

The test policy will in principle cover all testing activity within the organization, including
new developments, maintenance activities, and third-party developed or bought-in software,
although the way in which these are covered may be dealt with in separate sections of the
document itself.
The document must address all test activities and should be agreed by all parties involved in
the development process to promote understanding and a shared vision of the testing aims
and objectives.

Test Policy Contents


- Rule set
- A definition of testing
- The organizational approach to testing
- The testing process
- The evaluation of testing
- Quality levels required
- Test process improvement

A defined rule set


The rule set gives the policy statements that are the key drivers for the test process. They
give the controls to the process, which allows clear understanding and focus for all testing
activities. E.g.:
- The company will NOT implement untested software into the live environment
- All key stakeholders (as defined in the project plan) MUST have signed to accept
delivery of the software release prior to any live upgrades taking place.
- All major projects WILL have an individual project manager appointed on completion
of a successful feasibility study. The project manager will be responsible for the
overall project management and delivery.
- The test function will be responsible for all faults in the live environment (Dependent
on the agreed system operating requirements at the time of project definition having
been maintained)
- Up to 20-25% of overall project budget will be allocated to the test function.
- Etc.

Definition of testing
This is the definitive statement detailing what the organization understands by the term
testing. This will state what testing will be required and what the testing is meant to achieve.
- Testing will confirm that the delivered software solves a business problem
(Acceptance).
- Testing will confirm that the software functions as detailed in the product design
documentation (Functional).
- Testing will confirm that no existing systems, system processes, functions, or facilities
are impacted by any enhancement, or changes to an existing system (Regression).
- Testing will confirm that the functionality of any networked system or component will
not be impacted as a result of the introduction of a new component/system and or any
changes to an existing system (Integration).
- Testing will consider and address all non-functional attributes of each software
development/release.

The organizational approach to testing


The organizational approach to testing will detail how the test function fits into the overall IT
structure. Successful testing requires strong communications and links to the other teams
involved in the development activities such as customers, designers, developers, etc.
- The company structure will allow for three distinct teams: Design, Development, and
Testing. These teams will be independent from each other, with independent



management reporting lines. Each area will interact with the other two but will remain
responsible for their specific function in each project. OR,
- Specialist resource will be allocated to each project team, with the required test
resource being allocated to the project. The project teams will be co-located to foster
closer team working and all project resource will report directly to the project
manager. OR
- The required testing personnel and skills will be held within a dedicated pool of test
resource which will be made available to the projects on a requirements basis. These
resources will be matrix managed out to each project for the duration required.

The testing process


The test process will also be dictated by the Test Policy, such as:
- Development and execution of a test plan in accordance with departmental
procedures and user requirements.
- All testing activities will be carried out within our existing ISO9001 quality
accreditation.
- All testing activities will be carried out within the development framework methodology
process XYZ
- The testing approach will be as required to meet the needs of the V-model (or other)
development lifecycle needs.

The evaluation of testing


The approach to evaluating the success and value of the testing activities will also be stated
in the Test Policy document.
- Test effectiveness will be an integral part of the Post Implementation Review process.
- The required data will be gathered to allow the cost of each fault found during test
process to be evaluated.
- The number of live faults found within the first 3 months of live operation will be
analyzed and compared to the faults found during test in order to gauge test
effectiveness.
- Test effectiveness measures will be carried out by an independent external
consultancy to ensure impartiality at all times.
- The initial project meeting for all new projects will consider previous test effectiveness
measures when formulating the test approach for that project.
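The third bullet above, comparing live faults with faults found during testing, is the usual Defect Detection Percentage (DDP) measure of test effectiveness. A minimal calculation, with made-up numbers, looks like this:

```python
def defect_detection_percentage(found_in_test: int, found_live: int) -> float:
    """DDP = defects found in testing / (defects found in testing +
    defects that escaped to live operation), as a percentage."""
    total = found_in_test + found_live
    if total == 0:
        return 100.0  # no defects anywhere: testing missed nothing
    return 100.0 * found_in_test / total

# 90 defects found during testing, 10 in the first three months live:
print(defect_detection_percentage(90, 10))  # 90.0
```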

Quality levels required


The company's required quality levels should be stated in the test policy document.
- No more than one high severity fault per 1000 lines of delivered code to be found in
the first six months of live operation.
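The quality level above is a defect-density criterion (high-severity faults per 1000 delivered lines of code). It could be checked as in this illustrative sketch; the function name and inputs are assumptions for the example.

```python
def quality_level_met(high_sev_faults: int, delivered_loc: int,
                      max_per_kloc: float = 1.0) -> bool:
    """Check the stated quality level: high-severity live faults per
    1000 delivered lines of code must not exceed max_per_kloc."""
    if delivered_loc == 0:
        return True
    density = high_sev_faults / (delivered_loc / 1000.0)
    return density <= max_per_kloc

# 12 high-severity live faults in 15,000 delivered lines -> 0.8 per KLOC.
print(quality_level_met(12, 15_000))  # True
print(quality_level_met(20, 15_000))  # False (1.33 per KLOC)
```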

Test process improvement (TPI)


The company's approach to test process improvement will be stated in the test policy
document.
- Post project reviews will be performed after each project.
- The TMM plan will be applied to all test activities within the overall company CMMI
accreditation process as indicated in the company’s CMMI plan.

5.3 Test strategy document


The test strategy is produced after the test policy document and will dictate the test
framework required to meet the test policy. It is a high-level document defining the test
phases to be performed and the testing within those phases for a program (one or more
projects).

The test strategy details the overall testing approach and what will be done in order to satisfy
the criteria detailed in the test policy. This document is a strategic document and as such it
must complement the other IT strategic working practices and development procedures.
Testing is a part of the development process and must be fully integrated with the other
project teams in order to be successful. Test strategies cannot be developed in isolation, they
must have buy in from the other project areas and work in conjunction with the other teams –



remember testing cannot do it alone! If the test policy is the What, then the test strategy is the
How (at a high level).

The strategy will detail:


- Test approach
- Test team structure
- Ownership and responsibilities
- Test tools
- Metrics
- Reporting process
- Fault management
- CM and Change control
- Etc.

Test strategy details


Some examples of the type of information you may see in a strategy for a company
designing web sites:

In order to meet the company's test policy on quality deliverables within tight timescales,
the company has adopted Extreme Programming (XP), a lightweight development methodology.
All web development projects will be developed using the XP methodology.
- The independent test function will provide the XP coach to the development teams for
each project.
- All project baseline documentation and code will be subject to appropriate review and
sign off (project budget up to 15%)
- The independent test function will conduct acceptance testing against the business
stories specified by the customer.
- The independent test function will specify, create and execute the acceptance test
cases in conjunction with the customer.
- Automated test tools will be considered at the start of (and throughout) each project
and will be used wherever advantage can be identified.
- etc.

Test Strategy sections


Typically the Test Strategy is comprised of two main sections:
- The risks to be addressed by the testing of the software
- The testing that will be executed to address the identified risks

Inputs to the formulation of the strategy will include:


- Test Policy Document/s
- Risk register/s
- Test Process Improvement information (if any information is available)
- Business sector, organization and structure.

Details of the test phases


The phases of testing required will be identified, with the structure, content and controls for
each of the phases described. The following information will be described for each phase:

Entry and exit criteria:


Each phase will have a specified focus for the testing. Entry and exit criteria are required for
each phase in order to manage the process as the required products move between the test
phases. These criteria ensure that ALL the test objectives and the required deliverables have
been achieved for each stage.

They can also be helpful in avoiding the temptation to promote a product to the next stage
before it is ready (quality development and testing take a finite time; promoting a product
to the next stage before it is ready, in order to try to gain time, is a false economy and
is likely to result in a greater delay at a later stage).



Consecutive phases will have a close match between the exit criteria from the delivering
phase and the entry criteria of the next phase. This will prevent a product meeting the exit
criteria from one stage and failing to meet the entry criteria of the next, which would result in
the need to identify actions and owners in order to restart the process.

Approach to testing:
Each phase will have a defined approach to testing along with a rationale for its selection:
- Top-down
- Bottom-up
- Priority driven

Test case design techniques:


The test strategy will state which test case design techniques are to be considered for each
phase. It is usual to expect that the Component Test Phase would include more White Box
type test techniques compared to the Acceptance Test Phase, which normally uses Black Box
test techniques exclusively. Remember that the strategy is written early and applies to all
developments, specific project and phase test plans will be produced for each product that will
indicate any variations from the test techniques specified in the strategy for each phase.

Test completion criteria:


The test completion criteria will be specified for each phase. This ensures that the proof exists
(test results) to support the decisions made regarding promotion of the product between
phases. Test completion criteria are linked to risk and ensure that all the testing required has
been executed and the agreed test coverage targets have been met.

The degree of test independence required:


Independent testing tends to achieve better results than less formal testing but of course
costs more. The greater the independence of the test function the more formal the process
must be, with a correlation between higher risk solutions and more independence of the test
function. The strategy will dictate the independence of the test function required in order to
achieve the desired levels of software quality and the justification for the decision.

Standards that must be adhered to:


The strategy will describe the standards that apply to each test phase and which specific
standard requirements will be complied with.

Test environments in which the testing will be executed:


The strategy will detail the test environments to be used. There may be a single test
environment that is used for all phases of testing with ownership of the environment being
transferred between the parties responsible for test execution at each phase, or a unique
environment for each test phase.

The approach to test automation:


The strategy will state the company approach to test tools and their use. This may well be
a process in its own right, with need identification, feasibility study, tool assessment,
trialing and implementation all specified. Test execution and management tools can be
specified in the strategy for use across all projects to give consistency across the
organization.

The degree of reuse of software:


The strategy will indicate to what level the various test software items will be reused between
levels of testing. This includes:
- Test harnesses, drivers, stubs and simulators
- Automated test scripts
- Test data

Recycling these items between test levels can save resource and will prevent repetition
and duplication of test preparation activities across the levels of testing. However,
beware of recycling test data and scripts between test levels, as any undiscovered errors
in the data or the scripts are carried forward; subsequent test levels may then get the
same incorrect results. Recycling can also foster laziness, resulting in the same testing
being executed a number of times, which is not the most effective use of resource and can
detract from the true focus of a particular test phase.

The approach to retesting and regression testing:


The strategy will cover the approach to retesting and regression testing. Scheduling of
releases, timing and contents, how the testing will be planned and executed, the level of risk
assessment and automation etc.

The test process to be used:


Testing based on the V-model, other industry standard development lifecycle methodology, or
internal development methodology. Details of the test process that are to be applied and any
procedures and templates to be used.

Measures and metrics:


Details of the process measurements that are to be made, the data that will need to be
captured at each specific point, how the data will be analyzed and how it will be used.

Incident management:
The strategy dictates how incidents will be managed, what the process will be, who is
responsible and if any tools will be used.

The test strategy for an organization must reflect the business arena and physical attributes
of the organization itself in order to be of maximum use. The company test strategy is not
always a single document, but can be made up of a number of other strategy documents
comprising the complete strategy document set. This document set may include:
- Corporate test strategy
- Specific site/location test strategy
- Program test strategy (for a series of projects)
- Project test strategies

And, in some instances, may also be presented as part of the test plans. Different strategies
may be required for testing different applications. The strategy sets the framework for the
testing therefore it is not surprising that a different strategy may be required for testing a
safety critical application compared to a non-critical web-based application.

5.4 Test Plan


Project Planning Hierarchy

[Figure: Project Plan Hierarchy – the Project Plan sits at the top level, with the
Project Design Plan, Project Development Plan, Project Test Plan and Project
Implementation Plan beneath it.]

Master Test Plans


Master test plans document how the overall test strategy will be applied to a specific project.
They will confirm compliance, or explain non-compliance, with the test strategy.

Master test plans are specific to one particular project and are closely related to the
associated project plan for that development. The project plan will detail the project critical
path showing where the testing elements of the project are on the critical path.

Master Test Plan objectives


The master test plan also contains details and information that will enable the test team to:
- Identify the timescales for the required testing
- Assess the costing for the project
- Assess the project test resource requirements
- Identify the levels of testing required
- Identify the number of test cycles required
- Satisfy management and users (customers) that adequate testing will be performed
- Define and communicate contributions, roles and responsibilities for everyone
involved in delivering the test aspect of the project
- Identify the required project deliverables

Level Test Plan


A document providing detailed requirements for performing testing within a level (phase) e.g.
component test plan, integration test plan etc.

Level Test Plan details


The level test plan is the detailed document that will include all the information required to
control testing for each phase.

The level test plan will contain details such as:


- Details of the modules/functions/processes to be tested in that phase
- Details of the business risks associated with those functions
- Test coverage requirements
- Test techniques to be applied
- Methods and tools
- Test specification and execution requirements
- Details of the test environments to be used
- Testing Schedule
- Details of any variations from the project test plan or the test strategy

Some organizations will have all of the above documents; some will amalgamate or split the
documentation set. What is important is that all the required information is covered.



To summarize:

Test Policy - the organization's philosophy of testing


Test Strategy - high level program document defining test approach.
Master Test Plan – detailed project document defining test phases and content.
Level Test Plan – detailed document providing all testing and test requirements for a phase.



6. Testing and risk

For background reading on risk-based testing refer to chapters 4 “Risk Based Testing”,
5 “Good Enough Testing” and 6 “Measuring Software Quality” of The Testing Practitioner.
6.1 Introduction
Risk based testing considers what could go wrong if the system fails. This differs from the risk of
the system failing during live operation. Testing can be carried out to identify whether a risk is
present or not, but what happens if the testing activity itself fails? Do we have to consider the
risk of not testing at all, or is this the same as the test process failing?

Risk is all about prioritization of the tests we can run within a series of limitations on time, cost
and quality goals. One must therefore ask whether all testing is, or should be, 'risk based'.

We must be aware that risk based testing is applicable to both new products and
maintenance packages. The risk of a new product failing may have less impact than that of a
maintenance activity. For example: what is the risk of the package of change failing vs. the
package of change regressing the current live system? Maintenance fixes come in two forms,
planned and emergency. Can we establish the risks associated with emergency fixes, or
should we just assume that if it were not a major problem it would not be an emergency?
In that case the risk of putting the fix live may have no more impact than the current failure. In
other words, perhaps we should test an emergency fix with a view to ensuring it does not
make the current system any worse.

Just because something can go wrong does not mean that it will. However, experience shows
us that not only will it go wrong; it will probably be the first thing that goes wrong, and in a way
we had never considered. This is often known as 'Murphy's Law'.

Objectives
The key objective is to provide the student with sufficient information to be able to introduce
the concepts and basic principles of risk based testing and risk management into their
organization.

What is Risk?
Perhaps risk is a value only definable subjectively, and even then it would vary depending on the
circumstance or the perspective. Risk is what is taken when balancing the likelihood of an
event against the impact if it occurs. In effect, what are we willing to leave to chance? When we
cross the road we must weigh the likelihood of falling over half way across against the
distance and speed of the traffic. If the chance of falling over is low, we will risk crossing when
the distance is short and the speed is high. If the likelihood of falling is high, the distance must
be greater and the speed less, so that we would have a chance to get up again without suffering
any impact. The hard part is judging how many times you will fall vs. the speed and
distance of the car! In most cases the level of risk we [the business] are willing to take
depends on the amount of time we have and the available budget. The issue with testing and
risk is that the testing activity is often squeezed between a late development activity and a
fixed time to market.

The more of both that is available, the more testing can be done and, as a result, directly
or indirectly, risk is reduced. In order to quantify the level of risk in a system or
subsystem, analysis of the situation is first required.
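The balancing of likelihood against impact described above is often quantified as a risk exposure figure. The following Python sketch is illustrative only; the risk names, probabilities and cost figures are invented for the example, not taken from the text.

```python
# Illustrative risk-exposure calculation: exposure = likelihood of the
# event (as a probability) times the estimated cost if it occurs.
# All figures below are invented for the example.

def risk_exposure(likelihood, impact_cost):
    """Return the expected cost of a risk."""
    return likelihood * impact_cost

risks = {
    "payment request times out": risk_exposure(0.10, 50_000),
    "logo slightly misaligned": risk_exposure(0.50, 200),
}

# Ranking by exposure gives the prioritization discussed above: the
# unlikely-but-expensive failure outranks the likely-but-cheap one.
ordered = sorted(risks, key=risks.get, reverse=True)
```

Ranking risks this way makes the time/budget squeeze explicit: when testing is cut short, the items at the bottom of the ordered list are the ones left to chance.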



After establishing how much risk there is, one can take measures to minimize or eradicate some
or all of it. Bear in mind that risks are not static: they are as dynamic as the system
of which they are a part, and as that system develops, risks are introduced, removed or changed.

Product Risks
The intent is to consider not just the risk of the product itself not working but also the impact it
could have on existing systems. Implementations of web sites are a prime example. Web
sites are designed to increase the volume of business a company does. This in turn
has a direct impact on the capacity of the business as usual [BAU] or legacy systems.

The web site itself may be risk free in that it operates perfectly well at the functional and
non-functional level. The risk consideration is whether the legacy systems will stand up to the
change of operational profile. To place BAU at risk is a recipe for failure of not just your
systems but probably your business as well.

Extra reading with respect to web testing risks and strategies:

- 'The Web Testing Handbook', Splaine & Jaskiel - STQE Publishing - ISBN 0-9704363-0-0
- 'Testing Applications on the Web', Nguyen - Wiley - ISBN 0-471-39470-X

Project Risks
The integrity of the system design, the project management plan and the business case for a
system do not guarantee success. They are a good start, but without test effort to support the
project the chances of success diminish significantly.

The project is reliant on a number of key factors within the test team, some of which are
equally applicable to the development and design effort:
Ø The ability of the staff involved
Ø The availability of the right tools
Ø A suitable environment upon which to test the system
Ø The support activities such as fault fixing and delivery
Ø The ease with which the system can be maintained.

Again, web sites are a prime example. The speed of technology change means that few people
have much, if any, relevant experience of a specific architecture or the components that make
it up. It is therefore much more likely that a significant number of errors will be made, and the
risk of the project not being a success is correspondingly increased.

6.2 Risk Management


When do we Identify Risk?
Putting a risk on the risk register does not mitigate it! It requires analysis. Risks are not static.
All key stakeholders have a vested interest in the risks; as a result the risks must be
monitored and kept up to date as required. Any change may have a direct or indirect impact
on one or more risks.

You may well have a very full risk register, but when did YOU last look at it?

How can Testing help with Risk?


Testing in the first instance is a preventative activity. Reviews and Inspections find many
faults before they are propagated into later stages and into code. Once in the code dynamic
testing roots out many more. The iterative tasks of debugging and regression testing mean
that the risks to the system should be well understood and under control well before the
implementation date of that system.

Sadly even the best-intentioned fall foul of a series of restrictions:


Ø Poor risk analysis
Ø Late entry into the development cycle
Ø Lack of time
Ø Lack of budget
Ø Change



Can Testers Mitigate Risk?
There are two schools of thought on this subject. One says that testing a system can only
identify whether an identified risk is present or not; testing can do nothing physically to reduce the
risk. The other view is that without testing the likelihood and impact of a risk will not be known.
If the likelihood and impact are known, then the risk of the system failing is known and, as a
result, at least some of that risk has been mitigated.

However, by definition we can be certain that strong change control processes and the
minimization of change and scope creep will reduce the level of risk.

Risk Analysis
Identification of risks is not sufficient. Risks must be analyzed to establish the type of threat
they pose and to establish what, if anything, can be done about them. In many cases just
because a risk has been identified does not mean that a test or series of tests can or should
be devised to establish the level of that risk.

For example: for many, risk is subjective and often perceived rather than measurable. Consider
the risk of a meteorite hitting you on the head whilst you line up the winning putt of the British
Open Golf championship.

Firstly, to many there is no risk at all, because they will never be in that position.
Secondly, is it possible to simulate a test? Thirdly, would we want to? Fourthly, how big would
the meteorite need to be? Meteorites range from specks of dust up to any size we could imagine.
The outcome of being hit by a speck of astral dust would probably be that you would not notice;
in fact we are being hit by them all the time. Could you therefore blame the missed putt on a
speck of dust that you did not notice? If the meteorite were a mile across, would anyone care
about the golf?

When analyzing the risk perhaps we should consider separate classes dependent on the
likely ‘public view’ of the error. Could I suggest we have e-business, front office, back office
and batch ‘classes’ of risk?

Risk Identification
Risk identification is an exercise driven by priorities within the system or project under test. It
is easy to identify risk; the art is to identify risks that matter.

Risk Identification is often mistakenly considered to be a ‘one off’ task that takes place during
the early stages of the project. In part this is true; however, we must continue the identification
exercise throughout the project as any change may uncover another risk.

Who owns the risk?


All key stakeholders have a vested interest in the risks; as a result the risks must be
monitored and kept up to date as required.

The users are ultimately responsible for the risk. They will have been involved in the
identification stage, the prioritization, the analysis and mitigation exercise, as indeed will most
of the key stakeholders. The only difference being that the users will have to work with the
system in a live environment.

The final decision to implement the system and move from the comparative safety of the test
environment into the live environment is not one to be taken lightly. Often a large amount of
money will have been earmarked for a marketing budget, and the new products that enhance or
replace existing ones will define the company's position in the market for the next few years.
Whilst the system is in test there is a chance that any remaining problems will be found and
resolved; the onus at this point is on the test team. However, once the system has gone live
the spotlight is on those who made the decision to 'go live'.

The tester’s role is to ensure that the information presented to the users at any point during
the test phases is objective, accurate and has focused on the ‘right’ aspects of the system.



Views of Risk
Risk is based on personal perception of the impact and likelihood. As a result each risk must
be analyzed with this in mind. The resultant categorization and priority rating will always
reflect the uncertainty of the risk analysis.

Does the product deliver the required service? Where ‘service’ is defined as the activity of the
product.
Ø Commercial
o Process the product
o Support customers
Ø Safety Critical
o Failure rate
o Redundancy
Ø Embedded Systems
o Real Time transactions
o Failover mechanism

From this the impact on any individual or group will differ depending on how often the function
is used and how much of the function is dependent on that part of the system.

Business/User View:
Ø Will the new system provide the business benefits identified in the system proposal
document?
Ø I know it is unlikely to happen, but what if it does?
Ø What is the impact on the current business profile?

Testers View:
Ø What is the priority of each of the risks?
Ø How much test coverage is needed to identify if the risk is still there?
Ø What volume of faults will be found and how fast will they be turned around?

Developers View:
Ø How complicated is the processing?
Ø Do the skills and toolsets exist to support this product?
Ø Are the skills and tools available to the team?
Ø What are the delivery timescales?

Project Managers View:


Ø Will the deadline be achievable?
Ø Is the new technology stable?
Ø Can the system be implemented?

6.3 Risk and the Test Strategy


The Test Strategy should be used to document the test stages and techniques to be used
when addressing specific types of risk. Every project has a different set of objectives; the level
of risk, the test coverage and the test techniques used will determine the likelihood of
achieving these objectives. What do we need to put into our test strategy to ensure that risk
management has been considered?

Reviews
Ø Risks linked to test design techniques
Ø Less intuition or 'gut feel', more business reasoning
Ø Applicable to different test levels
Ø Supports customers by allowing them to make an informed choice
Ø Provides the mechanism for communication between key stakeholders
Ø Makes the test process more manageable
Ø Better test coverage in the right areas - targeted testing
Ø No risk - no test



The point in the project lifecycle at which a risk is detected will either increase or
decrease its impact. For example, if a batch job failure is detected before the job is triggered
then the impact is small; detected afterwards, the impact can be far greater.

Roles and Responsibilities


The ‘users’ are likely to be best placed to assess the likely business impact of a risk:
Ø End users
Ø System managers
Ø Live support
Ø Business managers
Ø Compliance managers
Ø Customer Relationship managers

The project team members are best placed to assess the likelihood of an error occurring:
Ø Designers
Ø Programmers
Ø Project manager
Ø QA staff
Ø Test manager
Ø Test team

As an example:
A new web interface is introduced that will be required to provide 100,000 customers with
account information within 3 seconds of the enquiry.

There is a risk that this requirement will not be met because the communications channel may
fail, because other system 'traffic' may impede this function, or because the application layer
may provide the wrong information.

This situation could be tested using 4 different techniques at 3 different stages:


- Cause Effect Analysis at Functional System Test stage
- Reliability and Recovery Testing at Non-Functional System Testing stage
- Performance Testing at Large Scale Integration Testing stage

The risk of the communications channel not being of a suitable specification should have
been mitigated during the architecture design stage of the system.

Risk and Test Coverage


Testers have the responsibility to ensure that the test coverage for any identified risk is in line
with the priority of that risk.

However, it is essential that the coverage is monitored and managed as testing progresses. It
is simple enough to agree at the outset what the perceived level of risk may be. It is much
harder to ensure that, as risks change, the testing to mitigate them changes as well. It
must be borne in mind that coverage can be reduced as well as increased.

6.4 Risk Identification Techniques


Example: 29th September 2000: An Indian Airlines Airbus A320 flight IC229 had to circle
Gauhati airport several times because an elephant broke through a wall and strolled around
the airport.

Even if we had used any of the methods below, would the risk to the aircraft and the passengers
have been identified?

These methods are dependent on the maturity of the systems development process, the skill
and experience of the staff, the available time and when in the cycle risk is considered.



Expert Interviews:
In the likely event that key stakeholders will not be available for brainstorms or other events,
expert interviews should be planned. Keeping these interviews to 30-40 minutes will ensure
the attention of the interviewee.

The interviewer should ensure that the interviewee has received written objectives of the
review in advance.

The major disadvantage of these interviews is the lack of cross-fertilization of ideas. The other
rules are pretty much the same as for a brainstorm in that the objective is to generate a
quantity of data rather than analyzed output.

Independent Assessment:
As the title suggests the use of external experts can be used in place of or alongside internal
resource. Independence does not mean that those involved must come from outside the
company. However, the key is that whoever is involved is experienced in the identification of
risk, but has no vested interest in the viability, delivery or commerciality of the product.

Independence is a state of mind not necessarily a physical state. The assessment must be
held within a series of guidelines and those taking part must first have been given sufficient
information about the product, its priorities and use to be able to focus on at least the major
areas of concern.

It may be that a similar product is available for comparison in which case some basic
understanding of its operation should also be provided to the assessors.

Risk Templates:
Once a risk has been identified an amount of information must be recorded. Risk templates
should be made available to all team members. It is not expected that one person or indeed
one team will provide all information. The content of a risk template is specific to the type of
product being produced, the market sector, the safety criticality or the level of compliance
required. It is therefore impossible to provide a generic definitive template.

Lessons Learned:
Relies on recording issues as they occur in other aspects of the system or project under test.
Typically the risks identified in Stage #1 of a project can be reviewed at Stage #2 etc.
Experience is gained at two levels, individual and company. Individuals are subject to direct
experience and gain insights and learning via reflection, at company level the experiences of
a group are more diverse and require a more disciplined approach to review and analysis.

Direct recording [metrics] of data related to risks such as the number of faults, the system
down time, actual cost to repair and estimated business cost would provide objective
examples. Analysis of these risks will provide input to the risk identification activities of the
future.

It should be remembered that the risk identification process itself should be reviewed
periodically, at least annually. Lessons learned should focus on
establishing areas where certain types of risk occurred that were not previously identified or
considered.

It is not always necessary to learn only from your own experience; you can also learn from
someone else. Similarly, companies can learn from each other. In a
commercial environment this may not be the result of a direct relationship; however, public
bodies such as regional education and health authorities, police forces and other government
agencies could set up formal Risk Review Bodies.

Risk Workshops:
Organized events managed by a facilitator that can be carried out as 'one-offs' or as a series of
progressive meetings. The objective is to take a view that combines the freestyle of a



brainstorming session with a series of checklists. In effect these are the key areas, now let’s
have a think!

Using a series of workshops with differing groups of resources, and therefore perspectives, the
risks will evolve over a series of events into a complete picture rather than a
single point of view. This type of event promotes a project-wide set of priorities but has the
effect of watering down or compromising in areas where agreement cannot be reached.

Risk Poker:
Another approach for risk workshops is to use Risk Poker. Risk Poker is based on
Planning Poker and was invented by Improve. All stakeholders can give their estimated risks in
one meeting based on user stories as used in Agile development.
In the meeting each stakeholder has a deck of Risk Poker cards. These cards are
comparable to Planning Poker cards, but in addition some have a colored dot. The risk poker cards
have the following colored dots: light blue, green, yellow, orange, red and purple. For every user
story each stakeholder selects a card indicating the technical risk. If a stakeholder is
not involved in technical issues, he will not give an indication. The color indicates the risk
involved, where light blue is the lowest risk and purple the highest. One can also decide to
use only 3 or 4 colors for convenience.
Of all the given business risk values, the highest and the lowest are discussed. When
they differ a lot, the stakeholders who gave these values must explain their reasoning. The
purpose of the discussion is that everybody knows and understands the different
viewpoints and ultimately comes to a consensus risk value (i.e. color). If no such value is
reached, it can be decided to take the highest risk value or simply the average.
Once the business risks are discussed, the same is done with the technical risk values for the
same user story.
When both risk types for the user story have been agreed upon, the next user story is
evaluated using the Risk Poker cards, until all user stories have been evaluated.
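As a sketch, one Risk Poker round for a single user story could look like the following Python fragment. The six-colour scale comes from the description above; the numeric ranking, the threshold for triggering a discussion, and the choice of the highest vote as the consensus fallback are illustrative assumptions.

```python
# Minimal Risk Poker round (illustrative). Cards run from lowest to
# highest risk; a wide spread between the highest and lowest votes
# triggers a discussion, otherwise the highest vote is taken as the
# (conservative) consensus - one of the fallbacks the text mentions.

CARD_SCALE = ["light blue", "green", "yellow", "orange", "red", "purple"]

def poker_round(votes):
    """Return (consensus_colour, needs_discussion) for one user story.

    votes: colours chosen by the stakeholders; stakeholders not involved
    in this risk type simply do not vote and are absent from the list.
    """
    ranks = sorted(CARD_SCALE.index(v) for v in votes)
    if ranks[-1] - ranks[0] > 1:          # votes differ a lot
        return None, True                  # highest and lowest voters must explain
    return CARD_SCALE[ranks[-1]], False    # take the highest as consensus
```

In practice the round would be repeated after the discussion until a consensus colour (or an agreed fallback such as the average) is reached, then run again for the other risk type.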

Brainstorming:
Brainstorming is driven via a series of meetings in which communication is the key; no one
person in a large project will have the complete picture of how a system works. Based on these
two main premises, the level of mutual understanding will increase, and the approach promotes
a 'no blame' culture.

Multi-disciplinary teams with a wide range of specialist skills are essential to the integrity of
the process, where no one aspect of the project is in the majority. The objective of the team is
to provide a wide range of perspective and perceptions on all aspects of the system and its
operation.

Brainstorming sessions should last no more than 2 hours and focus on the identification of
risks; the analysis will follow later to rationalize and consolidate the information.

The basic rules are:


Ø No criticism, analysis comes later
Ø Building up the suggestions of others is encouraged
Ø Risks are recorded using a short title and indication of the source
Ø Quantity is the aim. Analysis is not.

Like all formal activities a facilitator for the meetings will be required, whose role it is to ensure
a high level agenda is available and followed and to ensure the meeting does not get stuck on
details.

The output from a brainstorm is a long list of issues that will be later rationalized into data that
can be acted upon by appropriate members of the management team.

Prompt Lists and Checklists:


In an attempt to avoid re-inventing the wheel, a series of prompt lists should be available
to all those involved in the risk identification process. Ideally the list should focus
on generic areas of the system. The objective is to prompt thought in the areas of risk to be



considered, but it should also give an indication of the type of problems to which your system
could be subjected.

For example:
Headings could possibly include;
Ø Compliance to specific standards and regulations
Ø Possible internal and external threats to hardware, software, data or human resource
Ø Objectives and acceptance criteria, such as performance criteria, virus protection,
usability, reliability measures
Ø Long term benefits e.g. staff level reduction

Checklists are a more complete and detailed listing of specific risks that must be addressed.
For example:

“Compliance to section 1 paragraph 8 of IEEE 1234” would be a checklist item, as opposed
to the generic prompt, which would have generated discussion around the area of standards
and the probable issues.

Fault Prediction:
“If a guy tells me that the probability of failure is 1 in 10^5, I know he is full of crap.” Richard P.
Feynman, Nobel Laureate, commenting on the NASA Challenger disaster.

In order to establish the likelihood of an error in the components of the system and perhaps
the type of error we can expect to encounter we should consider some predictive techniques.

Fault trees provide both a quantitative and a qualitative analysis of certain failure situations of a
system and can be used for hardware or software components, or indeed a combination
of both. Predictive techniques and models [e.g. Piwowarski et al. and the Rivers-Vouk model]
provide historical statistical analysis of fault trends that can be used as a guide to the likely
number and type of errors in an identified system type. [see further reading]
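The fault-tree arithmetic can be sketched briefly. Assuming independent component failures (an assumption of the sketch, not a claim from the text), an AND gate fires only when all of its inputs fail, and an OR gate fires unless every input survives; the example system below is entirely hypothetical.

```python
from math import prod

def and_gate(probs):
    """All inputs must fail for the gate event to occur (independence assumed)."""
    return prod(probs)

def or_gate(probs):
    """The gate event occurs if any input fails (independence assumed)."""
    return 1 - prod(1 - p for p in probs)

# Hypothetical top event: the system fails if both redundant power
# feeds fail (AND) or if the single controller fails (OR).
p_top = or_gate([and_gate([0.01, 0.01]), 0.001])
```

Note how redundancy drives the AND branch down to 0.0001, so the single controller dominates the top-event probability, which is the kind of insight a quantitative fault-tree analysis is meant to surface.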

Self-Assessment:
This method is pretty much self-explanatory. Using the outcome from the lessons learned
technique, the accuracy of the risk assessment process from a single previous stage or a
series of them can be assessed. The limitation of this approach is that you are limited by
your own knowledge base: if you were off target last time, you may choose a different
approach this time that could be just as far off, but in another direction.

6.5 Risk Analysis


6.5.1 Quantitative Risk Analysis
Can the impact of the risk be measured?

Measures can be in terms of:


Ø Pure financial cost
Ø Loss of faith by customers
Ø Damage to corporate identity or brand
Ø Impact on other functions or systems
Ø Detection and repair time
Ø Maintenance and implementation costs

Risk reduction can be achieved either by reducing the probability or by reducing the impact.
However, predicting the impact of an issue in one part of a system presupposes an
understanding of the whole system, e.g. a change in volume in one area affects the
performance of another, which may cause a timeout failure in yet another.



If so, tests can be devised to reduce the major contributors to that measure. Once those tests
have been executed and the results analyzed, it may well be that the measured 'cost' risk has
been reduced below an acceptable threshold.

Note:
Risks may increase or decrease in priority as changes to system states occur.
For example, an embedded safety system is at little risk of failure while its two failover systems
are operable; as the backups fail, the risk to the main system increases. However, the
likelihood of all three systems failing together is much lower than that of a single system
failure. The risk analysis therefore requires that both impact and likelihood be taken into account.

6.5.2 Qualitative Risk Analysis


Do certain levels of service have to be maintained to meet legal compliance or industry
standards?

If so then the tests devised must focus on achieving a certain level of quality. Once that
agreed level has been achieved then the risk will be deemed to have been mitigated.

Examples of qualitative risk:


Ø Compliance to the ISO 9000 series
Ø An agreed duration for a specific test, within which an agreed number of failures [or a
Mean Time Between Failures [MTBF]] is considered acceptable
Ø BS 7799 security standard achieved
Ø Usability guidelines, either at company or industry level, have been met.

Note:
Risk analysis requires consideration of system states other than the initial state.
As situations occur other risks may be invoked or nullified.
For example, if a fighter plane has 2 engines and both fail, at that point the ejector seat
mechanism is invoked. The failure of the ejector seat mechanism is not a risk in the normal state
but is critical in the engine-failure state.

It can be seen that a valuable asset in the analysis of risk is the State Transition test case
design technique.
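The note above can be expressed as a tiny state table. The states and the rule that ejector-seat failure only becomes critical once both engines have failed come from the fighter-plane example; the function shape and category names reused from section 6.6 are illustrative assumptions.

```python
# State-dependent risk rating for the fighter-plane example: the risk
# level of an ejector-seat failure depends on the current system state.

def ejector_seat_failure_risk(engines_operable: int) -> str:
    """Return the risk category of ejector-seat failure in a given state."""
    if engines_operable == 0:
        return "Critical"   # both engines failed: the seat is now invoked
    return "Low"            # normal flight: the seat mechanism is dormant
```

Enumerating the states and rating each risk per state is exactly what the state transition design technique supports: a risk register entry then carries one rating per reachable state rather than a single static value.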

6.6 Categorization & Classification


Risks can be categorized and classified using whichever scale suits. The key is to ensure that
all stakeholders are aware of the implication of a particular category or class and the
methodology used to arrive at that point.

How does one type of risk impact different groups of users? If we understand this, then we
may need to run different tests or provide different types or levels of prevention. Much as with
fault analysis, it is essential that over-classification is not used as a method of ensuring a
personal favorite is considered above all others.

4 levels of category are usually favored and the terms used can vary:
Ø Critical
Ø High
Ø Medium
Ø Low

There is no point in having a ‘no risk’ category, if it is no risk it won’t be on the risk register in
the first place ☺
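One common way to arrive at the four categories is a likelihood × impact matrix. The 1-4 rating scales and the score thresholds in the sketch below are illustrative assumptions, not part of the text:

```python
# Hypothetical mapping from likelihood and impact ratings (1 = lowest,
# 4 = highest) to the four category levels named above.

def categorize(likelihood: int, impact: int) -> str:
    score = likelihood * impact
    if score >= 12:
        return "Critical"
    if score >= 8:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"
```

Whatever scale is chosen, the key point from the text holds: every stakeholder must know how a rating was arrived at, so the thresholds should be agreed and published, not left implicit.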

Classification of Risk can be based on:



Ø The test technique required to mitigate it, e.g. Usability, Security etc.
Ø Safety
Ø Economics
Ø Health and Safety
Ø Politics
Ø Technical issues
Ø Skills
Ø Tools
Ø Mandatory Dates [e.g. Y2K]
Ø Etc.

6.8 Risk Mitigation


The mitigation action is directly linked to the categorization and classification of the analyzed
risk. Contingency action is a common approach especially when serious risks are considered
to be out of normal control bounds. Flooding, terrorism, epidemics are amongst the most
common. In this case most systems have some sort of contingent Disaster Recovery
procedure. As long as these procedures are ‘tested’ then the risk is mitigated.

Perhaps we should consider the issue of risk compensation as a method of mitigating risk?

Example:
Problem: 500 motorcyclists a year are killed in accidents in the U.K.
Solution: Ban motorcycles!

A little drastic, I think. However, the problem is not so simple. In order to mitigate the risk we
make crash helmets compulsory, and perhaps all-in-one leather suits, knee and elbow pads.
The problem has not gone away. Why? Because with all this new padding the riders will feel
safer and now take more, or different, risks. The death rate stays as it is and we have to think
of new mitigating actions. We may have solved one risk, but we will have introduced others!

Risk and Metrics


Tracking the lifecycle of the risks provides metrics in a number of areas:
Ø Number of tests
Ø Initial classification to final classification
Ø Cost to fix
Ø Status at implementation
Ø Live failure comparison

Risk and Prioritization


Prioritization of risk is key to the specification and scheduling of test cases.
The Foundation course introduced Critical, Complex and Error Prone as drivers for
establishing with the users the areas of the system that were considered ‘at risk’. The
continued use of these criteria will focus the Identification and Analysis phases of the Risk
Management process.

6.9 Test Management Issues


6.9.1 Exploratory Testing
Session-based test management (SBTM) is a concept for managing exploratory testing. A
session is the basic unit of testing work, uninterrupted, and focused on a specific test object
with a specific test objective (the test charter). At the end of a single session, a report,
typically called a session sheet, is produced on the activities performed. SBTM operates within
a documented process structure and produces records that complement verification
documentation. A test session can be separated into three stages:
• Session Setup: Setting up the test environment and improving the understanding of the
product.
• Test Design and Execution: Scanning the test object and looking for problems



• Defect Investigation and Reporting: Begins when the tester finds something that looks to
be a failure.

The SBTM session sheet consists of the following:


• Session charter
• Tester name(s)
• Date and time started
• Task breakdown (sessions)
• Data files
• Test notes
• Issues
• Defects

At the end of each session the test manager holds a debriefing meeting with the team. During
the debriefing the manager reviews the session sheets, improves the charters, gets feedback
from the testers, and estimates and plans further sessions.
The agenda for the debriefing session is abbreviated as PROOF:
• Past: What happened during the session?
• Results: What was achieved during the session?
• Outlook: What still needs to be done?
• Obstacles: What got in the way of good testing?
• Feelings: How does the tester feel about all this?
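As an illustration, the session sheet and the PROOF agenda can be sketched as simple data structures. The class, field and variable names below are illustrative only, not part of any SBTM or ISTQB definition:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionSheet:
    """One SBTM session sheet; fields mirror the list above (names illustrative)."""
    session_charter: str
    testers: List[str]
    started: str                       # date and time started
    task_breakdown: str                # split of session time across tasks
    data_files: List[str] = field(default_factory=list)
    test_notes: str = ""
    issues: List[str] = field(default_factory=list)
    defects: List[str] = field(default_factory=list)

# The PROOF debriefing agenda as prompts for the test manager
PROOF = {
    "Past": "What happened during the session?",
    "Results": "What was achieved during the session?",
    "Outlook": "What still needs to be done?",
    "Obstacles": "What got in the way of good testing?",
    "Feelings": "How does the tester feel about all this?",
}

sheet = SessionSheet(
    session_charter="Explore the login flow for session-timeout problems",
    testers=["A. Tester"],
    started="2022-05-04 09:30",
    task_breakdown="60% design/execution, 30% defect investigation, 10% setup",
    defects=["DEF-101: session not invalidated after timeout"],
)
print(len(sheet.defects))              # 1
```

A structure like this makes the debriefing mechanical to run: the manager walks the PROOF prompts against each completed sheet.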

6.9.2 Systems of Systems


The following issues are associated with the test management of systems of systems:
• Test management is more complex because the testing of the individual systems making
up the systems of systems may be conducted at different locations, by different
organizations and using different lifecycle models. For these reasons the master test plan
for the systems of systems typically implements a formal lifecycle model with emphasis
on management issues such as milestones and quality gates. There is often a formally
defined Quality Assurance process which may be defined in a separate quality plan.
• Supporting processes such as configuration management, change management and
release management must be formally defined and interfaces to test management
agreed. These processes are essential to ensure that software deliveries are controlled,
changes are introduced in a managed way and the software baselines being tested are
defined.
• The construction and management of representative testing environments, including test
data, may be a major technical and organizational challenge.
• The integration testing strategy may require that simulators be constructed. While this
may be relatively simple and low-cost for integration testing at earlier test levels, the
construction of simulators for entire systems may be complex and expensive at the higher
levels of integration testing found with systems of systems. The planning, estimating and
development of simulators is frequently managed as a separate project.
• The dependencies among the different parts when testing systems of systems generate
additional constraints on the system and acceptance tests. They also require additional
focus on system integration testing and the accompanying test basis documents, e.g.
interface specifications.

6.9.3 Safety Critical Systems

The following issues are associated with the test management of safety-critical systems:
• Industry-specific (domain) standards normally apply (e.g. transport industry, medical
industry, and military). These may apply to the development process and organizational
structure, or to the product being developed.



• Due to the liability issues associated with safety critical systems, formal aspects such as
requirement traceability, test coverage levels to be achieved, acceptance criteria to be
achieved and required test documentation may apply in order to demonstrate compliance.
• To show compliance of the organizational structure and of the development process,
audits and organizational charts may suffice.
• A predefined development lifecycle is followed, depending on the applicable standard.
Such lifecycles are typically sequential in nature.
• If a system has been categorized as “critical” by an organization, the following non-
functional attributes must be addressed in the test strategy and/or test plan:
o Reliability
o Availability
o Maintainability
o Safety and security
Because of these attributes, such systems are sometimes called RAMS systems.

6.9.4 Other Test Management Issues

Failure to plan for non-functional tests can put the success of an application at considerable
risk. Many types of non-functional tests are, however, associated with high costs, which must
be balanced against the risks.
There are many different types of non-functional tests, not all of which may be appropriate to
a given application.
The following factors can influence the planning and execution of non-functional tests:
• Stakeholder requirements
• Required tooling
• Required hardware
• Organizational factors
• Communications
• Data security

Stakeholder Requirements
Non-functional requirements are often poorly specified or even non-existent. At the planning
stage, testers must be able to obtain expectation levels from affected stakeholders and
evaluate the risks that these represent.
It is advisable to obtain multiple viewpoints when capturing requirements. Requirements must
be elicited from stakeholders such as customers, users, operations staff and maintenance
staff; otherwise some requirements are likely to be missed.
The following essentials need to be considered to improve the testability of non-functional
requirements:
• Requirements are read more often than they are written. Investing effort in specifying
testable requirements is almost always cost-effective. Use simple language, consistently
and concisely (i.e. use language defined in the project data dictionary). In particular, care
is to be taken in the use of words such as “shall” (i.e. mandatory), “should” (i.e. desirable)
and “must” (best avoided or used as a synonym for ”shall”).
• Readers of requirements come from diverse backgrounds.
• Requirements must be written clearly and concisely to avoid multiple interpretations. A
standard format for each requirement should be used.
• Specify requirements quantitatively where possible. Decide on the appropriate metric to
express an attribute (e.g. performance measured in milliseconds) and specify a
bandwidth within which results may be evaluated as accepted or rejected. For certain
non-functional attributes (e.g. usability) this may not be easy.

Required Tooling
Commercial tools or simulators are particularly relevant for performance, efficiency and some
security tests. Test planning should include an estimate of the costs and timescales involved



for tooling. Where specialist tools are to be used, planning should take account of learning
curves for new tools or the cost of hiring external tool specialists.
The development of a complex simulator may represent a development project in its own right
and should be planned as such. In particular, the planning for simulators to be used in safety-
critical applications should take into account the acceptance testing and possible certification
of the simulator by an independent body.

Hardware Required
Many non-functional tests require a production-like test environment in order to provide
realistic measures. Depending on the size and complexity of the system under test, this can
have a significant impact on the planning and funding of the tests. The cost of executing non-
functional tests may be so high that only a limited amount of time is available for test
execution.
For example, verifying the scalability requirements of a much-visited internet site may require
the simulation of hundreds of thousands of virtual users. This may have a significant influence
on hardware and tooling costs. Since these costs are typically minimized by renting (e.g. “top-
up”) licenses for performance tools, the available time for such tests is limited.
Performing usability tests may require the setting up of dedicated labs or conducting
widespread questionnaires. These tests are typically performed only once in a development
lifecycle.
Many other types of non-functional tests (e.g. security tests, performance tests) require a
production-like environment for execution. Since the cost of such environments may be high,
using the production environment itself may be the only practical possibility. The timing of
such test executions must be planned carefully and it is quite likely that such tests can only be
executed at specific times (e.g. night-time).
Computers and communication bandwidth should be planned for when efficiency-related tests
(e.g. performance, load) are to be performed. Needs depend primarily on the number of
virtual users to be simulated and the amount of network traffic they are likely to generate.
Failure to account for this may result in unrepresentative performance measurements being
taken.

Organizational Considerations
Non-functional tests may involve measuring the behavior of several components in a
complete system (e.g. servers, databases, networks). If these components are distributed
across a number of different sites and organizations, the effort required to plan and co-
ordinate the tests may be significant. For example, certain software components may only be
available for system testing at particular times of day or year, or organizations may only offer
support for testing for a limited number of days. Failing to confirm that system components
and staff from other organizations are available “on call” for testing purposes may result in
severe disruption to the scheduled tests.

Communications Considerations
The ability to specify and run particular types of non-functional tests (in particular efficiency
tests) may depend on an ability to modify specific communications protocols for test
purposes. Care should be taken at the planning stage to ensure that this is possible (e.g. that
tools provide the required compatibility).

Data Security Considerations


Specific security measures implemented for a system should be taken into account at the test
planning stage to ensure that all testing activities are possible. For example, the use of data
encryption may make the creation of test data and the verification of results difficult.
Data protection policies and laws may preclude the generation of virtual users on the basis of
production data. Making test data anonymous may be a non-trivial task which must be
planned for as part of the test implementation.



6.10 Distributed, Outsourced & Insourced Testing
In many cases, not all of the test effort is carried out by a single test team, composed of fellow
employees of the rest of the project team, working at the same location as the rest of the
project team. If the test effort occurs at multiple locations, that test effort may be called
distributed. If the test effort is carried out at one or more locations by people who are not
fellow employees of the rest of the project team and who are not co-located with the project
team, that test effort may be called outsourced. If the test effort is carried out by people who
are co-located with the project team but who are not fellow employees, that test effort may be
called insourced.

Common across all such test efforts is the need for clear channels of communication and
well-defined expectations for missions, tasks, and deliverables. The project team must rely
less on informal communication channels like hallway conversations and colleagues spending
social time together. Location, time-zone, cultural and language differences make these
issues even more critical. Also common across all such test efforts is the need for alignment
of methodologies. If two test groups use different methodologies or the test group uses a
different methodology than development or project management, that will result in significant
problems, especially during test execution.

For distributed testing, the division of the test work across the multiple locations must be
explicit and intelligently decided. Without such guidance, the most competent group may not
do the test work they are best qualified for. Furthermore, the test work as a whole will suffer
from gaps (which increase residual quality risk on delivery) and overlaps (which reduce
efficiency).
Finally, for all such test efforts, it is critical that the entire project team develop and maintain
trust that each of the test team(s) will carry out their roles properly in spite of organizational,
cultural, language, and geographical boundaries. Lack of trust leads to inefficiencies and
delays associated with verifying activities, apportioning blame for problems, and playing
organizational politics.



7. Test Estimation and Scheduling

For background reading on “Test Estimation” refer to chapter 7 “Test Estimation” of The
Testing Practitioner.

7.1 Introduction
Many managers estimate testing at 10% to 15% of development effort. In reality this is a
severe under-estimate. Figures collected across the industry suggest that the actual test effort
on most projects is between 40% and 50% of development effort. This can remain hidden if
staff are embarrassed by their real estimates and actuals …

Taking this as a starting point you must also consider the risk associated with the project. The
higher the risks the greater the amount of testing is needed. Conversely if you have a stable
component which has been used in the field (proved by use) it may not need to be tested
again, except as part of a regression test.

When estimating allow for:


- Planning and reviewing the plans
- Designing and building the tests
- And reviewing those designs
- Acquiring the test environment and test data
- Reviews and inspections of source documents (requirements, designs)
- Rework arising from these reviews
- Set up of test management
- Test readiness meetings
- Test run (times 3 – because you are testing to find problems so you expect to run the
tests several times)
- Test control and reporting
- Rework of test material
- Rework of test items
- Suspension, restart activities
- Logging and tracking problems
- Wrap up and completion activities
- Management and reporting
- Configuration control and management
- Some contingency for the unexpected problems.

Several estimation techniques exist for test activities:


- Make a work breakdown structure and estimate the components, for example by
means of Wide Band Delphi (bottom-up estimation). Planning Poker cards can also
be used for this.
- Use metrics from other projects and apply them to the current project. For example, on
previous projects the test effort was 40% of development effort, of which 25% was
functional testing and 15% non-functional testing.
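The metrics-based approach is simple proportional arithmetic. A minimal sketch follows; the function name is illustrative and the default ratios are just the example figures from the text, not fixed industry values:

```python
def estimate_test_effort(dev_effort_days: float,
                         test_ratio: float = 0.40,
                         functional_share: float = 0.25,
                         non_functional_share: float = 0.15) -> dict:
    """Apply historical ratios from previous projects to the current project.
    Defaults: 40% total test effort, split as 25% functional and 15%
    non-functional testing (all expressed as shares of development effort)."""
    return {
        "total_test": dev_effort_days * test_ratio,
        "functional": dev_effort_days * functional_share,
        "non_functional": dev_effort_days * non_functional_share,
    }

# For a 200-day development effort:
print(estimate_test_effort(200))
```

The same function can be re-run with ratios taken from your own project history, which is the whole point of the technique.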



Test estimation can be approached either bottom-up or top-down.
Top-down estimation

Top down - from our knowledge of comparable projects we know the phases involved and the
typical breakdown of effort between them:

10% phase 1
10% phase 2
30% phase 3
20% phase 4
30% phase 5

We have a budget of 50 weeks, so we will budget for:

phase 1 - 5 weeks
phase 2 - 5 weeks
phase 3 - 15 weeks
phase 4 - 10 weeks
phase 5 - 15 weeks

Each phase then breaks down into its tasks.

Bottom-up estimation

Bottom up - we break down the project into its smallest tasks, estimate in detail for each task
and build the estimates back up.
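The top-down split is again simple proportional arithmetic. A minimal sketch, with illustrative names:

```python
def top_down_budget(total_weeks: float, phase_shares: dict) -> dict:
    """Split a total budget across phases using a historical percentage breakdown."""
    assert abs(sum(phase_shares.values()) - 1.0) < 1e-9, "shares must sum to 100%"
    return {phase: total_weeks * share for phase, share in phase_shares.items()}

# The breakdown from the example above, applied to a 50-week budget
shares = {"phase 1": 0.10, "phase 2": 0.10, "phase 3": 0.30,
          "phase 4": 0.20, "phase 5": 0.30}
print(top_down_budget(50, shares))   # phase 3 gets 30% of 50 weeks = 15 weeks
```

A bottom-up estimate would instead sum per-task estimates; comparing the two totals is a useful sanity check on both.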



7.2 Scheduling Test Planning
In general, planning for any set of activities in advance allows for the discovery and
management of risks to those activities, the careful and timely coordination with others
involved, and a high-quality plan. The same is true of test planning. However, in the case of
test planning, additional benefits accrue from advanced planning based on the test estimate,
including:
• Detection and management of project risks and problems outside the scope of testing
itself
• Detection and management of product (quality) risks and problems prior to test execution
• Recognition of problems in the project plan or other project work products
• Opportunities to increase allocated test staff, budget, effort and/or duration to achieve
higher quality
• Identification of critical components (and thus the opportunity to accelerate delivery of
those components earlier).

Test scheduling should be done in close co-operation with development, since testing heavily
depends on the development (delivery) schedule.

Since, by the time all the information required to complete a test plan arrives, the ability to
capitalize on these potential benefits might have been lost, test plans should be developed
and issued in draft form as early as possible. As further information arrives, the test plan
author (typically a test manager), can add that information to the plan. This iterative approach
to test plan creation, release, and review also allows the test plan(s) to serve as a vehicle to
promote consensus, communication, and discussion about testing.



8. Test Progress Monitoring and Control
8.1 Introduction
There are many different methods of monitoring test progress, and various activities that can
and should be monitored. Test progress can be measured once the initial high level planning
stage is complete, through lower level plans, test specification and the test execution stages.

When the test planning stage is complete and the risk assessment and coverage
requirements have been decided, a test preparation schedule (plan) can be produced. The
test case, test script and test data production schedule will have owners and completion
dates assigned to the required tasks. This allows testware production rates to be measured
against the plan to ensure that the team is still on target.

8.2 Test Execution Monitoring


The test planning, preparation and production of the test items are carried out prior to the
code delivery. These activities are usually carried out by the test team, often without the other
project areas displaying much interest in the progress being made. Once the code has been
delivered the testing activities are on the critical path, and progress needs to be reported
regularly.

This progress can be monitored in a number of different ways:


• Number of test cases executed as a percentage of all the test cases
• Number of high priority test cases executed as a percentage of all the high priority
test cases
• Use of statistical methods
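The first two measures are straightforward percentages. A minimal sketch, with illustrative names:

```python
def execution_progress(executed: int, total: int) -> float:
    """Test cases executed as a percentage of all planned test cases."""
    return 100.0 * executed / total if total else 0.0

# Overall progress versus progress on the high-priority subset
print(execution_progress(120, 200))   # 60.0
print(execution_progress(45, 50))     # 90.0 (high-priority test cases only)
```

Reporting the high-priority percentage separately matters: 60% overall progress hides whether the riskiest tests have actually been run.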

8.3 Test Progress Reporting


Test progress reports are the means of keeping people informed about test progress.
Test progress reports can take many different formats and layouts and the information
contained within them will depend on the report recipients. Test reports can be tailored to
make them more appropriate to the target audience. The technical content of a report may
well be of interest to the technical project team members but be confusing to business
representatives. What is required is that all the information that the report needs to convey is
contained in the report in a clear and concise manner. A standard progress report template
may contain the following information:

Content of a Test Progress Report


• Report identifier
• Test area/level/phase
• Period covered by the report
• Planned activities for that period
• Status of the planned activities
• Other outstanding activities
• Progress report
• Issues
• Comments
• Planned activities for next period

Test reports can contain updates presented in a number of different formats. Tables can be
used to show percentages or ratios, or progress can be shown graphically.

8.4 Controlling Test Progress


In order to manage the process of tracking the test progress, the controls need to be set up in
advance. The plan indicates the progress that needs to be made in order to meet the required



targets. The actual progress needs to be tracked and compared to the plan/schedule.
Divergence from the plan can then be seen and action taken in order to recover any slippage.
Early identification of slippage gives the best chance of recovery with minimum impact; if you
do not know you are slipping, or how much you have slipped, it is difficult to assess the
situation and take the required action.

Regular meetings should be scheduled in order to track progress against the plan. Any
milestones that are missed, or are in danger of being missed, need to be addressed.

How to deal with slippage


As the test execution phase is the last phase prior to live implementation, the timescales for
the test activity are often put under pressure. The delivery from development into the test
stage often slips while the delivery date remains fixed. This results in less time to execute the
testing than was initially planned. In this situation there are a number of things that can be
done, provided the test preparation was carried out correctly and the testing is under control.
When delivery to test has slipped and there is no longer sufficient time, we can do the
following:
- Defer the live implementation date so that testing can be completed within the
originally planned elapsed timescales. This is not possible if the delivery date is fixed.
- Allocate additional resources so that the required testing can be executed in the
reduced timescales. This is most effective if the test scripts are well written and no
specialist knowledge is required for test execution. If this is not the case, allocating
extra resources can slow progress, as the new resources distract the existing ones
from making the expected progress.
- Execute the higher priority tests first within the time available and provide a risk
assessment of the impact of not executing the lower priority test cases.
- Under extreme conditions, and only with the approval of the customer and project
management, the test completion criteria can be reassessed and an easement
granted.



9. Defect Management

For background reading on “Defect Management” refer to chapter 17 “The Bug Reporting
Process” of The Testing Practitioner.
9.1 Introduction
This section deals with Defect Management as described in IEEE Std. 1044-1993 Standard
Classification for Software Anomalies, and the supporting IEEE Std. 1044.1-1995 Guide to
Classification of Software Anomalies.

IEEE Std. 1044-1993 - Standard Classification for Software Anomalies


IEEE Std. 1044.1-1995 - Guide to Classification for Software Anomalies

Fault report details


The following is some of the information that you may want to consider recording when
producing an incident report:

• Unique incident identifier


• Priority
• Severity
• Short description of the incident
• Full description of the incident
• Type (bug, documentation, test error, date, environment, hardware, other)
• Evidence (screen prints, error messages etc.)
• Repeatable Yes/No
• Incident status (changes through to closure)
• Test case ID and version (retest and ID regression scripts)
• Version of code
• Environment details
• Details of standing and test data
• Details of Test User (test log on and password Ids etc.)
• Test analyst
• Actual date/time
• System date/time
• Test phase/stage introduced and found
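A record like this maps naturally onto a simple data structure. The sketch below covers only a subset of the fields above, and all names are illustrative rather than IEEE 1044 terms:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IncidentReport:
    """Subset of the incident-report fields listed above (names illustrative)."""
    incident_id: str                    # unique incident identifier
    priority: str                       # e.g. urgent / high / medium / low
    severity: str
    short_description: str
    incident_type: str = "bug"          # bug, documentation, test error, ...
    repeatable: bool = False
    status: str = "open"                # changes through to closure
    test_case_id: Optional[str] = None
    code_version: Optional[str] = None

report = IncidentReport("INC-0042", "high", "major",
                        "Login accepted after session timeout")
report.status = "assigned"              # status changes through to closure
print(report.incident_id, report.status)   # INC-0042 assigned
```

In practice a defect-tracking tool provides this schema; the point is that each field above earns its place by supporting reproduction, prioritization or tracking.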

Notes on IEEE 1044.1-1995


This information has been reprinted with permission from IEEE Std. 1044-1993, “Standard
Classification for Software Anomalies”, Copyright 1993, by IEEE. The IEEE disclaims any
responsibility or liability resulting from the placement and use in the described manner.

9.2 Definitions
What is IEEE 1044-1993?
The IEEE Standard for the Classification of Software Anomalies
Dictionary Definition - Anomaly “Irregularity or deviation from rule”.
IEEE Standard Definition of anomaly



“Any condition that deviates from expectations based on requirements specifications, design
documents, user documents, standards etc. or from someone’s perceptions or experiences.
Anomalies may be found during, but not limited to, the review, test, analysis, compilation, or
use of software products or applicable documentation.”


Definitions from IEEE Std. 1044-1993.

Anomaly: Any condition that deviates from expectations based on requirements


specifications, design documents, user documents, standards etc. or from someone’s
perceptions or experiences. Anomalies may be found during, but not limited to, the review,
test, analysis, compilation, or use of software products or applicable documentation.

Category: An attribute of an anomaly to which a group of classifications belongs.

Classification: A choice within a category.

Classification Process: The classification process is a series of activities, starting with the
recognition of an anomaly through to its closure.

Mandatory Category: A category that is essential to establish a common definition and to


provide common terminology and concepts for communication among projects, business
environments and personnel.

Optional Category: A category that provides additional details that are not essential but may
be useful in particular situations.

Supporting Data Item: Data used to describe an anomaly and the environment in which it
was encountered.

What is an anomaly?
The term anomaly has been chosen for its more neutral connotation rather than:
- Error
- Fault
- Failure
- Incident
- Problem
- Defect
- Bug etc.

What are categories?


Category - An attribute of an anomaly to which a group of classifications belongs
- Recognition
- Investigation
- Action
- Disposition

What are classifications?


Classification - A choice within a category
For example, the category Recognition may have the following classifications:
- Product status - e.g. usability
- Project activity - what were you doing (e.g. a review)?
- Project phase - where in the lifecycle was the anomaly found?
- Repeatability - was it repeatable?
- Suspected cause and symptoms, etc.

What is provided?
This Standard provides the following:



- A uniform approach to the classification of anomalies found in software and its
documentation.
- A description of the processing of anomalies discovered during any software life cycle
phase.
- Comprehensive lists of software anomaly classifications and related data items that
are helpful in identifying and tracking anomalies.

What are the advantages of applying the standard?


The aim of testing is to find problems as early as possible in the software development
lifecycle. This encourages the use of methodologies, techniques, and tools to help find
problems sooner. Collecting software anomaly data is necessary to evaluate how well these
methodologies, techniques, and tools work.

9.3 Classification scheme


The classification scheme considers the following:
- The environment and activity in which the anomaly occurred.
- The symptoms of the anomaly.
- The software or system cause of the anomaly.
- Whether the anomaly is a problem or an enhancement request.
- Where the anomaly originated (by phase and document).
- The resolution and disposition of the anomaly
- The impact of several aspects of the anomaly
- The appropriate corrective action.

This data can also help to identify when in a project's lifecycle most problems are introduced.
Anomaly data can also assist in the evaluation of reliability and productivity measures.

By classifying anomalies, they are naturally grouped together by type. This allows easier
manipulation of the data collected in order to identify weaknesses in any area of the
development process.

Classification
How do we classify an anomaly using the standard?

Classification process
The classification process is a series of activities, starting with the recognition of an anomaly
through to its closure. The process is divided into four sequential steps interspersed with
three administrative activities. The steps are as follows:

Step 1: Recognition
Step 2: Investigation
Step 3: Action
Step 4: Disposition

Administrative activities
The three administrative activities applied to each step are as follows:
- Recording
- Classifying
- Identifying impact
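The structure above can be sketched as a simple walk-through: four sequential steps, each interspersed with the three administrative activities. This is illustrative only; the standard defines the steps, not this code:

```python
# The four sequential steps and the three administrative activities
# applied at each step, per the classification process described above.
STEPS = ["Recognition", "Investigation", "Action", "Disposition"]
ADMIN = ["Recording", "Classifying", "Identifying impact"]

def classification_walkthrough():
    """Yield (step, activity) pairs: every step carries all three
    administrative activities."""
    for step in STEPS:
        for activity in ADMIN:
            yield step, activity

pairs = list(classification_walkthrough())
print(len(pairs))   # 12 pairs: 4 steps x 3 administrative activities
print(pairs[0])     # ('Recognition', 'Recording')
```

The sections that follow walk through exactly these pairs: recording, classifying and impact identification for recognition, then investigation, action and disposition in turn.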




Recognition
The recognition step occurs when an anomaly is found. Recognition of an anomaly may be
made by anyone regardless of where in the software lifecycle the anomaly was discovered.

Recording the recognition


The following supporting data item types are recorded:
- Product Hardware details
- Product Software details
- Databases
- Test Support Software
- Platform
- Firmware
- Other



Classifying the recognition
The following important attributes of the anomaly are observed and shall be classified:
- Project activity (Mandatory) – review, coding, testing etc.
- Project phase (Mandatory) – requirements, design, test, live etc.
- Suspected cause – Hardware, software, data, interface, documentation etc.
- Repeatability – One off, intermittent, repeatable etc.
- Symptom (Mandatory) – system crash, input problem, output problem etc.
- Product Status – Unusable, degraded, Affected, unaffected etc.

Identifying impact
The person identifying the anomaly shall record their perception of the impact:
- Severity (Mandatory) – Urgent, high, medium, low, none
- Priority - Urgent, high, medium, low, none
- Customer value – Priceless, high, medium, low, none, detrimental
- Mission Safety - Urgent, high, medium, low, none
- Project schedule (Mandatory) - High, medium, low, none
- Project cost (Mandatory) - High, medium, low, none
- Project risk - High, medium, low, none
- Project quality/reliability - High, medium, low, none
- Societal - High, medium, low, none

Investigation
Following recognition, each anomaly shall be investigated. The investigation shall be
sufficient to identify all known related issues and propose solutions or indicate that the
anomaly requires no action.

Recording the investigation


The following supporting data item types are recorded as part of the investigation process,
and any new information related to the recognition of the anomaly is updated:
- Investigator name and contact details
- Estimated start date of investigation
- Estimated end date of investigation
- Actual start date of investigation
- Actual end date of investigation
- Person hours
- Documents used in investigation
- Other

Classifying the investigation


During the investigation all mandatory categories shown shall be classified; in addition,
classification entries made during recognition shall be reviewed and corrected where
appropriate:
- Actual cause (Mandatory) – product, test system, platform etc.
- Source (Mandatory) – Specification, Code, database etc.
- Type – Logic problem, interface, data, computation etc.

Identifying impact
Previous impact classifications shall be reviewed and updated based on the results of the
analysis.

Action
A plan of action shall be established based on the results of the investigation. The action
includes all activities necessary to resolve the immediate anomaly and those activities
required to revise processes, policies, or other conditions necessary to prevent the
occurrence of similar anomalies.

© 2022 Keytorc Software Testing Services ISTQB AL TM – 60


Recording the action
The following supporting data items are necessary to record the decisions on how to handle
the Anomaly:
- Resolution Identification – items to be fixed
- Resolution Action – fix, test, implement
- Corrective Action – standards, policies, procedures to be revised

Classifying the action


Once the appropriate actions are determined, the mandatory categories shown are classified:
- Resolution (Mandatory) – Immediate, Eventual, Deferred, No fix
- Corrective Action (Mandatory) – Departmental, Corporate, Industry etc.

Identifying impact
Previous impact classification shall be reviewed and updated based on the results of the
analysis.

Disposition
Following completion of either all required resolution actions, or at least identification of
long-term corrective actions, each anomaly shall be disposed of.

Recording the Disposition
The following data items are recorded for each anomaly:
- Action implemented
- Date report closed
- Date document update complete
- Customer notified
- Reference document number

Classifying the Disposition


Each anomaly shall be closed using one of the disposition classifications shown:
- Closed – implemented, not a problem, out of scope, duplicate
- Deferred
- Merged with another problem
- Referred to another project

Previous impact classification shall be reviewed and updated based on the results of the
analysis.
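The four process steps above form a strict lifecycle. As a minimal sketch (the enum and the in-order transition rule are our own illustration, not part of the standard), the flow could be modeled as:

```python
from enum import Enum

# Illustrative only: the four anomaly-handling steps from the text,
# modeled as an ordered lifecycle. The strict in-order transition rule
# is our simplification of the process described above.
class Step(Enum):
    RECOGNITION = 1
    INVESTIGATION = 2
    ACTION = 3
    DISPOSITION = 4

def next_step(current):
    """An anomaly moves through the steps strictly in order."""
    if current is Step.DISPOSITION:
        raise ValueError("anomaly already disposed of")
    return Step(current.value + 1)

# Walk one anomaly through its whole lifecycle
step = Step.RECOGNITION
while step is not Step.DISPOSITION:
    step = next_step(step)
print(step.name)  # DISPOSITION
```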

Supporting data items


Definition - Data used to describe an anomaly and the environment in which it was
encountered.

The standard provides comprehensive lists of data items for inclusion at each step of the
process, such as:
- Project Activity
- Project Phase
- Cause (Actual and suspected)
- Possible source
- Type of anomaly

Classification codes
All supporting data items are assigned a five-character alphanumeric classification code: two
alphabetic characters followed by three numeric digits.

The characters are derived as follows:


- RR for the Recognition Step
- IV for the Investigation Step
- AC for the Action Step
- DP for the Disposition Step

The numbers then identify the specific category and classification.

Project activity
During the recognition step we capture the project activity
- Analysis
- Review
- Audit
- Testing
- Validation
- Etc.

We can now see statistics of our fault finding success rates for these activities.

Project phase
During the recognition step we capture the project phase
- Requirements
- Design
- Testing
- Implementation
- Live.

So we can see at which stage we are finding the faults.

System Attributes
During the recognition step we capture the system attributes of the anomaly:
- Operating system crash
- Input problem
- Output problem
- Total product fail
- Error message
- Etc.

We can now see statistics of the impact of the anomalies.

Anomaly actual cause


During the Investigation step we capture the Anomaly actual cause
- Product
- Test system
- Platform
- Third party
- User
- Unknown etc.

We can now see statistics that may point towards anomaly nurseries.

Anomaly source
During the Investigation step we capture the Anomaly Source
- Specification
- Code
- Database
- Manuals and guides
- Plans and procedures
- Reports etc.

We can now see statistics that may point towards weaknesses in our development process.

Anomaly type
During the Investigation step we capture the Anomaly type
- Logic/computational problems
- Interface/Timing problems
- Data problems
- Documentation problems
- etc.

We can now see statistics that may point towards common problems.

Anomaly resolution
During the Action step we capture the resolution details:
- Software fix
- Update documentation
- User Training
- Etc.

We can now see statistics on the costs associated with each anomaly.

Anomaly corrective action


During the Action step we capture the Anomaly Corrective Action
- Revise process
- Implement training
- Reallocate people/resources
- Create/improve processes
- Etc.

We close the loop and improve our processes to prevent similar anomalies from occurring.

Quick overview
Quick guide to Anomaly management using IEEE1044:1993

9.4 Practical example
Look at the extract from IEEE Std.1044-1993 table 3C. Notice how it helps you to define your
classification.

This information has been reprinted with permission from IEEE Std. 1044-1993, "Standard
Classification for Software Anomalies", Copyright 1993 by IEEE. The IEEE disclaims any
responsibility or liability resulting from the placement and use in the described manner.

Category       Compliance required   Code      Classification
Type (IV300)   Mandatory             IV320     Computational Problem
                                     IV321     Equation insufficient or incorrect
                                     IV321.1   Missing Computation
                                     IV321.2   Operand in equation incorrect
                                     IV321.3   Operator in equation incorrect
                                     IV321.4   Parentheses used incorrectly
               Mandatory             IV322     Precision Loss

The classification codes are designated as follows:

Each of the steps or activities being classified is assigned a two-character alpha prefix
- RR for the Recognition Step
- IV for the Investigation Step
- AC for the Action Step
- IM for the Impact Identification Activity
- DP for the Disposition Step

Three digits, identifying the categories and classifications, follow the prefix. Where further
clarification is needed a decimal number is assigned.

For example, classification code IV321.1 first guides the user to the Investigation Step (IV).
Secondly, the category the classification belongs to is Type (IV300). The type of anomaly is
identified as a computational problem (IV320), and is further identified as an equation
insufficient or incorrect (IV321), and is more specifically defined as a missing computation
(IV321.1).
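The decoding walk-through above lends itself to a small helper. The sketch below is illustrative only: the function name, the regular expression and the returned field names are our own, not defined by the standard; it parses the two-character step prefix, the three digits and the optional decimal refinement.

```python
import re

# Hypothetical helper: decode an IEEE 1044-style classification code
# such as "IV321.1" into its parts, following the scheme described above.
STEP_PREFIXES = {
    "RR": "Recognition",
    "IV": "Investigation",
    "AC": "Action",
    "IM": "Impact Identification",
    "DP": "Disposition",
}

# Two alpha characters, three digits, optional decimal refinement
CODE_PATTERN = re.compile(r"^(RR|IV|AC|IM|DP)(\d)(\d{2})(?:\.(\d+))?$")

def decode_classification(code):
    """Split a code like 'IV321.1' into step, category and classification."""
    match = CODE_PATTERN.match(code)
    if match is None:
        raise ValueError(f"not a valid classification code: {code!r}")
    prefix, category, classification, refinement = match.groups()
    return {
        "step": STEP_PREFIXES[prefix],
        "category": f"{prefix}{category}00",                      # e.g. IV300 (Type)
        "classification": f"{prefix}{category}{classification}",  # e.g. IV321
        "refinement": refinement,                                 # e.g. '1' for IV321.1
    }

print(decode_classification("IV321.1"))
# {'step': 'Investigation', 'category': 'IV300', 'classification': 'IV321', 'refinement': '1'}
```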

9.5 IEEE 1044.1 Guide


IEEE Std. 1044.1-1995 - Guide to Classification for Software Anomalies

What is the aim of the guide?


- To provide support information
- To assist users in applying the standard
- To help users decide whether to conform completely to the standard or just extract
ideas
- To allow users to implement and customize the standard for their own organization

Adopting the process


In order to achieve compliance with the standard, all mandatory process steps must be met
and the mandatory categories classified:
- Project Activity
- Project Phase
- Symptom
- Actual Cause
- Source
- Type
- Resolution

Getting started
Read IEEE Std. 1044-1993. Decide if full compliance is required or just improvements to your
existing process.

Consider the categories that you currently record


- Is classification easy?
- Do we use the data now or could we in the future?

Decide which categories will be mandatory

Deciding the classification


List the classifications to be used under each category
- Add high level classifications first
- Work down to form a classification hierarchy
- Needs to be intuitive and logical

Document the categories and classifications


- What does each mean in your organization?

Identify the data items


Supporting data items complete the anomaly tracking system:
- Text (description, proposed fix, workaround, etc.)
- Identifiers (code version, environment, config, etc.)
- Measurements (dates, time to fix, dev/test time, etc.)
- Pointers (related anomalies, test cases, screen prints, etc.)
- Administrative (owner, fix date, build deadline, etc.)
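As an illustration of how these data item groups might be held together, here is a minimal sketch of an anomaly record. All field names are our own invention, not mandated by IEEE Std. 1044.1:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch only: one way to group the supporting data items
# listed above into a single record.
@dataclass
class AnomalyRecord:
    # Text items
    description: str
    proposed_fix: Optional[str] = None
    # Identifiers
    code_version: Optional[str] = None
    environment: Optional[str] = None
    # Measurements
    date_raised: Optional[str] = None
    hours_to_fix: Optional[float] = None
    # Pointers
    related_anomalies: list = field(default_factory=list)
    # Administrative
    owner: Optional[str] = None

anomaly = AnomalyRecord(description="Total on summary screen is wrong",
                        code_version="2.4.1", owner="test team")
print(anomaly.description)
```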

Planning implementation
- Determine how to incorporate the anomaly tracking system into your environment
- Plan how you are going to analyze the data and distribute the results of that analysis
- Provide training on the scheme to your users and to management

Development methodologies
IEEE Std. 1044.1 contains example classifications for the different development
methodologies and business models:
- Waterfall
- Phased
- Spiral
- DoD (defence systems software development)
- Etc.

The standard advises; it does not dictate. You can implement and control your anomaly
reporting system using any of the following:
- A commercial tracking product
- In house tracking software development
- Paper based system (difficult to analyze data)

9.6 Data analysis


This is the value added!

We can look at each anomaly individually
- This gives us an indication of the quality of the product

We can look at data for an entire project
- This helps identify possible problem areas within a project

We can look at data across several projects
- This helps identify organizational problem areas or industry-wide problems.

How does analysis help?
Four main types of analysis are identified:
- Statistical Analysis
- Project Management
- Process Improvement
- Product Assessment

Statistical analysis
We must have sufficient anomalies logged or the results are meaningless.

Useful analysis would include


- Frequency of occurrence
- Correlation - looking for trends or relationships between categories
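A frequency-of-occurrence count and a crude category correlation can be sketched in a few lines. The records below are invented for illustration; any real analysis would run over the full anomaly database:

```python
from collections import Counter

# Invented anomaly records, each carrying two of the mandatory
# categories (source and type) as plain strings.
anomalies = [
    {"source": "Specification", "type": "Logic problem"},
    {"source": "Code", "type": "Computational problem"},
    {"source": "Specification", "type": "Logic problem"},
    {"source": "Code", "type": "Interface problem"},
    {"source": "Specification", "type": "Data problem"},
]

# Frequency of occurrence per category
by_source = Counter(a["source"] for a in anomalies)
print(by_source.most_common())  # [('Specification', 3), ('Code', 2)]

# A crude correlation check: which (source, type) pairs occur together?
pairs = Counter((a["source"], a["type"]) for a in anomalies)
print(pairs.most_common(1))     # [(('Specification', 'Logic problem'), 2)]
```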

Statistical analysis tools


- There are a number of tools available but some statistical analysis experience would
be useful

Project management
Analysis of the anomalies can help us to:
- Identify the impact of enhancements against the project plan.
- Compare project costs, risks, and impact to the project quality or reliability. This will
help us to make informed decisions regarding fix/no fix impact as we approach the
live implementation date.

Process improvement
By looking at the cause of anomalies we can identify weaknesses in our development process.
Where a large number of anomalies are attributed to the classification “requirements error”,
we may wish to allocate more resource to producing the requirements or introduce tighter
review processes. This is prevention rather than cure!

Product assessment
To err is human
To err repeatedly is stupid
To learn from your mistakes is good
To learn from other people's mistakes is excellence!

In order to assess the product we can provide management with details of anomalies found
and their severity/priority. By analyzing the types of anomalies found we can pinpoint weak
functions or code modules that may give further problems in live running. This may allow further testing
to focus on these areas if time allows. We can also use the anomaly database to show that
our phase input and output criteria have been met prior to promoting the product to the next
phase.

Conclusions
IEEE Std. 1044-1993 satisfies the kinds of anomaly tracking, reporting, analysis, and anomaly
prevention encouraged by the CMMI.

IEEE Std 1044-1993 provides a solid foundation for information required to be tracked for
ISO9001.

IEEE Std. 1044-1993 also satisfies the anomaly tracking and classification requirements of
DoD Software.

10. People skills

For background reading on "People Skills – Team Composition" refer to chapters 23 "Team
building" and 24 "Test career paths" of The Testing Practitioner.

10.1 Experience versus training
We learn from our mistakes

Individual skills are acquired through the learning process. There are two main ways in which
we learn. First, we can learn from our own experiences: what worked well, what didn't work so
well, etc. We then apply our learning to the same, or a similar, problem the next time around in
order to improve the way we do things. Learning is something that we all do naturally; as a
baby, curiosity is the driver, and sensations such as touch, smell, taste, and pain are some of
the information gatherers that feed the learning experience. If you take a large mouthful of
scalding hot tea and burn your mouth, then it is understandable that the next time you have
tea you will remember. The "blow to cool" approach is one solution that you may want to
adopt in order to prevent making the same mistake. So you have learnt how to manage
hot tea!

We learn from other people's mistakes.

The second main way to learn is through training. Training teaches us how to do things
correctly. Training can be delivered in many different forms from talking, reading, mentoring,
etc. through to formal training courses. The training materials will have been written and
produced by “experts” that will have already learnt from their experience. Industry standards
and approaches tend to be derived from the combined experience of numerous people
when addressing a problem, so training gives us access to a wider range of experience. We
all know that two heads are better than one, so the more experience that goes into producing
the training the better that training should be.

To make the same mistake continually is stupid


To learn from your mistakes is clever
To learn from other people's mistakes is excellent

10.2 Individual Skills


The levels of testing as described in the V model have been arranged to give focus to the test
effort, with each level having a specified aim.

Component test – Prove the component works in isolation.
Component Integration Test – Prove components integrate correctly.
System test – Prove that the system functionality is present and correct and that the
non-functional attributes meet the requirements.
System Integration Test – Prove that both system and network continue to function correctly
when integrated.
Acceptance test – Testing by all business parties involved in use of the live system: users,
support, DBAs, etc.

As can be seen the skills required to execute the above levels of testing well will vary
significantly.

To execute component testing well, a tester will need to be familiar with the code, have
experience of design and coding in the correct language, experience of static test techniques
and dynamic white box test techniques, and experience of the appropriate test tools required.

For System/Integration testing, professional testers are usually required. Knowledge of
development lifecycles, risk analysis, planning, test estimation, test specification, test
processes, and test process improvement, are required for structured testing. Knowledge of
white box and black box test techniques, functional and non-functional testing, experience of
dynamic test tools and test management tools and automation, are all required. Experience of
interface testing, test environments, test data, and communications skills are attributes that
will add value to the tester’s skills range.

For Acceptance testing a tester will need in-depth business knowledge, be familiar with the
system requirements, and have an understanding of the principles of testing.

All parties will need to be aware of requirements analysis and prioritization, fault reporting,
change control, configuration management, test management, standards, etc. They will also
benefit from an understanding of risk, and they will need to know how to design and execute test
cases, check results and report on progress.

All of the above deal with specific “technical” skills or experience, but a good tester must also
possess strengths in the so-called soft skills. Testers need to be able to communicate
effectively and efficiently with all parties, be diplomatic, give and receive criticism in a positive
manner, influence people both within, and external to, the test team, and negotiate
successfully when required to do so.

The IT arena is far too complex for any individual to experience everything from project
inception to completion. Testers have been categorized as experienced in the roles of system
testers or UAT testers for some time; now we are starting to see the emergence of test
specialists in areas such as test tools or specific non-functional test areas such as
performance or security.

For further reading on this subject read chapter 22 of the book “The Testing Practitioner”

10.3 Test Team Dynamics


Nothing achieves more than a team of skilled, motivated, individuals working together.
Successful teams rarely just happen; they are formed and grown by careful selection, training,
and management.

Work carried out in the field of test team dynamics has shown that natural personality traits
can be used to group types of people together based on their behavior. Dr Meredith Belbin
carried out research in this field and found 9 distinct role types. Belbin’s work and other
research into teams can be used to conclude that:
- Teams with a Common Purpose achieve Synergy, i.e. 1+1 = 11.
- Activities/Projects have several phases requiring different skills.
- No Individual has everything.
- Belbin identified 9 roles each embodying a subset of those skills.
- During Research, Teams with a balance of Belbin roles outperformed others.
- Put Teams together on a meaningful task and the results can be extraordinary.

Note that Belbin's 9 roles are one of many examples of team-building dynamics. Below we
will also briefly discuss the Myers-Briggs Type Indicator.

Belbin Team Roles


Team-worker
The team’s great conciliator. The one best able to bring agreement by showing empathy with
many different types of people. The person that most people in the team like and trust. On the
negative side, they may avoid making difficult decisions, and often let others treat them like
doormats.

Plant
The natural creator/innovator within the team. The source of many of the team’s best and
most radical ideas, which is often coupled with an unusual sense of humor. This person is
imaginative and solves difficult problems. On the negative side, they tend to disengage if their
ideas are rejected (taking their ball home!), and can’t stop coming out with ideas even when
the project is finished. Other weaknesses for people having this role are they sometimes
ignore details and may be too preoccupied to communicate effectively.

Co-ordinator (CO)
The natural people organizer. The person who gets things done through other people. Tend
to be calm, confident and ‘in control’. Confident chair person, promotes decision making and
delegates well. On the negative side the CO may become manipulative, and is frequently
referred to as a ‘political’ role.

Monitor-evaluator (ME)
Typically the most thoughtful and methodical member of the group. Strategic and discerning,
judges accurately. The person who prevents teams making mistakes, and ensures that
decisions are taken for the right reasons. On the negative side, they can appear to be
negative about new ideas, and hold back progress on trivial points. Other weaknesses for
people having this role are they lack drive and are overly critical.

Completer-Finisher (CF)
The person who ensures that every job is finished properly with no outstanding commitments.
Seek out errors and omissions, delivers on time. Works extremely well to deadlines and is
unlikely to let people down. On the negative side, CF’s can be obsessed by time, numbers
and anything you can measure, have a tendency to be great worriers and are reluctant to
delegate.

Shaper (SH)
The enthusiastic, energetic driver within a team. This person tends to be very good at
formulating and envisioning objectives, and wants to get going straight away. Challenging,
thrives on pressure. Has the drive to overcome obstacles. On the negative side, the SH might
be prone to temperamental outbursts, and readily shows his/her contempt for those who
appear to hold back progress and may thus provoke others or hurt their feelings.

Specialist (SP)
The person who a team of experts turns to for expert advice! A knowledgeable professional
who always knows his brief extremely well, and can provide the best of expert advice. Single
minded, self-starting. On the negative side, SP’s tend not to contribute when discussions fall
outside their area of expertise, and they will often give the technical reasons why a particular
proposal won't work, rather than giving an alternative solution. They can also have trouble
seeing the big picture.

Implementer (IMP)
Disciplined, well-organized, reliable and dedicated to completing work assigned to them. The
person who often provides structure to meetings, projects and administration. Turns ideas into
practical actions. On the negative side, they can be inflexible, and too focused on the current
task to see the bigger picture. They often respond slowly to new possibilities.

Resource-Investigator (RI)
The natural relationship builder. Extrovert, enthusiastic, communicator. This person often
seems to know lots of people, be friendly with many of them, and know where, or to whom, they
should go to get anything. They are good in the early stages of a project, but tend to lose
interest as work becomes more mundane. Other negatives are that they frequently miss
deadlines because of the time they spend chatting to others. Other weaknesses for people
having this role are they may be overoptimistic and tend to lose interest once the initial
enthusiasm has passed.

Conclusion
The study into management teams can easily be extended to cover all teams, and this
material will be useful for discussing strengths and weaknesses in a positive context in any
team. Belbin’s work, by demonstrating that no one can be good at everything, actually makes
it OK to have weaknesses! “Nobody’s Perfect but a Team can be”.

In order to build a team with the correct balance of roles you will need to identify the existing
roles in the test team. Do not just recruit to fill a technical skills need; also consider the
individual's role within the team and whether that individual will complement the existing team
role types or not. A team containing individuals with the various team roles will naturally have
conflict at times, but that is the road to gaining the best solution to a specific problem. If this
conflict gets out of hand it is the manager’s job to resolve the issue, if possible without
detriment to any of the team member’s feelings or standing within the team.

Myers-Briggs Type Indicator (MBTI)


The Myers-Briggs Type Indicator (MBTI) is an instrument that describes individual
preferences for using energy, processing information, making decisions, and relating to the
external world. These preferences result in a four-letter “type” that can be helpful in
understanding different personalities. The personality types exhibit different ways in which
people communicate and how you might best communicate with those types (see the Sticky
Notes for a list of communication strategies for each type and a test to determine your own
type). You may have taken an MBTI test in the past. For those who haven’t, here is a basic
outline of the MBTI’s four preferences. Each preference has two endpoints. As an introduction
(or a review) let’s see how each of these preferences might be reflected when a person is
going to lunch.

The first preference describes the source of your energy—introvert or extrovert. An introvert
draws energy internally, from his own thoughts and ideas. An extrovert draws energy from
interactions with others. The extrovert might ask everybody if they want to go to lunch. The
introvert may prefer having lunch by himself.

How one processes information is the next preference—sensing or intuitive. A sensing person
is visual and fact oriented, while an intuitive person is open and instinctual. When an intuitive
person looks at a menu, he tries to get a general idea of the type of food and the price range.
The sensing person might read every line of the menu before deciding if he wants to eat
there.

Decision making is the third preference— thinking or feeling. The thinking person uses logic
and standards in making decisions. A feeling person is more concerned with feelings and
personal relationships when making a decision. The thinking person might figure out which
restaurant is the closest or which one has the cheapest food. The feeling person might
suggest not going to a particular restaurant because someone in the group recently ended a
relationship there.

The fourth preference —judging or perceiving— deals with how an individual relates to the
external world. The judging individual is organized and structured. The perceiving person is
spontaneous and flexible. The judging person has the lunch date in his Outlook calendar. The
group will be leaving precisely at 12:00. The perceiving individual may make the lunch date
on the spur of the moment. If it is around noon sometime, he’s OK with that. When you’re
communicating with another person, you will have an easier time if you know his MBTI type.
But since most people do not wear their MBTI classification on their lapels (except for some
particular groups), it’s important to appreciate that each of us looks at the world differently.

So how do you communicate when you are a different type from the other person? The first
thing to do is acknowledge those different types. To assume that another person is being
recalcitrant because he wants to do something his way is not a useful step toward
communicating. Adapting your communication processes to appeal to both styles would
acknowledge the legitimacy of the other person's style. Often group decision-making
processes assume everyone will speak out. The extrovert asks, "So what does everybody
think about this? Let’s hear from you.” The introvert may not give immediate answers. He may
be pondering the ramifications of his answer before contributing it. You could try procedures
that may appeal to introverts. For example, instead of asking for oral responses, you could
have everyone write down ideas on index cards or post-it notes. Place the cards up on a
board in clusters of related ideas. Now each person’s voice has been heard.

As a side note to individual brainstorming, I’d like to interject the notion of “throwing the cards
on the table.” In communicating ideas, we often get hung up on our own ideas. We may feel
that we need to promote or protect the ideas that we have suggested. One way to avoid this
is to adopt the concept of “throwing the cards on the table.” After ideas are generated in the
individual brainstorming, the index cards are literally thrown on the table. The originator’s
identity is lost (at least theoretically). Each idea then can be considered on its own merits, not
on the relative status of the originator. The best ideas often are compilations of many ideas
that have been thrown on the table.

Another example of introvert/extrovert interaction is in estimating the time needed to
implement a requirements story. Often an extrovert will start by announcing a value. Introverts
can appear to acquiesce to that value, since they may be internally analyzing that value as
well as others. The Planning Poker game allows introverts to have the means to ponder their
responses. In Planning Poker, each estimator has a deck of cards with various values on
them. To estimate a story, each person selects a card with his estimate and places it on the
table face down. When everyone has chosen a card, the cards are flipped over. If the
individual estimates are far apart, the estimators briefly discuss their differences. The
estimators listen to the discussion to see if it alters their estimate. Each person then selects a
card with his new estimate, and the cards are again flipped over. The process continues for a
few rounds until there is either convergence on an estimate or an agreement that the effort
required to implement the story is difficult to estimate or unknown.
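One round of Planning Poker can be sketched as follows. The card deck is the usual Fibonacci-style one; the "close enough" convergence rule is our own simplification for illustration:

```python
# Illustrative sketch of one Planning Poker round as described above.
CARD_DECK = [1, 2, 3, 5, 8, 13, 21]

def reveal_round(estimates, spread_limit=1):
    """Flip the cards; return True if the estimates have converged.

    Convergence here means all chosen cards sit within `spread_limit`
    deck positions of each other - a simplification for illustration.
    """
    positions = [CARD_DECK.index(e) for e in estimates]
    return max(positions) - min(positions) <= spread_limit

# Round 1: estimates far apart -> the estimators discuss their differences
print(reveal_round([2, 8, 3]))   # False
# Round 2: after discussion the estimates converge
print(reveal_round([3, 5, 3]))   # True
```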

There are many other areas in which differences in styles may lead to problems in
communication. For example, in many agile environments, story cards capture requirements.
Each card includes a brief description of the requirement and an estimate of the effort to
completely implement the requirement. Judgers might want pre-printed templates for these
story cards and a check that each card has been completely filled in. Perceivers may be
happy with blank cards. The brief description can satisfy intuitive people, while sensing
people may want more details recorded on the cards. The differences between the styles of
judging types who prefer exactness and perceiving types who are comfortable with
inexactness often emerge with different ways to track time estimates and progress on the
completion of the story card implementation. The time needed to implement a requirement
story is commonly estimated in story points. Story points represent the relative effort to
complete a story rather than absolute time. To emphasize that story points do not correspond
to exact times, the values they can take are usually limited to those in a Fibonacci series (i.e.,
1, 2, 3, 5, 8, 13, …). The time to complete a story is estimated based on velocity (approximate
number of story points that can be implemented per iteration). While perceiving types are
comfortable with that inexactness, judging types may prefer actual day or hour values. During
the planning for an iteration, the tasks required to implement each story are typically
estimated in hours. Having two levels of estimates—story points for rough estimates and
hours for detailed estimates on tasks—can help satisfy both personality types.
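The two estimate levels can be shown with a small worked example (all numbers invented): story points plus velocity give the rough view, and task hours give the detailed view.

```python
import math

# Rough view: story points and velocity. Values invented for illustration.
story_points = {"login story": 5, "report story": 8, "search story": 3}
velocity = 8          # approximate story points completed per iteration

total_points = sum(story_points.values())            # 16
iterations_needed = math.ceil(total_points / velocity)
print(iterations_needed)      # 2 iterations - the rough, story-point view

# Detailed view: during iteration planning the tasks get hour estimates
tasks = {"build login form": 6, "wire up auth": 10, "login tests": 4}
print(sum(tasks.values()))    # 20 hours - the detailed, task-level view
```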

Progress of story completion can be communicated in ways that suit both intuitive and
sensing types. Agile teams commonly track progress of stories on a large board called the
storyboard. The movement and location of the requirement story cards on the storyboard
demonstrates the progress. An intuitive type can get a picture of progress with just a quick
glance at the board. Sensing types typically prefer seeing a numerical tracking mechanism,
such as a spreadsheet. Using the spreadsheet, they may create intricate measures of
progress using graphs or formulas. Updating a storyboard and entering the details into a data-
manipulation program usually can satisfy the communication needs of both types.

10.4 Fitting testing within an organization
There are many factors that will contribute to the organizational structure of testing within a
company. Company size, departmental organization, number of employees, industry,
business, budgets etc. will all need to be considered when planning a test strategy.

The test function is critical to the successful production of quality software. The higher risk
solutions will require a greater degree of test coverage and therefore will also require more
resource/time in order to plan and execute the required tests. Delivery of reliable, quality
solutions that are usable and fit for purpose is always the aim but we must work within the
constraints of the resources available. There is also a relationship between the quality of the
delivery and the level of independence of the test function: industry experience suggests that
the greater the level of independence, the higher the quality of the delivery. Here are some
examples of test structures and some of the advantages and drawbacks associated with each:

Basic test approach


In a very small company, all required testing may be carried out by the developers alone.
They will carry out each level of testing from component and integration in the small, system
testing, integration in the large, and even acceptance testing. This can be done with each
developer taking part or with a designated developer taking responsibility for all the required
testing related to a project or product. This concentrates a high percentage of the required
skills in one or two individuals, which carries an inherent risk should one of those individuals
become unavailable for any reason. Also, with no independent test function, testing may not
be as thorough or effective, since two heads are always better than one.

Small company, few employees, limited resources


In a small company with fewer than 20 employees and, say, 6 developers, the company may
adopt the ‘conscience’ testing approach. Each developer will be responsible for component
testing their own code, with a senior developer carrying out integration and system testing.
The code may then go straight to the customer for Acceptance testing. This can work well for
small teams co-located as long as communications and team working are close to optimum.
This can reduce customer confidence, as all faults that would have been found during the
independent test phase are visible to the users.

Independent test team


This is the usual set-up where a structured development process based on the V-model
development lifecycle is in place. The development team carries out formal component
testing and integration in the small, then releases the delivery to the test team. Entry/exit
criteria are in place between the test levels, and the delivery is usually signed off by the
development team and accompanied by a release note, build statement, installation and
back-out instructions, and a component test report. The independent test team is
responsible for system testing (functional and non-functional) and integration in the large,
and may assist the users with acceptance testing in certain instances. In larger projects
there may be numerous teams controlling system or integration testing on a number of
different systems/deliveries for the same project. Independent testing tends to formalize the
process and prevent low-quality or untested deliverables from reaching the live environment.

Internal test consultants


The use of internal test consultants can be very valuable to an organization for whom the
development and test process is either new or in need of improvement and formalization.
The documented development models are fairly well defined; however, implementing the
required working practices and processes can be difficult and time consuming. For a smooth
adoption of the new working practices, the changes need to be well planned and carried
out as smoothly as possible. Using onsite consultants to assist and guide the organization's
management, teams, and business areas on the how, why, and wherefore is very important.
Change is always adopted more readily if it is explained in detail, the reason for it can be
justified, and the benefits can be identified. This is learning from the consultants' experience:
they guide us so that we do not fall victim to the known problem areas when implementing a
formal test and development process. Part of the plan must include the phasing out of the
consultants as the skills transfer takes place, so that the organization adopts the required
practices as a ‘business as usual’ process that can stand on its own two feet.

Independent test organization


Where independent test organizations are used, the test process is at its most formal. There
will be strict quality gates between the levels of testing that must be met. This way of
organizing the test function tends to be reserved for high-priority or safety-critical
developments, due to the costs involved in producing high-quality and complete deliveries.
The fact that a third party will scrutinize the complete delivery (code, user guides,
instructions, documentation, requirements etc.) tends to drive up quality from the outset.
It also ensures that project closure is properly carried out, i.e. all outstanding faults are
accounted for, all design documentation changes are propagated through the baseline
documents, and post-implementation review actions are dealt with. These are often the
things that get left behind as the delivery goes live and the next project arrives. When
independent testing is employed, everyone knows that they have to work to the required
level of quality or there will be comeback; but for this to be done properly, sufficient time
must be allocated in the plan for the tasks to be completed to the required level.

Testing relationship with other project areas


The test function needs to talk to everyone involved in the project, from the requirements
stage through to post-implementation. If it is project related, we need to know about it.
- Testers and developers
We are different functions within the same delivery team; we either succeed or fail
together, so let's acknowledge that fact.
- Testers and project managers
The project manager is our source of project information, time, and resource. We
need to work closely together to ensure that we have the balance right, so that we
can do a quality job with the allotted resource, within the allotted time.
- Testers and users
The users are our customers; they are the reason IT exists. The IT delivery team is a
service provider to the business: we deliver the systems that allow them to carry out
their business in a more efficient and reliable manner. If this is not true, then expect
the business to seek this relationship elsewhere, from someone who can deliver. Talk
to the users, ask for their help in delivering the solution they really need, educate
them about testing and its functions, and show them the benefits.

Any member of the project team should always feel comfortable coming to talk to the test
team. Testing needs everyone's input and help to do the job well, so let's break the ice, listen
to what they say, be friendly and supportive, and get the job done together.

10.5 Motivation
In general, people like to do their job well. Job satisfaction is usually high on everybody's
employment wish list. People are motivated by different things; however, there are some
constants that apply to nearly everyone: recognition, respect, and feeling valued are very
strong motivators in the workplace.

People need to know what work they are doing, what their role is, how they fit into the big
picture, and what they need to do in order to progress in their chosen career (career path).

Recognition for testers is achieved when the other project areas realize the value that
testing adds to the project. Anyone can talk about meeting requirements and specifications,
achieving higher quality, fewer faults, quicker time to market, lower cost etc., but testing is
one of the few areas that can actually make it happen. Through early involvement, the test
team can encourage early project communication and the very effective early project reviews.

Read chapters 22 and 23 in the book for further information on the above topics.

10.6 Interview questionnaire production guide
Introduction
Ever since cavemen first ‘employed’ tools, we have striven to find a better (the best!) tool
for the job. The same is true for companies in their search for the right employees. To be
successful we need the right people, with the right skills, working comfortably in a happy
team. It is strange, then, that we often interview and hire on the basis of an individual's
image, preconceived perceptions, and general requirements not necessarily applicable to the
position on offer. Job descriptions often read like a printout from a buzzword generator:
self-starter, excellent interpersonal skills, a motivated individual, able to work without
supervision, to highlight but a few. These are all subjective statements and are impossible to
validate in an objective manner during the recruitment process. Job titles can also be fairly
intangible, as they often bear little or no resemblance to the actual tasks carried out in
that role. The interviewer often starts out thinking they know what specific skills are required
for the role, but ends up making important decisions based on subjective matters, opinion, and
personal likes and dislikes. Other than the customer, what can be more important to a
company than hiring the right people? The best way to ensure that you can deliver to your
customer is by having the right people in place.

This guide is intended to help in production of a questionnaire aimed at taking the subjectivity
out of the interview process and replacing it with a scoring process to enable comparison of
interviewees across the full range of requirements for each post. This will involve analysis of
the role in question, the technical skills required, the soft skills required, the level of
experience required, and the existing team dynamics. Each of these aspects will carry a
different weighting factor, as the requirements for each role are different.

This guide will explain how to prepare the required information for input into a test paper for
completion by the candidate. There will be a self-assessment section and a questions and
answers section, which will cover the following topics:
• Technical skills
• Personal experience
• Soft skills
• Team dynamics

This question paper is not intended to replace the traditional interview technique which is a
valuable tool in assessing any individual, but can be used in addition to interviews to assist in
the employment process where required.

How to produce the questionnaire


Self-assessment
The self-assessment section of the form gathers details of the candidate's actual knowledge
and experience in the required skill areas. This form can be sent to candidates prior to
attendance at an interview as a means of shortlisting interviewees. People tend to be harder
on themselves when assessing their own knowledge and experience of a particular subject.
A CV can be misleading: CVs are often pumped up in certain areas and can be economical
with the truth. Any father can claim to have been involved with, and experienced, the birth
process, yet they haven't really had a baby, have they?

Technical skills
List the technical skills required for the post.
Be specific not general.
For a System Test Analyst role in the technical skills section you may want to include:
- Relevant coding languages
- Relevant automated test tools (capture replay, performance)
- Relevant tools such as fault reporting, CM tools, change management, etc.
- Relevant desktop PC packages and applications
- SQL, COBOL etc.

You can mark these as essential or as nice-to-have if you wish. All essentials will be
mandatory for the candidate to be successful.

Experience
List the specific experience that you are looking for. For a test coordinator you may want the
following experience:
- Knowledge of the V model development lifecycle,
- Knowledge of black and white box testing techniques,
- Knowledge of ‘Prince’
- System test and integration testing experience
- Progress tracking and reporting
- Risk identification, analysis, and management
- Test Planning
- Test environment management
- Test team management
- Experience of relevant hardware or operating systems
- Related business knowledge i.e. Unit Trust Investment
- Experience using third party products or deliverables

Do not ask for things that are not required for the role.

Skills and experience questionnaire


Section 1. Technical Skills

Technical questions
For questions of a technical nature, assistance may be required in producing the questions;
request it from other project areas as needed. The questions should be aimed at proving
that the candidate has the skills required for the position and claimed on their CV and the
self-assessment form.

Problem Solving Skills


You may wish to include some questions of a problem-solving nature to establish the
candidate's ability to think through problems and arrive at a solution (some examples are
provided at the end of this section).

Section 2. Personal Experience

Testing knowledge
To establish the candidate's testing knowledge, you may wish to include a section on testing
awareness. This will confirm (or not) the level of skill and experience that the candidate is
claiming on their CV and in the self-assessment paper. This section can be waived if the
candidate holds the required ISEB qualification in software testing. The required questions
can be drawn from the Foundation or Practitioner syllabus, as appropriate to the role in
question.

Section 3. Soft Skills

Any of the following skills could be relevant to the role. Select questions to cover the required
areas. Questions must be appropriate to the role.
- Leadership
- Team working
- Communication Skills – Presentation, oral and written skills.
- Negotiation Skills and influencing people
- Time management and personal effectiveness
- Appraisal and Counseling
- Managing confrontation
- And many, many more

Section 4. Team dynamics
Natural Characteristic Traits
To establish whether the candidate has the characteristics required to complement the
existing team, you need to be aware of the characteristics of the people already in the
team. To use Belbin, all existing team members must take the Belbin test; this gives the
team profile. By analyzing the team results it should be clear where the existing team's
strengths and weaknesses lie. A balance of strengths within the team is the goal.

Weightings
The weighting for the technical skills, experience, and team characteristics should be
completed prior to the assessment form being used. These weightings will vary between roles
based on factors such as:
- Position within the organization (management, team leader, worker etc.)
- Permanent role or contract (hire for attitude, train for skills)
- Nature of the role (technical specialist, consultant, clerical, support etc.)
- Timeframes (is expertise required now, or can we train?)
- Existing team (existing team dynamics: complement or antagonize?)

If an expert is required to assist a project in a technical area for a short period of time, then
the technical skills and experience would be expected to outweigh the team dynamics
attribute.

For a junior testing position, team dynamics would be expected to carry a higher weighting
than skills or experience. With the right attitude and a smooth transition into the team, the
junior tester will gain the skills and experience quickly, assuming the right training and
mentoring are given.

For a senior position or management role a balance of experience, team dynamics, and
technical skills would be required. Do not forget that the technical skills list is job specific, and
for a management role may include, project planning, resource management, presentation
skills, performance monitoring and counseling etc.
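As a sketch of how such weightings might be applied, the hypothetical example below combines per-category scores into a single comparable percentage. The category names, weights, and scores are all illustrative assumptions, not prescribed values:

```python
# Illustrative candidate-scoring sketch. Each assessment category gets a
# raw score out of 10 and a weighting that reflects its importance for
# the specific role. All numbers below are hypothetical examples.

def weighted_score(scores, weights):
    """Combine per-category raw scores (0-10) into a single weighted
    percentage so candidates can be compared on the same scale."""
    total_weight = sum(weights[c] for c in scores)
    weighted = sum(scores[c] * weights[c] for c in scores)
    return 100.0 * weighted / (10 * total_weight)

# Hypothetical weighting for a junior tester: team dynamics outweighs
# technical skills and experience, as suggested in the text above.
junior_weights = {
    "technical_skills": 2,
    "experience": 1,
    "soft_skills": 3,
    "team_dynamics": 4,
}

# Hypothetical candidate: strong on soft skills and team fit,
# weaker on technical skills and experience.
candidate = {
    "technical_skills": 5,
    "experience": 3,
    "soft_skills": 8,
    "team_dynamics": 9,
}

print(f"Overall score: {weighted_score(candidate, junior_weights):.1f}%")
```

Setting the weights per role before any candidate is assessed keeps the comparison objective, in line with the guidance above on deciding weightings in advance.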

Conclusions
There are many organizations that can provide assessment materials for establishing an
individual's abilities: for team dynamics there is Belbin, for IQ testing there is Mensa, and
there are also websites that contain tests for many different types of assessment. Call on
these where necessary to help you identify the qualities that are key to the position that
you are trying to fill.

The key is to identify the skills and qualities that you are looking for in advance.
Ensure that the questions are complete and appropriate. Produce a questionnaire that allows
these qualities to be measured. Set your required achievement levels in advance to prevent
taking the best of a bad bunch.

10.7 The Play

HOW TEST MANAGERS TALK


AND
WHY PROJECT MANAGERS DON’T LISTEN
Introduction
The popular book called “Why men don’t listen and women can’t read maps” has inspired this
presentation. The book describes how communication breakdown can occur within
relationships. This presentation consists of five project meetings between three people: the
Project Manager (David), the new Test Manager (Chris) and the Development Manager
(Erik). In each scene, we would like you to observe how the test manager talks, and why the
project manager doesn’t seem to get the message.

Scene 1: Project Initiation Meeting (3rd week October)

David Well we come now to the point in this meeting where we are to discuss the delivery of the
next project to the customers. Erik, how long will it take Development to develop and hand
over the system?
Erik Well it’s a challenging system. There are 3 major features which will each take 2 weeks to
develop and unit test. There are a number of minor features which will take us another 13
days to develop. So we reckon we can finish development in 2 months, so that’s when
hand-over will be – the 3rd week in December.
David OK. Chris – how long do you and your team need to test this application?
Chris Well, based upon the scope of the requirements and the new technology being used we
believe that we need 2 months also.
David WHAT! The same amount of time as Development – I don’t want you to build the system
again – just test it!
Erik Yeah, that’s ridiculous. Anyway there won’t be much testing to do – we have a great team
and we do great work.
David Come on Chris, what’s a realistic estimate?
Chris Well we might be able to do it in 6 weeks
David Look, we need to get this software out by the end of January – that gives your team 4
weeks even if we don’t count the Christmas period, which should be plenty of time.
Chris Well, maybe. But it’s really important that the Development team must hand over the
software fully complete on the date that they’ve promised.
David Development have promised to deliver the software in 2 months – I am sure they won’t let
you down
Erik Of course we’ll deliver the system when we said! Don’t you trust us? We’re absolutely
confident about that delivery date. We might even be early – after all, we will be using that
new XP agile Java development environment.
Chris 4 weeks is very tight – I am not sure that we can complete the testing unless we work extra
hours. And as for the new XP agile Java development environment…
David Well that’s settled then. Do thank the team from me for their willingness to go the extra
mile.
Erik (to David) Those testers are always complaining, aren’t they?
All A few moments later, on the telephone…
David {telephones MD…}
Yes John, no problem. We’ll get the system delivered in January. The test team is very
happy to put extra hours into the project – unpaid. So I think we can implement this system
on time…
Erik <on phone> Hi guys. It’s OK – I didn’t even have to offer to shorten the two-month
development, so we can relax a bit now.
Chris {to himself…}
There is no chance that we’ll get this system tested in 4 weeks and the team is definitely
not going to like working extra hours.
<Picks up the phone> Well guys, I have some bad news and some really bad news…

Scene 2: Hand-over meeting (3rd week of December)

David Welcome to the handover meeting. Erik, have you now handed over the completed
system?
Erik Well the progress we have made to date has been really good, and the software is almost
complete. There were a number of problems that were outside our control, but we have
now handed over most of the software.
David How complete is the system?
Erik About 90% complete. The 3 major features are all complete, there’s just a little bit of work
to do on the additional minor features. I suggest the test team starts testing that now as a
phase 1 delivery and we’ll deliver the rest in a few weeks.
Chris A few weeks! We only have 4 weeks to complete all our testing. You promised to give us
the whole system. {sarcastically} And what about this new XP agile development
environment then…
Erik Well you can start by testing the major features which are ready now, and finish testing
these few minor features in the last week.
Chris That’s not possible. We need more than a week to test the system as a whole once you’ve
given us all of it.
David Well you do have a point there Chris. Erik, do you think your team could deliver phase 2 of
the system in 2 weeks rather than 3?
Erik Possibly, though we won’t be able to do as much of our own testing as we would like.
David That’s OK isn’t it Chris? Your team will test it for them. What about doing some
“Exploratory Testing”, I am told that this should save time.
Chris David, I am not comfortable with this situation. The developers said they would deliver the
whole system, and it would have been properly unit tested before they gave it to us.
Erik <defensive> Look, we had problems affecting this development that no-one anticipated –
it’s not our fault. Anyway we’re not that late, and we HAVE tested the parts that we have
finished and they are OK!
David Now, Chris don’t be pessimistic – we will achieve the end date if we all pull together
Chris <becoming angry>
I am not being pessimistic – just stating some facts. We always seem to end up in this
situation – testing time being squeezed. Corners are being cut – Development using our
time as their contingency…
Erik Now wait a minute Chris, that’s not fair. We’ve been working really hard to produce a really
good system and all you do is complain. You never said anything about this earlier – why
are you being so antagonistic now? Why don’t you grow up and just learn to cope!
David Ok, let’s draw this meeting to a close – Erik you said you can deliver Phase 1 now and
Phase 2 in 2 weeks’ time. Chris I want you to start testing on phase 1 and then take the
second phase when it is ready. Let’s meet in January to discuss progress. I am off on
holiday tomorrow for 2 weeks - Merry Christmas everyone.
All A few moments later, on the telephone…
David {telephones MD…}
Hi John, yes everything is going fine – system has been handed over to the test group. It
just needs some final tests run on it…
Erik <on phone to Dev team> No it’s ok we don’t have to test the system. The Project Manager
has agreed that the testers can do all of that…
Chris <Picks up the phone to the test team> Hi team, I have some bad news and some really
bad news. Firstly we will not be receiving the whole system today, but must start running
our tests tomorrow. We have been promised the rest of the system in two weeks’ time.
And secondly is it possible for anyone to come in over Christmas – you won’t get paid, but I
will really appreciate it!

Scene 3: Progress Meeting (2nd week January)

David Welcome to the progress meeting. I really hope you have some good news for me. Erik?
Erik Well, we are ready to deliver phase 2 of the project to the test team. It took a lot of effort
but my team has really come through – they’re a great bunch. They even came in during
the Christmas break. The users are going to love this system.
David That’s great. What about the testing, Chris?
Chris Yeah, we saw you on the one day you came in, and only stayed for 2 hours – the test team
worked a lot over Christmas and we’re not happy about it.
David Enough complaining, Chris. I hope you have done good work in the time and produced a
good quality release for us. How was the Exploratory Testing?
Chris Yes it was exploratory enough – I felt like Neil Armstrong stepping onto the moon for the
first time. No one had been on the moon before him and no-one had been in the system
before us!!
David What do you mean?
Chris This release is really bad. We found loads of bugs in just the first few days of testing the
major features. Most of them are showstoppers. I don’t want to take Phase 2 if it’s as bad
as Phase 1.
Erik What do you mean by loads of bugs - we only have 2 outstanding bugs which my team is
currently working on.
Chris That’s not true. You and your team keep sending them back as ‘can’t recreate’ – but they
are still problems – we have just re-raised 30 of them.
Erik Well, that’s going to make sure we get further behind! What you’ve raised aren’t software
bugs – they are problems with your environment. They work OK in our environment.
David Do you have some examples of these bugs?
Chris Well… no, not on me, I can't remember the exact details.
David I suggest you have a look at your environment and data before raising any more bugs.
Chris We are not creating these bugs – we're just finding them!
David Why are you finding all these bugs anyway? Surely your job is to show the system works
ok. Anyway, why didn’t you raise this issue earlier?
Chris I left you messages, but you never got back to me. You weren’t available over Christmas at
all. Hang on a minute, our job is to find bugs and test as much as we can.
Erik Well what about that new automation tool – why don’t you use that during the next 2 weeks
to speed the testing up?
David Yes, that’s a good idea – I’d like to see the test automation tool running every evening
Chris We don’t have time for that now. According to this test automation book I’ve started
reading (I got it for Christmas) …
Erik So you had chance to read an Automation Book – you could have spent that time testing…
David Look, the deadline is in 2 weeks’ time. We have to meet that deadline. It’s up to the two of
you to work together to make sure that we do.
Erik I think we can – so long as Chris’s team doesn’t keep raising unimportant and
irreproducible bugs that just waste our valuable time.
David OK, that sounds good to me – is this ok with you Chris? It’s really important that we meet
the deadline.
Chris Yes I guess so.
All A few moments later, on the telephone…
David {telephones MD…}
Hi John, yes happy new year to you too. I had a great Christmas – thank you.
I am pretty confident that we shall meet the deadline – you’ll be pleased to know that all the
team were in over Christmas.
Yes I’ll have a drink with you tonight to celebrate…
Erik <on phone to Dev team> Hi guys. Yes the progress meeting went well. Thank you for
closing those issues just before the meeting. Any chance of closing a few more? Some of
them must be duplicates…
Chris <Picks up the phone to the test team> Hi team, I have some bad news and some really
bad news. Firstly we will not be paid for the overtime we worked - sorry
And secondly the deadline cannot be moved!

Scene 4: Release Meeting (end of January)

David Well we have all done really well – we have made the deadline, well done team! All we
need to do now is to get your sign-offs for the audit records. Erik has already signed.
Chris?
Chris Hang on boss, there are still 11 high severity and 9 high priority issues outstanding. We
can’t sign-off yet, we said we wouldn’t implement with any high priority or high severity
bugs outstanding.
Erik Oh come on. You testers are always crying “wolf”. Most of those are not “high” – I am sure
we can reduce them to medium or low priority
Chris And what about that bug we found yesterday that stops us from logging on?
Erik Oh that again – we know about that. I’ve got somebody on to it. It won’t take long to fix it.
It’s not important. It’s certainly not high priority, it’s low priority.
David Will you two please stop arguing about these details? We’re here to sign off this system
and to celebrate the end of the project.
Erik Yeah, I think it’s been a great project. By the way, my team came in last night and made a
few minor changes so we have a new release of the system which is the one that should
be shipped to the customer
Chris Hang on – you can’t do that! We haven’t tested that at all! I suppose you expect me to run
our regression tests on this new release in half a day?
Erik No that won’t be necessary – they were only minor bugs and they won’t affect anything –
trust me! Anyway if you had been using your test automation tool, it would only take you 10
minutes to test it all.
David Chris, I have prepared a release document for you to sign. You are surely not going to
delay the project because of a few minor problems, are you?
Chris I am not happy with this. I predict that there are still around 25 high priority and 35 high
severity bugs to find – the users will not be happy with the system
Erik How can you possibly predict bugs you haven’t found? That’s just being pessimistic. Why
can’t you be a team player and be positive?
David Chris, you know we need to release this system today, so are you going to sign it off or
not?
Chris <reluctantly>
Well OK, I guess I will. But I’m not happy.
All A few moments later, on the telephone…
David {telephones MD…}
John – good news, Chris has signed the system off. By the way when will my bonus
appear in my salary?
Erik <on phone to Dev team> Hi guys, excellent news – the test team have just signed the
system off. We need to fix that problem with the log-on. Can you do it by lunchtime?
…excellent!
You can all have the afternoon off – you deserve it!
Chris <Picks up the phone to the test team> Hi team, I have some bad news and some really
bad news.
Firstly we have signed the system off and secondly we have another build tonight.

Scene 5: Post Project Review (end of February)

David OK, team, this project has not been as well received by the users as it should have been.
There have been complaints about a large number of faults found in operation by the
users, and they are not being fixed quickly enough.
Erik Well, we knew all along that the testing wasn’t up to scratch. How could you have let all
those faults slip through into production? Quality is your responsibility.
Chris What? We didn’t put the faults in – you did! What do you mean; the testing wasn’t up to
scratch? We worked so hard under really difficult circumstances. We did a good job of
testing the system.
Erik You’re only bragging – you have no way of being able to tell whether you did a good job of
testing or not. I don’t think you did. I think you’re trying to cover up your bad testing.
Chris We did do good testing! I can’t prove it, but we did. Anyway we didn’t get a chance to test
the system very well because you delivered it so late! And you put in all those last-minute
changes! And you didn’t fix the faults we had found!
David Now, now, let’s not rake up the past. We’re here to solve the problem we have at the
moment, not to hurl insults at each other. Let’s have some constructive suggestions for
how we can move forward.
Erik Good idea, boss. Well, I think I will be able to get my team to put in some extra effort by
working a couple of weekends, if the pay is right. As you know, they are already very busy
developing the next system.
Chris Developing the next disaster, more like! Anyway, the testers put in loads of overtime and it
wasn’t paid. What about us?
David Well, good developers are hard to find, and we wouldn’t want any of them to leave, so we
will have to pay overtime if we expect them to work overtime.
Erik OK, thanks boss. We should be able to help turn around this situation within a couple of
days, and get this backlog of bugs fixed.
David Thanks very much Erik. Your efforts are much appreciated.
Now there is one other thing I need to discuss with you, Chris. The users are complaining
about the number of faults they have found and questioning whether we did any testing at
all on this system!
Chris Of course we did! We did lots of testing! We found lots of bugs during the testing!
David Well, I need to go back to them and provide some assurance that this situation won’t
happen again. How could you let so many bugs get out into production?
Chris We did the best we could, honest! But there were so many last-minute changes, and the
system wasn’t delivered when it was promised. If only you’d given us enough time to test it
properly…
Erik You can’t say that now – you agreed to the timescales. You’re just trying to cover up your
incompetence. I really don’t know why we have a test team at all.
David Well, Erik, the reason we have a test team is to ensure quality, but it doesn’t seem to have
worked this time. I hope the test team will do better next time, Chris.
All A few moments later, on the telephone…
David {telephones MD…}
Hi John, yes yes – I know you have had some complaints about the quality of the system –
I have spoken to the Test Manager about this. The Developers have offered to fix the
problems in the next couple of days.
By the way – this won’t affect my bonus will it?
Erik <on phone to Dev team> Hi guys – you know you wanted some extra cash, well I have
managed to have paid overtime authorized for you all for the next two weeks…
Chris <Picks up the phone to the test team> Hi team, I have some bad news and some GOOD
NEWS.
Firstly the customers are complaining about the quality of the software and the testing that
we have done!
The good news is – I AM RESIGNING!
© 2022 Keytorc Software Testing Services ISTQB AL TM – 81
11. Reviews

For background reading on “Reviews” refer to chapters 8 “Formal Review Types” and 10 “Making inspections work” of The Testing Practitioner.

There are a number of ways to improve a document. Reviewing documents, or other software
elements, is a commonly used process, defined by IEEE as: “an evaluation of software
elements or project status to ascertain discrepancies from planned results and to recommend
improvements”.

11.1 Why reviews and what can be reviewed


The two main reasons for conducting any type of review are, first, the high percentage of
defects that are introduced before a single line of code has been written and, second, the
impact of these early defects. Note that, strictly speaking, the term fault can be used here
instead of defect. For the sake of clarity, the term defect is used throughout this review
reader.

Early defects often multiply top-down. A single defect in a requirements document can lead
to multiple defects in a design document, which in turn can cause further defects in code.
Moreover, the cost of reworking these defects grows exponentially. As early as 1981, Barry
Boehm described in his book “Software Engineering Economics” that a defect in a
requirements document could be solved with five minutes of rework. If not found that early,
the resulting defect in the software product could lead to hours of rework. That is, if the defect
is found before the product is shipped to the customer. A defect found after release can
cause serious costs, in addition to the embarrassment and possible damage to the company’s
image.
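The exponential growth in rework cost described above can be made concrete with a small sketch. Note that the phase multipliers below are illustrative assumptions chosen for the example, not Boehm’s published figures; only the five-minute requirements fix comes from the text.

```python
# Illustrative sketch of the exponential cost-of-rework effect.
# The multipliers are assumptions for illustration, not Boehm's exact data.

BASE_FIX_MINUTES = 5  # fixing a requirements defect early, per the text

# Assumed relative cost multiplier per phase in which the defect is first found.
PHASE_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 50,
    "production": 200,
}

def fix_cost_minutes(phase: str) -> int:
    """Estimated rework in minutes if the defect is first found in `phase`."""
    return BASE_FIX_MINUTES * PHASE_MULTIPLIER[phase]

for phase in PHASE_MULTIPLIER:
    print(f"{phase:>12}: ~{fix_cost_minutes(phase)} minutes of rework")
```

With these (assumed) multipliers, the same requirements defect that costs 5 minutes to fix early costs roughly 1000 minutes once it reaches production, which is the shape of the curve the text describes.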

With this in mind, it is obvious that the objective of a review is not just to find defects, but to
find them as early as possible in the life cycle and to remove their causes from the
development process.

11.2 Types of product reviews


Formal versus informal review
Reviews come in a variety of flavors and can be divided into formal and informal reviews. In
an informal review, colleagues are asked to provide comments (“please review this”) and the
process rarely involves a meeting. Formal reviews follow a documented procedure which
prescribes, among other things, different roles for the participants and entry and exit criteria
for the review material and the review meeting. This can greatly improve the results of a
review. However, a formal review is not always appropriate. Using, for instance, a very formal
review type in an organization that is not ready for this formality will most probably be
counterproductive and diminish any positive results of the review.

Although informal reviews do not follow a documented procedure, they do have added value.
The challenge is to use both formal and informal reviews, based on a documented strategy,
to improve the efficiency and effectiveness of the review and development process.
See chapter 8 of the Testing Practitioner Handbook for more information.

Formal review types defined by IEEE


Every “software element” can be reviewed: product-related documents such as requirements
documents, functional specifications, designs and code, but also test designs, test scripts
and items such as a user manual. Project-related documentation can also be reviewed, e.g.
project plans, test plans and procedures.

Each review type has a different focus and is applicable at a different life cycle phase. The
types of defects found also differ per type of review. Using the right type of review at the right
place in the software life cycle ensures a more effective and efficient review process. See
chapter 8 of the Testing Practitioner Handbook, and chapter x of this reader, for more
information on the specific differences between the review types and when to use them.

The IEEE standard on software reviews (IEEE standard 1028, 1998), distinguishes three
types of formal reviews:

- Inspection
Inspection is a formally defined and rigorously followed review process. The process
includes individual and group checking, using sources and standards, according to
detailed and specific rules (checklists) in order to support the author by finding as many
defects as possible in the available amount of time.

- Technical review
The objective of a technical review is to reach consensus on technical, content-related
issues. Domain or technical experts check the document under review prior to the meeting,
based on specific questions from the author. In the meeting the approach to be taken is
discussed by the experts, under the guidance of a moderator or technical leader.

- Walkthrough
In a walkthrough the author guides a group of people through a document and his or her
thought processes in order to gather information and to reach consensus. No formal
preparation is required; defects are found during the meeting. People from outside the
software discipline can participate in these meetings. In walkthroughs, dry runs and task
scenarios are often applied.

More review types


In addition to formal reviews, the IEEE standard mentions two other types of reviews.
- Management review
A systematic evaluation of a project performed by or on behalf of management in which
the project status is evaluated against plan (resources, costs, progress, and quality) and
appropriate actions are defined.
- Audit
An audit is the independent examination of a product or process against an agreed
reference base. The examination is based on a defined problem definition and/or set of
questions and carried out from a certain point of view (e.g. engineering, management or
customers).

The goals of these two review types are entirely different from the three review types
mentioned before. Key characteristics of management reviews are:
- Conducted by or for managers having direct responsibility for the project or system
- Conducted by or for a stakeholder or decision maker, e.g. a higher level manager or
director
- Check consistency with and deviations from plans
- Check adequacy of management procedures
- Assess project risks
- Evaluate impact of actions and ways to measure these impacts
- Produce lists of action items, issues to be resolved and decisions made

Key characteristics of audits are:


- Conducted and moderated by a lead auditor
- Evidence of compliance collected through interviews, witnessing and examining
documents
- Documented results include observations, recommendations, corrective actions and a
pass/fail assessment

11.3 Review process overview


The review process consists of a few straightforward steps. First of all, the effort needed to
successfully conduct each step should be planned. What is the focus of this specific review?
How many people should participate? Such questions are answered in the planning phase.
The next step, optional but recommended, is the kick-off. Here the team ensures that it is
“ready to play” and fully understands what is expected from every participant in the review
process. During the individual preparation, every participant tries to find as many defects as
possible in the document(s) at hand. The review meetings have a different format for each
type of review; the outcome is the same, however: rework for the author. In the last step,
follow-up and exit, the updated document is checked and leaves the review process.

11.4 Implementation steps


The single most important step towards success is simply beginning to inspect, keeping in
mind the goal (the central effect) and being aware of possible pitfalls.

Before a project can start with formal reviews the involved project members, project leaders
and project sponsors need basic information on reviews. A presentation on reviews is a good
start to both provide knowledge and create momentum. After the start-up, select documents
for inspection that matter. A thoroughly inspected higher level document will have a positive
effect throughout the entire project, including side effects on the inspection process itself.

Engineers and moderators should be trained to get the most out of the inspection process.
Engineers will not only learn how to inspect, but are indirectly also trained in how to write
better documents. Moderators need additional training: handling “heated” meetings,
supporting shy authors or participants, developing a review strategy, improving the process
based on inspection metrics, and so on, all call for special skills.

Inspections need to be supported by a master review plan, as presented in the case Review
Strategy. Planning inspections based on a strategy creates awareness and emphasizes the
need to make well-founded trade-offs at the start of a project. During the project the plan can
be used to track the progress of inspections and to help engineers plan their work (including
rework).

As someone once said, every great journey starts with a first step. Doing inspections is this
first step. After all the preparing, informing and planning activities, the inspections have to be
carried out. It is very important for a project to stress that people are allowed to make
mistakes, both in the inspection process and, of course, in their documents. The willingness
to learn from these mistakes is perhaps the second key to success.

Once the process has started and inspections are carried out, it is necessary to keep
improving the process. Improvements must be based on the data collected during every step
of the inspection process. The inspection metrics must be presented to the people who
provided the data, to ensure a correct interpretation. These feedback sessions are essential,
not only for the continuous improvement of inspections but also to keep inspections going
at all.

The central effect


Occasionally remind the project members involved of the ultimate goal of inspections:
reducing the number of early defects, which have the potential to become large downstream
costs. Measuring and providing feedback helps authors to write better documents, helps to
improve the software process and helps the project to learn about quality. Inspections make
document quality more tangible and support the necessary steps for improvement.

It will take some time before the improvement of the inspection and software process
becomes visible. When starting with inspections, the removal of defects appears to be the
most important factor. As the team, project or organization becomes more experienced, the
focus of inspections will shift from finding defects to avoiding them.

Engineers’ opinion
When engineers are asked for their opinion on inspections, they are mostly positive,
supporting the large body of numerical evidence from successful projects. In general,
engineers feel that the quality of products is improved, that the software process itself is
improved, and that their project is better controlled. Furthermore, they emphasize that a
logging meeting teaches them how to specify and how to check, creates a common
understanding and motivates them to do a good job.

Review principles
Tom Gilb has described 10 principles that can be seen throughout this reader and the review
presentations. The most important message is perhaps to keep the process practical and to
learn as much as possible while inspecting.

11.5 Metrics for reviews


Review leaders can use measurements to determine the effectiveness and efficiency of the
reviews. Metrics must be available to evaluate the quality of the reviewed item and the costs
of the review. Furthermore the downstream benefit of the review should be measured.

Typical metrics that can be measured for a reviewed item include:


• Product evaluation:
o Work-product size (pages, lines of code)
o Defects, like number and types of defects found and their severity, defect
clusters, average defect density, estimated residual defects
o Types of review, like informal review, walkthrough, inspection, etc.
• Process evaluation:
o Defect detection effectiveness
o Review process metrics, like preparation time, review meeting duration,
rework time, review process as a whole
o Percent coverage of planned work products
o Participant surveys about review process
o Review defects versus dynamic test defects
o Correlation between review effectiveness (review type versus defect
detection effectiveness)
o Estimated project time and cost saved
o Defects found per work-hour
o Number of reviewers
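Two of the metrics listed above, average defect density and defect detection effectiveness, can be sketched as simple calculations. This is a minimal illustration only; the function names and the example figures are invented for the sketch, not taken from the reader.

```python
# Sketch of two common review metrics: defect density and
# defect detection effectiveness (DDE). Example data is invented.

def defect_density(defects_found: int, pages: int) -> float:
    """Defects found per page of the reviewed work product."""
    return defects_found / pages

def detection_effectiveness(found_in_review: int, found_later: int) -> float:
    """Fraction of all known defects that the review caught."""
    total = found_in_review + found_later
    return found_in_review / total if total else 0.0

# Example: a 40-page design document, 28 defects found in the review,
# 7 more escaped and were found later in dynamic testing.
print(f"density: {defect_density(28, 40):.2f} defects/page")   # density: 0.70 defects/page
print(f"DDE:     {detection_effectiveness(28, 7):.0%}")        # DDE:     80%
```

Tracked over several reviews, these two numbers support exactly the process-evaluation items above: comparing review defects with dynamic test defects and correlating review type with detection effectiveness.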

11.6 Final remarks


When reviews will not work
A large number of organizations have started inspections but no longer perform them, or do
so only in a very informal manner. This happens when quality is not important enough and
management is not interested in the quality of documents (and of the process and product).
The sponsors of the project need to be fully convinced, because inspection is a costly activity
and it will take some time before the changes are reflected in a positive return on investment.

Another hazard to inspections is untrained engineers and moderators: they can frustrate the
process, and vice versa. Training, on the other hand, is not implementation. An experienced
moderator can be a very valuable asset to a project starting with inspections; it is not a job
that can be carried out by just anyone on the project.

It is obvious that information on the quality of a document must never be used (by
management) to evaluate individual performance. Doing so will immediately make all
collected data useless and will most probably bring all inspections to an end.



Finally, a project that implements inspections in too formal or theoretical a manner is doomed
to fail. The level of formality should fit the project, and it should be practical.

Conclusion
There is ample evidence that reviews are an effective and efficient means to improve the
quality of a software product. To get the most out of the review process, a clear distinction
has to be made between the different review types. And to get anything at all out of the
process, reviews must be started in a practical, common-sense manner.



12. Test Process Improvement

For background reading on “Standard & Test Improvement Process” refer to chapters 2 “Testing and Standards”, 18 “Test Maturity Model” and 19 “Test Process Improvement” of The Testing Practitioner.

12.1 Implementation - Change process


[Figure: the change process for test improvement — Awareness → Strategy, scope and approach → Assessment → Define improvements → Planning → Implementation → Evaluation, with evaluation feeding back into a new assessment]

Creating awareness
The reason for improving the test process generally arises from experiencing a number of
problems with testing. The desire is to solve these problems. Improvement of the test process
is regarded as the solution. It is important in this phase that all parties involved become
aware of the following points:
- The purpose of, and the need for, improvement of the test process
- The fact that a formal change process using a test improvement model is the way to
do it.

This awareness implies that the parties mutually agree on the outlines of, and give their
commitment to, the change process. Commitment should not only be acquired at the
beginning of the change process, but should be retained throughout all phases of the
process. It is important in this activity that people see that senior management supports the
change process. The awareness phase should not be regarded as a detached step in the
change process, but rather as an essential precondition. Presentations or brainstorming
sessions can be used to obtain the required awareness.

Change and resistance



The phenomenon of resistance is often underestimated. Wherever changes take place, some
people will be bluntly opposed to them. The change team should be able to handle this; they
should actively reduce resistance. Resistance tends to increase once the change plans and
their impact become known. Continuous support convinces the testers in this phase of the
usefulness of the changes. By giving information, not too late but certainly not too soon, the
resistance curve can be influenced.

The following behavior of the change team can reduce resistance:
- Inform: at the start of the structuring process, only a few people are informed.
Resistance will increase once the change plans and their impact are announced.
- Support: during the application and the accompanying support, the test personnel
bring forward proposals for improvement. These should be listened to carefully and,
where appropriate, negotiated. A sensitive ear and acceptance of proposals reduce
resistance considerably. Steady, continued support in this phase convinces the
testers of the usefulness of the changes.
- Negotiate: negotiate with the people involved in the change process.
- Convince: convince the testers of the usefulness of the changes; it will improve their work.
- Enforce: finally, the few remaining people who disagree and cannot be convinced
need to be compelled by management to change their way of working.

Strategy, scope and approach


In this activity the strategy, scope and approach of the change process are defined at a
global level. The ultimate target of test improvement is to optimize the time, money and
quality of testing in relation to the organization’s total information services. This target is hard
to specify; yet attempts should be made to define it in a way that is specific, achievable,
mutually consistent, and measurable.
A rough indication of the milestones and costs for the target is also given.

The scope of the change process can have several possibilities:


- One test level in a project (for example, the system test in project X)
- All test levels in a project
- All tests of a certain test level in the entire organization (for example, all acceptance
tests)
- All test levels in the entire organization.

Although in all cases the consecutive steps of the change process have to be taken, the
interpretation of each step depends largely on the chosen (short- or long-term) targets and
on the scope. In a change process with limited targets and scope it is possible to implement
the change within a short time frame.
To keep the change process controllable it is vital that the change takes place in fairly small
steps. A test maturity model gives support in choosing these improvement steps. The change
process itself should also be guided: how is it organized, who is responsible, and how will
progress be monitored?

Test Engineering Process Group (TEPG)


The change team must have a mixture of knowledge and skills:
§ Social skills such as:
- ability to advise;
- conflict handling;
- ability to negotiate;
- enthusiasm and persuasiveness;
- honest and open attitude;
- panic- and criticism-proof (shock-proof);
- patience;
§ profound knowledge about the organization in general;
§ profound knowledge about the test process in the organization;
§ profound knowledge about and expertise of change processes;
§ profound test expertise;
§ good knowledge of the test maturity model.



This mix will almost never be present in one person. So in composing the group it must be
ensured that the appropriate knowledge and skills are present in the team. A point of interest
in the composition of the change team is the availability of the people.

Assessment
In the assessment activity, research is done to establish the strong and weak points of the
current situation. Based on the target defined earlier and the current situation, the change
actions are determined in the next activity.

An assessment consists of a number of steps, which are explained below:

§ Preparation
The person or group of persons who will perform the assessment determine who will
participate in the assessment (e.g., testers, test managers, project leaders, developers,
system managers, and end users), which documentation is to be used (e.g., test plans,
reports, test scripts, defect administration, and procedures, norms, and standards for
testing), and in which form and when the assessment is to take place. In the preparation
of interviews it is determined who is to be asked about which key areas. Management
participation in the assessment is important in order to get commitment.

§ Collecting information
By interviewing the participants, studying the documentation, and optionally by witnessing
the process, the necessary information is collected. All information gathered from
interviewees is treated confidentially.

§ Analysis
On the basis of the collected data, the levels per key area of the TPI model or the key
process areas of the TMM are examined, and it is determined which of them are met, not
met, or only partially met.

§ Reporting
The analysis results are recorded. This will show the strong and weak aspects of the test
process in the form of assigned levels of key areas.

Improvement actions
On the basis of the improvement targets and the results of the assessment, the improvement
actions are determined. The actions are chosen so that a gradual, step-by-step improvement
is possible. Test maturity models help to set up these improvement actions. Depending on
the targets, the area of consideration, the lead time, and the assessment results,
improvements can be carried out for one or more areas.

Criteria that can be used in determining improvement actions are:


- Fast, visible results;
- Low costs;
- Easiest actions first;
- Acceptance level in the organization;
- Best cost/profit ratio;
- Decrease highest risks.

The improvement actions should be in accordance with and lead to the achievement of the
targets set earlier for the improvement of the test process.
How can it be determined that the implementation of a number of actions leads to the
achievement of the previously defined targets? It is therefore important that the defined
targets can be measured in some way and that measurements are taken periodically to see
whether the improvement actions give the desired result and to what extent the targets are
met. The division into improvement cycles is intended to keep the entire change process
controllable. A cycle goes through the phases of planning, implementation, and evaluation,
so that when a cycle ends, the next planned cycle can start or adjustments can be made.



12.2 Deployment
The plan is executed, with realistic targets. Because the consequences of the change
process have their greatest impact in this phase, much effort should be spent on
communication. Procedures, templates and standards should be used where available;
where they are missing, they should be written so that they can be used.

The actions executed (during the pilot project) have to be measured to determine to what
extent they have been carried out. Based on these results, a statement can be made about
the progress of the change process. A vital part of this phase is also consolidation: steps
should be taken to prevent the implemented improvement actions from having a once-only
effect. The organization must continue to use the changed working method. Communication
of the results, courses, training, and a quality system can support this.

Planning
A plan is drawn up to implement (a part of) the improvement actions in the short term. The
objectives are recorded in this plan and the plan indicates which improvements have to be
implemented at what time to realize these items.

The plan has to answer the following questions:


- Who is the customer?
- Who took on the assignment and/or is responsible for the implementation?
- What is (the area of consideration of) the assignment?
- Which improvement actions are to be taken?
- How will these actions be implemented?
- What are the milestones for implementing the improvement actions?
- Who and what is needed and when?
- How much does it cost?
- What results should the action produce?
- How often and at what times will progress be measured?
- What are the risks and what is done to make them controllable?

The plan should also describe the activities, divided into the following groups:

Test specific
- Select a pilot project: check whether the pilot project is suitable, and consider
choosing more than one pilot project
- Training: the team should receive proper training in how to work in a change project
- Procedures and manuals: books and procedures must actually be used
- Tools: the purchase of tools should not be regarded as a remedy in itself

Change specific
- Presentations: all sections of the organization involved must be informed about the
changes. Presentations are a suitable form of communication for this;
- Discussion meetings: in these meetings, those involved can, on the one hand, be
convinced of the usefulness of a change and, on the other, be a source of ideas and
problems that had not yet been thought of;
- Kick-off meetings: a kick-off meeting is organized with the group of people directly
involved. This gives everyone a clear view of what should happen, which makes
co-ordination and co-operation a lot easier;
- Publications: are often used to reach a far larger audience than can be reached using
presentations.
- Measurements: test performance indicators (derived from business goals)

Evaluation
In this phase the aim is to see to what extent the actions were implemented successfully, as
well as to what extent the initial targets were met (were the stated goals achieved?). Based
on these observations, the change process can continue in any number of ways.
- The next improvement cycle is started
- The improvement actions are adjusted



- A new assessment is executed
- New targets or areas of consideration are determined
- Further improvement of the test process is stopped

12.3 Metrics & Measurement


A variety of metrics (numbers) and measures (trends, graphs, etc.) should be applied
throughout the software development life cycle (e.g. planning, coverage, workload, etc.). In
each case a baseline must be defined, and then progress tracked with relation to this
baseline.

Possible aspects that can be covered include:


1. Planned schedule, coverage, and their evolution over time
2. Requirements, their evolution and their impact in terms of schedule, resources and tasks
3. Workload and resource usage, and their evolution over time
4. Milestones and scoping, and their evolution over time
5. Costs, actual and planned to completion of the tasks
6. Risks and mitigation actions, and their evolution over time
7. Defects found, defects fixed, and duration of correction

Usage of metrics enables testers to report data in a consistent way to their management, and
enables coherent tracking of progress over time. Three areas are to be taken into account:
• Definition of metrics: a limited set of useful metrics should be defined. Once these metrics
have been defined, their interpretation must be agreed upon by all stakeholders, in order
to avoid future discussions when metric values evolve. Metrics can be defined according
to objectives for a process or task, for components or systems, for individuals or teams.
There is often a tendency to define too many metrics, instead of the most pertinent ones.
• Tracking of metrics: reporting and merging metrics should be as automated as possible to
reduce the time spent in producing the raw metrics values. Variations of data over time for
a specific metric may reflect other information than the interpretation agreed upon in the
metric definition phase.
• Reporting of metrics: the objective is to provide an immediate understanding of the
information, for management purposes. Presentations may show a “snapshot” of the
metrics at a certain time or show the evolution of the metric(s) over time so that trends can
be evaluated.
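The idea above of defining a baseline and then tracking progress against it can be sketched as follows. The metric name (defects per KLOC), the baseline value and the weekly figures are all invented for this illustration.

```python
# Track one agreed metric against its baseline over time, as described above.
# Baseline and weekly figures are invented for illustration.

def delta_vs_baseline(value: float, baseline: float) -> float:
    """Signed deviation of a measurement from the agreed baseline."""
    return value - baseline

BASELINE = 4.0  # defects per KLOC, agreed with all stakeholders up front

# Weekly measurements of the same metric (made-up data).
weekly = [("week 1", 5.2), ("week 2", 4.6), ("week 3", 3.9), ("week 4", 3.1)]

for week, value in weekly:
    d = delta_vs_baseline(value, BASELINE)
    trend = "above" if d > 0 else "at/below"
    print(f"{week}: {value:.1f} defects/KLOC ({d:+.1f}, {trend} baseline)")
```

Because the interpretation of the metric and its baseline were agreed beforehand, the weekly deltas can be reported without re-opening discussion each time the values move.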

12.4 Business Value of Testing


While most organizations consider testing valuable in some sense, few managers, including
test managers, can quantify, describe, or articulate that value. In addition, many test
managers, test leads, and testers focus on the tactical details of testing (aspects specific to
the task or level), while ignoring the larger strategic (higher-level) issues related to testing that
other project participants, especially managers, care about. Testing delivers value to the
organization, project, and/or operation in both quantitative and qualitative ways:
• Quantitative values include finding defects that are prevented or fixed prior to release,
finding defects that are known prior to release, reducing risk by running tests, and
delivering information on project, process, and product status.
• Qualitative values include improved reputation for quality, smoother and more predictable
releases, increased confidence, protection from legal liability, and reduced risk of loss of
whole missions or even lives.

Test managers and test leads should understand which of these values apply for their
organization, project, and/or operation, and be able to communicate about testing in terms of
these values. A well-established method for measuring the quantitative value and efficiency of
testing is called cost of quality (or, sometimes, cost of poor quality). Cost of quality involves
classifying project or operational costs into four categories:
o Costs of prevention
o Costs of detection
o Costs of internal failure



o Costs of external failure

A portion of the testing budget is a cost of detection, while the remainder is a cost of internal
failure. The total costs of detection and internal failure are typically well below the costs of
external failure, which makes testing an excellent value. By determining the costs in these
four categories, test managers and test leads can create a convincing business case for
testing.
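The four-category breakdown can be sketched as a small cost-of-quality calculation. All monetary figures below are invented for illustration; the point is only the comparison the text describes, namely that detection plus internal-failure costs are typically well below external-failure costs.

```python
# Cost-of-quality sketch using the four categories above.
# All figures are invented for illustration.

costs = {
    "prevention":       20_000,   # training, standards, process work
    "detection":        50_000,   # the part of the test budget spent finding defects
    "internal_failure": 30_000,   # rework on defects found before release
    "external_failure": 250_000,  # field fixes, support calls, lost business
}

# The text's comparison: detection + internal failure versus external failure.
cost_of_testing = costs["detection"] + costs["internal_failure"]

print(f"detection + internal failure: {cost_of_testing}")
print(f"external failure:             {costs['external_failure']}")
print(f"testing pays for itself: {cost_of_testing < costs['external_failure']}")
```

With figures like these, a test manager can argue in business terms: every defect caught before release converts part of the (much larger) external-failure cost into the (much smaller) detection and internal-failure costs.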

12.5 Successes and Failures


Failure factors
- Exclusive top-down or bottom-up improvement: the improvement process should be
supported actively by the management as well as having sufficient basis in the other
sections of the organization;
- Confining improvement to training: the knowledge gained in training should be applied
in an organization that meets the minimal preconditions for improving the test process;
- Unsuitable pilot: the pilot must not pose too many risks to the primary process of
the organization but, on the other hand, it should not be too free of obligations;
- Unbalanced improvement: the four cornerstones of a structured test approach (life
cycle, techniques, infrastructure, and organization) must be mutually balanced;
- Looking upon test tools as the solution: Test tools can give support in a test process.
This means firstly that there must be a certain degree of maturity of the test process
before test tools are applied, and secondly that tools can give support and no more
than that. The test process cannot be done entirely by tools!
- Underestimation of the implementation: points of interest are: measure the progress
and the results, present visible results, avoid taking steps that are too large at once,
and prevent the changes from ending up as “shelfware”.

Success factors
- Management commitment: probably the most important success (and failure) factor is
management commitment to the change process. Management impatience helps in
getting commitment to change an organization, but can have an adverse effect when
the expectations created are not realized fast enough. When it is not clear that
management supports the change process, this has to be remedied (make sure that
management supports the change project).
- Clarity of the required situation: The change process should have a clearly defined
target, so that it is clear to everyone what must be achieved. These targets can differ
for each target group. The different target groups should have in view the targets that
are relevant to them.
- Change team participants: Using the right people to control and guide the change
process is of great importance for good progress of the process. These people must
create an open atmosphere, in which there are no inhibitions about giving ideas or
criticism. They are preferably employed full-time in the change process and have no
other activities.
- Support: Support the testers during the whole change process so that all involved
people will stay motivated and know that there are people to whom they can turn in
case of a problem or question. Note that training is not implementation and
subsequent support (training-on-the job) is needed.
- Provide regular feedback on results: No organization will remain motivated for a year
without seeing clear and tangible results. Make sure that results are defined for both
short term and long term. As soon as results become available make them visible to
the stakeholders within the organization.

13. Test Tools

For background reading on "Test Tools & Automation", refer to chapters 20 "Test tool
overview" and 21 "Tool evaluation and selection" of The Testing Practitioner.

13.1 Test Tool Types


Tools can be bought from commercial vendors, or open-source tools can be used. In some
cases the testing organization has a specific need such that developing a custom tool must
be considered. This may be the case when proprietary hardware, environments, or
processes are used.

With a custom tool, the functionality can precisely meet the team's needs. The tool can
be developed so that it interacts with other tools and generates reports in exactly the form
needed. In addition, the tool may be usable outside the specific project.
There are important drawbacks as well. The tool should be adequately documented, so that it
can be maintained after its creator has left. As with every software product, it should be
designed and tested to ensure that it works as expected.

The Test Manager must ensure that all tools add value to the team's work and can show a
positive return on investment (ROI). A cost-benefit analysis should be performed before
building or buying a tool. Both recurring and non-recurring costs should be considered when
calculating the ROI. Costs are largely quantitative (the budget needed for development,
acquisition, maintenance, and licenses), while benefits may be quantitative as well as
qualitative, such as shorter lead times, more defects found, or a more effective way of
working.
Examples of non-recurring costs are defining tool requirements, purchasing or developing the
tool, and training.
Examples of recurring costs are licenses, maintenance, helpdesk support, migration, and
adaptation for future use.
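As a rough illustration of such a cost-benefit analysis, the sketch below computes ROI from non-recurring costs, recurring costs, and yearly benefits. All names and figures are invented for the example, not taken from the text.

```python
# Hypothetical figures for a tool business case (amounts in one currency unit).
non_recurring = 25_000      # defining requirements, purchase, initial training
recurring_per_year = 8_000  # licenses, maintenance, helpdesk
benefit_per_year = 30_000   # e.g. effort saved compared to manual work

def roi_over(years: int) -> float:
    """ROI = (total benefit - total cost) / total cost over the given period."""
    cost = non_recurring + recurring_per_year * years
    benefit = benefit_per_year * years
    return (benefit - cost) / cost

for years in (1, 2, 3):
    print(f"Year {years}: ROI = {roi_over(years):.0%}")
```

Note how the non-recurring costs make the ROI negative in the first year and positive thereafter, which is why a multi-year view is usually needed for the business case.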

13.2 Tool Selection and Implementation


A test tool is an automated resource that offers support to one or more test activities, such as
planning and control, specification, constructing initial test files, execution of tests, and
analysis.

The emphasis is on support. The use of the test tool must make it possible to achieve higher
productivity levels and/or greater effectiveness.

Test tools per life cycle phase


A structured test process consists of the following phases:
- Preparation
- Specification
- Execution
- Planning and control

For all four phases there are different tools to use:


- Preparation: requirements management, static analysis, and inspection tools
- Specification: test design, data preparation
- Execution: comparators, coverage, dynamic analysis, hyperlink testing, monitoring,
performance (load and stress), record and playback, simulator, and stubs and drivers

- Planning and control (Test management): test management, progress tracking &
monitoring, scheduling, configuration management, defect management

Selection
When selecting a tool, the viewpoints of the different stakeholders must be considered. The
business requires a positive ROI. The project requires the tool to be effective, e.g. by
avoiding the mistakes made during manual testing. The users require the tool to support
their tasks in a more efficient and effective way.

Before buying a tool, you should first consider and possibly carry out the following:
- Decide whether you really need a tool;
- Determine the need for a formal evaluation;
- Identify and document requirements;
- Conduct market research and compile a shortlist;
- Organize supplier presentations;
- Formally evaluate the test tool;
- Carry out post-evaluation activities.
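One way to support the formal evaluation step is a weighted scoring matrix. The sketch below is illustrative only; the criteria, weights, candidate tools, and scores are invented, not prescribed by the text.

```python
# Illustrative weighted scoring for comparing shortlisted tools.
# Weights reflect how important each criterion is (invented values).
weights = {"fits requirements": 5, "cost": 3, "vendor support": 2}

# Scores per candidate on a 1-5 scale (invented values).
candidates = {
    "Tool A": {"fits requirements": 4, "cost": 2, "vendor support": 5},
    "Tool B": {"fits requirements": 3, "cost": 4, "vendor support": 3},
}

def weighted_score(scores: dict) -> int:
    """Sum of (weight * score) over all criteria."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(candidates, key=lambda t: weighted_score(candidates[t]), reverse=True)
for tool in ranking:
    print(tool, weighted_score(candidates[tool]))
```

A matrix like this makes the trade-offs between stakeholder viewpoints explicit, but the final decision should still weigh qualitative findings from supplier presentations and trials.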

Implementation
Start with a small-scale project. The implementation team should work full-time on the pilot
project. Team members may undertake specific roles:
- Champion: driving force
- Change agent: plans and manages
- Tool custodian: responsible for technical support.

The results are assessed against the business case and if successful the use of the tool is
progressively rolled out to other projects and teams using the approach developed during the
pilot project.

Implementation process
If the process of tool selection is one of gradually narrowing down the choices, the
implementation process is the reverse: it is a process of gradually widening the tool’s
acceptance, use, and benefits, as illustrated in the figure hereafter:

[Figure: the implementation process, from Management Commitment via Assemble Team,
Internal Marketing, Pilot, and Pilot Evaluation to Phased Implementation, Publicity, and
Post-Implementation Review]

Assemble team
The following roles can be given to members of the team:
- Tool "champion": the driving force behind the day-to-day implementation; understands the
people issues, is able to work well with people, and is enthusiastic about the potential benefits
- Change agent: plans and manages the day-to-day uptake (implementation) of the tool
(including the pilot project); a testing expert with a technical background and analytical skills
- Management sponsor: visibly supports the tool implementation process
- Tool custodian: is responsible for technical tool support, implementing upgrades from the
vendor, and providing internal help or consultancy in the use of the tool.

Implementation team
The team that selected the tool may also be the team that helps to implement it. Ideally it
would include representatives from the different parts of the organization that are expected
to use the tool. It has two tasks, an inward-facing and an outward-facing one:
- Inward facing: gather information from their own part of the organization (find out
what people need, want, and expect from the tool), and feed this information back to the
rest of the implementation team and the change agent.
- Outward facing: each team member should act as a mini change agent; they need to
keep people informed about what is happening, help to raise enthusiasm while tempering
unrealistic expectations, and help to solve problems which arise when the tool begins to
be used in earnest within their groups.

Start-up phase

Management commitment
In order to gain initial management commitment the champion or change agent will present
the business case for the selected tool, summarize the tool selection and evaluation results,
and give realistic estimates and plans for the tool implementation process.
The change agent must have adequate support from management in at least two ways: first,
visible backing from high-level managers; and second, adequate time, funding, and
resourcing (this may mean adversely impacting other projects in the short term).

Realistic expectations
In selling the idea of test automation, the champion does need to generate enough
enthusiasm so that management will be willing to invest in it. However, if the picture painted is
unrealistically optimistic, the benefits will not be achieved. The champion must find a good
balance point between achievable and saleable benefits. You will be seen in a better light if
you are successful in achieving a lower target than if you fail to achieve a more ambitious
target.

Publicity
Once you have the management commitment, both verbal and financial (which may just be
time allowed to work on the implementation), the change agent needs to set up a
continuing and highly visible publicity machine. All those who will eventually be affected need
to be informed about the changes that will be coming their way. People are not convinced by
a single presentation, and even if they are, they do not stay convinced over time. So the
change agent's role is to provide a constant drip-feed of publicity about the tool: who is
using it, success stories, and problems overcome.

Raising initial interest


The first step in raising interest in the new tool is to give internal demonstrations, or simply
to go and talk to people about it.

Continuing publicity
The most important publicity is from the earliest real use of the tool, for example from the pilot
project. The benefits gained on a small scale should be widely publicized to increase the
desire and motivation to use the tool. It is also important to give relevant bad news to keep
expectations at a realistic level.
Throughout the implementation project, it is important to continue to give a constant supply of
information about the test automation efforts.

Test your demonstrations


It is also important to make sure that when you do organize a demonstration you have
carefully tested it beforehand, or you will very quickly lose all credibility and this will endanger
the whole test automation initiative.

Internal market research

The change agent and the change management team need to do a significant amount of
internal market research, talking to the people who are the targeted users of the tool. Find out
how the different individuals currently organize their testing and how they would want to use
the tool, and whether it can meet their needs, either as it is or with some adjustment.

Pilot project
It is best to try out the tool on a small pilot project first. This ensures that any problems
encountered in its use are ironed out while only a small number of people are using it. It also
enables you to see how the tool will affect the way you do your testing, and gives you some
idea of how you may need to modify your existing procedures or standards to make best
use of the tool. The pilot project should start by defining a business case for the use of the
tool on this project, with measurable success factors. For example, you may want to reduce
the time to run regression tests from a week to a day.

The pilot project should be neither too long nor too short, say between two and four months.

Assess the changes to your testing processes


The use of the testing tool will change your testing procedures in ways that you will probably
not expect. The implementation of a new testing tool will also have effects that are not
anticipated. For example, using a test automation tool may make debugging more difficult:
previously, when testing manually, you knew where you were when something went wrong.
Using the tool, you only know afterwards that something went wrong, and you then have to
spend time recreating the context of the bug before you can find it. So there is an extra job to
do which you did not have to do before.
You need to make sure that the negative effects of the tool, anticipated or not, real or
perceived, do not outweigh the realized benefits of test automation.

Evaluation of results from pilot


After the pilot is completed, the results should be compared to the business case for this
project. The lessons learned on the pilot project will help to make sure that the next project
can gain even greater benefits. If the objectives have not been met, then either the tool is not
suitable or it is not yet being used in a suitable way (assuming that the objectives were not
overly optimistic). Determine why the tool was not successful, and decide on the next steps
to take. It is very important to be honest about the results.

Planned phased installation or roll-out


Assuming the pilot project was successful, the use of the tool in the rest of the organization
can now be planned. This is a major activity in any organization, and without careful planning,
it will not be successful. It is also important that the plans react to any problems encountered,
so they need to be flexible.
The following are the tasks to be done in the roll-out-period:
- Publicize the success of the pilot project as widely as possible;
- Modify company policies and strategies to take test automation into account;
- Ensure that project managers take note of test automation issues in project plans,
quality plans, and test plans;
- Continue to build a good infrastructure for your regime;
- Aim to make it easier to use the tool than to test manually;
- Schedule when different groups will get involved in test automation;
- Monitor test automation efficiency;
- Train both direct and indirect users of the test automation.

The change agent and change management team can act as internal consultants to the new
tool users, and can perform a very useful role in coordinating the growing body of knowledge
about the use of the tool within the organization.

Provision of training in tool use


Every tool user should be trained in the way that is appropriate for them. For those who will
use the tool directly, this usually means the training given by the vendor of the tool. There
may be scope for a brief introductory course to start with, followed by more detail after a few
months of use, with regular technical updates and expert tips.

Monitoring test automation efficiency
Test automation is only a real benefit if the effort saved by using automation significantly
outweighs the effort put into it. To ensure real success with any automation project, both
the effort put in and the effort saved must be measured.
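A minimal sketch of such a measurement, assuming hypothetical effort figures, compares the cumulative effort saved per automated run with the one-off effort invested:

```python
# Hypothetical effort figures, in hours; none of these come from the text.
setup_effort_hours = 200  # building and maintaining the automation
manual_run_hours = 40     # effort to run the regression suite by hand
automated_run_hours = 4   # effort to trigger and review an automated run

def net_saving(runs: int) -> int:
    """Effort saved minus effort invested after the given number of runs."""
    return runs * (manual_run_hours - automated_run_hours) - setup_effort_hours

# Break-even: the first run count at which automation has paid for itself.
break_even = next(r for r in range(1, 1000) if net_saving(r) >= 0)
print(f"Break-even after {break_even} runs")
```

Tracking these two quantities per project makes it visible whether automation is actually paying for itself, rather than assuming it does.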

People issues
Managing change is all about people issues, for it is the way people work that is being
changed. A good manager will be sensitive to these issues, but often technical people are not
aware of the effects a technical change can have on people emotionally and psychologically.

The change equation


Change only occurs when three things together are greater than a fourth: f(a, b, c) > z,
where a is dissatisfaction with the current state, b is a shared vision of the future, and c is
concrete knowledge about the steps to get from a to b. These three things taken together
must be greater than z, the psychological or emotional cost to the individual of changing the
way they work.
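One possible encoding of this relation, assuming arbitrary 0-10 ratings and reading f as a product (so that any single factor at zero blocks change), is:

```python
# Illustrative encoding of the change equation f(a, b, c) > z. The 0-10 scale
# and the product form of f are assumptions made for this sketch, not part of
# the text.
def change_occurs(a: float, b: float, c: float, z: float) -> bool:
    """a: dissatisfaction with the current state,
    b: shared vision of the future,
    c: concrete knowledge of the steps,
    z: psychological/emotional cost of changing."""
    return a * b * c > z

print(change_occurs(a=6, b=7, c=5, z=100))  # strong factors overcome the cost
print(change_occurs(a=6, b=7, c=0, z=100))  # no knowledge of the steps: no change
```

The product form captures the point made below: raising a, b, and c is far more practical than trying to lower z.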

How to persuade people to change the way they work


It is very difficult to change someone's psychology, i.e. to alter the value of z in the above
equation. However, the way to encourage people to change the way they work is to
concentrate on the other three things (a, b, c). Make them more dissatisfied with the current
way testing is done ("you mean you're still doing all that testing by hand?"). Give them a
fuller vision of the way things could be in the future ("come and see what we've been able to
do in the pilot, and see if some of this might be of interest to you"). Explain the first easy step
toward a complete change ("let us help you automate a few key tests for you to try on the
next project").

The most important thing you can do is give them more detailed information about the steps
they need to take from where they are now to where you want them to be in the future. Don't
let them make large steps; let them make small steps forward. Note that this is one reason
why you need to plan the implementation: so that you have these steps mapped out in
advance.

13.3 Tool Lifecycle


There are four stages in a tool's lifecycle that a Test Manager must manage:
• Acquisition
The tool must be acquired following the selection process described in the previous section.
An administrator must be assigned to manage the tool with regard to accounts, storage,
conventions, etc. Training must also be given to the users of the tool.
• Support and maintenance
The administrator must maintain the tool, or this can be assigned to a tools team.
Maintenance procedures must be put in place, such as backup and restore, a support
helpdesk, and interfacing with other tools.
• Evolution
As time goes on, user wishes, business needs, or vendor issues may require changes to
the tool: improving its functionality, changing interface protocols, or migrating the tool to
other hardware or another operating system. The Test Manager must ensure continuity
of service.
• Retirement
When maintenance and support cost more than the value the tool delivers, the tool will need
to be retired gracefully. Another tool fulfilling the functionality may be acquired, and data will
need to be preserved and archived.
