3 - Test Management Reader
http://www.keytorc.com
https://keytorc.com/blog/
• Manage a testing project by implementing the mission, goals and testing processes
established for the testing organization
• Organize and lead risk identification and risk analysis sessions and use the results of
such sessions for test estimation, planning, monitoring and control
• Create and implement test plans consistent with organizational policies and test
strategies
• Continuously monitor and control the test activities to achieve project objectives
• Assess and report relevant and timely test status to project stakeholders
• Identify skills and resource gaps in their test team and participate in sourcing adequate
resources
• Identify and plan necessary skills development within their test team
• Propose a business case for test activities which outlines the costs and benefits expected
• Ensure proper communication within the test team and with other project stakeholders
• Participate in and lead test process improvement initiatives
1.2 Homework
There will be written, practical, and background-reading exercises set as homework during
the course. This will ensure that the materials have been understood and will enable the
student to practice the application of the theory and techniques learnt in a safe environment. It
is recommended that students do at least the minimum level of homework indicated during
the course.
Spend the available time and resources testing in order to mitigate the highest business risks.
Risk Assessment
We cannot test everything, so we spend the available time and resources focusing testing
on the high-priority business risk areas: “No Risk No Test”. We need to consider relative
risk based on:
- Criticality
- Use
- Visibility.
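One simple way to apply the three factors above is to rate each area of the system and rank the areas by a combined score. The sketch below is illustrative only: the 1–5 scale, the multiplication scheme, and the example areas and ratings are assumptions, not a prescribed formula.

```python
# Illustrative relative-risk scoring: rate each area 1-5 on criticality,
# use, and visibility, then test the highest-scoring areas first.

def risk_score(criticality: int, use: int, visibility: int) -> int:
    """Combine the three factors into a single relative-risk figure."""
    return criticality * use * visibility

# Hypothetical areas and ratings for illustration.
areas = {
    "payment processing": risk_score(5, 4, 4),
    "report archiving":   risk_score(2, 1, 1),
    "login screen":       risk_score(4, 5, 5),
}

# Spend the available time on the highest-risk areas first.
test_order = sorted(areas, key=areas.get, reverse=True)
print(test_order)  # → ['login screen', 'payment processing', 'report archiving']
```

In practice the weighting scheme would come from the project's own risk workshop; the point is only that the relative scores, not absolute values, drive the test ordering.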
Run tool trials before deciding whether or not to purchase them and consider the license
costs. Do you need to buy? Could you rent the tool? Do you really have an ongoing need?
Plan your tool implementation as a project in its own right, and trial it on a small project first.
Implement the tool during a quiet period when you are not testing.
Reviews
– Walkthrough, Inspection, Technical Review
Choosing techniques
– Risk, Test Basis, Knowledge of Testers, etc.
Defect management
– Incident template, Defect management process
For further reading and ISTQB Foundation revision, the book “Foundations of
Software Testing” by Dorothy Graham, Erik van Veenendaal, Isabel Evans and
Rex Black is highly recommended.
The development process adopted for a project will depend on the project aims and goals.
There are numerous development lifecycles that have been developed in order to achieve the
required objectives. These lifecycles range from lightweight and fast methodologies (Scrum,
DSDM, XP), where time to market is of the essence, through to the fully controlled and
documented methodologies (V-model) where quality and reliability are key drivers. Each of
these methodologies has its place in modern software development and the most appropriate
development process should be applied to each project.
The lifecycle model that is adopted for the project will have a big impact on the testing that is
carried out. It will help to define the what, where, and when of our planned testing, regression
testing, integration testing, and our testing techniques. Testing must fit in around the lifecycle
or it will fail to deliver maximum benefit.
All documents in the baseline should be verified against their source documentation and
validated against the requirements. This ensures that no progressive distortion is occurring as
the documentation grows during the design phase.
There are various lifecycle models that have been produced in order to meet specific
development needs. The models specify the various stages of the process and the order in
which they are carried out.
Design flows through into development, which in turn flows into build, and finally on into test.
Testing tends to happen towards the end of the project lifecycle, so faults are detected close
to the live implementation date. With this model it is difficult to feed results back up the
waterfall, and there are difficulties if we need to carry out numerous iterations of a
particular phase.
The delivery is divided into builds with each build adding new functionality. The initial build will
contain the entire infrastructure required to support the initial build functionality. Subsequent
builds will need to be tested for the new functionality, regression testing of the existing
functionality, and integration test of both new and existing parts. This means that more testing
will be required at each subsequent delivery phase which must be allowed for in the project
plans. This lifecycle can give early market presence with critical functionality, can be simpler
to manage, and reduces initial investment, although it may cost more in the long run.
Spiral
[Figure: incremental delivery timeline showing Functions 1-4 each passing through Define, Develop, Build, and Test phases, staggered over time]
DSDM is a refined RAD process that allows controls to be put in place in order to stop the
RAD process from getting out of control. Remember we still need to have the essentials of
good development practice in place in order for these methodologies to work. We need to
maintain strict configuration management of the rapid changes that we are making in a
number of parallel development cycles. From the testing perspective we need to plan this out
very carefully and update our plans regularly as things will be changing very rapidly.
From Extreme Programming Explained by Kent Beck: “Testing Strategy - Oh yuck”. “Nobody
wants to talk about testing. Testing is the ugly stepchild of software development. The
problem is, everybody knows that testing is important. Everybody knows that we don’t do
enough testing”.
Kent Beck says that the developers write every test case they can think of and automate
them. Every time a change is made in the code it is component tested then integrated with the
existing code which is then fully integration tested using the full set of test cases. This gives
continuous integration and all test cases must be running at 100%.
XP is not about doing extreme activities during the development process, it is about doing the
known value add activities in an extreme manner.
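The continuous integration gate Beck describes, where every change is component tested, integrated, and the full set of test cases must run at 100%, can be sketched as follows. The test cases here are modelled as plain functions that raise `AssertionError` on failure; the names and cases are invented for illustration.

```python
# Minimal sketch of the XP "all test cases at 100%" integration gate.

def run_suite(test_cases) -> bool:
    """Return True only if every test case passes -- the XP gate."""
    results = []
    for case in test_cases:
        try:
            case()
            results.append(True)
        except AssertionError:
            results.append(False)
    return all(results)

# Two trivial example test cases (hypothetical).
def test_addition():
    assert 1 + 1 == 2

def test_ordering():
    assert sorted([3, 1, 2]) == [1, 2, 3]

full_suite = [test_addition, test_ordering]
can_integrate = run_suite(full_suite)  # integrate the change only when True
```

Real XP teams would use an automated framework and a build server for this, but the decision rule is the same: anything short of a full pass blocks integration.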
Scrum
Scrum is an iterative and incremental agile software development framework for managing
software projects and product or application development. Its focus is on "a flexible, holistic
product development strategy where a development team works as a unit to reach a common
goal" as opposed to a "traditional, sequential approach".
Roles
Product Owner
The Product Owner represents the stakeholders.
Development Team
The Development Team is responsible for delivering potentially shippable product increments
at the end of each Sprint. A Development Team is made up of 3–9 people.
Scrum Master
Scrum is facilitated by a Scrum Master, who is accountable for removing impediments to the
ability of the team to deliver the sprint goal/deliverables. The Scrum Master is not the team
leader, but acts as a buffer between the team and any distracting influences.
Process
Evolutionary Lifecycle
This lifecycle is more delivery focused than development focused. Each evolutionary cycle
delivers a live, working system, with functional increments over the previous release.
The evolutionary process encourages active customer feedback. The customer gets early
visibility of the product, can feedback into the design, and can decide based on the existing
functionality whether to proceed with the development, decide what functionality to include in
the next delivery cycle, or even to halt the project if it is not delivering the expected value. An
early business focused solution in the market place gives an early return on investment (ROI)
and can provide valuable marketing information for the business.
Here is an example of how a breakdown of the phase characteristics might look (maybe!) It is
up to you to decide the most effective approach, aims, and best techniques to apply at each
stage of any specific project.
Objective – To show the system meets the functional specification, and all the non-functional
requirements.
Responsibility – Test team (within development)
Scope – Functionality, non-functional attributes such as performance, security, installation,
error handling, recovery etc. etc.
Who does it – Independent Test Team & technical experts / development responsibility
Entry Criteria – System passed integration test phase, development sign off, test team
acceptance (intake / confidence test), release notes available
Exit Criteria – All system test cases run and complete, no high priority defects outstanding,
Mean Time Between Failures (MTBF), number of defects per test hour under threshold, requirements
coverage
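Exit criteria such as these are usually evaluated as simple measurements against agreed thresholds. The sketch below is a hypothetical illustration: the field names, threshold values, and example figures are assumptions, since real projects define their own criteria in the test plan.

```python
# Illustrative check of system-test exit criteria as plain measurements.

def exit_criteria_met(status: dict) -> bool:
    """True when all the agreed exit criteria are satisfied."""
    return (
        status["cases_run"] == status["cases_total"]           # all cases run and complete
        and status["open_high_priority_defects"] == 0          # none outstanding
        and status["defects_per_test_hour"] < status["defect_rate_threshold"]
        and status["requirements_covered_pct"] == 100          # full requirements coverage
    )

# Example status snapshot (values are invented for illustration).
status = {
    "cases_total": 250,
    "cases_run": 250,
    "open_high_priority_defects": 0,
    "defects_per_test_hour": 0.3,
    "defect_rate_threshold": 0.5,
    "requirements_covered_pct": 100,
}
print(exit_criteria_met(status))  # → True for this example data
```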
Test Deliverables – System test plan, test report, test results, test specifications and -
procedures, test evaluation report (recommendations for product and project)
Typical test techniques – Black box techniques (e.g. equivalence partitioning, state transition
testing, cause/effect graphing), specialist non-functional test techniques (e.g. error-guessing).
Metrics - Tests passed/failed/run, number of faults and priority, environment log, test logs,
progress reports, time & effort planned v spent, requirements coverage, test effectiveness.
Test Tools – Performance monitoring, data generators, capture/replay, test management
tools, incident management tools.
Applicable testing Standards – BS7925-2 Component Testing Standard, TMap Next, IEEE
829 for test documents, ISO 9126 for non-functional exit criteria (will be replaced by ISO
25000)
Typical non-functional test types – reliability, performance, usability, portability
Acceptance testing
Formal testing conducted to enable a user, customer, or other authorized entity to determine
whether to accept a system or component (BS7925-1).
Objective – To show the delivered system meets the business need (validation), formal
acceptance of the product.
Responsibility – User / Customer or representative
Scope – Requirements based testing. The whole system, test cases based on requirements
document, covers functionality, usability, help, user guides etc.
Who does it – Users, business representatives, live support, or the test team (user/customer
responsibility).
Entry Criteria – Sign off from System Test/Integration test phase (system test report
available), user requirements reviewed and approved, test plan reviewed and approved
Exit Criteria – All acceptance test cases completed, no category A or B business priority
defects outstanding, list with known defects, business sign off for live implementation, MTBF,
acceptance test report approved
Test Deliverables – Acceptance test report, results, test logs, problem reports, change
requests, test specifications and test procedures
Typical test techniques – Equivalence partitioning, exploratory testing, use Case testing, error
guessing, process cycle test.
Metrics – Number of faults, business priority, tests passed/failed/run
Test Tools – Capture Replay, comparators, performance, defect management, test
management
Applicable testing Standards – BS7925-2 Component Testing Standard, TMap Next, IEEE
829 for test documents, ISO 9126 for non-functional exit criteria (will be replaced by ISO
25000)
Typical non-functional test types – usability, performance, security
Development and testing are intrinsically linked regarding timescales and effort. But this does
not mean that the effort and elapsed time for development and testing have a linear
relationship.
The bigger the project the more time will be required to develop and test the system. True.
The development approach adopted will also have a large impact on the time and effort taken
to test.
An example of this would be a phased release where each phase contains increased
functionality over the previous release. The effort, and time taken, for system and integration
testing would be increased due to the “extra” regression testing required to ensure that the
new functionality has not impacted the existing functionality.
A small change that may take a developer one hour to code may take two weeks to test if the
change is in a module that is business critical, highly visible and often used (Hans Schaefer
priority spreadsheet).
Be aware of this effort anomaly if asked to supply testing timescales based purely on a
development estimate to fix/change/develop.
Project Communication
Historically testing has been carried out at the end of the development phase. Testers were
grey characters that sat in a room somewhere and the perception was that testers halted the
project’s progress by finding problems just before the scheduled implementation date.
In order to do this successfully we need to know, be known by, and be trusted by, the key
stakeholders on the project. We need to sell them testing by showing them the value of
carrying out the activities that we KNOW will add value. We need to educate them and prove
that we can remove errors, and the causes of errors, early in the project.
Team Interaction
The testing team will need to interact with the following project areas:
- Customer / Project sponsor / Manager
- System Users
- Project Managers
- Design Team
- Development Team
- Technical Support team
- Technical authors
- Configuration Management Team
- Change Control Board
- Fault Management Area
- Etc. etc.
A typical set of entry criteria to the system test phase may look something like this:
- Sign off document from development owner authorizing product release to system test
team.
- Component test report.
- Integration in the small test report.
- Release note detailing release content, details of any new functionality, and the status of
any known bugs.
- Build statement detailing the components and versions comprising the release.
- The product itself (code, GUIs, etc.).
- Installation and backup instructions.
- Support documentation.
- User documentation.
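An intake check of this kind can be reduced to verifying that every expected deliverable has arrived. The sketch below treats the criteria as a simple set of named items, which is an illustrative simplification of a real phase-entry review.

```python
# Simple intake check against the system-test entry criteria listed above.

ENTRY_CRITERIA = {
    "development sign-off",
    "component test report",
    "integration-in-the-small test report",
    "release note",
    "build statement",
    "product",
    "installation and backup instructions",
    "support documentation",
    "user documentation",
}

def missing_items(received: set) -> set:
    """Return the entry-criteria items not yet delivered."""
    return ENTRY_CRITERIA - received

# Example: everything arrived except the user documentation.
received = ENTRY_CRITERIA - {"user documentation"}
print(missing_items(received))  # → {'user documentation'}
```

The test phase does not start until this set is empty (or each gap has an agreed waiver).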
The project is not necessarily concerned with the actions required to achieve the goal, but
needs to know that the task is on track, and when it has been completed.
Managing dependencies
Dependencies must be shown on the task list. In order to work productively some tasks must
be completed before others can start. Specialist skills and resource are often required for
Resource Planning
Allocate the required resource to the specific tasks. The planning tool shows:
- Task Usage – how much resource is allocated to each task (spend)
- How much time is required to complete the task (effort required)
- How long each task will take to complete (time elapsed)
- Who is doing what and when (workflows/stacks)
This allows visibility of over commitment at individual or team level, and where spare capacity
is available to carry out any further work.
Symptoms of bad CM
- Faults cleared earlier re-appear.
- Expected fixes are not included in the release.
- Unexpected changes are included.
- Features and functions disappear between releases.
- Variations in code functionality on different environments.
- Wrong code versions delivered.
- Different versions of baseline documentation in use.
- Code delivered does not match release note.
- Can’t find the latest version of the build.
- No idea which user has which version.
- Wrong functionality tested.
- Wrong functionality shipped.
- Etc. Etc.
How do we improve our CM? In order to improve the process we must employ:
- Status Accounting (data collection, analysis, and reporting)
- Configuration Auditing (procedural conformance)
- Post Implementation Reviews (what went well, what can we do better?)
Wise words
CM is control over change.
It can be a full time job and is difficult to do well.
Build Management
This is a very important part of the development process. A build statement should always
accompany each release from development.
The build statement identifies:
- The name and version of the build (unique identifier).
- A comprehensive list of the components that make up the release, including individual
component version numbers.
- Technical information such as component file sizes, to allow simple verification of the
component files.
- Any other specific details of the build that would be useful to the test team, such as
modules excluded for any reason.
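The "simple verification" the build statement enables can be sketched as a comparison between the statement's component list and the files actually delivered. The build-statement format, component names, and sizes below are invented for illustration.

```python
# Hypothetical sketch: verify a delivery against its build statement.

build_statement = {                 # component -> (version, size in bytes)
    "billing.dll": ("2.1", 40_960),
    "reports.dll": ("1.4", 12_288),
}

def verify_release(delivered: dict, statement: dict) -> list:
    """Compare observed file sizes against the build statement.

    `delivered` maps file name -> observed size in bytes; returns a list
    of discrepancies (empty means the delivery matches the statement).
    """
    problems = []
    for name, (_version, expected_size) in statement.items():
        if name not in delivered:
            problems.append(f"missing component: {name}")
        elif delivered[name] != expected_size:
            problems.append(f"size mismatch: {name}")
    return problems

# Example: one component was left out of the drop.
delivery = {"billing.dll": 40_960}
print(verify_release(delivery, build_statement))  # → ['missing component: reports.dll']
```

In practice checksums are a stronger verification than file sizes, but the principle is the same: the release is rejected back to development if the delivery and the statement disagree.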
Release management
Release management handles the transmittal of items between teams or phases of the
development lifecycle.
Establishes and controls the roll out
- When are releases scheduled?
- Is there any overlap?
- What test environments will I need?
- What resource will I need?
- What needs to be in each release?
Release notes
A release note identifies:
- The name and version of the release (unique identifier).
- Installation and back out instructions
- Any environment changes required including database changes.
- Any configuration changes required.
- Any fault fixes contained in that release.
- Any change requests incorporated in that release.
- Details of any known faults.
- Details of the Component and Integration (link) test results.
- Sign off authority for the release from the “owner”.
- Any other information that may be useful to the test or implementation teams.
Change Management
We must distinguish between faults and changes. Faults are the responsibility of the
development team and need to be corrected as a development cost. Changes were either not
specified or incorrectly specified and are a project cost.
Changes need to be managed through a change control process to capture the change
details and business benefits, the impact on the design; development; test; and
implementation teams, costs and timeframes. Once this information has been gathered
informed decisions can be made regarding proceeding with a change, deferring it, or rejecting
it.
Configuration Items
When planning your testing decide on your documentation control processes and naming
conventions.
Documentation Control
Decide the level of control required and define the minimum details that will be included in all
documentation. You may decide that in your documentation you will have the following
configuration details:
- Item name
- Document Author details
- Document Owner details
An initial draft of a document may be called draft 0.1 with subsequent drafts called 0.2, 0.3
etc. When the document is issued it may be called version 1.0 and when re-issued it may be
version 1.1, 1.2 etc.
Configuration Control
- Code: Decide on your naming convention for releases of code, numerical designations
etc.
- Environments: Define a naming convention for your test environments. Decide how you
are going to record the environment hardware, software, configurations, and the code
version for each environment
Code
The decision could be made that:
- A full release of code might be called version 1.0, 2.0 etc.
- A fault fix release or minor enhancement might be 1.1, 1.2 etc.
- A patch to existing functionality might be 1.0.A, 1.0.B, etc.
Every time we make a change to the code the version must change in order that we can
identify the version and will know what has changed.
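The numbering convention above can be captured in a few small helpers. This is a sketch of the example scheme from the text (full release, fix release, lettered patch), not a universal standard; real projects pick their own convention.

```python
# Helpers for the illustrative version-numbering scheme described above.

def next_full_release(version: str) -> str:
    """A full release bumps the major number: 1.2 -> 2.0."""
    major = int(version.split(".")[0])
    return f"{major + 1}.0"

def next_fix_release(version: str) -> str:
    """A fault fix or minor enhancement bumps the minor number: 1.1 -> 1.2."""
    major, minor = version.split(".")[:2]
    return f"{major}.{int(minor) + 1}"

def next_patch(version: str, last_patch=None) -> str:
    """A patch appends a letter to the base version: 1.0 -> 1.0.A -> 1.0.B."""
    base = ".".join(version.split(".")[:2])
    letter = "A" if last_patch is None else chr(ord(last_patch[-1]) + 1)
    return f"{base}.{letter}"

print(next_full_release("1.2"))   # → 2.0
print(next_patch("1.0", "1.0.A"))  # → 1.0.B
```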
Environments
For our environments we may produce the following table to help us with our Environment
CM.
Environment: System Test
Name: SYS01
Owner: T Man
Hardware: Server xyz, Console, Workstation, Printer
Software: Oracle xx, NT abc, Win2000
Config: 123, standard
Code: V1.2
It is essential that we maintain control of these details in order to give us controlled repeatable
test results.
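One way to keep control of these details is to record the expected environment as data and check the live environment against it before a test run. The record fields and values below mirror the example table and are illustrative only.

```python
# Sketch: detect environment drift against a recorded CM baseline.

EXPECTED = {
    "name": "SYS01",
    "owner": "T Man",
    "software": {"Oracle xx", "NT abc", "Win2000"},
    "code_version": "V1.2",
}

def environment_drift(observed: dict) -> list:
    """Return the keys where the live environment differs from the record."""
    return [key for key, value in EXPECTED.items() if observed.get(key) != value]

# Example: someone upgraded the code without updating the CM record.
observed = dict(EXPECTED, code_version="V1.3")
print(environment_drift(observed))  # → ['code_version']
```

Any drift found this way invalidates the assumption that test results are repeatable, so it must be resolved (or the record updated) before testing continues.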
We need to ensure we have the contacts in place between the test team, the other project
teams and project roles, and the business representatives.
We need to ensure that the project has the correct processes and controls in place such as:
- Release Management
- Build Management
- Configuration Management
- Incident Management
We need to plan our approach, document it, and get it agreed by all key stakeholders.
Within systems of systems, a testing level must be considered at that level of detail and at
higher levels of integration. For example “system testing level” for one element can be
considered as “component testing level” for a higher level component. Usually each individual
system (within a system of systems) will go through each level of testing, and then be
integrated into a system of systems with the associated extra testing required.
Compliance to Regulations
Safety critical systems are frequently subject to governmental, international or sector specific
regulations or standards. Those may apply to the development process and organizational
structure, or to the product being developed. To demonstrate compliance of the organizational
structure and of the development process, audits and organizational charts may suffice. To
demonstrate compliance to the specific regulations of the developed system (product), it is
necessary to show that each of the requirements in these regulations has been covered
adequately. In these cases, full traceability from requirement to evidence is necessary to
demonstrate compliance. This impacts management, development lifecycle, testing activities
and qualification /certification (by a recognized authority) throughout the development
process.
4.1 Introduction
Although executing tests is important, we also need a plan of action and a report on the
outcome of testing. Project and test plans should include time to be spent on planning the
tests, designing test cases, preparing for execution and evaluating status. The idea of a
fundamental test process for all levels of test has developed over the years. Whatever the
level of testing, we see the same type of main activities happening, although there may be a
different amount of formality at the different levels; for example, in most organizations
component testing is carried out less formally than system testing, with a less documented test
process. The decision about the level of formality will depend on the system
and software context and the level of risk associated with the software.
So we can divide the activities within the fundamental test process into the following basic
steps:
o Test planning, monitoring and control;
o Test analysis;
o Test design;
o Test implementation;
o Test execution;
o Evaluating exit criteria and reporting;
o Test closure.
These activities are logically sequential, but, in a particular project, may overlap, take place
concurrently and even be repeated. This process is particularly used for dynamic testing, but
the main headings of the process can be applied to reviews as well. For example, we need to
plan and prepare for reviews, carry out the reviews, and evaluate the outcomes of the
reviews. For some reviews, such as inspections, we will have exit criteria and will go through
closure activities. However, the detail and naming of the activities will be different for static
testing.
Test planning has the following major tasks, given approximately in order, which help us build
a test plan:
o Determine the scope and risks, and identifying the objectives of testing; we consider what
software, components, systems or other products are in scope for testing, the business,
Management of any activity does not stop with planning it. We need to control and measure
progress against the plan. So, test control is an ongoing activity. We need to compare actual
progress against the planned progress, and report to the project manager and customer on
the current status of testing, including any changes or deviations from the plan. We’ll need to
take actions where necessary to meet the objectives of the project. Such actions may entail
changing our original plan, which often happens. When different groups perform different
review and test activities within the project, the planning and control needs to happen within
each of those groups but also across the groups to coordinate between them, allowing
smooth hand-offs between each stage of testing. Test planning takes into account the
feedback from monitoring and control activities which take place throughout the project.
Test design has the following major tasks, in approximately the following order:
o Design the tests using techniques to help select representative tests that relate to
particular aspects of the software which carry risks or which are of particular interest,
based on the test conditions and going into more detail. For example, our driving
instructor might look at her list of test conditions and decide that junctions need to include
T junctions, cross roads and so on. In testing, we’ll define the test case and test
procedures.
o Design the test environment set-up and identify any required infrastructure and tools; this
includes testing tools and support tools such as spreadsheets, word processors, project
planning tools, and non-IT tools and equipment – everything we need to carry out our
work.
Test implementation has the following major tasks, in approximately the following order:
o Develop and prioritize our test cases, using the techniques, and create test data for those
tests. We will also write instructions for carrying out the tests (test procedures). For the
driving examiner this might mean changing the test condition “junctions” to “take the route
down Mayfield Road to the junction with Summer Road and ask the driver to turn left into
Summer Road and then right into Green Road, expecting that the driver checks mirrors,
signals and maneuvers correctly, while remaining aware of other road users.” We may
need to automate some tests using test harnesses and automated test scripts.
o Create test suites from the test cases for efficient test execution. A test suite is a logical
collection of test cases which naturally work together. Test suites often share data and a
common high-level set of objectives. We’ll also set up a test execution schedule.
o Implement and verify the environment; we make sure the test environment has been set
up correctly, possibly even running specific tests on it.
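Grouping test cases into suites by a shared objective, as described above, can be sketched very simply. The case identifiers and objective tags below are invented for illustration.

```python
# Sketch: build test suites from test cases that share an objective.
from collections import defaultdict

test_cases = [
    {"id": "TC-01", "objective": "login"},
    {"id": "TC-02", "objective": "payments"},
    {"id": "TC-03", "objective": "login"},
]

def build_suites(cases):
    """Group cases sharing a high-level objective into one suite each."""
    suites = defaultdict(list)
    for case in cases:
        suites[case["objective"]].append(case["id"])
    return dict(suites)

suites = build_suites(test_cases)
print(suites)  # → {'login': ['TC-01', 'TC-03'], 'payments': ['TC-02']}
```

A test execution schedule then orders these suites, typically running the highest-risk suite first and keeping suites that share data or environment set-up adjacent for efficiency.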
In order to identify and produce the required tests, execute them, and manage the process we
produce a detailed plan set. These plans vary in strategic aim, use, level of detail, and
content.
Test Policy statements will reflect the nature of the business, the risks associated with the
products and market place, and the business attitude regarding the required quality of
products and deliverables. The test policy will dictate the overall approach to testing and the
strategies employed.
The test policy is the first document that is produced in the test documentation tree. The test
policy is a short, concise, high-level document usually created and owned by the
organization’s IT department (or equivalent). This document will define the organizational
approach to testing and the aims and objectives of the testing.
A test policy provides the direction for the lower level test documentation, helping to keep the
testing focused on the test objectives stated. During the early test planning stages of the
project, when the lower level documentation is being produced (strategy, test plans etc.), the
policy is the guiding factor that provides the aims, objectives, and targets that the testing will
be expected to achieve. It provides:
- The philosophy for testing within the organization
- The definitions of what testing means
- A guide to what will need to be covered in the lower level documents
- The framework under which the testing will be carried out.
The test policy will in principle cover all testing activity within the organization including new
developments, maintenance activities, or third-party developed or bought-in software, although the
way in which these are covered may be dealt with in separate sections of the document itself.
The document must address all test activities and should be agreed by all parties involved in
the development process to promote understanding and a shared vision of the testing aims
and objectives.
Definition of testing
This is the definitive statement detailing what the organization understands by the term
testing. This will state what testing will be required and what the testing is meant to achieve.
- Testing will confirm that the delivered software solves a business problem
(Acceptance).
- Testing will confirm that the software functions as detailed in the product design
documentation (Functional).
- Testing will confirm that no existing systems, system processes, functions, or facilities
are impacted by any enhancement, or changes to an existing system (Regression).
- Testing will confirm that the functionality of any networked system or component will
not be impacted as a result of the introduction of a new component/system and or any
changes to an existing system (Integration).
- Testing will consider and address all non-functional attributes of each software
development/release.
The test strategy details the overall testing approach and what will be done in order to satisfy
the criteria detailed in the test policy. This document is a strategic document and as such it
must complement the other IT strategic working practices and development procedures.
Testing is a part of the development process and must be fully integrated with the other
project teams in order to be successful. Test strategies cannot be developed in isolation; they
must have buy-in from the other project areas and work in conjunction with the other teams.
In order to meet the company’s test policy on quality deliverables within tight timescales, the
company has adopted the extreme programming lightweight development methodology (XP).
All WEB development projects will be developed using the XP methodology.
- The independent test function will provide the XP coach to the development teams for
each project.
- All project baseline documentation and code will be subject to appropriate review and
sign off (project budget up to 15%)
- The independent test function will conduct acceptance testing against the business
stories specified by the customer.
- The independent test function will specify, create and execute the acceptance test
cases in conjunction with the customer.
- Automated test tools will be considered at the start of (and throughout) each project
and will be used wherever advantage can be identified.
- etc.
They can also be helpful in avoiding the temptation to promote a product to the next stage
before it is ready (quality development and testing takes a finite time, to promote a product to
the next stage before it is ready, in order to try and gain time, is false economics and is likely
to result in a greater delay at a later stage).
Approach to testing:
Each phase will have a defined approach to testing along with a rationale for its selection:
- Top-down
- Bottom-up
- Priority driven
Recycling these items between test levels can save on resource and will prevent repetition
and duplication of test preparation activities across the levels of testing. However, beware
recycling the test data and scripts between the test levels as any errors in the data or the
scripts that have not been discovered are carried forward. Subsequent test levels may then
Incident management:
The strategy dictates how incidents will be managed, what the process will be, who is
responsible and if any tools will be used.
The test strategy for an organization must reflect the business arena and physical attributes of
the organization itself in order to be of maximum use. The company test strategy is not
always a single document, but can be made up of a number of other strategy documents
comprising the complete strategy document set. This document set may be made up of such
documents as:
- Corporate test strategy
- Specific site/location test strategy
- Program test strategy (for a series of projects)
- Project test strategies
And, in some instances, may also be presented as part of the test plans. Different strategies
may be required for testing different applications. The strategy sets the framework for the
testing therefore it is not surprising that a different strategy may be required for testing a
safety critical application compared to a non-critical web-based application.
[Figure: the project plan, with its subordinate design plan, development plan, test plan and
implementation plan]
Master test plans are specific to one particular project and are closely related to the
associated project plan for that development. The project plan will detail the project critical
path showing where the testing elements of the project are on the critical path.
Some organizations will have all of the above documents; some will amalgamate or split the
documentation set. What is important is that all the required information is covered.
Risk is all about prioritization of the tests we can run within the limitations of time, cost
and quality goals. One must therefore ask whether all testing is, or should be, 'risk based'.
We must be aware that risk based testing is applicable to both new products and
maintenance packages. The risk of a new product failing may have less impact than that of a
maintenance activity. For example: what is the risk of the package of change failing vs. the
package of change regressing the current live system? Maintenance fixes come in two forms,
planned and emergency. Are you able to establish the risks associated with emergency fixes,
or should we just assume that if it were not a major problem then it would not be an
emergency? The risk of putting the fix live may therefore have no more impact than the
current failure. In other words, perhaps we should test an emergency fix with a view to
ensuring it does not make the current system any worse.
Just because something can go wrong does not mean that it will. However, experience shows
us that not only will it go wrong; it will probably be the first thing that goes wrong, and in a
way we had never considered. This is often known as 'Murphy's Law'.
Objectives
The key objective is to provide the student with sufficient information to be able to introduce
the concepts and basic principles of risk based testing and risk management into their
organization.
What is Risk?
Perhaps risk is a value only definable subjectively, and even that would vary depending on the
circumstance or the perspective. Risk is what is taken when balancing the likelihood of an
event against the impact if it occurs. In effect, what are we willing to leave to chance? When we
cross the road we must consider the likelihood of falling over half way across against the
distance and speed of the traffic. If the chance of falling over is low, we will risk crossing when
the distance is short and the speed is high. If the likelihood of falling is high, the distance must
be greater and the speed less, we would then have a chance to get up again without suffering
any impact [sic]. The hard part is judging how many times you will fall vs. the speed and
distance of the car!! In most cases the level of risk we [the business] are willing to take is
dependent on the amount of time we have and available budget. The issue with testing and
risk is that the testing activity is often squeezed between a late development activity and a
fixed time to market.
The more of both that is available, the more testing can be done and, either directly or
indirectly, the more risk is reduced. In order to quantify the level of risk in a system or
subsystem, analysis of the situation is first required.
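That quantification is commonly sketched as an exposure product: likelihood multiplied by impact. The following minimal Python sketch is hypothetical; the subsystem names and the 1-to-5 scales are illustrative assumptions, not from the text.

```python
# A minimal sketch of quantifying risk exposure as likelihood x impact,
# then ranking subsystems so the highest-exposure areas are tested first.

def risk_exposure(likelihood, impact):
    """Both factors on an assumed 1 (low) to 5 (high) scale."""
    return likelihood * impact

# Illustrative subsystems and scores (assumptions, not real data).
subsystems = {
    "payment processing": risk_exposure(3, 5),
    "report archiving":   risk_exposure(2, 2),
    "login":              risk_exposure(4, 4),
}

# Highest exposure first: these areas get testing time and resource first.
priority_order = sorted(subsystems, key=subsystems.get, reverse=True)
```

With the scores above, login (16) outranks payment processing (15), which outranks report archiving (4).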
Product Risks
The intent is to consider not just the risk of the product itself not working but the impact it
could have on existing systems. Implementations of web sites are a prime example. Web
sites are designed to have an impact on the volume of business a company does. This in turn
has a direct impact on the capacity of the business-as-usual [BAU] or legacy systems.
The web site itself may be risk free, in that it operates perfectly well at the functional and
non-functional level. The risk consideration is whether the legacy systems will stand up to the
changed operational profile. To place BAU at risk is a recipe for failure of not just your
systems but probably your business as well.
Project Risks
The integrity of the system design, the project management plan and the business case for a
system do not guarantee success. They are a good start, but without test effort to support the
project the chances of success diminish significantly.
The project is reliant on a number of key factors within the test team, some of which are
equally applicable to the development and design effort:
Ø The ability of the staff involved
Ø The availability of the right tools
Ø A suitable environment upon which to test the system
Ø The support activities such as fault fixing and delivery
Ø The ease with which the system can be maintained.
Again, web sites can be a prime example. The speed of technology change means that few
people have much, if any, relevant experience of a specific architecture or the components
that make it up. It is therefore much more likely that a significant number of errors will be
made, and the risk of the project not being a success is increased as a result.
You may well have a very full risk register, but when did YOU last look at it?
However, by definition we can be certain that strong change control processes and the
minimization of change and scope creep will reduce the level of risk.
Risk Analysis
Identification of risks is not sufficient. Risks must be analyzed to establish the type of threat
they pose and to establish what, if anything, can be done about them. In many cases just
because a risk has been identified does not mean that a test or series of tests can or should
be devised to establish the level of that risk.
For example: for many, risk is subjective and often perceived rather than measurable.
Consider the risk of a meteorite hitting you on the head whilst you line up the winning putt of
the British Open Golf championship.
Firstly, to many there is no risk at all, because they will never be in that position.
Secondly, is it possible to simulate a test? Thirdly, would we want to? Fourthly, how big would
the meteorite need to be? Meteorites range from specks of dust up to any size we could
imagine. The outcome of being hit by a speck of astral dust would probably be that you would
not notice; in fact we are being hit by them all the time. Could you therefore blame the miss
on a speck of dust that you did not notice? If the meteorite were a mile across, would anyone
care about the golf?
When analyzing the risk perhaps we should consider separate classes dependent on the
likely ‘public view’ of the error. Could I suggest we have e-business, front office, back office
and batch ‘classes’ of risk?
Risk Identification
Risk identification is an exercise driven by priorities within the system or project under test. It
is easy to identify risk; the art is to identify risks that matter.
Risk Identification is often mistakenly considered to be a ‘one off’ task that takes place during
the early stages of the project. In part this is true; however, we must continue the identification
exercise throughout the project as any change may uncover another risk.
The users are ultimately responsible for the risk. They will have been involved in the
identification stage, the prioritization, the analysis and mitigation exercise, as indeed will most
of the key stakeholders. The only difference being that the users will have to work with the
system in a live environment.
The final decision to implement the system and move from the comparative safety of the test
environment into the live environment is not one to be taken lightly. Often a large amount of
money will have been earmarked for a marketing budget and new products to enhance or
replace existing ones will define the company’s position in the market for the next few years.
Whilst the system is in test there is a chance that any remaining problems will be found and
resolved. The onus at this point is with the test team. However, once the system has gone live
the spotlight is on those who made the decision to ‘go live’.
The tester’s role is to ensure that the information presented to the users at any point during
the test phases is objective, accurate and has focused on the ‘right’ aspects of the system.
Does the product deliver the required service, where 'service' is defined as the activity of the
product?
Ø Commercial
o Process the product
o Support customers
Ø Safety Critical
o Failure rate
o Redundancy
Ø Embedded Systems
o Real Time transactions
o Failover mechanism
From this the impact on any individual or group will differ depending on how often the function
is used and how much of the function is dependent on that part of the system.
Business/User View:
Ø Will the new system provide the business benefits identified in the system proposal
document?
Ø I know it is unlikely to happen, but what if it does?
Ø What is the impact on the current business profile?
Testers View:
Ø What is the priority of each of the risks?
Ø How much test coverage is needed to identify if the risk is still there?
Ø What volume of faults will be found and how fast will they be turned around?
Developers View:
Ø How complicated is the processing?
Ø Do the skills and toolsets exist to support this product?
Ø Are the skills and tools available to the team?
Ø What are the delivery timescales?
Reviews
Ø Risks tied to test design techniques
Ø Less intuition or 'gut feel', more business reasoning
Ø Applicable to different test levels
Ø Supports customers in making a choice
Ø Provides the mechanism for communication between key stakeholders
Ø Makes the test process more manageable
Ø Better test coverage in the right areas – targeted testing
Ø No risk – no test
The project team members are best placed to assess the likelihood of an error occurring:
Ø Designers
Ø Programmers
Ø Project manager
Ø QA staff
Ø Test manager
Ø Test team
As an example:
A new web interface is introduced that will be required to provide 100,000 customers with
account information within 3 seconds of the enquiry.
There is a risk that this requirement will not be met: the communications channel may fail,
other system 'traffic' may impede this function, or the application layer may provide the
wrong information.
The risk of the communications channel not being of a suitable specification should have
been mitigated during the architecture design stage of the system.
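A requirement like the 3-second one in the example above can be checked objectively against measured response times. The sketch below is hypothetical: the 95th-percentile acceptance rule and the sample values are assumptions, since the text does not say how the 3 seconds would be measured.

```python
# A hedged sketch of evaluating a 3-second response-time requirement
# against sampled measurements, using an assumed percentile rule.

def meets_sla(response_times_s, limit_s=3.0, percentile=95):
    ordered = sorted(response_times_s)
    # Index of the value below which `percentile` percent of samples fall.
    idx = max(0, int(len(ordered) * percentile / 100) - 1)
    return ordered[idx] <= limit_s

# Illustrative measurements in seconds (assumed data, not real results).
samples = [1.2, 2.8, 0.9, 3.4, 2.1, 1.7, 2.9, 2.5, 1.1, 2.2]
result = meets_sla(samples)
```

With these samples the 95th-percentile value is within the limit, so the requirement would be judged met; the actual acceptance rule might instead demand that every single response stays under 3 seconds.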
However, it is essential that coverage is monitored and managed as testing progresses. It
is simple enough to agree at the outset what the perceived level of risk may be. It is much
harder to ensure that, as the risks change, the testing to mitigate them changes as well. It
must be borne in mind that coverage can be reduced as well as increased.
Even if we had used any of the methods below, would the risk to the aircraft and the
passengers have been identified?
These methods are dependent on the maturity of the systems development process, the skill
and experience of the staff, the available time and when in the cycle risk is considered.
The interviewer should ensure that the interviewee has received written objectives of the
review in advance.
The major disadvantage of these interviews is the lack of cross-fertilization of ideas. The other
rules are pretty much the same as for a brainstorm in that the objective is to generate a
quantity of data rather than analyzed output.
Independent Assessment:
As the title suggests the use of external experts can be used in place of or alongside internal
resource. Independence does not mean that those involved must come from outside the
company. However, the key is that whoever is involved is experienced in the identification of
risk, but has no vested interest in the viability, delivery or commerciality of the product.
Independence is a state of mind not necessarily a physical state. The assessment must be
held within a series of guidelines and those taking part must first have been given sufficient
information about the product, its priorities and use to be able to focus on at least the major
areas of concern.
It may be that a similar product is available for comparison in which case some basic
understanding of its operation should also be provided to the assessors.
Risk Templates:
Once a risk has been identified an amount of information must be recorded. Risk templates
should be made available to all team members. It is not expected that one person or indeed
one team will provide all information. The content of a risk template is specific to the type of
product being produced, the market sector, the safety criticality or the level of compliance
required. It is therefore impossible to provide a generic definitive template.
Lessons Learned:
Relies on recording issues as they occur in other aspects of the system or project under test.
Typically the risks identified in Stage #1 of a project can be reviewed at Stage #2 etc.
Experience is gained at two levels, individual and company. Individuals are subject to direct
experience and gain insights and learning via reflection, at company level the experiences of
a group are more diverse and require a more disciplined approach to review and analysis.
Direct recording [metrics] of data related to risks such as the number of faults, the system
down time, actual cost to repair and estimated business cost would provide objective
examples. Analysis of these risks will provide input to the risk identification activities of the
future.
It should be remembered that the risk identification process itself should also be reviewed
periodically, at least annually. Lessons learned should focus on establishing areas where
certain types of risk occurred that were not previously identified or considered.
It is not always necessary to learn just from your own experience: you can always learn from
someone else. Similarly, companies can learn from each other. In a
commercial environment this may not be the result of a direct relationship, however, public
bodies such as regional education and health authorities, police forces and other government
agencies could set up formal Risk Review Bodies.
Risk Workshops:
Organized events managed by a facilitator that can be carried out as 'one-offs' or as a series
of progressive meetings. The objective is to take a view that combines the freestyle of a
brainstorm with a more structured agenda.
Using a series of workshops with differing groups of resources, and therefore perspectives,
the identified risks will evolve over a series of events into a complete picture rather than a
single point of view. This type of event promotes a project-wide series of priorities, but has
the effect of watering down or compromising in areas where agreement cannot be reached.
Risk Poker:
Another approach for risk workshops is to use Risk Poker. Risk Poker is based on
Planning Poker and was invented by Improve. All stakeholders can give their estimated risks
in one meeting based on user stories, as used in Agile development.
In the meeting each stakeholder has a deck of Risk Poker cards. These cards are
comparable to Planning Poker cards, but in addition some have a colored dot. The risk poker
cards have the following colored dots: light blue, green, yellow, orange, red, purple. For every
user story each stakeholder selects a card indicating the technical risk. If a stakeholder is
not involved in technical issues, he will not give an indication. The color indicates the risk
involved, where light blue is the lowest risk and purple the highest. One can also decide to
use only 3 or 4 colors for convenience.
Of all the business risk values given, the highest and lowest are discussed. When they differ
a lot, the stakeholders who gave these values must explain their reasoning. The purpose of
the discussion is that everybody knows and understands the different viewpoints and the
group ultimately comes to a consensus risk value (i.e. color). If no such value is reached, the
team can decide to take the highest risk value, or simply the average.
Once the business risks are discussed, the same is done with the technical risk values for the
same user story.
When both risk types for the user story have been agreed upon, the next user story is
evaluated using the Risk Poker cards, until all user stories have been evaluated.
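The tally-and-resolve step of Risk Poker described above can be sketched as follows. The numeric mapping of colors to ordinal values is an assumption; only the color order (light blue lowest, purple highest) and the highest-or-average fallback come from the text.

```python
# A minimal sketch of resolving one Risk Poker round when no consensus
# is reached: take either the highest card or the average, as the text notes.

COLORS = ["light blue", "green", "yellow", "orange", "red", "purple"]

def resolve_risk(cards, rule="highest"):
    """cards: colors played by stakeholders for one user story."""
    values = [COLORS.index(c) for c in cards]   # ordinal mapping (assumed)
    if rule == "highest":
        agreed = max(values)
    else:  # "average", rounded to the nearest card
        agreed = round(sum(values) / len(values))
    return COLORS[agreed]

story_cards = ["green", "yellow", "red"]   # illustrative round
```

For the example round, the "highest" rule settles on red, while the "average" rule settles on yellow; which fallback to use is a team decision, per the text.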
Brainstorming:
Is driven via a series of meetings where communication is the key. No one person in a large
project will have the complete picture of how a system works. Based on these 2 main
premises the level of mutual understanding will increase and it promotes a ‘no blame’ culture.
Multi-disciplinary teams with a wide range of specialist skills are essential to the integrity of
the process, where no one aspect of the project is in the majority. The objective of the team is
to provide a wide range of perspective and perceptions on all aspects of the system and its
operation.
Brainstorming sessions should last no more than 2 hours and focus on the identification of
risks; the analysis will follow later to rationalize and consolidate the information.
Like all formal activities a facilitator for the meetings will be required, whose role it is to ensure
a high level agenda is available and followed and to ensure the meeting does not get stuck on
details.
The output from a brainstorm is a long list of issues that will be later rationalized into data that
can be acted upon by appropriate members of the management team.
For example, headings could possibly include:
Ø Compliance to specific standards and regulations
Ø Possible internal and external threats to hardware, software, data or human resource
Ø Objectives and acceptance criteria, such as performance criteria, virus protection,
usability, reliability measures
Ø Long term benefits e.g. staff level reduction
Checklists are a more complete and detailed listing of specific risks that must be addressed.
For example:
Fault Prediction:
“If a guy tells me that the probability of failure is 1 in 10^5, I know he is full of crap.” –
Richard P. Feynman, Nobel Laureate, commenting on the NASA Challenger disaster.
In order to establish the likelihood of an error in the components of the system, and perhaps
the type of error we can expect to encounter, we should consider some predictive techniques.
Fault trees provide both a quantitative and a qualitative analysis of certain failure situations of
a system and can be used for hardware or software components, or indeed a combination of
both. Predictive techniques and models [e.g. Piwowarski et al. and the Rivers-Vouk model]
provide historical statistical analysis of fault trends that can be used as a guide to the likely
number and type of errors in an identified system type. [see further reading]
Self-Assessment:
This method is pretty much self-explanatory. Using the outcome from the lessons-learned
technique, the accuracy of the risk assessment process from a single previous stage, or a
series of them, can be assessed. The limitation of this approach is that you are limited by
your own knowledge base: if you were off target last time, you may choose a different
approach this time that could be just as far off, but in another direction.
Risk reduction can be achieved by reducing either the probability or the impact [severity].
However, predicting the impact of an issue in one part of a system presupposes an
understanding of the whole system; e.g. a change in volume in one area affects the
performance of another, which may cause a timeout failure in yet another.
Note:
Risks may increase or decrease in priority as changes to system states occur.
E.g. an embedded safety system is at little risk of failure as long as its two failover systems
are operable; as the backups fail, the risk to the main system increases. However, the
likelihood of all 3 systems failing together is much lower than that of a single system failure.
The risk analysis therefore requires that both impact and likelihood be taken into account.
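The failover note above can be made concrete with a small calculation: if each of three independent systems fails with probability p in a given period, the chance of all three failing together is p cubed. Independence is an assumption here; common-cause failures would raise the real figure.

```python
# A short sketch of combined failover failure probability under an
# assumed independence model. The 1% per-system figure is illustrative.

def all_fail_probability(p, redundancy=3):
    # Probability that every one of `redundancy` independent systems fails.
    return p ** redundancy

single = 0.01                              # assumed per-system failure probability
combined = all_fail_probability(single)    # three orders of magnitude lower
```

The combined figure is far lower than the single-system one, which is exactly why risk priority shifts as each backup drops out.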
If so then the tests devised must focus on achieving a certain level of quality. Once that
agreed level has been achieved then the risk will be deemed to have been mitigated.
Note:
Risk analysis requires consideration of system states other than the initial state.
As situations occur other risks may be invoked or nullified.
For example if a fighter plane has 2 engines and both fail - at that point the ejector seat
mechanism is invoked. The failure of the ejector seat mechanism is not a risk at normal state
but is Critical at the engine failure state.
It can be seen that a valuable asset in the analysis of risk is the State Transition test case
design technique.
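A state transition model for the fighter-plane example might be sketched as below. The state and event names are illustrative; the point is that the ejector-seat risk only becomes critical (and worth testing) once the engine-failure state has been reached.

```python
# A sketch of the state transition idea: the ejector seat is only invoked
# from the both-engines-failed state, so its risk depends on system state.

TRANSITIONS = {
    ("normal", "engine_failure"): "one_engine",
    ("one_engine", "engine_failure"): "both_engines_failed",
    ("both_engines_failed", "eject"): "ejected",
}

def next_state(state, event):
    # Events with no transition defined leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "normal"
for event in ["engine_failure", "eject", "engine_failure", "eject"]:
    state = next_state(state, event)
```

Note that the "eject" event is ignored in the normal state, mirroring the text: ejector-seat failure is not a risk at normal state but is critical at the engine-failure state.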
How does one type of risk impact different groups of users? If we understand this, then we
may need to run different tests or provide different types or levels of prevention. Much like
fault analysis, it is essential that over-classification is not used as a method of ensuring a
personal favorite is considered above all others.
4 levels of category are usually favored and the terms used can vary:
Ø Critical
Ø High
Ø Medium
Ø Low
There is no point in having a ‘no risk’ category, if it is no risk it won’t be on the risk register in
the first place ☺
Perhaps we should consider the issue of risk compensation as a method of mitigating risk?
A little drastic, I think. However, the problem is not so simple. In order to mitigate the risk we
make crash helmets compulsory, and perhaps all-in-one leather suits, knee and elbow pads.
The problem has not gone away. Why? Because with all this new padding the riders will feel
safer and will now take more, or different, risks. The death rate stays as it is and we have to
think of new mitigating actions. We may have solved one risk, but we will have introduced
others!
At the end of each session the test manager holds a debriefing meeting with the team. During
debriefing the manager reviews the session sheets, improves the charters, gets feedback
from the testers and estimates and plans further sessions.
The agenda for the debriefing session is abbreviated as PROOF, standing for the following:
• Past: What happened during the session?
• Results: What was achieved during the session?
• Outlook: What still needs to be done?
• Obstacles: What got in the way of good testing?
• Feelings: How does the tester feel about all this?
The following issues are associated with the test management of safety-critical systems:
• Industry-specific (domain) standards normally apply (e.g. transport industry, medical
industry, and military). These may apply to the development process and organizational
structure, or to the product being developed.
Failure to plan for non-functional tests can put the success of an application at considerable
risk. Many types of non-functional tests are, however, associated with high costs, which must
be balanced against the risks.
There are many different types of non-functional tests, not all of which may be appropriate to
a given application.
The following factors can influence the planning and execution of non-functional tests:
• Stakeholder requirements
• Required tooling
• Required hardware
• Organizational factors
• Communications
• Data security
Stakeholder Requirements
Non-functional requirements are often poorly specified or even non-existent. At the planning
stage, testers must be able to obtain expectation levels from affected stakeholders and
evaluate the risks that these represent.
It is advisable to obtain multiple viewpoints when capturing requirements. Requirements must
be elicited from stakeholders such as customers, users, operations staff and maintenance
staff; otherwise some requirements are likely to be missed.
The following essentials need to be considered to improve the testability of non-functional
requirements:
• Requirements are read more often than they are written. Investing effort in specifying
testable requirements is almost always cost-effective. Use simple language, consistently
and concisely (i.e. use language defined in the project data dictionary). In particular, care
is to be taken in the use of words such as “shall” (i.e. mandatory), “should” (i.e. desirable)
and “must” (best avoided or used as a synonym for ”shall”).
• Readers of requirements come from diverse backgrounds.
• Requirements must be written clearly and concisely to avoid multiple interpretations. A
standard format for each requirement should be used.
• Specify requirements quantitatively where possible. Decide on the appropriate metric to
express an attribute (e.g. performance measured in milliseconds) and specify a
bandwidth within which results may be evaluated as accepted or rejected. For certain
non-functional attributes (e.g. usability) this may not be easy.
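The "bandwidth" idea in the last bullet can be expressed as a target value plus a tolerance, so a measured result is objectively accepted or rejected. A minimal sketch, assuming an illustrative 200 ms target and 10% tolerance (neither figure is from the text):

```python
# A hedged sketch of a quantitative acceptance band for a performance
# requirement: a target in milliseconds plus an assumed tolerance.

def within_band(measured_ms, target_ms, tolerance=0.10):
    upper = target_ms * (1 + tolerance)
    return measured_ms <= upper

# Requirement: respond in 200 ms; up to 220 ms still accepted.
ok = within_band(215, 200)        # inside the band
too_slow = within_band(230, 200)  # outside the band
```

Expressing the band numerically is what makes the requirement testable; for attributes such as usability, as the text notes, an equivalent metric is harder to find.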
Required Tooling
Commercial tools or simulators are particularly relevant for performance, efficiency and some
security tests. Test planning should include an estimate of the costs and timescales involved.
Required Hardware
Many non-functional tests require a production-like test environment in order to provide
realistic measures. Depending on the size and complexity of the system under test, this can
have a significant impact on the planning and funding of the tests. The cost of executing non-
functional tests may be so high that only a limited amount of time is available for test
execution.
For example, verifying the scalability requirements of a much-visited internet site may require
the simulation of hundreds of thousands of virtual users. This may have a significant influence
on hardware and tooling costs. Since these costs are typically minimized by renting (e.g. “top-
up”) licenses for performance tools, the available time for such tests is limited.
Performing usability tests may require the setting up of dedicated labs or conducting
widespread questionnaires. These tests are typically performed only once in a development
lifecycle.
Many other types of non-functional tests (e.g. security tests, performance tests) require a
production-like environment for execution. Since the cost of such environments may be high,
using the production environment itself may be the only practical possibility. The timing of
such test executions must be planned carefully and it is quite likely that such tests can only be
executed at specific times (e.g. night-time).
Computers and communication bandwidth should be planned for when efficiency-related tests
(e.g. performance, load) are to be performed. Needs depend primarily on the number of
virtual users to be simulated and the amount of network traffic they are likely to generate.
Failure to account for this may result in unrepresentative performance measurements being
taken.
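A back-of-the-envelope sizing calculation for the bandwidth planning described above might look like the following. All figures (virtual users, request rate, payload size) are illustrative assumptions, not measurements.

```python
# A rough sketch of bandwidth needs driven by virtual users and the
# traffic each generates, as described in the text.

def required_bandwidth_mbps(virtual_users, requests_per_user_s, payload_kb):
    # KB/s of traffic converted to bits/s, then to megabits/s.
    bits_per_second = virtual_users * requests_per_user_s * payload_kb * 1024 * 8
    return bits_per_second / 1_000_000

# Assumed load profile: 10,000 virtual users, 0.5 requests/s each,
# 20 KB average payload.
needed = required_bandwidth_mbps(10_000, 0.5, 20)
```

Under these assumptions the test environment would need roughly 800 Mbps of sustained bandwidth; under-provisioning here is exactly how unrepresentative performance measurements arise.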
Organizational Considerations
Non-functional tests may involve measuring the behavior of several components in a
complete system (e.g. servers, databases, networks). If these components are distributed
across a number of different sites and organizations, the effort required to plan and co-
ordinate the tests may be significant. For example, certain software components may only be
available for system testing at particular times of day or year, or organizations may only offer
support for testing for a limited number of days. Failing to confirm that system components
and staff from other organizations are available “on call” for testing purposes may result in
severe disruption to the scheduled tests.
Communications Considerations
The ability to specify and run particular types of non-functional tests (in particular efficiency
tests) may depend on an ability to modify specific communications protocols for test
purposes. Care should be taken at the planning stage to ensure that this is possible (e.g. that
tools provide the required compatibility).
Common across all such test efforts is the need for clear channels of communication and
well-defined expectations for missions, tasks, and deliverables. The project team must rely
less on informal communication channels like hallway conversations and colleagues spending
social time together. Location, time-zone, cultural and language differences make these
issues even more critical. Also common across all such test efforts is the need for alignment
of methodologies. If two test groups use different methodologies or the test group uses a
different methodology than development or project management, that will result in significant
problems, especially during test execution.
For distributed testing, the division of the test work across the multiple locations must be
explicit and intelligently decided. Without such guidance, the most competent group may not
do the test work they are highly qualified for. Furthermore, the test work as a whole will suffer
from gaps (which increase residual quality risk on delivery) and overlap (which reduces
efficiency).
Finally, for all such test efforts, it is critical that the entire project team develop and maintain
trust that each of the test team(s) will carry out their roles properly in spite of organizational,
cultural, language, and geographical boundaries. Lack of trust leads to inefficiencies and
delays associated with verifying activities, apportioning blame for problems, and playing
organizational politics.
7.1 Introduction
Many managers estimate testing at 10% to 15% of development effort. In reality this is a
severe under-estimate. Figures collected across the industry suggest that the actual test effort
on most projects is between 40% and 50% of development effort. This can remain hidden if
staff are embarrassed by their real estimates and actuals.
Taking this as a starting point you must also consider the risk associated with the project. The
higher the risks the greater the amount of testing is needed. Conversely if you have a stable
component which has been used in the field (proved by use) it may not need to be tested
again, except as part of a regression test.
10% phase 1
10% phase 2
30% phase 3
20% phase 4
30% phase 5
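The top-down split above can be applied mechanically once a total test effort figure is available. A minimal sketch using the percentages listed (the 200 person-day total is an illustrative assumption):

```python
# A minimal sketch of top-down test effort estimation: a total effort
# figure apportioned across phases using the percentages listed above.

PHASE_SPLIT = {
    "phase 1": 10, "phase 2": 10, "phase 3": 30,
    "phase 4": 20, "phase 5": 30,
}

def apportion(total_days):
    return {phase: total_days * pct / 100 for phase, pct in PHASE_SPLIT.items()}

effort = apportion(200)   # e.g. 200 person-days of total test effort
```

Bottom-up estimating, introduced next, works the other way round: individual task estimates are summed to produce the total.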
Bottom-up estimating
Test scheduling should be done in close co-operation with development, since testing heavily
depends on the development (delivery) schedule.
Since, by the time all the information required to complete a test plan arrives, the ability to
capitalize on these potential benefits might have been lost, test plans should be developed
and issued in draft form as early as possible. As further information arrives, the test plan
author (typically a test manager), can add that information to the plan. This iterative approach
to test plan creation, release, and review also allows the test plan(s) to serve as a vehicle to
promote consensus, communication, and discussion about testing.
When the test planning stage is complete, and the risk assessment and coverage
requirements have been decided, a test preparation schedule (plan) can be produced. The
test case, test script and test data production schedule will have owners and completion
dates assigned to the required tasks. This allows testware production rates to be measured
against the plan, to ensure that the team is still on target.
Test reports can contain updates presented in a number of different formats. Tables can be
used to show percentages or ratios, or progress can be shown graphically.
Regular meetings should be scheduled in order to track progress against the plan. Any
milestones that are missed, or are in danger of being missed, need to be addressed.
9.2 Definitions
What is IEEE 1044-1993?
The IEEE Standard for the Classification of Software Anomalies
Dictionary Definition - Anomaly: “Irregularity or deviation from rule”.
IEEE Standard Definition of anomaly
Classification Process: The classification process is a series of activities, starting with the
recognition of an anomaly through to its closure.
Optional Category: A category that provides additional details that are not essential but may
be useful in particular situations.
Supporting Data Item: Data used to describe an anomaly and the environment in which it
was encountered.
What is an anomaly?
The term anomaly has been chosen for its more neutral connotation rather than:
- Error
- Fault
- Failure
- Incident
- Problem
- Defect
- Bug etc.
What is provided?
This Standard provides the following:
This data can also help to identify when in a project’s lifecycle most problems are introduced.
Anomaly data can also assist in the evaluation of reliability and productivity measures.
By classifying anomalies, they are naturally grouped together by type. This allows easier
manipulation of the data collected in order to identify weaknesses in any area of the
development process.
Classification
How do we classify an anomaly using the standard?
Classification process
The classification process is a series of activities, starting with the recognition of an anomaly
through to its closure. The process is divided into four sequential steps interspersed with
three administrative activities. The steps are as follows:
Step 1: Recognition
Step 2: Investigation
Step 3: Action
Step 4: Disposition
Administrative activities
The three administrative activities applied to each step are as follows:
- Recording
- Classifying
- Identifying impact
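A minimal sketch of this lifecycle might look as follows. The class and method names are illustrative; the standard defines the steps and activities, but no particular implementation:

```python
# Sketch: driving an anomaly record through the four sequential steps,
# applying the three administrative activities at each step.
STEPS = ["Recognition", "Investigation", "Action", "Disposition"]
ACTIVITIES = ["record", "classify", "identify_impact"]

class AnomalyRecord:
    def __init__(self, summary):
        self.summary = summary
        self.step_index = 0
        self.log = []            # (step, activity) history

    @property
    def step(self):
        return STEPS[self.step_index]

    def administer(self):
        # Each step is interspersed with the same administrative activities.
        for activity in ACTIVITIES:
            self.log.append((self.step, activity))

    def advance(self):
        self.administer()
        if self.step_index < len(STEPS) - 1:
            self.step_index += 1

anomaly = AnomalyRecord("date field rejects 29 Feb")
for _ in STEPS:
    anomaly.advance()
print(anomaly.step)       # record ends at Disposition
print(len(anomaly.log))   # 4 steps x 3 activities = 12 log entries
```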
Recognition
The recognition step occurs when an anomaly is found. Recognition of an anomaly may be
made by anyone regardless of where in the software lifecycle the anomaly was discovered.
Identifying impact
The person identifying the anomaly shall record their perception of the impact:
- Severity (Mandatory) – Urgent, high, medium, low, none
- Priority - Urgent, high, medium, low, none
- Customer value – Priceless, high, medium, low, none, detrimental
- Mission Safety - Urgent, high, medium, low, none
- Project schedule (Mandatory) - High, medium, low, none
- Project cost (Mandatory) - High, medium, low, none
- Project risk - High, medium, low, none
- Project quality/reliability - High, medium, low, none
- Societal - High, medium, low, none
Investigation
Following recognition, each anomaly shall be investigated. The investigation shall be
sufficient to identify all known related issues and propose solutions or indicate that the
anomaly requires no action.
Identifying impact
Previous impact classification shall be reviewed and updated based on the results of the analysis.
Action
A plan of action shall be established based on the results of the investigation. The action
includes all activities necessary to resolve the immediate anomaly and those activities
required to revise processes, policies, or other conditions necessary to prevent the
occurrence of similar anomalies.
Identifying impact
Previous impact classification shall be reviewed and updated based on the results of the analysis.
Disposition
Following completion of either all required resolution actions, or at least identification of long-term corrective actions, each anomaly shall be disposed of by:
- Recording the Disposition
- The following data items for each anomaly are recorded:
- Action implemented
- Date report closed
- Date document update complete
- Customer notified
- Reference document number
Previous impact classification shall be reviewed and updated based on the results of the
analysis.
The standard provides comprehensive lists of data items for inclusion at each step of the
process, such as:
- Project Activity
- Project Phase
- Cause (Actual and suspected)
- Possible source
- Type of anomaly
Classification codes
All supporting data items are assigned a five-character alphanumeric classification code: two alphabetic characters followed by three numeric characters.
We can now see statistics of our fault finding success rates for these activities.
Project phase
During the recognition step we capture the project phase
- Requirements
- Design
- Testing
- Implementation
- Live.
System Attributes
During the recognition step we capture the system attributes of the anomaly
We can now see statistics that may point towards anomaly nurseries.
Anomaly source
During the Investigation step we capture the Anomaly Source
- Specification
- Code
- Database
- Manuals and guides
- Plans and procedures
- Reports etc.
We can now see statistics that may point towards weaknesses in our development process.
Anomaly type
During the Investigation step we capture the Anomaly type
- Logic/computational problems
- Interface/Timing problems
- Data problems
We can now see statistics that may point towards common problems.
Anomaly type
During the Resolution step we capture the resolution details
Software fix
- Update documentation
- User Training
- Etc.
We can now see statistics on the costs associated with each anomaly
We close the loop and improve our processes to prevent similar anomalies from occurring.
Quick overview
Quick guide to Anomaly management using IEEE1044:1993
This information has been reprinted with permission from IEEE Std. 1044-1993, “Standard Classification for Software Anomalies”, Copyright 1993 by IEEE. The IEEE disclaims any responsibility or liability resulting from the placement and use in the described manner.
Each of the steps or activities being classified is assigned a two-character alpha prefix
- RR for the Recognition Step
- IV for the Investigation Step
- AC for the Action Step
- IM for the Impact Identification Activity
- DP for the Disposition Step
Three digits, identifying the categories and classifications, follow the prefix. Where further
clarification is needed a decimal number is assigned.
For example classification code IV321.1 first guides the user to the Investigation Step (IV).
Secondly the category the classification belongs to is type (IV300). The type of anomaly is
identified as a computational problem (IV320), and is further identified as an equation
insufficient or incorrect (IV321), and is more specifically defined as a missing computation
(IV321.1).
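The decoding described above can be sketched in Python. Only the IV321.1 chain from the worked example is covered here; a real implementation would load the standard’s full category tables:

```python
import re

# Sketch: splitting an IEEE 1044-style classification code into its parts.
# The step prefixes come from the list above; the category tables are not
# reproduced here.
STEPS = {"RR": "Recognition", "IV": "Investigation", "AC": "Action",
         "IM": "Impact Identification", "DP": "Disposition"}

def parse_code(code):
    """Split a code like 'IV321.1' into step, category, code and refinement."""
    m = re.fullmatch(r"([A-Z]{2})(\d{3})(?:\.(\d+))?", code)
    if not m:
        raise ValueError(f"not a valid classification code: {code!r}")
    prefix, digits, suffix = m.groups()
    return {"step": STEPS[prefix],
            "category": prefix + digits[0] + "00",   # e.g. IV300 (type)
            "code": prefix + digits,                 # e.g. IV321
            "refinement": suffix}                    # e.g. '1'

print(parse_code("IV321.1"))  # step Investigation, category IV300, code IV321
```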
Planning implementation
- Determine how to incorporate the anomaly tracking system into your environment
- Plan how you are going to analyze the data and distribute the results of that analysis
- Provide training on the scheme to your users and to management
Development methodologies
IEEE Std. 1044.1 contains example classifications for the different development
methodologies and business models
- Waterfall
- Phased
- Spiral
- DoD (defence systems software development)
- Etc.
The standard advises; it does not dictate. You can implement and control your anomaly
reporting system using any of the following:
- A commercial tracking product
- In house tracking software development
- Paper based system (difficult to analyze data)
Statistical analysis
Sufficient anomalies must be logged, or any statistical results will be meaningless.
Project management
Analysis of the anomalies can help us to:
- Identify the impact of enhancements against the project plan.
- Compare project costs, risks, and impact to the project quality or reliability. This will
help us to make informed decisions regarding fix/no fix impact as we approach the
live implementation date.
Process improvement
By looking at the cause of anomalies we can identify weaknesses in our development process.
Where a large number of anomalies are attributed to the classification “requirements error”,
we may wish to allocate more resource to producing the requirements or introduce tighter
review processes. This is prevention rather than cure!
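As a sketch of this kind of analysis, logged anomalies can be grouped by their classified source to highlight where the process is weakest. The sample log below is invented for illustration:

```python
from collections import Counter

# Sketch: grouping logged anomalies by classified source to spot
# weaknesses in the development process. The sample log is invented.
anomaly_log = [
    {"id": 1, "source": "Specification", "severity": "high"},
    {"id": 2, "source": "Code",          "severity": "medium"},
    {"id": 3, "source": "Specification", "severity": "urgent"},
    {"id": 4, "source": "Specification", "severity": "low"},
    {"id": 5, "source": "Database",      "severity": "medium"},
]

by_source = Counter(a["source"] for a in anomaly_log)
for source, count in by_source.most_common():
    print(f"{source:<15}{count}")
# A high count against "Specification" suggests tighter requirements
# reviews would be prevention rather than cure.
```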
Product assessment
To err is human
To err repeatedly is stupid
To learn from your mistakes is good
To learn from other people’s mistakes is excellence!
In order to assess the product we can provide management with details of anomalies found
and their severity/priority. By analyzing the types of anomalies found we can pinpoint weak functions or code modules that may give further problems in live running. This may allow further testing to focus on these areas if time allows. We can also use the anomaly database to show that our phase input and output criteria have been met prior to promoting the product to the next phase.
Conclusions
IEEE Std. 1044-1993 supports the kinds of anomaly tracking, reporting, analysis, and prevention encouraged by the CMMI.
IEEE Std 1044-1993 provides a solid foundation for information required to be tracked for
ISO9001.
IEEE Std. 1044-1993 also satisfies the anomaly tracking and classification requirements of
DoD Software.
Individual skills are acquired through the learning process. There are two main ways in which we learn. First, we can learn from our own experiences: what worked well, what didn’t work so well, and so on. We then apply our learning to the same, or a similar, problem the next time around in order to improve the way we do things. Learning is something that we all do naturally; as a baby, curiosity is the driver, and sensations such as feeling, smell, taste, and pain are some of the information gatherers that feed the learning experience. If you take a large mouthful of scalding hot tea and burn your mouth, it is understandable that the next time you have tea you will remember. The “blow to cool” approach is one solution that you may want to adopt in order to prevent you from making the same mistake. So you have learnt how to manage hot tea!
The second main way to learn is through training. Training teaches us how to do things
correctly. Training can be delivered in many different forms from talking, reading, mentoring,
etc. through to formal training courses. The training materials will have been written and
produced by “experts” that will have already learnt from their experience. Industry standards
and approaches tend to be derived from a combination of experience by numerous people
when addressing a problem, so training gives us recourse to a wider range of experience. We
all know that two heads are better than one, so the more experience that goes into producing
the training the better that training should be.
As can be seen, the skills required to execute the above levels of testing well vary significantly.
For Acceptance testing a tester will need in depth business knowledge, be familiar with the
system requirements, and have an understanding of the principles of testing.
All parties will need to be aware of requirements analysis and prioritization, fault reporting,
change control, configuration management, test management, standards, etc. They will also
benefit from an understanding of risk, they will need to know how to design and execute test
cases, check results and report on progress.
All of the above deal with specific “technical” skills or experience, but a good tester must also
possess strengths in the ‘so called’ soft skills. Testers need to be able to communicate
effectively and efficiently with all parties, be diplomatic, give and receive criticism in a positive
manner, influence people both within, and external to, the test team, and negotiate
successfully when required to do so.
The IT arena is far too complex for any individual to experience everything from project
inception to completion. Testers have been categorized as experienced in the roles of system testers or UAT testers for some time; now we are starting to see the emergence of test
specialists in areas such as test tools or specific non-functional test areas such as
performance or security.
For further reading on this subject, see chapter 22 of the book “The Testing Practitioner”.
Work carried out in the field of test team dynamics has shown that natural personality traits
can be used to group types of people together based on their behavior. Dr Meredith Belbin
carried out research in this field and found 9 distinct role types. Belbin’s work and other research into teams can be used to conclude that:
- Teams with a Common Purpose achieve Synergy, i.e. 1+1 = 11.
- Activities/Projects have several phases requiring different skills.
- No Individual has everything.
- Belbin identified 9 roles each embodying a subset of those skills.
- During Research, Teams with a balance of Belbin roles outperformed others.
- Put Teams together on a meaningful task and the results can be extraordinary.
Note that Belbin’s 9 roles are one of many examples of team-building dynamics. Hereafter we will also briefly discuss the Myers-Briggs Type Indicator (MBTI).
Plant (PL)
The natural creator/innovator within the team. The source of many of the team’s best and
most radical ideas, which is often coupled with an unusual sense of humor. This person is
imaginative and solves difficult problems. On the negative side, they tend to disengage if their
ideas are rejected (taking their ball home!), and can’t stop coming out with ideas even when
the project is finished. Other weaknesses for people having this role are they sometimes
ignore details and may be too preoccupied to communicate effectively.
Co-ordinator (CO)
The natural people organizer. The person who gets things done through other people. Tends to be calm, confident, and ‘in control’. A confident chairperson, who promotes decision making and
delegates well. On the negative side the CO may become manipulative, and is frequently
referred to as a ‘political’ role.
Monitor-evaluator (ME)
Typically the most thoughtful and methodical member of the group. Strategic and discerning,
judges accurately. The person who prevents teams making mistakes, and ensures that
decisions are taken for the right reasons. On the negative side, they can appear to be
negative about new ideas, and hold back progress on trivial points. Other weaknesses for
people having this role are they lack drive and are overly critical.
Completer-Finisher (CF)
The person who ensures that every job is finished properly with no outstanding commitments.
Seeks out errors and omissions, and delivers on time. Works extremely well to deadlines and is
unlikely to let people down. On the negative side, CF’s can be obsessed by time, numbers
and anything you can measure, have a tendency to be great worriers and are reluctant to
delegate.
Shaper (SH)
The enthusiastic, energetic driver within a team. This person tends to be very good at
formulating and envisioning objectives, and wants to get going straight away. Challenging,
thrives on pressure. Has the drive to overcome obstacles. On the negative side, the SH might
be prone to temperamental outbursts, and readily shows his/her contempt for those who
appear to hold back progress and may thus provoke others or hurt their feelings.
Specialist (SP)
The person whom a team of experts turns to for expert advice! A knowledgeable professional
who always knows his brief extremely well, and can provide the best of expert advice. Single
minded, self-starting. On the negative side, SP’s tend not to contribute when discussions fall
outside their area of expertise, and they will often give the technical reasons why a particular
proposal won’t work, rather than giving an alternative solution. They have problems
overlooking the big picture.
Implementer (IMP)
Disciplined, well-organized, reliable and dedicated to completing work assigned to them. The
person who often provides structure to meetings, projects and administration. Turns ideas into
practical actions. On the negative side, they can be inflexible, and too focused on the current
task to see the bigger picture. They often respond slowly to new possibilities.
Resource-Investigator (RI)
The natural relationship builder. Extrovert, enthusiastic, communicator. This person often
seems to know lots of people, be friendly with many of them, and know where, or to whom, they should go to get anything. They are good in the early stages of a project, but tend to lose
interest as work becomes more mundane. Other negatives are that they frequently miss
deadlines because of the time they spend chatting to others. Other weaknesses for people
having this role are they may be overoptimistic and tend to lose interest once the initial
enthusiasm has passed.
In order to build a team with the correct balance of roles you will need to identify the existing
roles in the test team. Do not just recruit to fill a technical skills need; also consider the individual’s role within the team and whether that individual will complement the existing team
role types or not. A team containing individuals with the various team roles will naturally have
conflict at times, but that is the road to gaining the best solution to a specific problem. If this
conflict gets out of hand it is the manager’s job to resolve the issue, if possible without
detriment to any of the team member’s feelings or standing within the team.
The first preference describes the source of your energy—introvert or extrovert. An introvert
draws energy internally, from his own thoughts and ideas. An extrovert draws energy from
interactions with others. The extrovert might ask everybody if they want to go to lunch. The
introvert may prefer having lunch by himself.
How one processes information is the next preference—sensing or intuitive. A sensing person
is visual and fact oriented, while an intuitive person is open and instinctual. When an intuitive
person looks at a menu, he tries to get a general idea of the type of food and the price range.
The sensing person might read every line of the menu before deciding if he wants to eat
there.
Decision making is the third preference— thinking or feeling. The thinking person uses logic
and standards in making decisions. A feeling person is more concerned with feelings and
personal relationships when making a decision. The thinking person might figure out which
restaurant is the closest or which one has the cheapest food. The feeling person might
suggest not going to a particular restaurant because someone in the group recently ended a
relationship there.
The fourth preference —judging or perceiving— deals with how an individual relates to the
external world. The judging individual is organized and structured. The perceiving person is
spontaneous and flexible. The judging person has the lunch date in his Outlook calendar. The
group will be leaving precisely at 12:00. The perceiving individual may make the lunch date
on the spur of the moment. If it is around noon sometime, he’s OK with that. When you’re
communicating with another person, you will have an easier time if you know his MBTI type.
But since most people do not wear their MBTI classification on their lapels (except for some
particular groups), it’s important to appreciate that each of us looks at the world differently.
So how do you communicate when you are a different type from the other person? The first
thing to do is acknowledge those different types. To assume that another person is being
recalcitrant because he wants to do something his way is not a useful step toward
communicating. Adapting your communication processes to appeal to both styles would acknowledge the legitimacy of the other person’s style.
As a side note to individual brainstorming, I’d like to interject the notion of “throwing the cards
on the table.” In communicating ideas, we often get hung up on our own ideas. We may feel
that we need to promote or protect the ideas that we have suggested. One way to avoid this
is to adopt the concept of “throwing the cards on the table.” After ideas are generated in the
individual brainstorming, the index cards are literally thrown on the table. The originator’s
identity is lost (at least theoretically). Each idea then can be considered on its own merits, not
on the relative status of the originator. The best ideas often are compilations of many ideas
that have been thrown on the table.
There are many other areas in which differences in styles may lead to problems in
communication. For example, in many agile environments, story cards capture requirements.
Each card includes a brief description of the requirement and an estimate of the effort to
completely implement the requirement. Judgers might want pre-printed templates for these
story cards and a check that each card has been completely filled in. Perceivers may be
happy with blank cards. The brief description can satisfy intuitive people, while sensing
people may want more details recorded on the cards. The differences between the styles of
judging types who prefer exactness and perceiving types who are comfortable with
inexactness often emerge with different ways to track time estimates and progress on the
completion of the story card implementation. The time needed to implement a requirement
story is commonly estimated in story points. Story points represent the relative effort to
complete a story rather than absolute time. To emphasize that story points do not correspond
to exact times, the values they can take are usually limited to those in a Fibonacci series (i.e.,
1, 2, 3, 5, 8, 13, …). The time to complete a story is estimated based on velocity (approximate
number of story points that can be implemented per iteration). While perceiving types are
comfortable with that inexactness, judging types may prefer actual day or hour values. During
the planning for an iteration, the tasks required to implement each story are typically
estimated in hours. Having two levels of estimates—story points for rough estimates and
hours for detailed estimates on tasks—can help satisfy both personality types.
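The two-level estimating described above can be sketched as follows. The backlog values and velocity figure are invented for illustration:

```python
import math

# Sketch: rough release forecasting from story points and velocity.
# The point values and velocity figure are illustrative assumptions.
FIBONACCI = {1, 2, 3, 5, 8, 13, 21}  # allowed story-point values

def iterations_needed(story_points, velocity):
    """Whole iterations needed to burn down the remaining points."""
    assert all(p in FIBONACCI for p in story_points), "non-Fibonacci estimate"
    return math.ceil(sum(story_points) / velocity)

backlog = [3, 5, 8, 2, 13, 5]   # rough, relative story-point estimates
velocity = 12                   # points the team completes per iteration
print(iterations_needed(backlog, velocity))  # 36 points / 12 per iteration = 3
```

Restricting estimates to the Fibonacci values keeps the story points deliberately inexact for the perceiving types, while the per-task hour estimates made during iteration planning give the judging types the precision they prefer.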
Progress of story completion can be communicated in ways that suit both intuitive and
sensing types. Agile teams commonly track progress of stories on a large board called the
storyboard. The movement and location of the requirement story cards on the storyboard
demonstrates the progress. An intuitive type can get a picture of progress with just a quick
glance at the board. Sensing types typically prefer seeing a numerical tracking mechanism,
such as a spreadsheet. Using the spreadsheet, they may create intricate measures of
progress using graphs or formulas. Updating a storyboard and entering the details into a data-
manipulation program usually can satisfy the communication needs of both types.
The test function is critical to the successful production of quality software. The higher risk
solutions will require a greater degree of test coverage and therefore will also require more
resource/time in order to plan and execute the required tests. Delivery of reliable, quality
solutions that are usable and fit for purpose is always the aim but we must work within the
constraints of the resources available. There is also a relationship between the quality of the
delivery and the level of independence of the test function. Industry statistics show that the
greater the level of independence, the greater the quality of the delivery. Here are some examples of test structure and some of the advantages and drawbacks associated with each:
Any member of the project team should always feel comfortable coming to talk to the test
team. Testing needs everyone’s input and help to do the job well, so let’s break the ice, listen
to what they say, be friendly and supportive, and get the job done together.
10.5 Motivation
In general, people like to do their job well. Job satisfaction is usually high up on everybody’s
employment wish list. People tend to be motivated by different things; however, there are some constants that apply to nearly everyone: recognition, respect, and feeling valued are very strong motivators in the workplace.
People need to know what work they are doing, what their role is, how they fit into the big
picture, and what they need to do in order to progress in their chosen career (career path).
Recognition for testers will be achieved by the other project areas realizing the value that
testing adds to the project. Anyone can talk about meeting requirements and specifications,
achieving higher quality, fewer faults, quicker time to market, lower cost, etc., but testing is one of
the few areas that can actually make it happen. By early involvement the test team can
encourage early project communication and the very effective early project reviews.
Read chapters 22 and 23 in the book for further information on the above topics.
This guide is intended to help in production of a questionnaire aimed at taking the subjectivity
out of the interview process and replacing it with a scoring process to enable comparison of
interviewees across the full range of requirements for each post. This will involve analysis of
the role in question, the technical skills required, the soft skills required, the level of
experience required, and the existing team dynamics. Each of these aspects will carry a
different weighting factor, as the requirements for each role are different.
This guide will explain how to prepare the required information for input into a test paper for
completion by the candidate. There will be a self-assessment section and a questions and
answers section, which will cover the following topics:
• Technical skills
• Personal experience
• Soft skills
• Team dynamics
This question paper is not intended to replace the traditional interview technique which is a
valuable tool in assessing any individual, but can be used in addition to interviews to assist in
the employment process where required.
Technical skills
List the technical skills required for the post.
Be specific not general.
For a System Test Analyst role in the technical skills section you may want to include:
- Relevant coding languages
- Relevant automated test tools (capture replay, performance)
- Relevant tools such as fault reporting, CM tools, change management, etc.
- Relevant desktop PC packages and applications
- SQL, COBOL etc.
Experience
List the specific experience that you are looking for. For a test coordinator you may want the
following experience:
- Knowledge of the V model development lifecycle,
- Knowledge of black and white box testing techniques,
- Knowledge of ‘Prince’
- System test and integration testing experience
- Progress tracking and reporting
- Risk identification, analysis, and management
- Test Planning
- Test environment management
- Test team management
- Experience of relevant hardware or operating systems
- Related business knowledge i.e. Unit Trust Investment
- Experience using third party products or deliverables
Do not ask for things that are not required for the role.
Technical questions
For questions of a technical nature, assistance may be required in producing the questions; this should be requested from other project areas as necessary. The
questions should be aimed at proving that the candidate has the skills required for the
position and claimed on their CV and the self-assessment form.
Testing knowledge
To establish the candidate’s testing knowledge you may wish to include a section on testing
awareness. This will confirm, or not, the level of skill and experience that the candidate is
claiming on their CV and in the self-assessment paper. This section can be waived if the
candidate holds the required ISEB qualification in software testing. The required questions
can be drawn from the Foundation or Practitioners syllabus as appropriate to the role in
question.
Any of the following skills could be relevant to the role. Select questions to cover the required
areas. Questions must be appropriate to the role.
- Leadership
- Team working
- Communication Skills – Presentation, oral and written skills.
- Negotiation Skills and influencing people
- Time management and personal effectiveness
- Appraisal and Counseling
- Managing confrontation
- And many, many more
Weightings
The weighting for the technical skills, experience, and team characteristics should be
completed prior to the assessment form being used. These weightings will vary between roles
based on factors such as:
- Position within the organization (management, team leader, worker etc.)
- Permanent role or contract (hire for attitude-train for skills)
- Nature of the role (Technical specialist, consultant, clerical, support etc.)
- Timeframes (expertise required now or can we train?)
- Existing team (Existing team dynamics – complement or antagonize?)
If an expert is required to assist a project in a technical area for a short period of time, then
the technical skills and experience would be expected to outweigh the team dynamics
attribute.
For a junior testing position, then team dynamics would be expected to carry a higher
weighting than the skills or experience. With the right attitude and a smooth transition into the
team the junior tester will gain the skills and the experience quickly, assuming the right
training and mentoring is given.
For a senior position or management role a balance of experience, team dynamics, and
technical skills would be required. Do not forget that the technical skills list is job specific, and
for a management role may include, project planning, resource management, presentation
skills, performance monitoring and counseling etc.
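As an illustration of the weighted scoring described above, a sketch might look like this. All weights and scores are invented examples; a real assessment form would use the weightings agreed for the role in advance:

```python
# Sketch: weighted scoring of interview candidates across the assessment
# areas described above. All weights and scores are invented examples.
WEIGHTS = {"technical": 0.4, "experience": 0.3,
           "soft_skills": 0.2, "team_dynamics": 0.1}

def weighted_score(scores):
    """Combine per-area scores (0-10) using the role's weightings."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(WEIGHTS[area] * scores[area] for area in WEIGHTS)

candidate = {"technical": 8, "experience": 6,
             "soft_skills": 7, "team_dynamics": 9}
print(round(weighted_score(candidate), 2))  # 3.2 + 1.8 + 1.4 + 0.9 = 7.3
```

Setting the required achievement level (say, a minimum weighted score) before any candidate is assessed is what prevents taking the best of a bad bunch.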
Conclusions
There are many organizations that can provide assessment materials for use in establishing an individual’s ability. For team dynamics there is Belbin; for IQ testing, Mensa; and there are also websites that contain tests for many different types of assessment. Call on these where
necessary to assist you in identifying the required qualities that are key to the position that
you are trying to fill.
The key is to identify the skills and qualities that you are looking for in advance.
Ensure that the questions are complete and appropriate. Produce a questionnaire that allows
these qualities to be measured. Set your required achievement levels in advance to prevent
taking the best of a bad bunch.
David Well we come now to the point in this meeting where we are to discuss the delivery of the
next project to the customers. Erik, how long will it take Development to develop and hand
over the system?
Erik Well it’s a challenging system. There are 3 major features which will each take 2 weeks to
develop and unit test. There are a number of minor features which will take us another 13
days to develop. So we reckon we can finish development in 2 months, so that’s when
hand-over will be – the 3rd week in December.
David OK. Chris – how long do you and your team need to test this application?
Chris Well, based upon the scope of the requirements and the new technology being used we
believe that we need 2 months also.
David WHAT! The same amount of time as Development – I don’t want you to build the system
again – just test it!
Erik Yeah, that’s ridiculous. Anyway there won’t be much testing to do – we have a great team
and we do great work.
David Come on Chris, what’s a realistic estimate?
Chris Well we might be able to do it in 6 weeks
David Look, we need to get this software out by the end of January – that gives your team 4
weeks even if we don’t count the Christmas period, which should be plenty of time.
Chris Well, maybe. But it’s really important that the Development team must hand over the
software fully complete on the date that they’ve promised.
David Development have promised to deliver the software in 2 months – I am sure they won’t let
you down
Erik Of course we’ll deliver the system when we said! Don’t you trust us? We’re absolutely
confident about that delivery date. We might even be early – after all, we will be using that
new XP agile Java development environment.
Chris 4 weeks is very tight – I am not sure that we can complete the testing unless we work extra
hours. And as for the new XP agile Java development environment…
David Well that’s settled then. Do thank the team from me for their willingness to go the extra
mile.
Erik (to David) Those testers are always complaining, aren’t they?
(A few moments later, on the telephone…)
David {telephones MD…}
Yes John, no problem. We’ll get the system delivered in January. The test team is very
happy to put extra hours into the project – unpaid. So I think we can implement this system
on time…
Erik <on phone> Hi guys. It’s OK – I didn’t even have to offer to shorten the two-month
development, so we can relax a bit now.
Chris {to himself…}
There is no chance that we’ll get this system tested in 4 weeks and the team is definitely
not going to like working extra hours.
<Picks up the phone> Well guys, I have some bad news and some really bad news…
David Welcome to the handover meeting. Erik, have you now handed over the completed
system?
Erik Well the progress we have made to date has been really good, and the software is almost
complete. There were a number of problems that were outside our control, but we have
now handed over most of the software.
David How complete is the system?
Erik About 90% complete. The 3 major features are all complete, there’s just a little bit of work
to do on the additional minor features. I suggest the test team starts testing that now as a
phase 1 delivery and we’ll deliver the rest in a few weeks.
Chris A few weeks! We only have 4 weeks to complete all our testing. You promised to give us
the whole system. {sarcastically} And what about this new XP agile development
environment then…
Erik Well you can start by testing the major features which are ready now, and finish testing the rest when we deliver it.
David Welcome to the progress meeting. I really hope you have some good news for me. Erik?
Erik Well, we are ready to deliver phase 2 of the project to the test team. It took a lot of effort
but my team has really come through – they’re a great bunch. They even came in during
the Christmas break. The users are going to love this system.
David That’s great. What about the testing, Chris?
Chris Yeah, we saw you on the one day you came in – and you only stayed for 2 hours. The test
team worked a lot over Christmas and we’re not happy about it.
David Enough complaining, Chris. I hope you have done good work in the time and produced a
good quality release for us. How was the Exploratory Testing?
Chris Yes it was exploratory enough – I felt like Neil Armstrong stepping onto the moon for the
first time. No one had been on the moon before him and no-one had been in the system
before us!!
David What do you mean?
Chris This release is really bad. We found loads of bugs in just the first few days of testing the
major features. Most of them are showstoppers. I don’t want to take Phase 2 if it’s as bad
as Phase 1.
Erik What do you mean, loads of bugs? We only have 2 outstanding bugs, which my team is
currently working on.
Chris That’s not true. You and your team keep sending them back as ‘can’t recreate’ – but they
are still problems – we have just re-raised 30 of them.
Erik Well, that’s going to make sure we get further behind! What you’ve raised aren’t software
bugs – they are problems with your environment. They work OK in our environment.
David Do you have some examples of these bugs?
Chris Well… no, not on me, I can’t remember the exact details.
David I suggest you have a look at your environment and data before creating any more bugs.
Chris We are not creating these bugs – we’re just finding them!
David Why are you finding all these bugs anyway? Surely your job is to show the system works
ok. Anyway, why didn’t you raise this issue earlier?
Chris I left you messages, but you never got back to me. You weren’t available over Christmas at
all. Hang on a minute, our job is to find bugs and test as much as we can.
Erik Well what about that new automation tool – why don’t you use that during the next 2 weeks
to speed the testing up?
David Yes, that’s a good idea – I’d like to see the test automation tool running every evening
Chris We don’t have time for that now. According to this test automation book I’ve started
reading (I got it for Christmas) …
Erik So you had chance to read an Automation Book – you could have spent that time testing…
David Look, the deadline is in 2 weeks’ time. We have to meet that deadline. It’s up to the two of
you to work together to make sure that we do.
Erik I think we can – so long as Chris’s team doesn’t keep raising unimportant and
irreproducible bugs that just waste our valuable time.
David OK, that sounds good to me – is this ok with you Chris? It’s really important that we meet
the deadline.
Chris Yes I guess so.
All A few moments later, on the telephone…
David {telephones MD…}
Hi John, yes happy new year to you too. I had a great Christmas – thank you.
I am pretty confident that we shall meet the deadline – you’ll be pleased to know that all the
team were in over Christmas.
Yes I’ll have a drink with you tonight to celebrate…
Erik <on phone to Dev team> Hi guys. Yes the progress meeting went well. Thank you for
closing those issues just before the meeting. Any chance of closing a few more? Some of
them must be duplicates…
Chris <Picks up the phone to the test team> Hi team, I have some bad news and some really
bad news. Firstly we will not be paid for the overtime we worked - sorry
And secondly the deadline cannot be moved!
David Well we have all done really well – we have made the deadline, well done team! All we
need to do now is to get your sign-offs for the audit records. Erik has already signed.
Chris?
Chris Hang on boss, there are still 11 high severity and 9 high priority issues outstanding. We
can’t sign-off yet, we said we wouldn’t implement with any high priority or high severity
bugs outstanding.
Erik Oh come on. You testers are always crying “wolf”. Most of those are not “high” – I am sure
we can reduce them to medium or low priority.
Chris And what about that bug we found yesterday that stops us from logging on?
Erik Oh that again – we know about that. I’ve got somebody on to it. It won’t take long to fix it.
It’s not important. It’s certainly not high priority, it’s low priority.
David Will you two please stop arguing about these details? We’re here to sign off this system
and to celebrate the end of the project.
Erik Yeah, I think it’s been a great project. By the way, my team came in last night and made a
few minor changes so we have a new release of the system which is the one that should
be shipped to the customer.
Chris Hang on – you can’t do that! We haven’t tested that at all! I suppose you expect me to run
our regression tests on this new release in half a day?
Erik No that won’t be necessary – they were only minor bugs and they won’t affect anything –
trust me! Anyway if you had been using your test automation tool, it would only take you 10
minutes to test it all.
David Chris, I have prepared a release document for you to sign. You are surely not going to
delay the project because of a few minor problems, are you?
Chris I am not happy with this. I predict that there are still around 25 high priority and 35 high
severity bugs to find – the users will not be happy with the system.
Erik How can you possibly predict bugs you haven’t found? That’s just being pessimistic. Why
can’t you be a team player and be positive?
David Chris, you know we need to release this system today, so are you going to sign it off or
not?
Chris <reluctantly>
Well OK, I guess I will. But I’m not happy.
All A few moments later, on the telephone…
David {telephones MD…}
John – good news, Chris has signed the system off. By the way when will my bonus
appear in my salary?
Erik <on phone to Dev team> Hi guys, excellent news – the test team have just signed the
system off. We need to fix that problem with the log-on. Can you do it by lunchtime?
…excellent!
You can all have the afternoon off – you deserve it!
Chris <Picks up the phone to the test team> Hi team, I have some bad news and some really
bad news.
Firstly we have signed the system off and secondly we have another build tonight.
David OK, team, this project has not been as well received by the users as it should have been.
There have been complaints about a large number of faults found in operation by the
users, and they are not being fixed quickly enough.
Erik Well, we knew all along that the testing wasn’t up to scratch. How could you have let all
those faults slip through into production? Quality is your responsibility.
Chris What? We didn’t put the faults in – you did! What do you mean; the testing wasn’t up to
scratch? We worked so hard under really difficult circumstances. We did a good job of
testing the system.
Erik You’re only bragging – you have no way of being able to tell whether you did a good job of
testing or not. I don’t think you did. I think you’re trying to cover up your bad testing.
Chris We did do good testing! I can’t prove it, but we did. Anyway we didn’t get a chance to test
There are a number of ways to improve a document. Reviewing documents, or other software
elements, is a commonly used process, described by IEEE as: “an evaluation of software
elements or project status to ascertain discrepancies from planned results and to
recommend improvements”.
Early defects multiply as they propagate downstream. A single defect in a requirements
document can lead to multiple defects in a design document, which in turn can cause further
defects in code. Moreover, the cost of reworking these defects grows exponentially. As early
as 1981, Barry Boehm described in his book “Software Engineering Economics” that a defect in a
requirements document could be solved with five minutes of rework. If not found that early, the
resulting defect in the software product could lead to hours of rework – and that is only if the
defect is found before the product is shipped to the customer. A defect found after release can
cause serious costs, in addition to the embarrassment and possible damage to the company’s
image.
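The escalation described above can be made concrete with a small calculation. The sketch below is illustrative only: the phase names and cost multipliers are hypothetical round numbers chosen to show the shape of the curve, not Boehm's published figures.

```python
# Illustrative sketch of how the rework cost of a requirements defect
# escalates depending on the phase in which it is found.
# Multipliers are invented for illustration, not Boehm's actual data.

COST_MULTIPLIER = {
    "requirements": 1,     # found immediately: e.g. 5 minutes of rework
    "design": 10,
    "coding": 50,
    "testing": 250,
    "production": 1000,    # found by the customer after release
}

def rework_cost(base_minutes, phase_found):
    """Relative rework cost for a requirements defect found in a later phase."""
    return base_minutes * COST_MULTIPLIER[phase_found]

for phase in COST_MULTIPLIER:
    print(f"{phase:13s} {rework_cost(5, phase):7.0f} minutes")
```

The exact multipliers vary per organization; the point the text makes is only that the ratio between early and late detection is large.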
With this in mind, it is obvious that the objective of a review is not just to find defects. It is also
used to find defects as early as possible in the life cycle and to remove the causes from the
development process.
Although informal reviews do not follow a documented procedure, they do have added value.
The challenge is to use both formal reviews and informal ones, based on a documented
strategy, to improve the efficiency and effectiveness of the review and development process.
See chapter 8 of the Testing Practitioner Handbook for more information.
All the different review types have a different focus and are applicable at a different life cycle
phase. The types of defects that are found also differ per type of review. Using the right type
of review at the right place in the software life cycle ensures a more effective and efficient
review process. See chapter 8 of the Testing Practitioner Handbook, and chapter x of this
reader for more information on the specific differences between the different types and when
to use them.
The IEEE standard on software reviews (IEEE standard 1028, 1998), distinguishes three
types of formal reviews:
- Inspection
Inspection is a formally defined and rigorously followed review process. The process
includes individual and group checking, using sources and standards, according to
detailed and specific rules (checklists) in order to support the author by finding as many
defects as possible in the available amount of time.
- Technical review
The objective of a technical review process is to reach consensus of technical, content
related issues. Domain or technical experts check the document-under-review, prior to
the meeting, based on specific questions of the author. In this meeting the approach to be
taken is discussed by the experts, under guidance of a moderator or technical leader.
- Walkthrough
In a walkthrough the author guides a group of people through a document and his or her
thought processes in order to gather information and to reach consensus. No formal
preparation is required, defects are found during the meeting. People from outside the
software discipline can participate in these meetings. In walkthroughs dry runs and task
scenarios are often applied.
The goals of management reviews and audits are entirely different from those of the three
review types mentioned before. Key characteristics of management reviews are:
- Conducted by or for managers having direct responsibility for the project or system
- Conducted by or for a stakeholder or decision maker, e.g. a higher level manager or
director
- Check consistency with and deviations from plans
- Check adequacy of management procedures
- Assess project risks
- Evaluate impact of actions and ways to measure these impacts
- Produce lists of action items, issues to be resolved and decisions made
Before a project can start with formal reviews the involved project members, project leaders
and project sponsors need basic information on reviews. A presentation on reviews is a good
start to both provide knowledge and create momentum. After the start-up, select documents
for inspection that matter. A thoroughly inspected higher level document will have a positive
effect throughout the entire project, including side effects on the inspection process itself.
Engineers and moderators should be trained to get the most out of the inspection process.
Engineers will not only learn how to inspect, but are indirectly also trained in how to write
better documents. Moderators should be additionally trained. Handling “heated” meetings,
supporting shy authors or participants, developing a review strategy, improving the process
based on inspection metrics, etc., all these activities ask for special skills.
Inspections need to be supported by a master review plan, as presented in the case Review
Strategy. Planning inspections, based on a strategy, creates awareness and emphasizes the
need to make well founded trade-offs at the start of a project. During the project the plan can
be used to track the progress of inspections and help engineers to plan their work (including
rework).
As someone once said, every great journey starts with the first step. Doing inspections is this
first step. After all the preparing, informing and planning activities the inspections have to be
carried out. It’s very important for a project to stress that people are allowed to make mistakes
in the inspection process and, of course, in their documents. The willingness to learn from these
mistakes is perhaps the second key to success.
If the process has started and inspections are carried out, it’s necessary to keep trying to
improve the process. Improvements must be based on the data that is collected during every
step in the inspection process. The inspection metrics must be presented to the people who
provided the data, in order to make a correct interpretation. These feedback sessions are
essential, not only for the continuous improvement of inspections but the meetings are also
needed to keep inspections going in general.
It will take some time before the improvement of the inspection and software process
becomes visible. When starting with inspections it looks like the removal of defects is the
Engineers’ opinion
When engineers are asked what their opinion is on inspections, they are mostly positive and
thereby support the enormous amount of numerical proof of successful projects.
In general, engineers feel that the quality of products is improved, the software process itself
is improved, and that their project is better controlled. Furthermore, they emphasize that a
logging meeting teaches them how to specify and how to check, creates a common
understanding and motivates them to do a good job.
Review principles
Tom Gilb has described 10 principles that can be seen throughout this reader and the review
presentations. The most important message is perhaps to keep the process practical and to
learn as much as possible while inspecting.
Another hazard to inspections is untrained engineers and moderators. They can frustrate
the process, and vice versa. Training, on the other hand, is not implementation. An experienced
moderator can be a very valuable asset to a project starting with inspections. It is a job that
cannot be carried out by just anyone in the project.
It is obvious that the information on the quality of a document may never be used (by
management) to evaluate individual performance. This will immediately make all data
collected useless and most probably terminate all inspections.
Conclusion
There is a lot of proof and knowledge available to show that reviews are an effective and
efficient means to improve the quality of a software product. To get the most out of the review
process, a clear distinction has to be made between the different review types. To be able to
get anything at all out of the process, reviews must be started in a practical and common-sense
manner.
[Figure: the test process improvement cycle – creating awareness; strategy, scope and approach; assessment; define improvements; planning; implementation; evaluation]
Creating awareness
The reason for improving the test process generally arises from experiencing a number of
problems with testing. The desire is to solve these problems. Improvement of the test process
is regarded as the solution. Important in this phase is that all parties involved become aware
of the following points:
- The purpose of, and the need for, improvement of the test process
- The fact that a formal change process using a test improvement model is the way to
do it.
This awareness implies that the parties mutually agree on the outlines of, and give their
commitment to, the change process. Commitment should not only be acquired at the
beginning of the change process, but should be retained throughout all phases of the
process. It is important in this activity that people see that senior management supports the
change process. The awareness phase should not be regarded as a detached step in the
change process, but rather as an essential precondition. Presentations or brainstorming
sessions can be used to obtain the required awareness.
The following behavior of the change team can reduce the resistance:
- Inform; At the start of the structuring process, only a few people are informed. The
resistance will increase if the change plans and their influence are announced.
- Support; During the application and the accompanying support, the test personnel
bring proposals for improvement. These should be listened to carefully and eventually
negotiated. A sensitive ear and acceptance of the proposals reduces the resistance
considerably. A steady continuation of support in this phase convinces the testers of
the usefulness of changes.
- Negotiate; negotiate with people involved in the change process
- Convince; convince the testers of the use of changes, it will improve their work
- Enforce; finally, the few remaining people who disagree and cannot be convinced need
to be compelled by management to change their way of working.
Although in all cases the consecutive steps of the change process have to be taken, the
interpretation of each step is largely dependent on the chosen (short-or long-term) targets and
on the scope. In a change process with limited targets and scope it is possible to implement
the change within a short time frame.
To control the change process it is vital that the change takes place in fairly small steps.
Using a test maturity model gives support in choosing these improvement steps. Also, the
change process itself should be guided: how the change process is organized, who is responsible,
and how progress will be monitored.
Assessment
In the assessment activity, research is done to establish the strong and weak points of the
current situation. Based on the target defined earlier and the current situation, the change
actions are determined in the next activity.
§ Preparation
The person or group of persons who will perform the assessment determine who will
participate in the assessment (e.g., testers, test managers, project leaders, developers,
system managers, and end users), which documentation is to be used (e.g., test plans,
reports, test scripts, defect administration, and procedures, norms, and standards for
testing), and in which form and when the assessment is to take place. In the preparation
of interviews it is determined who is to be asked about which key areas. Management
participation in the assessment is important in order to get commitment.
§ Collecting information
By interviewing the participants, studying the documentation, and optionally by witnessing
the process, the necessary information is collected. All information gathered from
interviewers will be treated confidentially.
§ Analysis
On the basis of the collected data, the levels per key area of the TPI model or the key
process areas of the TMM are examined and it is determined whether they are met, not
met, or only partially met.
§ Reporting
The analysis results are recorded. This will show the strong and weak aspects of the test
process in the form of assigned levels of key areas.
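The analysis and reporting steps above can be sketched as a small rating routine. This is a hypothetical illustration: the key area names, checkpoint counts, and the three-level rating are invented for the example and are not the actual TPI or TMM scale.

```python
# Hypothetical sketch of the assessment analysis step: rating each key area
# as "met", "partially met" or "not met" from its satisfied checkpoints.
# Key areas and checkpoint counts are invented for illustration.

def rate_key_area(satisfied, total):
    """Classify a key area by how many of its checkpoints are satisfied."""
    if satisfied == total:
        return "met"
    if satisfied == 0:
        return "not met"
    return "partially met"

# (satisfied checkpoints, total checkpoints) per key area, from interviews
# and document study during the assessment
assessment = {
    "Test strategy":     (4, 4),
    "Reporting":         (2, 5),
    "Defect management": (0, 3),
}

report = {area: rate_key_area(s, t) for area, (s, t) in assessment.items()}
print(report)
```

A report in this form makes the strong and weak aspects of the test process directly visible, which is what the improvement actions in the next step are based on.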
Improvement actions
On the basis of the improvement targets and the results of the assessment, the improvement
actions are determined. The actions are determined in such a way that a gradual and step-by-
step improvement is possible. Test maturity models help to set up these improvement
actions. Depending on the targets, the area of consideration, the lead-time, and the
assessment results, the choice to carry out improvements for one or more areas can be
made.
The improvement actions should be in accordance with and lead to the achievement of the
targets set earlier for the improvement of the test process.
How can it be determined that the implementation of a number of actions leads to the
achievement of previously defined targets? For this reason it is important that the defined
targets can be measured in some way or another and that periodically measurements are
taken to see whether the improvement actions give the desired result and to what extent the
targets are met. The division into improvement cycles is intended to keep the entire change
process controllable. A cycle goes through the phases of planning, implementation, and
evaluation, so that when a cycle ends, the next planned cycle can start or adjustments can be
made.
The execution actions (during the pilot project) have to be measured to determine to what
extent they have been executed. Based on these results, a statement can be made about the
progress of the change process. Also, a vital part of this phase is consolidation. Steps should
be taken to prevent the implemented improvement actions having a once only effect. The
organization must continue to use the changed working method. Communication of the
results, courses, training, and a quality system can support this.
Planning
A plan is drawn up to implement (a part of) the improvement actions in the short term. The
objectives are recorded in this plan and the plan indicates which improvements have to be
implemented at what time to realize these items.
The plan should also describe the activities, divided into the following groups:
Test specific
- Select a pilot project: check whether the pilot project is suitable, and preferably choose
more than one pilot project
- Training: the team should receive proper training in how to work in a change project
- Procedures and manuals: books and procedures must actually be used
- Tools: purchasing tools should not be regarded as a remedy in itself
Change specific
- Presentations: all sections of the organization involved must be informed about the
changes. Presentations are a suitable form of communication for this;
- Discussion meetings: in these meetings, those involved can, on the one hand, be
convinced of the usefulness of a change and, on the other, be a source of ideas and of
problems which had not been thought about;
- Kick-off meetings: a kick-off meeting is organized with the group of people directly
involved. By doing this everyone has a clear view of what should happen, which
makes co-ordination and co-operation a lot easier;
- Publications: are often used to reach a far larger audience than can be reached using
presentations.
- Measurements: test performance indicators (derived from business goals)
Evaluation
In this phase the aim is to see to what extent the actions were implemented successfully, as
well as to what extent the initial targets were met (are the described goals achieved?). Based on
these observations, the change process can continue in any number of ways.
- The next improvement cycle is started
- The improvement actions are adjusted
Usage of metrics enables testers to report data in a consistent way to their management, and
enables coherent tracking of progress over time. Three areas are to be taken into account:
• Definition of metrics: a limited set of useful metrics should be defined. Once these metrics
have been defined, their interpretation must be agreed upon by all stakeholders, in order
to avoid future discussions when metric values evolve. Metrics can be defined according
to objectives for a process or task, for components or systems, for individuals or teams.
There is often a tendency to define too many metrics, instead of the most pertinent ones.
• Tracking of metrics: reporting and merging metrics should be as automated as possible to
reduce the time spent in producing the raw metrics values. Variations of data over time for
a specific metric may reflect other information than the interpretation agreed upon in the
metric definition phase.
• Reporting of metrics: the objective is to provide an immediate understanding of the
information, for management purpose. Presentations may show a “snapshot” of the
metrics at a certain time or show the evolution of the metric(s) over time so that trends can
be evaluated.
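The define/track/report split above can be illustrated with a minimal sketch. The metric name and the weekly values are invented for the example; the point is only that an agreed metric, tracked consistently, can be reported as a snapshot plus a trend.

```python
# Minimal sketch of tracking one agreed-upon metric over time and reporting
# both a snapshot and its trend. Metric and values are invented.

weekly_open_defects = [42, 40, 35, 28, 30, 22]  # one snapshot per reporting week

def trend(values):
    """Compare the latest value with the previous one (lower is better here)."""
    if len(values) < 2 or values[-1] == values[-2]:
        return "stable"
    return "improving" if values[-1] < values[-2] else "worsening"

print(f"latest snapshot: {weekly_open_defects[-1]} open defects "
      f"({trend(weekly_open_defects)})")
```

Note that the direction of "improving" is itself part of the agreed interpretation: for open defects lower is better, but for, say, test coverage the comparison would be reversed.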
Test managers and test leads should understand which of these values apply for their
organization, project, and/or operation, and be able to communicate about testing in terms of
these values. A well-established method for measuring the quantitative value and efficiency of
testing is called cost of quality (or, sometimes, cost of poor quality). Cost of quality involves
classifying project or operational costs into four categories:
o Costs of prevention
o Costs of detection
o Costs of internal failure
o Costs of external failure
A portion of the testing budget is a cost of detection, while the remainder is a cost of internal
failure. The total costs of detection and internal failure are typically well below the costs of
external failure, which makes testing an excellent value. By determining the costs in these
four categories, test managers and test leads can create a convincing business case for
testing.
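The business case described above can be reduced to a simple comparison. The sketch below is a hedged illustration: all amounts are invented, and in practice the cost of external failure would be an estimate of what escaped defects would cost if testing were not done.

```python
# Illustrative cost-of-quality business case: classify costs into the four
# categories and compare the cost of testing (detection + internal failure)
# with the external-failure cost it helps avoid. All figures are invented.

costs = {
    "prevention":       10_000,   # e.g. training, process improvement
    "detection":        40_000,   # the testing budget spent finding defects
    "internal_failure": 25_000,   # rework on defects found before release
    "external_failure": 200_000,  # estimated cost of defects escaping to the field
}

testing_cost = costs["detection"] + costs["internal_failure"]
print(f"cost of testing: {testing_cost}")
print(f"avoided external failure: {costs['external_failure']}")
print(f"testing is worthwhile: {testing_cost < costs['external_failure']}")
```

This mirrors the argument in the text: as long as detection plus internal failure stays well below the expected external-failure cost, testing is an excellent value.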
Success factors
- Management commitment: Probably the most important success (and failure) factor is
management commitment to the change process. Management impatience can help
in getting commitment to change an organization, but can have the wrong effect
when the expectations created are not realized fast enough. When it is not clear that
management supports the change process, this has to be remedied (make sure that
management supports the change project)
- Clarity of the required situation: The change process should have a clearly defined
target, so that it is clear to everyone what must be achieved. These targets can differ
for each target group. The different target groups should have in view the targets that
are relevant to them.
- Change team participants: Using the right people to control and guide the change
process is of great importance for good progress of the process. These people must
create an open atmosphere, in which there are no inhibitions about giving ideas or
criticism. They are preferably employed full-time in the change process and have no
other activities.
- Support: Support the testers during the whole change process so that all involved
people will stay motivated and know that there are people to whom they can turn in
case of a problem or question. Note that training is not implementation and
subsequent support (training-on-the job) is needed.
- Provide regular feedback on results: No organization will remain motivated for a year
without seeing clear and tangible results. Make sure that results are defined for both
short term and long term. As soon as results become available make them visible to
the stakeholders within the organization.
In the case of a custom tool, the functionality can precisely meet the team’s needs. The tool can
be developed such that it interacts with other tools and generates reports in exactly the form
needed. In addition, the tool may be usable outside the specific project.
There are also important drawbacks. The tool should be adequately documented, so that it
can be maintained after its creator has left. As with every software product, it should be designed
and tested to ensure that it works as expected.
The Test Manager must ensure that all tools add value to the team’s work and can show a
positive Return on Investment (ROI). A cost-benefit analysis should be performed before developing
or buying a tool. Both recurring and non-recurring costs should be considered to calculate the
ROI. Costs are mostly quantitative (the budget needed for development, acquisition, maintenance
and licenses), while benefits can be qualitative as well as quantitative, such as shorter lead times,
more defects found, or a more effective way of working.
Examples of non-recurring costs are defining tool requirements, purchasing or developing the
tool, training.
Examples of recurring costs are licenses, maintenance, helpdesk, migration and adaptation
for future use.
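The split into non-recurring and recurring costs leads directly to an ROI calculation. The sketch below is a hypothetical example: the formula is the standard (benefits − costs) / costs, and every amount is invented for illustration.

```python
# Hypothetical ROI sketch for a test tool: one-off (non-recurring) costs
# plus yearly recurring costs, set against quantified yearly benefits.
# All amounts are invented for illustration.

def tool_roi(non_recurring, recurring_per_year, benefit_per_year, years):
    """ROI = (total benefits - total costs) / total costs."""
    total_cost = non_recurring + recurring_per_year * years
    total_benefit = benefit_per_year * years
    return (total_benefit - total_cost) / total_cost

# e.g. 30k purchase + training, 5k/year licenses, 20k/year effort saved
roi = tool_roi(30_000, 5_000, 20_000, 3)
print(f"3-year ROI: {roi:.0%}")
```

A calculation like this also shows why the time horizon matters: a tool that looks unprofitable after one year may show a positive ROI over three, once the non-recurring costs are spread out.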
The emphasis is on support. The use of the test tool must make it possible to achieve higher
productivity levels and/or greater effectiveness.
Selection
When selecting a tool, the different viewpoints of several stakeholders must be considered. To
the business, a positive ROI is required. To the project, the tool must be effective, e.g. by avoiding
the mistakes made during manual testing. To the users, the tool must support them in doing their
tasks in a more efficient and effective way.
Before buying a tool, you should first consider and possibly carry out the following:
- Decide whether you really need a tool
- Determine the need for a formal evaluation
- Identify and document requirements
- Conduct market research and compile a short-list
- Organize supplier presentations
- Formally evaluate the test tool
- Carry out post-evaluation activities.
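The short-listing and formal evaluation steps above are often done with a weighted scoring matrix. The sketch below is a hypothetical example: the requirements, weights, candidate tools, and scores are all invented for illustration.

```python
# Sketch of a weighted scoring matrix for comparing short-listed tools
# against documented requirements. All names and numbers are invented.

# Weight per requirement (how important it is to the stakeholders)
weights = {"functionality": 5, "integration": 3, "license_cost": 4}

# Score per candidate per requirement (e.g. 1-5 from the evaluation)
candidates = {
    "Tool A": {"functionality": 4, "integration": 2, "license_cost": 3},
    "Tool B": {"functionality": 3, "integration": 4, "license_cost": 4},
}

def weighted_score(scores):
    """Sum of (requirement weight x evaluation score)."""
    return sum(weights[req] * val for req, val in scores.items())

ranked = sorted(candidates, key=lambda t: weighted_score(candidates[t]),
                reverse=True)
print([(t, weighted_score(candidates[t])) for t in ranked])
```

The ranking is only an input to the decision: as the text notes, a tool trial and the stakeholders' different viewpoints should still be weighed before purchase.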
Implementation
Start with a small-scale project. The implementation team should work full time on the pilot
project. Team members may undertake specific roles:
- Champion: driving force
- Change agent: plans and manages
- Tool Custodian: responsible for technical support.
The results are assessed against the business case and if successful the use of the tool is
progressively rolled out to other projects and teams using the approach developed during the
pilot project.
Implementation process
If the process of tool selection is one of gradually narrowing down the choices, the
implementation process is the reverse: it is a process of gradually widening the tool’s
acceptance, use, and benefits, as illustrated in the figure hereafter:
[Figure: tool implementation process – management commitment; assemble team; internal marketing; publicity; pilot; pilot evaluation; phased implementation; post-implementation review]
Assemble team
The following roles can be given to some of the people in the team:
- Tool “champion”: the driving force behind the day-to-day implementation; understands the
people issues, is able to work well with people, and is enthusiastic about the potential benefits
- Change agent: plans and manages the day-to-day uptake (implementation) of the tool
(including the pilot project); a testing expert with a technical background and analytical skills
- Management sponsor: visibly supports the tool implementation process
Implementation team
The team that selected the tool may also be the team that helps to implement it. Ideally it
would include representatives from the different parts of the organization that would be
expected to use the tool. It has two tasks, an inward-facing one and an outward-facing one:
- Inward facing: gathering information from their own part of the organization (finding out
what people need, want, and expect from the tool), and feeding this information back to the
rest of the implementation team and the change agent.
- Outward facing: each team member should act as a mini change agent; they need to
keep people informed about what is happening, help to raise enthusiasm while tempering
unrealistic expectations, and help to solve problems which arise when the tool begins to
be used in earnest within their groups.
Start-up phase
Management commitment
In order to gain initial management commitment the champion or change agent will present
the business case for the selected tool, summarize the tool selection and evaluation results,
and give realistic estimates and plans for the tool implementation process.
The change agent must have adequate support from management in at least two ways: first,
visible backing from high-level managers; and second, adequate time, funding, and
resourcing (this may mean adversely impacting other projects in the short term).
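A business case of this kind usually boils down to a simple cost-benefit comparison. As a minimal sketch (the function name and all figures below are illustrative assumptions, not values from this course), a champion might estimate how long the tool takes to pay for itself:

```python
# Sketch: a simple payback calculation a champion might include in the
# business case for a test tool. All figures are illustrative assumptions.

def payback_period_months(tool_cost: float, monthly_saving: float) -> float:
    """Months until cumulative savings cover the initial tool cost."""
    if monthly_saving <= 0:
        raise ValueError("tool never pays back without a positive saving")
    return tool_cost / monthly_saving

# Example: license plus training costs 12,000; the tool is expected to save
# roughly 1,500 per month in manual regression effort.
print(payback_period_months(12_000, 1_500))  # 8.0 months
```

A conservative estimate here supports the point made below about realistic expectations: it is better to promise a longer payback and beat it than the reverse.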
Realistic expectations
In selling the idea of test automation, the champion does need to generate enough
enthusiasm so that management will be willing to invest in it. However, if the picture painted is
unrealistically optimistic, the benefits will not be achieved. The champion must find a good
balance point between achievable and saleable benefits. You will be seen in a better light if
you are successful in achieving a lower target than if you fail to achieve a more ambitious
target.
Publicity
Once you have the management commitment, both verbal and financial (which may just be
time allowed to work on the implementation), the change agent needs to set up a
continuing and highly visible publicity machine. All those who will eventually be affected need
to be informed about the changes that will be coming their way. People are not convinced by
one presentation, and even if they are, they don’t stay convinced over time. So the change
agent’s role is to provide a constant drip-feed of publicity about the tool: who is using it,
success stories, and problems overcome.
Continuing publicity
The most important publicity is from the earliest real use of the tool, for example from the pilot
project. The benefits gained on a small scale should be widely publicized to increase the
desire and motivation to use the tool. It is also important to give relevant bad news to keep
expectations at a realistic level.
Throughout the implementation project, it is important to continue to give a constant supply of
information about the test automation efforts.
Pilot project
It is best to try out the tool on a small pilot project first. This ensures that any problems
encountered in its use are ironed out while only a small number of people are using it. It also
enables you to see how the tool will affect the way you do your testing, and gives you some
idea of how you may need to modify your existing procedures or standards to make best
use of the tool. The pilot project should start by defining a business case for the use of the
tool on this project, with measurable success factors. For example, you may want to reduce
the time to run regression tests from a week to a day.
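A measurable success factor can be checked mechanically at the end of the pilot. As a minimal sketch (the function and figures are hypothetical, chosen to mirror the week-to-a-day example above):

```python
# Sketch: checking whether a pilot met a measurable success criterion.
# The criterion and all figures below are illustrative assumptions.

def pilot_met_target(baseline_hours: float, pilot_hours: float,
                     target_hours: float) -> bool:
    """True if the pilot brought the measured time down to the target,
    and the target is genuinely below the pre-tool baseline."""
    return pilot_hours <= target_hours < baseline_hours

# Example: the regression run took a working week (40 hours) before the
# tool; the business case targets one day (8 hours); the pilot measured 7.5.
print(pilot_met_target(baseline_hours=40.0,
                       pilot_hours=7.5,
                       target_hours=8.0))  # True
```

Defining the check before the pilot starts keeps the success criterion objective, so the publicity that follows reports measured results rather than impressions.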
The pilot project should be neither too long nor too short, say between two and four months.
The change agent and change management team can act as internal consultants to the new
tool users, and can perform a very useful role in coordinating the growing body of knowledge
about the use of the tool within the organization.
People issues
Managing change is all about people issues, for it is the way people work that is being
changed. A good manager will be sensitive to these issues, but technical people are often not
aware of the effects a technical change can have on people emotionally and psychologically.
The most important thing you can do is give people detailed information about the steps
they need to take from where they are now to where you want them to be in the future. Don’t
let them make large steps; let them make small steps forward. Note that this is one reason why
you need to plan the implementation: so that you have these steps mapped out in advance.