
Risk Based Testing

an Introduction

Richard Donovan
25th Jan 2010
v 1.1
Key principles

– Why Risk Based Testing?


• Because testing is always under pressure: testing effort is limited by the time
and money available.
• Testing is usually the last activity in a project, so it absorbs upstream delays
• You must be able to cut the least important tests first
• Complements Agile projects and feature scoring

– What is Risk Based Testing?


• Prioritisation of test effort based on risk scores: find the most important
defects as early as possible at the lowest price
• “The goal is to reach an acceptable level of risk. At that point, quality is
automatically good enough”*

* “The Challenge of Good Enough Software”, James Bach, American Programmer, 1995
Definition of Risk

– Risk = Probability X Damage*


• Risk is defined on the basis of two factors: the probability of a fault occurring and the
damage a fault would cause.
• Damage is determined by: financial impact, loss of clients, damage to corporate
identity, usage frequency
• Probability is determined by: complexity, size, prior knowledge, experience gained
during testing (defects tend to cluster together), newly developed product vs. “off-the-
shelf”

– How is it done?
• Test risk scores are obtained through Risk Assessment Sessions

* “Risk Based Testing – Strategies for Prioritising Tests Against Deadlines”, Hans Schaefer, Keynote, STAR West 1998
Risk Assessment Sessions

– Objective
• Stakeholders provide their risk interpretation based on a
scoring technique.

– Variables
• Work packages are split into components.
• Quality Characteristics (a small subset chosen from the 27
sub-characteristics defined in ISO/IEC 9126)
– Here these are: Performance, Usability and Reliability.
In the Session

– Step 1:
• Score risk per component: rate probability and damage from 1 to 5 and multiply them.
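Step 1 can be sketched in code; the component names and scores below are hypothetical placeholders, not taken from the original slides.

```python
# Step 1 sketch: per-component risk = probability x damage, both scored 1-5.
# Component names and scores are hypothetical, for illustration only.
components = [
    ("Payment",   5, 5),  # (name, probability, damage)
    ("Search",    3, 2),
    ("Reporting", 2, 4),
]

risks = {name: prob * dmg for name, prob, dmg in components}

# Rank components from highest to lowest risk to prioritise test effort.
for name, score in sorted(risks.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score}")  # Payment: 25, Reporting: 8, Search: 6
```

Here the hypothetical “Payment” component (risk 25) would attract the most test effort first.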
In the Session - continued

– Step 2:
• Score the importance of each quality characteristic per component,
using H, M and L.
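Step 2 can be sketched similarly; the component names, the letter scores, and the H=3/M=2/L=1 weighting are all assumptions for illustration (the slides prescribe only the letters H, M and L).

```python
# Step 2 sketch: importance of each quality characteristic per component,
# scored H(igh), M(edium) or L(ow). All values here are hypothetical.
importance = {
    "Payment": {"Performance": "H", "Usability": "M", "Reliability": "H"},
    "Search":  {"Performance": "M", "Usability": "H", "Reliability": "L"},
}

# Assumed numeric weighting so the letters can later be combined with
# the numeric risk scores from Step 1.
WEIGHT = {"H": 3, "M": 2, "L": 1}

def importance_weight(component: str, characteristic: str) -> int:
    return WEIGHT[importance[component][characteristic]]

print(importance_weight("Payment", "Reliability"))  # -> 3
```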
In the Session - continued

– Scoring risk
• The idea is to reach consensus
– All attendees score probability and damage for each item by placing a
score card upside down in front of them
» 1 – 5 probability (low to high)
» 1 – 5 damage (low to high)
– On revealing scores, the reasons for lowest and highest scores are
discussed until consensus is reached.

– This meeting is minuted in detail in order to create an audit trail.
After the Session

– Step 3:
• determine mitigation strategy

• Choose test level based on scores.


– more risk and more importance -> heavier test level
– technical risk is usually mitigated by unit tests, system tests or a proof of concept (POC)
– business risk is usually mitigated in later phases such as acceptance tests, although a POC
can be useful (for instance, picture quality tests early on in the PDL programme).

• Choose integration strategy based on scores


– more risk and more importance -> earlier start
– Test integration through “cross-workstream functional slices”
Further considerations

– Advantages of this approach


• Utilitarian view of quality assurance: right or wrong depends on how it affects
people. Key people are made to feel important and involved, with a sense of ownership.
Quality is, to a large extent, a perception: the people who will judge whether we have
done a good job are the same people involved in setting the priorities.
• Most important faults are found earlier
• More flexible than traditional methods:
– Test focus can shift during test cycles based on changed risk perception
– Tests can be dropped if time runs out, based on an informed decision about the risk
this introduces

– Questions?
