
KPIs are to be measured at each test level across all

projects, and the transition from one stage to another

must adhere to explicit exit criteria.

Process Improvement

TMMi and CTP are to be used for process improvement, with

an objective of TMMi level 4 across the organization and the

Plan, Prepare, Perform, Perfect activities running at the

operational level.

Test Strategy
= A high-level description of the test levels to be

performed and the testing within those levels for

an organization or programme (one or more projects).

describes the organization’s general test methodology

includes the way in which testing is used to manage

product and project risks

Includes the division of testing into levels

Includes the high-level activities associated with testing

should provide the generic test entry and exit criteria

for the organization or for one or more programs

The test strategy should be consistent with the test

policy.

The same organization may have different strategies for

different situations:

different software development lifecycles

different levels of risk

different regulatory requirements


A test strategy provides a generalized description of the

test process, usually at the product or organizational level.

Common types are:

Analytical strategies (such as risk-based testing)

based on an analysis of some factor (e.g.,

requirement or risk).

the test team analyzes the test basis to identify the

test conditions to cover

test analysis derives test conditions from the

requirements, tests are then designed and

implemented to cover those conditions

The tests are subsequently executed, often using the

priority of the requirement covered by each test to

determine the order in which the tests will be run

Test results are reported in terms of requirements

status, e.g., requirement tested and passed,

requirement tested and failed, requirement not yet

fully tested, requirement testing blocked, etc.
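The risk-based ordering and requirement-status reporting described above can be sketched in a few lines. The class and field names here are invented for illustration; a real project would pull priorities from its requirements or risk register.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    priority: int  # 1 = highest risk/business priority

@dataclass
class Test:
    name: str
    requirement: Requirement
    result: str = "not run"  # becomes "passed" or "failed" after execution

def execution_order(tests):
    """Run tests covering the highest-priority requirements first."""
    return sorted(tests, key=lambda t: t.requirement.priority)

def requirement_status(tests):
    """Report each requirement's status in the vocabulary used above."""
    by_req = {}
    for t in tests:
        by_req.setdefault(t.requirement.req_id, set()).add(t.result)
    report = {}
    for rid, results in by_req.items():
        if results == {"passed"}:
            report[rid] = "tested and passed"
        elif "failed" in results:
            report[rid] = "tested and failed"
        elif "not run" in results and results != {"not run"}:
            report[rid] = "not yet fully tested"
        else:
            report[rid] = "not yet tested"
    return report
```

Reporting per requirement rather than per test keeps the results readable for stakeholders who think in terms of the test basis, not individual test cases.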

Model-based strategies (such as operational

profiling)

tests are designed based on some model of some

required aspect of the product, such as a function, a

business process, an internal structure, or a non-

functional characteristic
the test team develops a model (based on actual or

anticipated situations) of:

the environment in which the system exists

the inputs and conditions to which the system is

subjected

how the system should behave

in model-based performance testing one might

develop models of:

incoming and outgoing network traffic

active and inactive users

resulting processing load

based on current usage and project growth

Models might be developed considering the current

production environment’s:

hardware

software

data capacity

network

infrastructure

Models may also be developed for ideal, expected,

and minimum throughput rates, response times, and

resource allocation.
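A minimal sketch of such a usage model follows. All figures and parameter names are illustrative assumptions, not measured values: current usage is scaled by compound monthly growth to estimate the load the environment must sustain, and ideal/expected/minimum targets are then derived from the same model.

```python
def projected_load(active_users, requests_per_user_per_min,
                   monthly_growth_rate, months_ahead):
    """Project request throughput (requests/min) from a simple usage
    model: current active users grown at a compound monthly rate."""
    grown_users = active_users * (1 + monthly_growth_rate) ** months_ahead
    return grown_users * requests_per_user_per_min

# Derive throughput targets from one model run (head-room factors
# are arbitrary placeholders).
expected = projected_load(1000, 6, 0.05, 12)
targets = {
    "expected": expected,
    "ideal":    expected * 1.5,   # 50% head-room over expected load
    "minimum":  expected * 0.5,
}
```

The same structure extends naturally to separate models for network traffic, active vs. inactive users, and per-component processing load.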
Methodical strategies (such as quality characteristic-

based)

relies on making systematic use of some predefined

set of:

tests or test conditions, such as a taxonomy of

common or likely types of failures

a list of important quality characteristics

company-wide look-and-feel standards

the test team can use a predetermined set of test

conditions, such as:

a quality standard [ISO25000]

a checklist or a collection of generalized logical

test conditions

uses that set of test conditions from one iteration to

the next or from one release to the next

in maintenance testing of a simple, stable e-

commerce website, testers might use a checklist that

identifies the key functions, attributes, and links for

each page and cover the relevant elements of this

checklist each time a modification is made to the

site.
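The checklist-driven approach above can be represented very simply. The page names and checklist items below are invented examples for a hypothetical e-commerce site; the point is that the same predetermined set of conditions is reused from one release to the next.

```python
# Hypothetical checklist: for each page, the key functions,
# attributes, and links that must be re-covered after any change.
CHECKLIST = {
    "home":     ["search box", "login link", "featured items"],
    "product":  ["add to cart", "price display", "image gallery"],
    "checkout": ["payment form", "order summary", "confirm button"],
}

def uncovered_items(page, covered):
    """List checklist items on a page not yet exercised this cycle."""
    return [item for item in CHECKLIST[page] if item not in covered]
```

After each modification, testing is complete for a page once `uncovered_items` returns an empty list for it.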

Reactive strategies (such as using defect-based

attacks)

testing is reactive to the component or system being

tested, and the events occurring during test

execution, rather than being pre-planned


the test team waits to design and implement tests

until the software is received, reacting to the actual

system under test

Tests are designed and implemented, and may

immediately be executed in response to knowledge

gained from prior test results.

Exploratory testing is a common technique employed

in reactive strategies.

Testers periodically report results of the testing

sessions to the Test Manager, who may revise the

charters based on the findings.
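The session-based reporting loop described above can be sketched as a small data structure. Charter texts and field names are invented for illustration; a Test Manager would use such a summary to decide which charters to revise or deepen.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """One time-boxed exploratory testing session against a charter."""
    charter: str
    notes: list = field(default_factory=list)
    defects: list = field(default_factory=list)

def session_summary(sessions):
    """Defect count per charter: charters that surfaced defects are
    candidates for follow-up sessions with a narrower focus."""
    return {s.charter: len(s.defects) for s in sessions}
```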

Process- or standard-compliant strategies (such as

medical systems subject to U.S. FDA standards)

involves analyzing, designing, and implementing

tests based on external rules and standards specified

by:

industry-specific standards

process documentation

the rigorous identification and use of the

test basis

any process or standard imposed on or by the

organization
the test team follows a set of processes defined by a

standards committee or other panel of experts where

the processes address:

Documentation

The proper identification and use of the test basis

and test oracle(s)

the organization of the test team

in projects following Scrum Agile management

techniques, in each iteration testers:

analyze user stories that describe particular

features

estimate the test effort for each feature as part of

the planning process for the iteration

identify test conditions (often called acceptance

criteria) for each user story

execute tests that cover those conditions

report the status of each user story (untested,

failing, or passing) during test execution

Regression-averse testing strategies (such as

extensive automation)

motivated by a desire to avoid regression of existing

capabilities.

Includes reuse of existing testware (especially test

cases and test data), extensive automation of

regression tests and standard test suites.

the test team uses various techniques to manage the

risk of regression, especially functional and/or non-

functional regression test automation


when regression testing a web-based application,

testers can use a GUI-based test automation tool to

automate the typical and exception use cases for the

application. Those tests are then executed any time

the application is modified
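A minimal sketch of that setup follows, assuming the test cases are plain Python functions. The registered functions here are trivial placeholders; a real suite would drive the application through a GUI automation tool, but the reuse pattern is the same: register once, re-run the whole suite on every modification.

```python
# Reusable regression suite: test cases register themselves once and
# are re-executed in full whenever the application changes.
REGRESSION_SUITE = []

def regression_test(fn):
    """Register a test case for reuse across releases."""
    REGRESSION_SUITE.append(fn)
    return fn

@regression_test
def typical_checkout():
    assert 2 + 2 == 4        # placeholder for a typical use case

@regression_test
def invalid_card_rejected():
    assert "x" not in "ok"   # placeholder for an exception use case

def run_regression_suite():
    """Execute every registered test; return the names of failures."""
    failures = []
    for test in REGRESSION_SUITE:
        try:
            test()
        except AssertionError:
            failures.append(test.__name__)
    return failures
```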

Consultative/Directed strategies (such as user-

directed testing)

driven primarily by the advice, guidance, or

instructions of stakeholders, business domain

experts, or technology experts

the test team relies on the input of one or more key

stakeholders to determine the test conditions

in outsourced compatibility testing for a web-based

application, a company may give the outsourced

testing service provider a prioritized list of browser

versions, anti-malware software, operating systems,

connection types, and other configuration options that

they want evaluated against their application.

The testing service provider can then use techniques

such as pairwise testing (for high priority options)

and equivalence partitioning (for lower priority

options) to generate the tests.

An appropriate test strategy is often created by combining

several of these types of test strategies. The specific

strategies selected should be appropriate to the

organization's needs and means, and strategies may even be

tailored to fit particular operations and projects.


The test manager should keep in mind that:

Different test strategies for the short term and

the long term might be necessary

Different test strategies are suitable for

different organizations and projects

the test strategy also differs based on the software development lifecycle model

The Test Strategy may also describe the test levels to be

carried out and it should give guidance on the entry and

exit criteria for each level, but also on the relationships

among levels.

While the test strategy provides a generalized

description of the test process, the test approach tailors

the test strategy for a particular project or release.

The test approach is the starting point for selecting the test

techniques, test levels, and test types, and for defining the

entry criteria and exit criteria.

Besides the above, a Test Strategy may also include:

Integration procedures

Test specification techniques

Independence of testing

Mandatory and optional standards

Test environments

Test automation

Test tools

Reusability of software work products and test work

products
Confirmation testing (re-testing) and regression testing

Test control and reporting

Test measurements and metrics

Defect management

Configuration management approach for testware

Roles and responsibilities

Example

Test Approach

Test Actors

Types of Testing

Test Planning and Execution

Entry and Exit Criteria

Test Environments

Release Control

Test Automation

Test Tools

Risk Analysis

Review and Approvals
