Testing

The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products.

Verification

Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.

Validation

Confirmation by examination and through provision of objective evidence that the requirements for
a specific intended use or application have been fulfilled.

Test objective

A reason or purpose for designing and executing a test.

Test object

The component or system to be tested.

Debugging

The process of finding, analyzing and removing the causes of failures in software.

Quality assurance

Part of quality management focused on providing confidence that quality requirements will be
fulfilled.

Quality

The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

Error (mistake)

A human action that produces an incorrect result.

Defect (bug, fault)

An imperfection or deficiency in a work product where it does not meet its requirements or
specifications.

Failure

An event in which a component or system does not perform a required function within specified
limits.
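
To make the distinction between error, defect and failure concrete, here is a minimal, hypothetical Python sketch (not from the source; the function and the assumed requirement are invented for illustration):

    # The programmer's mistake (error) was typing ">" instead of ">=".
    # That mistake leaves a defect (bug) in the work product, i.e. the code.
    def is_adult(age):
        # Assumed requirement: people aged 18 or older count as adults.
        return age > 18   # defect: should be "age >= 18"

    # The defect only shows up as a failure when the faulty code is executed
    # with an input that exposes it.
    print(is_adult(18))   # actual result False, expected True -> failure
    print(is_adult(30))   # actual result True -> the defect stays hidden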

Root cause

A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.

Testing principles

Principle 1: Testing shows the presence of defects, not their absence.

Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, testing is not a proof of correctness.

Principle 2: Exhaustive testing is impossible.

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Rather than attempting to test exhaustively, risk analysis, test techniques and priorities should be used to focus test efforts.
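
A rough, hypothetical calculation (the field names and value counts are invented for illustration) shows how quickly the number of input combinations grows for even a small input form:

    # Assumed input fields of an imaginary order form and how many
    # distinct values each one can take.
    fields = {
        "country": 250,
        "payment_type": 5,
        "quantity": 1000,
        "discount_code": 10000,
    }

    combinations = 1
    for name, value_count in fields.items():
        combinations *= value_count

    # 250 * 5 * 1000 * 10000 = 12,500,000,000 combinations.
    # Even at one test per second this would take roughly 400 years,
    # which is why risk analysis and test techniques are used instead.
    print(f"{combinations:,} input combinations")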

Principle 3: Early testing saves time and money.

To find defects early, both static and dynamic test activities should be started as early as possible in the software development life cycle. Early testing is sometimes referred to as ‘shift left’. Testing early in the software development life cycle helps reduce or eliminate costly changes (see Chapter 3, Section 3.1).

Principle 4: Defects cluster together.

A small number of modules usually contains most of the defects discovered during pre-release testing, or they are responsible for most of the operational failures. Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort (as mentioned in Principle 2).

Principle 5: Beware of the pesticide paradox.

If the same tests are repeated over and over again, eventually these tests no longer find any new defects. To detect new defects, existing tests and test data need to be changed, and new tests need to be written. (Tests are no longer effective at finding defects, just as pesticides are no longer effective at killing insects after a while.) In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is the relatively low number of regression defects.

Principle 6: Testing is context dependent.

Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done differently to testing in a sequential life cycle project (see Chapter 2, Section 2.1).

Principle 7: Absence-of-errors is a fallacy.

Some organizations expect that testers can run all possible tests and find all possible defects, but Principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy to expect that just finding and fixing a large number of defects will ensure the success of a system. For example, thoroughly testing all specified requirements and fixing all defects found could still produce a system that is difficult to use, that does not fulfil the users’ needs and expectations or that is inferior compared to other competing systems.

Coverage

The degree to which specified coverage items have been determined to have been exercised by a test suite, expressed as a percentage.
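
As a simple illustration (the numbers are invented, and statement coverage is only one possible kind of coverage item), the percentage is just the ratio of exercised coverage items to identified coverage items:

    # Hypothetical counts; a real tool such as coverage.py would measure them.
    statements_total = 120       # coverage items identified in the test object
    statements_exercised = 90    # coverage items exercised by the test suite

    coverage = statements_exercised / statements_total * 100
    print(f"Statement coverage: {coverage:.1f}%")   # -> Statement coverage: 75.0%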

Test basis

The body of knowledge used as the basis for test analysis and design.

Test planning

The activity of establishing or updating a test plan.


Test plan

Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities.

(Note that we have included the definition of test plan here, even though it is not listed in the Syllabus as a term that you need to know for this chapter; otherwise the definition of test planning is not very informative.)

Test monitoring

A test management activity that involves checking the status of testing activities, identifying any variances from the planned or expected status and reporting status to stakeholders.

Test control

A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.

Test analysis

The activity that identifies test conditions by analyzing the test basis.

Test condition (charter)

An aspect of the test basis that is relevant in order to achieve specific test objectives. See also:
exploratory testing.

Test design

The activity of deriving and specifying test cases from test conditions.

Test case

A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
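
A minimal, hypothetical example (the BankAccount class and the chosen values are invented, not taken from the source) showing how the parts of a test case map onto code written with Python's unittest module:

    import unittest

    # Imaginary test object, defined here only so the example is self-contained.
    class BankAccount:
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    class TestWithdraw(unittest.TestCase):
        def test_withdraw_reduces_balance(self):
            account = BankAccount(balance=100)     # precondition: balance is 100
            account.withdraw(30)                   # input / action
            self.assertEqual(account.balance, 70)  # expected result
            # postcondition: the account object is still usable afterwards
            account.withdraw(70)

    if __name__ == "__main__":
        unittest.main()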

Test data

Data created or selected to satisfy the execution preconditions and inputs to execute one or more test cases.

Test implementation

The activity that prepares the testware needed for test execution based on test analysis and design.

Test procedure

A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution.

Test suite (test case suite, test set)

A set of test cases or test procedures to be executed in a specific test cycle.

Test execution schedule

A schedule for the execution of test suites within a test cycle.

Test execution

The process of running a test on the component or system under test, producing actual result(s).

Testware

Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing.

Test completion

The activity that makes test assets available for later use, leaves test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders.

Test oracle (oracle)

A source to determine expected results to compare with the actual result of the system under test.
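
One common kind of oracle is a trusted alternative implementation. In this hypothetical sketch (the hand-written sort is invented for illustration), Python's built-in sorted() serves as the oracle for the system under test:

    import random

    # Imaginary system under test: a simple insertion sort.
    def my_sort(values):
        result = []
        for value in values:
            position = 0
            while position < len(result) and result[position] < value:
                position += 1
            result.insert(position, value)
        return result

    data = [random.randint(-100, 100) for _ in range(20)]
    expected = sorted(data)   # expected result supplied by the oracle
    actual = my_sort(data)    # actual result from the system under test
    assert actual == expected, f"failure: {actual} != {expected}"
    print("actual result matches the oracle")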
