Chapter 1 - Fundamentals of Testing (8)

1 x K1 - Keywords (31 keywords)

1 x K1 - 111 Typical Objectives of Testing | 151 Human Psychology and Testing

6 x K2 - 112 Testing and Debugging | 121 Testing’s Contributions to Success | 122 Quality
Assurance and Testing | 123 Errors, Defects, and Failures | 124 Defects, Root Causes and Effects |
131 Seven Testing Principles | 141 Test Process in Context | 142 Test Activities and Tasks |
143 Test Work Products | 144 Traceability between the Test Basis and Test Work Products |
152 Tester’s and Developer’s Mindsets

Chapter 2 - Testing Throughout the Software Development Lifecycle (5)

1 x K1 - 212 Software Development Lifecycle Models in Context | 232 Non-functional Testing

4 x K2 - 211 Software Development and Software Testing | 221 Component Testing |
231 Functional Testing | 233 White-box Testing | 241 Triggers for Maintenance |
242 Impact Analysis for Maintenance

Chapter 3 - Static Testing (5)

1 x K1 - 311 Work Products that Can Be Examined by Static Testing | 322 Roles and
responsibilities in a formal review

3 x K2 - 312 Benefits of Static Testing | 313 Differences between Static and Dynamic Testing |
321 Work Product Review Process | 323 Review Types | 325 Success Factors for Reviews

1 x K3 - 324 Applying Review Techniques

Chapter 4 - Test Techniques (11)

1 x K1 - Keywords (16 keywords)

5 x K2 - 411 Categories of Test Techniques and Their Characteristics | 425 Use Case Testing |
431 Statement Testing and Coverage | 432 Decision Testing and Coverage | 433 The Value of
Statement and Decision Testing | 441 Error Guessing | 442 Exploratory Testing |
443 Checklist-based Testing

5 x K3 - 421 Equivalence Partitioning | 422 Boundary Value Analysis | 423 Decision Table Testing |
424 State Transition Testing

Chapter 5 - Test Management (9)

2 x K1 - 512 Tasks of a Test Manager and Tester | 525 Factors Influencing the Test Effort |
531 Metrics Used in Testing | 551 Definition of Risk

5 x K2 - 511 Independent Testing | 521 Purpose and Content of a Test Plan | 522 Test Strategy and
Test Approach | 523 Entry Criteria and Exit Criteria (Definition of Ready and Done) |
526 Test Estimation Techniques | 532 Purposes, Contents, and Audiences for Test Reports |
541 Configuration Management | 552 Product and Project Risks | 553 Risk-based Testing and
Product Quality

2 x K3 - 524 Test Execution Schedule | 561 Defect Management

Chapter 6 - Tool Support for Testing (2)

1 x K1 - 612 Benefits and Risks of Test Automation | 613 Special Considerations for Test
Execution and Test Management Tools | 621 Main Principles for Tool Selection | 622 Pilot Projects
for Introducing a Tool into an Organization | 623 Success Factors for Tools

1 x K2 - 611 Test Tool Classification
Chapter 1 - Fundamentals of Testing (8)

111 Typical Objectives of Testing 


1. To prevent defects by evaluating work products such as requirements, user stories, design, and
code

2. To verify whether all specified requirements have been fulfilled

3. To check whether the test object is complete and validate if it works as the users and other
stakeholders expect

4. To build confidence in the level of quality of the test object

5. To find defects and failures, and thus reduce the level of risk of inadequate software quality

6. To provide sufficient information to stakeholders to allow them to make informed decisions,
especially regarding the level of quality of the test object

7. To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the
test object’s compliance with such requirements or standards

151 Human Psychology and Testing

1. Start with collaboration rather than battles. Remind everyone of the common goal of better
quality systems.

2. Emphasize the benefits of testing. For example, for the authors, defect information can help
them improve their work products and their skills. For the organization, defects found and fixed
during testing will save time and money and reduce overall risk to product quality.

3. Communicate test results and other findings in a neutral, fact-focused way without criticizing
the person who created the defective item. Write objective and factual defect reports and review
findings.

4. Try to understand how the other person feels and the reasons they may react negatively to the
information.

5. Confirm that the other person has understood what has been said and vice versa.

112 Testing and Debugging



Executing tests can show failures that are caused by defects in the software.

Debugging is the development activity that finds, analyzes, and fixes such defects.

121 Testing’s Contributions to Success

1. Having testers involved in requirements reviews or user story refinement could detect defects in
these work products. The identification and removal of requirements defects reduces the risk of
incorrect or untestable features being developed.

2. Having testers work closely with system designers while the system is being designed can
increase each party’s understanding of the design and how to test it. This increased
understanding can reduce the risk of fundamental design defects and enable tests to be identified
at an early stage.

3. Having testers work closely with developers while the code is under development can increase
each party’s understanding of the code and how to test it. This increased understanding can
reduce the risk of defects within the code and the tests.

4. Having testers verify and validate the software prior to release can detect failures that might
otherwise have been missed, and support the process of removing the defects that caused the
failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs
and satisfies requirements.


122 Quality Assurance and Testing



Quality Management = Q. Assurance (confidence) + Q. Control (testing)

123 Errors > Defects (Bug, Fault) > Failures + false positive/negative 


124 Defects, Root Causes and Effects



The root causes of defects are the earliest actions or conditions that contributed to creating the
defects.

Root cause: lack of knowledge > Defect: improper calculation > Failure: incorrect interest rate > Effects:
customer complaints

131 Seven Testing Principles 


1. Testing shows the presence of defects, not their absence 



2. Exhaustive testing is impossible

3. Early testing saves time and money

4. Defects cluster together

5. Beware of the pesticide paradox

6. Testing is context dependent

7. Absence-of-errors is a fallacy

141 Test Process in Context

Software development lifecycle model and project methodologies being used

Test levels and test types being considered

Product and project risks

Business domain

Operational constraints: Budgets & resources, Timescales, Complexity, Contractual requirements


Organizational policies and practices

Required internal and external standards

142 Test Activities and Tasks & 143 Test Work Products

Test planning: t.objectives > t.plan


Test monitoring and control: tracking progress, exit/coverage criteria > t.progress report
Test analysis: what to test, t.basis-t.level, prio t.cond. > t.condition
Test design: how to test, t.data > t.case

Test implementation: do we have everything, prep t.data > t.suite, t.procedure, t.exe.schedule,
exploratory all together, t.oracle

Test execution: testware, compare actual & expected > defect reports, status of
t.case & procedure

Test completion: all def. reports closed, finalise, archive, maintenance > t.summary report

144 Traceability between the Test Basis and Test Work Products

1. Analyzing the impact of changes

2. Making testing auditable

3. Meeting IT governance criteria

4. Improving the understandability of test progress reports and test summary reports to include
the status of elements of the test basis (e.g., requirements that passed their tests, requirements
that failed their tests, and requirements that have pending tests)

5. Relating the technical aspects of testing to stakeholders in terms that they can understand

6. Providing information to assess product quality, process capability, and project progress
against business goals

152 Tester’s and Developer’s Mindsets

Curiosity, Professional pessimism, A critical eye, Attention to detail, Experience, Good
communication skills

Chapter 2 - Testing Throughout the Software Development Lifecycle (5)

211 Software Development and Software Testing (K2)


1. For every development activity, there is a corresponding test activity

2. Each test level has test objectives specific to that level

3. Test analysis and design for a given test level begin during the corresponding development
activity

4. Testers participate in discussions to define and refine requirements and design, and are


involved in reviewing work products (e.g., requirements, design, user stories, etc.) as soon as
drafts are available

Iterative: more regression, defects outside scope, less thorough testing


212 Software Development Lifecycle Models in Context (K1) COTS

Project goal, type of product, business prios, risks

1. Difference in product risks of systems (complex or simple project)

2. Many business units can be part of a project or program (combination of sequential and agile 

development)

3. Short time to deliver a product to the market (merge of test levels and/or integration of test
types in test levels)

2.2 Test Levels: Component testing | Integration testing | System testing | Acceptance testing
Specific objectives | Test basis, referenced to derive test cases | Test object (i.e., what is being
tested) | Typical defects and failures | Specific approaches and responsibilities

221 Component Testing (Unit & Module) Mock | A > Stub | Driver > B

t. basis: Detailed design | Code | Data model | Component specifications

t. objects: Components, units or modules | Code and data structures | Classes | Database
modules

typical defects and failures: Incorrect functionality (not as described in design specifications) |
Data flow problems | Incorrect code and logic

Specific approaches and responsibilities: XP > TDD

222 Integration Testing | Interfaces :


Component integration : Automation, CI

System integration: Microservices, Webservices, parallel or after sys. t.

t. basis: Software and system design | Sequence diagrams | Interface and communication
protocol specifications | Use cases | Architecture at component or system level | Workflows |
External interface definitions

t. objects: Subsystems | Databases | Infrastructure | Interfaces | APIs | Microservices

typical defects and failures (component i.t.): Incorrect data, missing data, or incorrect data
encoding | Incorrect sequencing or timing of interface calls | Interface mismatch | Failures in
communication between components | Unhandled or improperly handled communication failures
between components | Incorrect assumptions about the meaning, units, or boundaries of the data
being passed between components

typical defects and failures (system i.t.): Inconsistent message structures between systems |
Incorrect data, missing data, or incorrect data encoding | Interface mismatch | Failures in
communication between systems | Unhandled or improperly handled communication failures
between systems | Incorrect assumptions about the meaning, units, or boundaries of the data
being passed between systems | Failure to comply with mandatory security regulations

Specific approaches and responsibilities: Top-down, Bottom-up, Functional incremental

223 System Testing : release decisions

t.basis: System and software requirement specifications (functional and non-functional) | Risk
analysis reports | Use cases | Epics and user stories | Models of system behavior | State diagrams
| System and user manuals

t.objects: Applications | Hardware/software systems | Operating systems | System under test
(SUT) | System configuration and configuration data

typical defects and failures: Incorrect calculations | Incorrect or unexpected system functional
or non-functional behavior | Incorrect control and/or data flows within the system | Failure to
properly and completely carry out end-to-end functional tasks | Failure of the system to work
properly in the system environment(s) | Failure of the system to work as described in system and
user manuals

Specific approaches and responsibilities: often independent, 3rd-party team, both funct. and
non-funct.

224 Acceptance Testing: not finding/preventing defects

1. User acceptance testing: validating the fitness



2. Operational acceptance testing: keep working under exceptional condt.

Testing of backup and restore |Installing, uninstalling and upgrading | Disaster recovery |User
management | Maintenance tasks | Data load and migration tasks | Checks for security
vulnerabilities | Performance testing -> e.g: system admin

3. Contractual and regulatory acceptance testing: by users or indp. testers 

4. Alpha and beta testing: by developers of COTS software to get feedback from users before market release

Alpha at the developing company, Beta at the users’ office

t.basis: Business processes | User or business requirements | Regulations, legal contracts and
standards | Use cases and/or user stories | System requirements | System or user documentation|
Installation procedures | Risk analysis reports 

t.basis (OAT): Backup and restore procedures | Disaster recovery procedures | Non-functional
requirements | Operations documentation | Deployment and installation instructions | Performance
targets | Database packages | Security standards or regulations 

t.objects: System under test | System configuration and configuration data | Business processes
for a fully integrated system | Recovery systems and hot sites (for business continuity and disaster
recovery testing) | Operational and maintenance processes | Forms | Reports | Existing and
converted production data

typical defects and failures: System workflows do not meet business or user requirements |
Business rules are not implemented correctly | System does not satisfy contractual or regulatory
requirements | Non-functional failures such as security vulnerabilities, inadequate performance
efficiency under high loads, or improper operation on a supported platform

Specific approaches and responsibilities: Acceptance testing when it is installed or integrated |
Acceptance testing of a new functional enhancement may occur before system testing (iterative)

2.3 Test Types: Functional | Non-funct. | White-box | Change-related

1.Evaluating functional quality characteristics, such as completeness, correctness, and


appropriateness

2. Evaluating non-functional quality characteristics, such as reliability, performance efficiency,
security, compatibility, and usability

3. Evaluating whether the structure or architecture of the component or system is correct,
complete, and as specified

4. Evaluating the effects of changes, such as confirming that defects have been fixed
(confirmation testing) and looking for unintended changes in behavior resulting from software or
environment changes (regression testing)

231 Functional Testing: black-box | should be performed at all test levels | functional coverage
using traceability

232 Non-functional Testing: characteristics, how well, all test levels | performance, stress, load,
security | tested by non-funct. coverage elements

233 White-box Testing: mostly for component and c. integration | code knowledge

234 Change-related Testing: confirmation (re-test after defect fixed) | regression (unintended
side-effects): automation and CI in unchanged areas

235 Test types and test levels: examples for each type in each level

2.4 Maintenance Testing: degree of risk of change | size of existing system | size of change

241 Triggers for Maintenance: Modification | Migration | Retirement | restore/retrieve procedures
| regression testing

242 Impact Analysis for Maintenance: identify the intended consequences | identify the impact
of a change on existing tests | can be done before change is made | it can be difficult if:
1. Specifications (e.g., business requirements, user stories, architecture) are out of date or missing

2. Test cases are not documented or are out of date 

3. Bi-directional traceability between tests and the test basis has not been maintained 

4. Tool support is weak or non-existent 

5. The people involved do not have domain and/or system knowledge 

6. Insufficient attention has been paid to the software's maintainability during development


Chapter 3 - Static Testing (5)

3.1 Static Testing Basics: Static analysis (Tools) < Static Testing (Reviews) | Dynamic Testing |
Work products

311 Work Products that Can Be Examined by Static Testing:

Specifications, business, functional and security requirements | Epics, user stories, and
acceptance criteria | Architecture and design specifications | Code | Testware, test plans, test
cases, test procedures, and automated test scripts | User guides | Web pages | Contracts, project
plans, schedules, and budget planning | Configuration set up and infrastructure set up | Models,
such as activity diagrams, which may be used for Model-Based testing

312 Benefits of Static Testing: 1. Feedback in design review and backlog refinement | 2. Detect
defects early to avoid regression and confirmation testing

1. Detecting and correcting defects more efficiently, and prior to dynamic test execution

2. Identifying defects which are not easily found by dynamic testing

3. Preventing defects in design or coding by uncovering inconsistencies, ambiguities,


contradictions, omissions, inaccuracies, and redundancies in requirements

4. Increasing development productivity (e.g., due to improved design, more maintainable code)

5. Reducing development cost and time

6. Reducing testing cost and time

7. Reducing total cost of quality over the software’s lifetime, due to fewer failures later in the
lifecycle or after delivery into operation

8. Improving communication between team members in the course of participating in reviews

313 Differences between Static and Dynamic Testing: Static T. defects | Dynamic T. failures

Requirement defects (inconsistencies) | Design defects (inefficient algorithms) | Coding defects
(variables with undefined values) | Deviations from standards (lack of adherence to coding
standards) | Incorrect interface specifications (different units of measurement used) | Security
vulnerabilities (susceptibility to buffer overflows) | Gaps or inaccuracies in test basis traceability or
coverage (missing tests for an acceptance criterion)


3.2 Review Process: Review < Static T. | Inspection: formal | Pair p: informal, not documented |
Formal: documented | Walkthrough: gain knowledge | Technical r: hold discussion

321 Work Product Review Process: (Formal Reviews)

Planning: Defining the scope, the purpose of the review, what documents or parts of documents
to review, and the quality characteristics to be evaluated (10-20 p, inspection 1-2 p.) | Estimating
effort and timeframe | Identifying review characteristics such as the review type with roles,
activities, and checklists | Selecting the people to participate in the review and allocating roles
(4-6 inc. facilitator (moderator) and author) | Defining the entry and exit criteria for more formal
review types (inspections) | Checking that entry criteria are met (for more formal review types)

Initiate Review: Distributing the work product and other material, such as issue log forms,
checklists, and related work products | Explaining the scope, objectives, process, roles, and work
products to the participants | Answering any questions that participants may have about the
review

Individual Review: Reviewing all or part of the work product | Noting potential defects,
recommendations, and questions (checking rate: 5-10 page/hour)
Issue communication and analysis: Communicating identified potential defects in a review
meeting | Analyzing potential defects, assigning ownership and status to them | Evaluating and
documenting quality characteristics | Evaluating the review findings against the exit criteria to
make a review decision (reject; major changes needed; accept, possibly with minor changes)
Severity of defects: Critical, Major, Minor, Exit: average defects per page

Fixing and reporting: Creating defect reports for those findings that require changes to a work
product | Fixing defects found (typically done by the author) in the work product reviewed |
Communicating defects to the appropriate person or team (when found in a work product related
to the work product reviewed) | Recording updated status of defects (in formal reviews) |
Gathering metrics (for more formal review types) | Checking that exit criteria are met (for more
formal review types) | Accepting the work product when the exit criteria are reached

322 Roles and responsibilities in a formal review (K2): author | management | facilitator
(moderator) | review leader | reviewers | scribe (recorder)

Author: Creates the work product under review | Fixes defects in the work product under review

Management: Is responsible for review planning | Decides on the execution of reviews | Assigns
staff, budget, and time | Monitors ongoing cost-effectiveness | Executes control decisions in the
event of inadequate outcomes

Facilitator (moderator): Ensures effective running of review meetings | Mediates between the
various points of view | Is often the person upon whom the success of the review depends

Review leader: Takes overall responsibility for the review | Decides who will be involved and
organizes when and where it will take place | sometimes taken by manager or moderator

Reviewers: (checkers or inspectors) subject matter experts, persons working on the project,
stakeholders with an interest in the work product, individuals with specific technical or business
backgrounds | Identify potential defects in the work product under review | May represent different
perspectives (tester, developer, user, operator, business analyst, usability expert, etc.)

Scribe (recorder): Collates, logical sequence, potential defects found during the individual review
activity | Records new potential defects, open points, and decisions from the review meeting

323 Review Types: Informal | Walkthrough | Technical | Inspection

Informal review (pair r.): Main purpose: detecting potential defects | Additional purposes:
generating new ideas or solutions, quickly solving minor problems | Not based on a formal
(documented) process | May not involve a review meeting | May be performed by a colleague of
the author (buddy check) or by more people | Results may be documented (often not) | Varies in
usefulness depending on the reviewers | Use of checklists is optional | commonly Agile dev.

Walkthrough: Main purposes: find defects, improve the software product, consider alternative
implementations, evaluate conformance to standards and specifications | Additional purposes:
exchanging ideas about techniques or style variations, training of participants, achieving
consensus | Individual preparation before the review meeting is optional | Review meeting is
typically led by the author of the work product | Scribe is mandatory | Use of checklists is optional
| May take the form of scenarios, dry runs, or simulations | Potential defect logs and review
reports are produced | May vary in practice from quite informal to very formal | Author does prep.

Technical Review: Main purposes: gaining consensus, detecting potential defects | Further
purposes: evaluating quality and building confidence in the work product, generating new ideas,
motivating and enabling authors to improve future work products, considering alternative
implementations | Reviewers should be technical peers of the author, and technical experts in the
same or other disciplines | Individual preparation before the review meeting is required | Review
meeting is optional, ideally led by a trained facilitator (typically not the author) | Scribe is
mandatory, ideally not the author | Use of checklists is optional | Potential defect logs and review
reports are produced 


Inspection: Main purposes: detecting potential defects, evaluating quality and building
confidence in the work product, preventing future similar defects through author learning and root
cause analysis | Further purposes: motivating and enabling authors to improve future work
products and the software development process, achieving consensus | Follows a defined
process with formal documented outputs, based on rules and checklists | Uses clearly defined
roles, specified in section 3.2.2, which are mandatory, and may include a dedicated reader (who reads
the work product aloud, often paraphrasing, i.e. describes it in own words, during the review
meeting) | Individual preparation before the review meeting is required | Reviewers are either peers
of the author or experts in other disciplines that are relevant to the work product | Specified entry
and exit criteria are used | Scribe is mandatory | Review meeting is led by a trained facilitator (not
the author) | Author cannot act as the review leader, reader, or scribe | Potential defect logs and
review report are produced | Metrics are collected and used to improve the entire software
development process, including the inspection process

324 Applying Review Techniques (K3): (Individual Review Prep.) Ad hoc | Checklist-based |
Scenarios and dry runs | Perspective-based | Role-based

Ad hoc: no guidance | little preparation | many duplicate issues

Checklist-based: systematic coverage of typical defect types | questions based on potential defects

Scenarios and dry runs: structured guidelines on the work product based on expected usage

Perspective-based: different stakeholder viewpoints, end user, marketing, designer, tester, or
operations (high-level) | less duplication | draft acceptance test | checklists expected | most
effective, all in one, except ad hoc

Role-based: perspective of individual stakeholder roles | specific end user types and roles in the
organization (experienced, inexperienced, senior, child, etc.) | accessibility roles

325 Success Factors for Reviews (K2):

Organizational success factors: have clear objectives, defined during review planning, and used
as measurable exit criteria | Pick the right review type and technique | Any review techniques
used, such as checklist-based or role-based reviewing, are suitable for effective defect
identification in the work product to be reviewed | Any checklists used address the main risks and
are up to date | Large documents are written and reviewed in small chunks, limit the scope of the
review | Participants have adequate time to prepare | Reviews are scheduled with adequate notice
| Management supports the review process (by incorporating adequate time for review activities in
project schedules) | Reviews are integrated in the company's quality and/or test policies.

People-related success factors: The right people are involved to meet the review objectives,
pick the right reviewers | Testers, professional pessimists, are seen as valued reviewers who
contribute to the review and learn about the work product, which enables them to prepare more
effective tests, and to prepare those tests earlier | Participants dedicate adequate time and
attention to detail | Reviews are conducted on small chunks, so that reviewers do not lose
concentration during individual review and/or the review | Defects found are acknowledged,
appreciated, and handled objectively | The meeting is well-managed, so that participants consider
it a valuable use of their time | The review is conducted in an atmosphere of trust; the outcome
will not be used for the evaluation of the participants | Participants avoid body language and
behaviors that might indicate boredom, exasperation, or hostility to other participants | Adequate
training is provided, especially for more formal review types such as inspections | A culture of
learning and process improvement is promoted


Chapter 4 - Test Techniques (11)

411 Categories of Test Techniques and Their Characteristics: Component-system complexity


| Regulatory standards | Customer or contractual requirements | Risk levels and types | Available
documentation | Tester knowledge and skills | Available tools | Time and budget | Software
development lifecycle model | The types of defects expected in the component-system

black-box test techniques: Test conditions, test cases, and test data are derived from a test
basis that may include software requirements, specifications, use cases, and user stories | Test
cases may be used to detect gaps between the requirements and the implementation of the
requirements, as well as deviations from the requirements | Coverage is measured based on the
items tested in the test basis and the technique applied to the test basis

white-box test techniques: Test conditions, test cases and test data are derived from a test
basis that may include code, software architecture, detailed design or any other source of
information regarding the structure of the software | Coverage is measured based on the items
tested within a selected structure (the code) and the technique applied to the test basis

experience-based test techniques: Test conditions, test cases, and test data are derived from a
test basis that may include knowledge and experience of testers, developers, users or other
stakeholders | useful when documentation is limited or under time pressure -> exploratory testing

4.2 Black-box test techniques


421 Equivalence Partitioning (K3): valid and invalid values [100% coverage] | Valid values are
values that should be accepted by the component or system. “valid equivalence partition” | Invalid
values are values that should be rejected by the component or system. “invalid equivalence
partition.” | Partitions can be identified for any data element. | Any partition may be divided into
sub partitions if required. | Each value must belong to one and only one equivalence partition. |
When invalid equivalence partitions are used in test cases, they should be tested individually, i.e.,
not combined with other invalid equivalence partitions, to ensure that failures are not masked.
Failures can be masked when several failures occur at the same time but only one is visible,
causing the other failures to be undetected. (Valid partitions can be combined)
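
A minimal sketch of the idea (not from the syllabus; the accept_age component, the 18-64 valid range and pytest are all assumed for illustration): one representative value per partition, with the invalid values kept in separate test cases so failures cannot mask each other.

```python
import pytest

def accept_age(age: int) -> bool:
    """Hypothetical component under test: accepts ages 18-64, rejects the rest."""
    return 18 <= age <= 64

# One representative value per equivalence partition; the two invalid
# partitions are exercised individually, never combined in one test case.
@pytest.mark.parametrize("age,expected", [
    (10, False),  # invalid partition: below 18
    (40, True),   # valid partition: 18-64
    (70, False),  # invalid partition: above 64
])
def test_age_partitions(age, expected):
    assert accept_age(age) == expected
```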

422 Boundary Value Analysis (K3): is an extension of equivalence partitioning, but can only be
used when the partition is ordered, consisting of numeric or sequential data. The minimum and
maximum values of a partition are its boundary values | can be applied at all test levels | Boundary
coverage for a partition is measured as the number of boundary values tested, divided by the total
| Applying to more than numbers | Applying more than once | Applying to output | Applying to more
than human inputs (API) | Two and three value boundary analysis
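
Continuing the same invented 18-64 example, a two-value boundary value analysis sketch adds each boundary plus its nearest neighbour in the adjacent partition (an assumption-based illustration, not syllabus material):

```python
import pytest

def accept_age(age: int) -> bool:
    return 18 <= age <= 64  # same hypothetical component as in the EP sketch

@pytest.mark.parametrize("age,expected", [
    (17, False), (18, True),  # lower boundary of the valid partition and its neighbour
    (64, True), (65, False),  # upper boundary of the valid partition and its neighbour
])
def test_age_boundaries(age, expected):
    assert accept_age(age) == expected
```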

423 Decision Table Testing (K3): tester identifies conditions (often inputs) and the resulting
actions | Each column corresponds to a decision rule that defines a unique combination of
conditions which results in the execution | Boolean values (true or false) or discrete values (red,
green, blue), but can also be numbers or ranges of numbers | For conditions: Y > true [T/1], N >
false [F/0], — > doesn’t matter [N/A] | For actions: X > should occur [Y/T/1], Blank > should not
occur [–/N/F/0] | minimum coverage standard for decision table testing is to have at least one
test case per decision rule in the table
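
An invented sketch of the minimum coverage rule (one test case per decision rule); the loan-approval conditions and actions are assumptions, not from the syllabus:

```python
# Decision table: each row is one rule, i.e. a unique combination of
# condition values and the resulting action.
RULES = [
    # (has_income, good_credit) -> approve
    ((True,  True),  True),   # R1
    ((True,  False), False),  # R2
    ((False, True),  False),  # R3
    ((False, False), False),  # R4
]

def approve_loan(has_income: bool, good_credit: bool) -> bool:
    """Hypothetical implementation under test."""
    return has_income and good_credit

def test_one_case_per_rule():
    # Minimum coverage standard: at least one test case per decision rule.
    for (has_income, good_credit), expected in RULES:
        assert approve_loan(has_income, good_credit) == expected
```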

424 State Transition Testing (K3): shows the possible software states, as well as how the
software enters, exits, and transitions between states | initiated by an event (user input of a value
into a eld) | normally show only the valid transitions and exclude the invalid transitions |
Coverage is commonly measured as the number of identified states or transitions tested, divided
by the total number of identified states or transitions
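
A minimal sketch (the document-workflow states and events are invented) of transition coverage: every identified valid transition is exercised once, plus one invalid transition:

```python
# Valid transitions of a hypothetical state machine: (state, event) -> next state.
VALID_TRANSITIONS = {
    ("draft", "submit"): "in_review",
    ("in_review", "approve"): "published",
    ("in_review", "reject"): "draft",
}

def next_state(state: str, event: str) -> str:
    # In this sketch an invalid transition simply leaves the state unchanged.
    return VALID_TRANSITIONS.get((state, event), state)

def test_every_valid_transition():
    # 3 of 3 identified transitions tested -> 100% transition coverage.
    for (state, event), expected in VALID_TRANSITIONS.items():
        assert next_state(state, event) == expected

def test_invalid_transition_is_ignored():
    assert next_state("draft", "approve") == "draft"
```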

425 Use Case Testing (K2): a specific way of designing interactions with software items | Use
cases are associated with actors (human users, external hardware, or other components or
systems) and subjects (component or system the use case is applied) | can be described by
interactions and activities, as well as preconditions, postconditions and natural language | Tests
are designed to exercise the defined behaviors (basic, exceptional or alternative, and error
handling) | Coverage can be measured by the number of use case behaviors tested divided by
the total number of use case behaviors

4.3 White-box test techniques: commonly used at the component test level | 100% coverage
does not mean 100% tested! | Coverage cannot detect code that hasn't been written | Coverage
only assesses what was exercised and says nothing about its quality

431 Statement Testing and Coverage (K2): exercises the potential executable statements in
the code

432 Decision Testing and Coverage (K2): exercises the decisions in the code and tests the
code that is executed based on the decision outcomes | control flows [IF/DO-WHILE/CASE] |
Coverage is measured as the number of decision outcomes executed by the tests divided by the
total number of decision outcomes in the test object | Decision c. stronger than Statement c. >

433 The Value of Statement and Decision Testing (K2): 100% decision coverage means 100% statement coverage.
When 100% decision coverage is achieved, it executes all decision outcomes, which includes
testing the true outcome and also the false outcome, even when there is no explicit false
statement (in the case of an IF statement without an else in the code). | Statement coverage helps
to find defects in code that was not exercised by other tests. | Decision coverage helps to find
defects in code where other tests have not taken both true and false outcomes.
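
A small invented example of the difference: an IF without an else. The first test alone reaches 100% statement coverage but covers only the true outcome of the decision; adding the second test covers the false outcome as well.

```python
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:            # one decision with two outcomes: True and False
        price = price - 10   # the only statement inside the IF, no else branch
    return price

def test_member_discount():
    # Executes every statement (100% statement coverage), True outcome only.
    assert apply_discount(100.0, True) == 90.0

def test_non_member_pays_full_price():
    # Adds the False outcome; together the two tests give 100% decision
    # coverage, which also implies 100% statement coverage.
    assert apply_discount(100.0, False) == 100.0
```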

4.4 Experience-based Test Techniques: derived from tester’s skill, intuition and experience |
Coverage can be difficult to assess and may not be measurable with these techniques

441 Error Guessing (K2): How the application has worked in the past | What kind of errors tend
to be made | Failures that have occurred in other applications

442 Exploratory Testing (K2): informal (not pre-defined) tests are designed, executed, logged,
and evaluated dynamically during test execution | results are used to learn more about the
component or system, and to create tests for the areas that may need more testing | In session-
based testing within a defined time-box, and the tester uses a test charter containing test
objectives to guide the testing | The tester may use test session sheets to document the steps
followed and the discoveries made. | can incorporate the use of other black-box, white-box, and
experience-based techniques.

443 Checklist-based Testing (K2): design, implement, and execute tests to cover test conditions
found in a checklist. | can be created to support various test types, including functional and non-
functional testing | can provide guidelines and a degree of consistency | high-level lists (variability)
> greater coverage but less repeatability

Chapter 5 - Test Management (9)

511 Independent Testing (K2):


Degrees of independence in testing: No independent testers; the only form of testing available
is developers testing their own code | Independent developers or testers within the
development teams or the project team; this could be developers testing their colleagues’
products | Independent test team or group within the organization, reporting to project
management or executive management | Independent testers from the business organization or
user community, or with specializations in specific test types such as usability, security,
performance, regulatory/compliance, or portability | Independent testers external to the
organization, either working on-site (in-house) or off-site (outsourcing)

Potential benefits of test independence include: Independent testers are likely to recognize
different kinds of failures compared to developers | can verify, challenge, or disprove assumptions
made by stakeholders during specification and implementation of the system | Independent
testers of a vendor can report in an upright and objective manner about the system under test
without (political) pressure of the company that hired them

Potential drawbacks of test independence include: Isolation from development team, may lead
to a lack of collaboration, delays in providing feedback to development team, or an adversarial
relationship with development team | Developers may lose a sense of responsibility for quality |

Independent testers may be seen as a bottleneck | Independent testers may lack some important
information (e.g., about the test object)

512 Tasks of a Test Manager and Tester (K1):


Test manager tasks may include: Develop test policy and test strategy | Plan test activities
and understanding test objectives and risks. [selecting test approaches, estimating test time,
effort and cost, defining test levels and test cycles, and planning defect management] | Write
and update test plan | Coordinate test plan with project managers, product owners, and others |
Share testing perspectives with other project activities, such as integration planning | Initiate the
analysis, design, implementation, and execution of tests, monitor test progress and results,
and check the status of exit criteria (or definition of done) and facilitate test completion activities |
Prepare and deliver test progress reports and test summary reports | Adapt planning based on
test results and progress and take actions necessary for test control | Support setting up the
defect management system and configuration management of testware | Introduce metrics
for measuring test progress and evaluating quality of testing and product | Support selection and
implementation of tools to support the test process | Decide about the implementation of test
environment | Promote and advocate the testers, the test team, and the test profession within
the organization | Develop the skills and careers of testers
Tester tasks may include: Review and contribute to test plans | Analyze, review, and assess
requirements, user stories and acceptance criteria, specifications, and models for testability |
Identify and document test conditions, and capture traceability between test cases, test
conditions, and the test basis | Design, set up, and verify test environment, often coordinating
with system administration and network management | Design and implement test cases and
test procedures | Prepare and acquire test data | Create the detailed test execution schedule
(manual tests) | Execute tests, evaluate the results, and document deviations from expected
results | Use appropriate tools to facilitate the test process | Automate tests as needed |
Evaluate non-functional characteristics such as performance efficiency, reliability, usability,
security, compatibility, and portability | Review tests developed by others

5.2 Test Planning and Estimation

521 Purpose and Content of a Test Plan (K2): Determining the scope, objectives, and risks of
testing | Defining the overall approach of testing | Integrating and coordinating the test activities
into the software lifecycle activities | Making decisions about what to test, people and
resources required to perform the various test activities, and how test activities will be carried out
| Scheduling of test analysis, design, implementation, execution, and evaluation activities |
Selecting metrics for test monitoring and control | Budgeting for test activities | Determining
the level of detail and structure for test documentation

522 Test Strategy and Test Approach (K2):


*Analytical: based on an analysis of some factor (requirement or risk). Risk-based testing is an
example of an analytical approach.

*Model-Based: based on some model of some required aspect of the product, such as a
function, a business process, an internal structure, or a non-functional characteristic. Examples of
such models include business process models, state models, and reliability growth models. 

*Methodical: relies on making systematic use of some predefined set of tests or test conditions,
such as a taxonomy of common or likely types of failures, a list of important quality
characteristics, or company-wide look-and-feel standards for mobile apps or web pages.
*Process-compliant (or standard-compliant): involves analyzing, designing, and implementing
tests based on external rules and standards, such as those specified by industry-specific
standards, by process documentation, by the rigorous identification and use of the test basis, or
by any process or standard imposed on or by the organization.

*Directed (or consultative): is driven by the advice, guidance, or instructions of stakeholders,


business domain experts, or technology experts, who may be outside the test team or outside the
organization itself.

*Regression-averse: is motivated by a desire to avoid regression of existing capabilities. This
test strategy includes reuse of existing testware (especially test cases and test data), extensive
automation of regression tests, and standard test suites.

*Reactive: testing is reactive to the component or system being tested, and the events occurring
during test execution, rather than being pre-planned. Tests are designed and implemented, and
may immediately be executed in response to knowledge gained from prior test results.
Exploratory testing is a common technique employed in reactive strategies.

Factors to consider: Risks | Skills | Objectives | Regulations | Products | Business

523 Entry Criteria and Exit Criteria (Definition of Ready and Done) (K2):

entry criteria include: Availability of testable requirements, user stories, and/or models | A. of
test items that have met the exit criteria for any prior test levels | A. of test environment | A. of
necessary test tools | A. of test data and other necessary resources

exit criteria include: Planned tests have been executed | Defined level of coverage of req., user
stories, acceptance criteria, risks, code has been achieved | Number of unresolved defects is
within an agreed limit | Number of estimated remaining defects is sufficiently low | Evaluated
levels of reliability, performance efficiency, usability, security, and other relevant quality
characteristics are sufficient [Tests Coverage Defects Quality Money Schedule Risk]


524 Test Execution Schedule (K3): Once the various test cases and test procedures are
produced and assembled into test suites, the test suites can be arranged in a test execution
schedule | Factors taken into account: prioritization, dependencies, confirmation tests,
regression tests, and the most efficient sequence for executing the tests | Priorities &
Dependencies | Confirmation and regression tests must be prioritized
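
A minimal sketch of the scheduling idea (the test case IDs, priorities, and dependencies are invented): always pick the highest-priority test whose dependencies have already been run.

```python
PRIORITY = {"TC1": 2, "TC2": 1, "TC3": 3, "TC4": 1}   # 1 = highest priority
DEPENDS_ON = {"TC2": ["TC1"], "TC4": ["TC3"]}         # TC2 needs TC1, TC4 needs TC3

def schedule(tests):
    ordered, done, remaining = [], set(), set(tests)
    while remaining:
        # among the tests whose dependencies are satisfied, run the highest priority first
        ready = [t for t in remaining if all(d in done for d in DEPENDS_ON.get(t, []))]
        nxt = min(ready, key=lambda t: PRIORITY[t])
        ordered.append(nxt)
        done.add(nxt)
        remaining.remove(nxt)
    return ordered

# TC2 has top priority but must wait for TC1; TC4 likewise waits for TC3.
assert schedule(["TC3", "TC4", "TC1", "TC2"]) == ["TC1", "TC2", "TC3", "TC4"]
```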

525 Factors Influencing the Test Effort (K1): product, development process, people, test results

Product characteristics: risks associated with the product | quality of the test basis | size of
the product | complexity of the product domain | requirements for quality characteristics (e.g.,
security, reliability) | required level of detail for test documentation | Requirements for legal and
regulatory compliance

*Development process characteristics: stability and maturity of the organization |


development model in use | test approach | tools used | test process | Time pressure
*People characteristics: skills and experience of the people involved, especially with similar
projects and products (e.g., domain knowledge) | Team cohesion and leadership

*Test results: number and severity of defects found | amount of rework required

526 Test Estimation Techniques (K2):


*metrics-based technique: estimating the test effort based on metrics of former similar projects,
or based on typical values (in Agile development burndown charts, within sequential projects
defect removal models)

*expert-based technique: estimating the test effort based on the experience of the owners of
the testing tasks or by experts (in Agile planning poker, Wideband Delphi estimation technique)
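
A tiny sketch of the metrics-based idea with invented numbers: effort per test case from a former, similar project scaled to the size of the new one.

```python
def metrics_based_estimate(past_effort_days: float,
                           past_test_cases: int,
                           new_test_cases: int) -> float:
    """Scale a former project's effort by the relative number of test cases."""
    return past_effort_days * new_test_cases / past_test_cases

# 40 days for 200 test cases last release -> 0.2 days per test case,
# so 300 test cases in the new release are estimated at 60 days.
assert metrics_based_estimate(40, 200, 300) == 60.0
```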

5.3 Test Monitoring and Control: gather information + provide feedback -> assess test progress
*test control: Re-prioritizing tests when an identified risk occurs (e.g., software delivered late) |
Changing the test schedule due to availability or unavailability of a test environment or other
resources | Re-evaluating whether a test item meets an entry or exit criterion due to rework 


531 Metrics Used in Testing (K1):


*Purpose of metrics in testing: Progress against the planned schedule and budget |Current
quality of the test object | Adequacy of the test approach | Effectiveness of the test activities with
respect to the objectives

*Common test metrics: Percentage of planned work done in test case preparation | Percentage
of planned work done in test environment preparation | Test case execution (number of test

cases run/not run, test cases passed/failed, and/or test conditions passed/failed) | Defect
information (defect density, defects found and fixed, failure rate, and confirmation test results) |
Test coverage of requirements, user stories, acceptance criteria, risks, code | Task completion,
resource allocation and usage, and effort | Cost of testing, including the cost compared to the
benefit of finding the next defect or the cost compared to the benefit of running the next test
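
A quick worked example (all numbers invented) for two of these metrics, defect density and test case execution progress:

```python
defects_found = 30
size_kloc = 12                     # size of the test object in thousand lines of code
executed, passed, planned = 180, 162, 200

defect_density = defects_found / size_kloc       # 2.5 defects per KLOC
execution_progress = executed / planned * 100    # 90% of planned test cases run
pass_rate = passed / executed * 100              # 90% of executed test cases passed

print(f"{defect_density} defects/KLOC, {execution_progress:.0f}% executed, {pass_rate:.0f}% passed")
```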

532 Purposes, Contents and Audiences for Test Reports (K2): purpose of test reporting is to
summarize and communicate test activity information
*t. progress report: status of the test activities and progress against the test plan | Factors
impeding progress | Testing planned for the next reporting period | quality of the test object
*t. summary report: summary of testing performed | Information on what occurred during a test
period | Deviations from plan, including deviations in schedule, duration, or effort of test
activities | Status of testing and product quality with respect to exit criteria | Factors that have
blocked or continue to block progress | Metrics of defects, test cases, test coverage, activity
progress, and resource consumption| Residual risks | Reusable test work products produced

541 Configuration Management (K2): purpose is to establish and maintain the integrity of the
component or system | During test planning, configuration management procedures and
infrastructure (tools) should be identified and implemented | All test items are uniquely identified,
version controlled, tracked for changes, and related to each other | All items of testware are
uniquely identified, version controlled, tracked for changes, related to each other and related to
versions of the test item so that traceability can be maintained throughout the test process | All
identified documents and software items are referenced unambiguously in test documents.

5.5 Risks and Testing

551 Definition of Risk (K1): Risk involves the possibility of an event in the future which has
negative consequences. The level of risk is determined by the likelihood of the event and the
impact (the harm) from that event.
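
The syllabus does not prescribe a formula, but a common way to turn this into a number is to score likelihood and impact (for example on an assumed 1-5 scale) and multiply them; a sketch:

```python
def risk_level(likelihood: int, impact: int) -> int:
    # likelihood and impact scored on an assumed 1-5 scale
    return likelihood * impact

# A likely (4) but low-impact (2) event scores the same as an unlikely (2)
# but severe (4) one; both rank below a likely and severe risk (5 * 5 = 25).
assert risk_level(4, 2) == risk_level(2, 4) == 8
```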

552 Product and Project Risks (K2):


*Product risks: Software might not perform its intended functions according to the specification
| Software might not perform its intended functions according to user, customer or stakeholder |
A system architecture may not adequately support some non-functional requirement(s) | A
particular computation may be performed incorrectly in some circumstances | A loop control
structure may be coded incorrectly | Response-times may be inadequate for a high-
performance transaction processing system | User experience (UX) feedback might not meet
product expectations
*Project risks:
*Project issues: Delays may occur in delivery, task completion, or satisfaction of exit criteria or
definition of done | Inaccurate estimates, reallocation of funds to higher priority projects, or
general cost- cutting across the organization may result in inadequate funding | Late changes may
result in substantial re-work

*Organizational issues: Skills, training, and staff may not be sufficient | Personnel issues may
cause conflict and problems | Users, business staff, or subject matter experts may not be
available due to conflicting business priorities

*Political issues: Testers may not communicate their needs and/or the test results adequately |
Developers and/or testers may fail to follow up on information found in testing and reviews (not
improving development and testing practices) | There may be an improper attitude toward, or
expectations of, testing (not appreciating the value of finding defects during testing)

*Technical issues: Requirements may not be defined well enough | The requirements may not
be met, given existing constraints | The test environment may not be ready on time | Data
conversion, migration planning, and their tool support may be late | Weaknesses in the
development process may impact the consistency or quality of project work products such as
design, code, configuration, test data, and test cases | Poor defect management and similar
problems may result in accumulated defects and other technical debt

*Supplier issues: A third party may fail to deliver a necessary product or service, or go
bankrupt | Contractual issues may cause problems to the project

553 Risk-based Testing and Product Quality (K2):


*In a risk-based approach, the results of product risk analysis are used to: Determine the test
techniques to be employed | Determine the particular levels and types of testing to be
performed (security testing, accessibility testing) | Determine the extent of testing to be carried
out | Prioritize testing in an attempt to find the critical defects as early as possible | Determine
whether any activities in addition to testing could be employed to reduce risk (providing training
to inexperienced designers)
*To ensure that the likelihood of a product failure is minimized, risk management activities
provide a disciplined approach to: Analyze what can go wrong (risks) | Determine which risks
are important to deal with | Implement actions to mitigate those risks | Make contingency
plans to deal with the risks should they become actual events

561 Defect Management (K3):


*Typical defect reports have the following objectives: Provide developers and other parties
with information about any adverse event that occurred, to enable them to identify specific
effects, to isolate the problem with a minimal reproducing test, and to correct the potential
defect(s), as needed or to otherwise resolve the problem | Provide test managers a means of
tracking the quality of the work product and the impact on the testing (if a lot of defects are
reported, the testers will have spent a lot of time reporting them instead of running tests, and
there will be more confirmation testing needed) | Provide ideas for development and test process
improvement
*A defect report filed during dynamic testing typically includes: An identifier | A title and a
short summary of the defect being reported | Date of the defect report, issuing organization, and
author | Identification of the test item (configuration item being tested) and environment | The
development lifecycle phase(s) in which the defect was observed | A description of the defect
to enable reproduction and resolution, including logs, database dumps, screenshots, or
recordings (if found during test execution) | Expected and actual results | Scope or degree of
impact (severity) of the defect on the interests of stakeholder(s) | Urgency/priority to fix | State of
the defect report (open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-
opened, closed) | Conclusions, recommendations and approvals | Global issues, such as other
areas that may be affected by a change resulting from the defect | Change history, such as the
sequence of actions taken by project team members with respect to the defect to isolate, repair,
and confirm it as fixed | References, including the test case that revealed the problem

Chapter 6 - Tool Support for Testing (2)


6.1 Test Tool Considerations: Tools that are directly used in testing, such as test execution
tools and test data preparation tools | Tools that help to manage requirements, test cases, test
procedures, automated test scripts, test results, test data, and defects, and for reporting and
monitoring test execution| Tools that are used for analysis and evaluation| Any tool that assists in
testing (a spreadsheet is also a test tool in this meaning) 


611 Test Tool Classification (K2):


*Purpose: Improve the efficiency of test activities by automating repetitive tasks or tasks that
require significant resources when done manually (test execution, regression testing) | Improve the
efficiency of test activities by supporting manual test activities throughout the test process |
Improve the quality of test activities by allowing for more consistent testing and a higher level
of defect reproducibility | Automate activities that cannot be executed manually (large scale
performance testing) | Increase reliability of testing (by automating large data comparisons or
simulating behavior)

*Tool support for management of testing and testware: Test management tools and
application lifecycle management tools (ALM) | Requirements management tools (traceability
to test objects | Defect management tools | Configuration management tools | Continuous

integration tools (D) 

*Tool support for static testing: Static analysis tools (D)
*Tool support for test design and implementation: Model-Based testing tools | Test data
preparation tools

*Tool support for test execution and logging: Test execution tools (to run regression tests) |
Coverage tools (requirements coverage, code coverage (D)) | Test harnesses (D) 

*Tool support for performance measurement and dynamic analysis: Performance testing tools |
Dynamic analysis tools (D)

*Tool support for specialized testing needs: support specific testing for non-functional charac.

612 Benefits and Risks of Test Automation (K1):

*benefits: Reduction in repetitive manual work | Greater consistency and repeatability | More
objective assessment (static measures, coverage) | Easier access to information about testing
(statistics and graphs about test progress, defect rates and performance)

*risks: Expectations for the tool may be unrealistic | The time, cost and effort for the initial
introduction of a tool may be under-estimated | The time and effort needed to achieve
significant and continuing benefits from the tool may be under-estimated | The effort required to
maintain the test work products generated by the tool may be under-estimated | The tool may
be relied on too much | Version control of test work products may be neglected | Relationships
and interoperability issues between critical tools may be neglected | The tool vendor may go out
of business, retire the tool, or sell the tool to a different vendor | The vendor may provide a poor
response for support, upgrades, and defect fixes | An open source project may be suspended | A
new platform or technology may not be supported by the tool | There may be no clear
ownership of the tool (for mentoring, updates, etc.)

613 Special Considerations for Test Execution and Test Management Tools (K1):

Test execution tools:


*Capturing test approach: recording the actions of a manual tester (Ranorex) | This type of script
may be unstable when unexpected events occur

*Data-driven test approach: separates out the test inputs and expected results, usually into a
spreadsheet

*Keyword-driven test approach: a generic script processes keywords describing the actions to
be taken 
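
A minimal sketch of the keyword-driven idea (the keywords, actions, and login flow are invented): a generic script interprets keyword rows that could just as well live in a spreadsheet, which is the data-driven approach taken one step further.

```python
def open_page(url): print(f"opening {url}")
def enter_text(field, value): print(f"typing {value!r} into {field}")
def click(button): print(f"clicking {button}")

KEYWORDS = {"open_page": open_page, "enter_text": enter_text, "click": click}

# The "test" is pure data: a sequence of (keyword, arguments) rows.
LOGIN_TEST = [
    ("open_page", ["https://example.test/login"]),
    ("enter_text", ["username", "alice"]),
    ("enter_text", ["password", "secret"]),
    ("click", ["login"]),
]

for keyword, args in LOGIN_TEST:
    KEYWORDS[keyword](*args)   # the generic script dispatches on the keyword
```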

Test management tools: To produce useful information in a format that fits the needs of the
organization | To maintain consistent traceability to requirements in a requirements
management tool | To link with test object version information in configuration management tool

6.2 Effective Use of Tools

621 Main Principles for Tool Selection (K1): Assessment of the maturity of the own
organization, its strengths and weaknesses | Identification of opportunities for an improved
test process supported by tools | Understanding of the technologies used by the test object(s),
in order to select a tool that is compatible with that technology | Understanding the build and
continuous integration tools already in use within the organization, in order to ensure tool
compatibility and integration | Evaluation of the tool against clear requirements and objective
criteria | Consideration of whether or not the tool is available for a free trial period (and for how
long) | Evaluation of the vendor or support for non-commercial (open source) tools | Identification
of internal requirements for coaching and mentoring in the use of the tool | Evaluation of
training needs, considering the testing (and test automation) skills of those who will be working
directly with the tool(s) | Consideration of pros and cons of various licensing models
(commercial or open source) | Estimation of a cost-benefit ratio based on a business case

622 Pilot Projects for Introducing a Tool into an Organization (K1): Gaining in-depth
knowledge about the tool, understanding both its strengths and weaknesses | Evaluating how
the tool ts with existing processes and practices, and determining what would need to change
| Deciding on standard ways of using, managing, storing, and maintaining the tool and the test
work products (deciding on naming conventions for files and tests, selecting coding standards,
creating libraries and defining the modularity of test suites) | Assessing whether the benefits will
be achieved at reasonable cost | Understanding the metrics that you wish the tool to collect
and report, and configuring the tool to ensure these metrics can be captured and reported

623 Success Factors for Tools (K1): Rolling out the tool to the rest of the organization
incrementally | Adapting and improving processes to fit with the use of the tool | Providing
training, coaching, and mentoring for tool users | Defining guidelines for the use of the tool
(internal standards for automation) | Implementing a way to gather usage information from the
actual use of the tool | Monitoring tool use and benefits | Providing support to the users of a
given tool | Gathering lessons learned from all users


