Chapter 1 - Fundamentals of Testing (8)
1. To prevent defects by evaluating work products such as requirements, user stories, design, and code
3. To check whether the test object is complete and validate if it works as the users and other
stakeholders expect
5. To find defects and failures and thus reduce the level of risk of inadequate software quality
7. To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the
test object’s compliance with such requirements or standards
1. Start with collaboration rather than battles. Remind everyone of the common goal of better
quality systems.
2. Emphasize the benefits of testing. For example, for the authors, defect information can help
them improve their work products and their skills. For the organization, defects found and fixed
during testing will save time and money and reduce overall risk to product quality.
3. Communicate test results and other findings in a neutral, fact-focused way without criticizing
the person who created the defective item. Write objective and factual defect reports and review
findings.
4. Try to understand how the other person feels and the reasons they may react negatively to the
information.
5. Confirm that the other person has understood what has been said and vice versa.
Debugging is the development activity that finds, analyzes, and fixes such defects.
1. Having testers involved in requirements reviews or user story refinement could detect defects in
these work products. The identification and removal of requirements defects reduces the risk of
incorrect or untestable features being developed.
2. Having testers work closely with system designers while the system is being designed can
increase each party’s understanding of the design and how to test it. This increased
understanding can reduce the risk of fundamental design defects and enable tests to be identified
at an early stage.
3. Having testers work closely with developers while the code is under development can increase
each party’s understanding of the code and how to test it. This increased understanding can
reduce the risk of defects within the code and the tests.
4. Having testers verify and validate the software prior to release can detect failures that might
otherwise have been missed, and support the process of removing the defects that caused the
failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs
and satisfies requirements.
123 Errors > Defects (Bug, Fault) > Failures + false positive/negative
7. Absence-of-errors is a fallacy
Business domain
142 Test Activities and Tasks & 143 Test Work Products
144 Traceability between the Test Basis and Test Work Products
4. Improving the understandability of test progress reports and test summary reports to include
the status of elements of the test basis (e.g., requirements that passed their tests, requirements
that failed their tests, and requirements that have pending tests)
5. Relating the technical aspects of testing to stakeholders in terms that they can understand
6. Providing information to assess product quality, process capability, and project progress
against business goals
3. Test analysis and design for a given test level begin during the corresponding development
activity
2. Many business units can be part of a project or program (combination of sequential and agile
development)
3. Short time to deliver a product to the market (merge of test levels and/or integration of test
types in test levels)
2.2 Test Levels : Component testing | Integration testing | System testing | Acceptance testing
Specific objectives | Test basis, referenced to derive test cases | Test object (i.e., what is being
tested) | Typical defects and failures | Specific approaches and responsibilities
221 Component Testing (Unit & Module): Mock | A > Stub | Driver > B (a stub stands in for a component called by the test object; a driver calls the test object; see the sketch below)
t. objects: Components, units or modules | Code and data structures | Classes | Database
modules
typical defects and failures: Incorrect functionality (not as described in design specifications) |
Data flow problems | Incorrect code and logic
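A minimal sketch (all names invented) of the stub and driver roles in component testing: the stub replaces a collaborator that the test object calls, and the test function acts as the driver that calls the test object.

    class TaxLookupStub:
        # Stub: returns a canned answer instead of calling the real component.
        def rate_for(self, country):
            return 0.25

    class PriceService:
        # Component under test; the collaborator is injected, so a test
        # double can replace the real tax lookup module.
        def __init__(self, tax_lookup):
            self.tax_lookup = tax_lookup

        def gross_price(self, net, country):
            return net * (1 + self.tax_lookup.rate_for(country))

    def test_gross_price_applies_tax():
        # Driver: invokes the component under test directly.
        service = PriceService(TaxLookupStub())
        assert service.gross_price(100.0, "DE") == 125.0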
Component integration : Automation, CI
t. basis: Software and system design | Sequence diagrams | Interface and communication
protocol specifications | Use cases | Architecture at component or system level | Workflows |
External interface definitions
t. objects: Subsystems | Databases | Infrastructure | Interfaces | APIs | Microservices
typical defects and failures (component i.t.): Incorrect data, missing data, or incorrect data
encoding | Incorrect sequencing or timing of interface calls | Interface mismatch | Failures in
communication between components | Unhandled or improperly handled communication failures
between components | Incorrect assumptions about the meaning, units, or boundaries of the data
being passed between components
typical defects and failures (system i.t.): Inconsistent message structures between systems |
Incorrect data, missing data, or incorrect data encoding | Interface mismatch | Failures in
communication between systems | Unhandled or improperly handled communication failures
between systems | Incorrect assumptions about the meaning, units, or boundaries of the data
being passed between systems | Failure to comply with mandatory security regulations
t.basis: System and software requirement specifications (functional and non-functional) | Risk
analysis reports | Use cases | Epics and user stories | Models of system behavior | State diagrams
| System and user manuals
typical defects and failures : Incorrect calculations | Incorrect or unexpected system functional
or non-functional behavior | Incorrect control and/or data flows within the system | Failure to
properly and completely carry out end-to-end functional tasks | Failure of the system to work
properly in the system environment(s) | Failure of the system to work as described in system and
user manuals
Specific approaches and responsibilities: often independent, 3rd-party team; both funct. and
non-funct.
Testing of backup and restore | Installing, uninstalling and upgrading | Disaster recovery | User
management | Maintenance tasks | Data load and migration tasks | Checks for security
vulnerabilities | Performance testing -> e.g., by system administrators
3. Contractual and regulatory acceptance testing: by users or independent testers
4. Alpha and beta testing: by developers of COTS software, to get feedback from users before going to market
t.basis: Business processes | User or business requirements | Regulations, legal contracts and
standards | Use cases and/or user stories | System requirements | System or user documentation|
Installation procedures | Risk analysis reports
t.basis (OAT, operational acceptance testing): Backup and restore procedures | Disaster recovery procedures | Non-functional
requirements | Operations documentation | Deployment and installation instructions | Performance
targets | Database packages | Security standards or regulations
t.objects: System under test | System configuration and configuration data | Business processes
for a fully integrated system | Recovery systems and hot sites (for business continuity and disaster
recovery testing) | Operational and maintenance processes | Forms | Reports | Existing and
converted production data
typical defects and failures: System workflows do not meet business or user requirements |
Business rules are not implemented correctly | System does not satisfy contractual or regulatory
requirements | Non-functional failures such as security vulnerabilities, inadequate performance
efficiency under high loads, or improper operation on a supported platform
Specific approaches and responsibilities: Acceptance testing when it is installed or integrated |
Acceptance testing of a new functional enhancement may occur before system testing (iterative)
231 Functional Testing: black-box | should be performed at all test levels | functional coverage
using traceability
232 Non-functional Testing : characteristics, how well, all test levels | performance, stress, load,
security | tested by non-funct. coverage elements
233 White-box Testing: mostly for component and c. integration | code knowledge
234 Change-related Testing: confirmation (re-test after defect fixed) | regression (unintended
side-effects in unchanged areas; a good candidate for automation and CI)
235 Test types and test levels: examples for each type in each level
2.4 Maintenance Testing: degree of risk of change | size of existing system | size of change
241 Triggers for Maintenance: Modification | Migration | Retirement | restore/retrieve procedures
| regression testing
242 Impact Analysis for Maintenance: identify the intended consequences | identify the impact
of a change on existing tests | can be done before change is made | it can be difficult if:
1. Specifications (e.g., business requirements, user stories, architecture) are out of date or missing
2. Test cases are not documented or are out of date
3. Bi-directional traceability between tests and the test basis has not been maintained
4. Tool support is weak or non-existent
5. The people involved do not have domain and/or system knowledge
6. Insufficient attention has been paid to the software's maintainability during development
3.1 Static Testing Basics: Static analysis (Tools) < Static Testing (Reviews) | Dynamic Testing |
Work products
Specifications, business, functional and security requirements | Epics, user stories, and
acceptance criteria | Architecture and design specifications | Code | Testware, test plans, test
cases, test procedures, and automated test scripts | User guides | Web pages | Contracts, project
plans, schedules, and budget planning | Configuration set up and infrastructure set up | Models,
such as activity diagrams, which may be used for Model-Based testing
312 Benefits of Static Testing: 1. Feedback in design review and backlog refinement | 2. Detect
defects early to avoid regression and confirmation testing
1. Detecting and correcting defects more efficiently, and prior to dynamic test execution
4. Increasing development productivity (e.g., due to improved design, more maintainable code)
7. Reducing total cost of quality over the software’s lifetime, due to fewer failures later in the
lifecycle or after delivery into operation
313 Differences between Static and Dynamic Testing: Static T. finds defects | Dynamic T. finds failures
Requirement defects (inconsistencies) | Design defects (inefficient algorithms) | Coding defects
(variables with undefined values) | Deviations from standards (lack of adherence to coding
standards) | Incorrect interface specifications (different units of measurement used) | Security
vulnerabilities (susceptibility to buffer overflows) | Gaps or inaccuracies in test basis traceability or
coverage (missing tests for an acceptance criterion); coding-defect sketch below
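As a minimal illustration (invented snippet), a "variable with undefined value" defect of the kind a static analyzer reports without executing the code:

    def average(items):
        if items:
            total = sum(items)
        # On the path where items is empty, "total" is read before it is
        # ever assigned; static analysis flags this used-before-assignment
        # defect, while dynamic testing only fails when given empty input.
        return total / len(items)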
3.2 Review Process: Review < Static T. | Inspection: formal | Pair r.: informal, not documented |
Formal: documented | Walkthrough: gain knowledge | Technical r.: hold discussion
Planning: Defining the scope, the purpose of the review, what documents or parts of documents
to review, and the quality characteristics to be evaluated (10-20 pages; inspection 1-2 pages) | Estimating
effort and timeframe | Identifying review characteristics such as the review type with roles,
activities, and checklists | Selecting the people to participate in the review and allocating roles
(4-6, incl. facilitator (moderator) and author) | Defining the entry and exit criteria for more formal
review types (inspections) | Checking that entry criteria are met (for more formal review types)
Initiate Review: Distributing the work product and other material, such as issue log forms,
checklists, and related work products | Explaining the scope, objectives, process, roles, and work
products to the participants | Answering any questions that participants may have about the
review
Individual Review: Reviewing all or part of the work product | Noting potential defects,
recommendations, and questions (checking rate: 5-10 pages/hour)
Issue communication and analysis: Communicating identified potential defects in a review
meeting | Analyzing potential defects, assigning ownership and status to them | Evaluating and
documenting quality characteristics | Evaluating the review findings against the exit criteria to
make a review decision (reject; major changes needed; accept, possibly with minor changes)
Severity of defects: Critical, Major, Minor | Exit criterion: average defects per page
Fixing and reporting: Creating defect reports for those findings that require changes to a work
product | Fixing defects found (typically done by the author) in the work product reviewed |
Communicating defects to the appropriate person or team (when found in a work product related
to the work product reviewed) | Recording updated status of defects (in formal reviews) |
Gathering metrics (for more formal review types) | Checking that exit criteria are met (for more
formal review types) | Accepting the work product when the exit criteria are reached
322 Roles and responsibilities in a formal review (K2): author | management | facilitator
(moderator) | review leader | reviewers | scribe (recorder)
Author: Creates the work product under review | Fixes defects in the work product under review
Management: Is responsible for review planning | Decides on the execution of reviews | Assigns
staff, budget, and time | Monitors ongoing cost-effectiveness | Executes control decisions in the
event of inadequate outcomes
Facilitator (moderator): Ensures effective running of review meetings | Mediates between the
various points of view | Is often the person upon whom the success of the review depends
Review leader: Takes overall responsibility for the review | Decides who will be involved and
organizes when and where it will take place | sometimes taken by manager or moderator
Reviewers: (checkers or inspectors) subject matter experts, persons working on the project,
stakeholders with an interest in the work product, individuals with specific technical or business
backgrounds | Identify potential defects in the work product under review | May represent different
perspectives (tester, developer, user, operator, business analyst, usability expert, etc.)
Scribe (recorder): Collates, in logical sequence, potential defects found during the individual review
activity | Records new potential defects, open points, and decisions from the review meeting
Informal review (pair r.): Main purpose: detecting potential defects | Additional purposes:
generating new ideas or solutions, quickly solving minor problems | Not based on a formal
(documented) process | May not involve a review meeting | May be performed by a colleague of
the author (buddy check) or by more people | Results may be documented (often not) | Varies in
usefulness depending on the reviewers | Use of checklists is optional | common in Agile dev.
Walkthrough: Main purposes: find defects, improve the software product, consider alternative
implementations, evaluate conformance to standards and specifications | Additional purposes:
exchanging ideas about techniques or style variations, training of participants, achieving
consensus | Individual preparation before the review meeting is optional | Review meeting is
typically led by the author of the work product | Scribe is mandatory | Use of checklists is optional
| May take the form of scenarios, dry runs, or simulations | Potential defect logs and review
reports are produced | May vary in practice from quite informal to very formal | Author does prep.
Technical Review: Main purposes: gaining consensus, detecting potential defects | Further
purposes: evaluating quality and building confidence in the work product, generating new ideas,
motivating and enabling authors to improve future work products, considering alternative
implementations | Reviewers should be technical peers of the author, and technical experts in the
same or other disciplines | Individual preparation before the review meeting is required | Review
meeting is optional, ideally led by a trained facilitator (typically not the author) | Scribe is
mandatory, ideally not the author | Use of checklists is optional | Potential defect logs and review
reports are produced
Inspection: Main purposes: detecting potential defects, evaluating quality and building
confidence in the work product, preventing future similar defects through author learning and root
cause analysis | Further purposes: motivating and enabling authors to improve future work
products and the software development process, achieving consensus | Follows a defined
process with formal documented outputs, based on rules and checklists | Uses clearly defined
roles (those specified in section 3.2.2 are mandatory), and may include a dedicated reader (who reads
the work product aloud, often paraphrasing, i.e., describing it in own words, during the review
meeting) | Individual preparation before the review meeting is required | Reviewers are either peers
of the author or experts in other disciplines that are relevant to the work product | Specified entry
and exit criteria are used | Scribe is mandatory | Review meeting is led by a trained facilitator (not
the author) | Author cannot act as the review leader, reader, or scribe | Potential defect logs and
review report are produced | Metrics are collected and used to improve the entire software
development process, including the inspection process
324 Applying Review Techniques (K3): (Individual Review Prep.) Ad hoc | Checklist-based |
Scenarios and dry runs | Perspective-based | Role-based
Checklist-based: systematic coverage of typical defect types | questions based on potential defects
Scenarios and dry runs: structured guidelines on the work product based on expected usage
Role-based: perspective of individual stakeholder roles | specific end user types and roles in the
organization (experienced, inexperienced, senior, child, etc.) | accessibility roles
Organizational success factors: have clear objectives, defined during review planning, and used
as measurable exit criteria | Pick the right review type and technique | Any review techniques
used, such as checklist-based or role-based reviewing, are suitable for effective defect
identification in the work product to be reviewed | Any checklists used address the main risks and
are up to date | Large documents are written and reviewed in small chunks, limit the scope of the
review | Participants have adequate time to prepare | Reviews are scheduled with adequate notice
| Management supports the review process (by incorporating adequate time for review activities in
project schedules) | Reviews are integrated in the company's quality and/or test policies.
People-related success factors: The right people are involved to meet the review objectives,
pick the right reviewers | Testers, professional pessimists, are seen as valued reviewers who
contribute to the review and learn about the work product, which enables them to prepare more
effective tests, and to prepare those tests earlier | Participants dedicate adequate time and
attention to detail | Reviews are conducted on small chunks, so that reviewers do not lose
concentration during individual review and/or the review | Defects found are acknowledged,
appreciated, and handled objectively | The meeting is well-managed, so that participants consider
it a valuable use of their time | The review is conducted in an atmosphere of trust; the outcome
will not be used for the evaluation of the participants | Participants avoid body language and
behaviors that might indicate boredom, exasperation, or hostility to other participants | Adequate
training is provided, especially for more formal review types such as inspections | A culture of
learning and process improvement is promoted
Chapter 4 - Test Techniques (11)
black-box test techniques: Test conditions, test cases, and test data are derived from a test
basis that may include software requirements, specifications, use cases, and user stories | Test
cases may be used to detect gaps between the requirements and the implementation of the
requirements, as well as deviations from the requirements | Coverage is measured based on the
items tested in the test basis and the technique applied to the test basis
white-box test techniques: Test conditions, test cases and test data are derived from a test
basis that may include code, software architecture, detailed design or any other source of
information regarding the structure of the software | Coverage is measured based on the items
tested within a selected structure (the code) and the technique applied to the test basis
experience-based test techniques: Test conditions, test cases, and test data are derived from a
test basis that may include knowledge and experience of testers, developers, users or other
stakeholders | for low-level systems, time-pressure -> exploratory testing
421 Equivalence Partitioning (K3): valid and invalid values [100% coverage] | Valid values are
values that should be accepted by the component or system: "valid equivalence partition" | Invalid
values are values that should be rejected by the component or system: "invalid equivalence
partition" | Partitions can be identified for any data element. | Any partition may be divided into
sub partitions if required. | Each value must belong to one and only one equivalence partition. |
When invalid equivalence partitions are used in test cases, they should be tested individually, i.e.,
not combined with other invalid equivalence partitions, to ensure that failures are not masked.
Failures can be masked when several failures occur at the same time but only one is visible,
causing the other failures to be undetected. (Valid partitions can be combined)
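A minimal pytest sketch of equivalence partitioning, assuming a hypothetical rule that ages 18-65 are accepted: one representative value per partition, with each invalid partition tested individually so failures cannot mask each other.

    import pytest

    def is_eligible(age):
        # Invented test object: accepts ages in the valid partition 18..65.
        return 18 <= age <= 65

    @pytest.mark.parametrize("age,expected", [
        (40, True),    # valid partition: 18..65
        (10, False),   # invalid partition: below 18 (tested individually)
        (80, False),   # invalid partition: above 65 (tested individually)
    ])
    def test_age_partitions(age, expected):
        assert is_eligible(age) == expected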
422 Boundary Value Analysis (K3): is an extension of equivalence partitioning, but can only be
used when the partition is ordered, consisting of numeric or sequential data. The minimum and
maximum values of a partition are its boundary values | can be applied at all test levels | Boundary
coverage for a partition is measured as the number of boundary values tested, divided by the total
| Applying more than numbers | Applying more than once | Applying to output | Applying to more
than human inputs (API) | Two and three value boundary analysis
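Continuing the same hypothetical age example, a two-value boundary analysis sketch; three-value analysis would additionally test 19 and 64 (both neighbors of each boundary).

    import pytest

    def is_eligible(age):
        # Same invented test object: valid, ordered partition is 18..65.
        return 18 <= age <= 65

    @pytest.mark.parametrize("age,expected", [
        (17, False), (18, True),   # lower boundary and its invalid neighbor
        (65, True),  (66, False),  # upper boundary and its invalid neighbor
    ])
    def test_age_boundaries(age, expected):
        assert is_eligible(age) == expected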
423 Decision Table Testing (K3): tester identifies conditions (often inputs) and the resulting
actions | Each column corresponds to a decision rule that defines a unique combination of
conditions which results in the execution | Boolean values (true or false) or discrete values (red,
green, blue), but can also be numbers or ranges of numbers | For conditions: Y > true [T/1], N >
false [F/0], — > doesn’t matter [N/A] | For actions: X > should occur [Y/T/1], Blank > should not
occur [–/N/F/0] | minimum coverage standard for decision table testing is to have at least one
test case per decision rule in the table
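A sketch of that minimum coverage (one test case per decision rule) for an invented discount table with two Boolean conditions and one action:

    import pytest

    def discount(is_member, order_total):
        # Invented test object implementing the decision table.
        if is_member and order_total > 100:
            return 0.10
        if is_member:
            return 0.05
        return 0.0

    @pytest.mark.parametrize("is_member,total,expected", [
        (True,  150, 0.10),  # rule 1: member=Y, total>100=Y -> 10%
        (True,   50, 0.05),  # rule 2: member=Y, total>100=N -> 5%
        (False, 150, 0.0),   # rule 3: member=N, total>100=Y -> no action
        (False,  50, 0.0),   # rule 4: member=N, total>100=N -> no action
    ])
    def test_one_case_per_rule(is_member, total, expected):
        assert discount(is_member, total) == expected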
424 State Transition Testing (K3): shows the possible software states, as well as how the
software enters, exits, and transitions between states | initiated by an event (user input of a value
into a field) | normally show only the valid transitions and exclude the invalid transitions |
Coverage is commonly measured as the number of identified states or transitions tested, divided
by the total number of identified states or transitions; see the sketch below
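A small sketch of transition coverage using an invented door model: the test walks every valid (state, event) transition once, giving 100% transition coverage for this model.

    # Invented state model: valid transitions only.
    TRANSITIONS = {
        ("closed", "open"):   "opened",
        ("opened", "close"):  "closed",
        ("closed", "lock"):   "locked",
        ("locked", "unlock"): "closed",
    }

    def step(state, event):
        # Invalid transitions are absent from the model and raise KeyError.
        return TRANSITIONS[(state, event)]

    def test_covers_all_valid_transitions():
        state = "closed"
        for event, expected in [("open", "opened"), ("close", "closed"),
                                ("lock", "locked"), ("unlock", "closed")]:
            state = step(state, event)
            assert state == expected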
425 Use Case Testing (K2): a specific way of designing interactions with software items | Use
cases are associated with actors (human users, external hardware, or other components or
systems) and subjects (component or system the use case is applied to) | can be described by
interactions and activities, as well as preconditions, postconditions and natural language | Tests
are designed to exercise the defined behaviors (basic, exceptional or alternative, and error
handling) | Coverage can be measured by the number of use case behaviors tested divided by
the total number of use case behaviors
4.3 White-box test techniques: commonly used at the component test level | 100% coverage
does not mean 100% tested! | Coverage cannot find what hasn't been written (missing code) |
Coverage only assesses the structure exercised and says nothing about the quality
431 Statement Testing and Coverage (K2): exercises the potential executable statements in
the code
432 Decision Testing and Coverage (K2): exercises the decisions in the code and tests the
code that is executed based on the decision outcomes | control flows [IF/DO-WHILE/CASE] |
Coverage is measured as the number of decision outcomes executed by the tests divided by the
total number of decision outcomes in the test object | Decision c. stronger than Statement c. >
433 The Value of Statement and Decision Testing (K2): 100% Decision means 100% Statement
When 100% decision coverage is achieved, it executes all decision outcomes, which includes
testing the true outcome and also the false outcome, even when there is no explicit false
statement (in the case of an IF statement without an else in the code). | Statement coverage helps
to find defects in code that was not exercised by other tests. | Decision coverage helps to find
defects in code where other tests have not taken both true and false outcomes.
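An invented illustration of the difference: the first test alone achieves 100% statement coverage, but only the second test, which exercises the False outcome of the IF (its implicit else), reaches 100% decision coverage.

    def safe_inverse(x):
        # Invented test object: IF without an explicit else.
        result = 0.0
        if x != 0:
            result = 1.0 / x
        return result

    def test_statement_coverage():
        # x = 2 executes every statement (the IF body included).
        assert safe_inverse(2) == 0.5

    def test_decision_coverage_needs_false_outcome():
        # x = 0 takes the False outcome; a wrong default value for
        # "result" would only be caught by this test.
        assert safe_inverse(0) == 0.0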
4.4 Experience-based Test Techniques: derived from tester’s skill, intuition and experience |
Coverage can be difficult to assess and may not be measurable with these techniques
441 Error Guessing (K2): How the application has worked in the past | What kind of errors tend
to be made | Failures that have occurred in other applications
442 Exploratory Testing (K2): informal (not pre-defined) tests are designed, executed, logged,
and evaluated dynamically during test execution | results are used to learn more about the
component or system, and to create tests for the areas that may need more testing | In session-
based testing, exploration happens within a defined time-box, and the tester uses a test charter containing test
objectives to guide the testing | The tester may use test session sheets to document the steps
followed and the discoveries made. | can incorporate the use of other black-box, white-box, and
experience-based techniques.
443 Checklist-based Testing (K2): design, implement, and execute tests to cover test conditions
found in a checklist. | can be created to support various test types, including functional and non-
functional testing | can provide guidelines and a degree of consistency | high-level lists (variability)
> greater coverage but less repeatability
Potential benefits of test independence include: Independent testers are likely to recognize
different kinds of failures compared to developers | can verify, challenge, or disprove assumptions
made by stakeholders during specification and implementation of the system | Independent
testers of a vendor can report in an upright and objective manner about the system under test
without (political) pressure of the company that hired them
Potential drawbacks of test independence include: Isolation from development team, may lead
to a lack of collaboration, delays in providing feedback to development team, or an adversarial
relationship with development team | Developers may lose a sense of responsibility for quality |
Independent testers may be seen as a bottleneck | Independent testers may lack some important
information (e.g., about the test object)
521 Purpose and Content of a Test Plan (K2): Determining the scope, objectives, and risks of
testing | Defining the overall approach of testing | Integrating and coordinating the test activities
into the software lifecycle activities | Making decisions about what to test, people and
resources required to perform the various test activities, and how test activities will be carried out
| Scheduling of test analysis, design, implementation, execution, and evaluation activities |
Selecting metrics for test monitoring and control | Budgeting for test activities | Determining
the level of detail and structure for test documentation
*Model-Based: based on some model of some required aspect of the product, such as a
function, a business process, an internal structure, or a non-functional characteristic. Examples of
such models include business process models, state models, and reliability growth models.
*Methodical: relies on making systematic use of some predefined set of tests or test conditions,
such as a taxonomy of common or likely types of failures, a list of important quality
characteristics, or company-wide look-and-feel standards for mobile apps or web pages.
*Process-compliant (or standard-compliant): involves analyzing, designing, and implementing
tests based on external rules and standards, such as those specified by industry-specific
standards, by process documentation, by the rigorous identification and use of the test basis, or
by any process or standard imposed on or by the organization.
*Regression-averse: is motivated by a desire to avoid regression of existing capabilities. This
test strategy includes reuse of existing testware (especially test cases and test data), extensive
automation of regression tests, and standard test suites.
*Reactive: testing is reactive to the component or system being tested, and the events occurring
during test execution, rather than being pre-planned. Tests are designed and implemented, and
may immediately be executed in response to knowledge gained from prior test results.
Exploratory testing is a common technique employed in reactive strategies.
523 Entry Criteria and Exit Criteria (Definition of Ready and Done) (K2):
entry criteria include: Availability of testable requirements, user stories, and/or models | A. of
test items that have met the exit criteria for any prior test levels | A. of test environment | A. of
necessary test tools | A. of test data and other necessary resources
exit criteria include: Planned tests have been executed | Defined level of coverage of req., user
stories, acceptance criteria, risks, code has been achieved | Number of unresolved defects is
within an agreed limit | Number of estimated remaining defects is sufficiently low | Evaluated
levels of reliability, performance efficiency, usability, security, and other relevant quality
characteristics are sufficient [Tests Coverage Defects Quality Money Schedule Risk]
524 Test Execution Schedule (K3): Once the various test cases and test procedures are
produced and assembled into test suites, the test suites can be arranged in a test execution
schedule | Factors taken into account: prioritization, dependencies, confirmation tests,
regression tests, and the most efficient sequence for executing the tests | Priorities &
Dependencies | Confirmation and regression tests must be prioritized
525 Factors Influencing the Test Effort (K1): product, development process, people, test results
Product characteristics: risks associated with the product | quality of the test basis | size of
the product | complexity of the product domain | requirements for quality characteristics (e.g.,
security, reliability) | required level of detail for test documentation | Requirements for legal and
regulatory compliance
*Test results: number and severity of defects found | amount of rework required
*expert-based technique: estimating the test effort based on the experience of the owners of
the testing tasks or by experts (in Agile planning poker, Wideband Delphi estimation technique)
5.3 Test Monitoring and Control: gather information + provide feedback -> assess test progress
*test control: Re-prioritizing tests when an identified risk occurs (e.g., software delivered late) |
Changing the test schedule due to availability or unavailability of a test environment or other
resources | Re-evaluating whether a test item meets an entry or exit criterion due to rework
*Common test metrics: Percentage of planned work done in test case preparation | Percentage
of planned work done in test environment preparation | Test case execution (number of test
cases run/not run, test cases passed/failed, and/or test conditions passed/failed) | Defect
information (defect density, defects found and fixed, failure rate, and confirmation test results) |
Test coverage of requirements, user stories, acceptance criteria, risks, code | Task completion,
resource allocation and usage, and effort | Cost of testing, including the cost compared to the
benefit of finding the next defect or the cost compared to the benefit of running the next test
532 Purposes, Contents and Audiences for Test Reports (K2): purpose of test reporting is to
summarize and communicate test activity information
*t. progress report: status of the test activities and progress against the test plan | Factors
impeding progress | Testing planned for the next reporting period | quality of the test object
*t. summary report: summary of testing performed | Information on what occurred during a test
period | Deviations from plan, including deviations in schedule, duration, or effort of test
activities | Status of testing and product quality with respect to exit criteria | Factors that have
blocked or continue to block progress | Metrics of defects, test cases, test coverage, activity
progress, and resource consumption | Residual risks | Reusable test work products produced
541 Configuration Management (K2): purpose is to establish and maintain the integrity of the
component or system | During test planning, configuration management procedures and
infrastructure (tools) should be identified and implemented | All test items are uniquely identified,
version controlled, tracked for changes, and related to each other | All items of testware are
uniquely identified, version controlled, tracked for changes, related to each other and related to
versions of the test item so that traceability can be maintained throughout the test process | All
identified documents and software items are referenced unambiguously in test documents.
551 Definition of Risk (K1): Risk involves the possibility of an event in the future which has
negative consequences. The level of risk is determined by the likelihood of the event and the
impact (the harm) from that event.
*Organizational issues: Skills, training, and staff may not be sufficient | Personnel issues may
cause conflict and problems | Users, business staff, or subject matter experts may not be
available due to conflicting business priorities
*Political issues: Testers may not communicate their needs and/or the test results adequately |
Developers and/or testers may fail to follow up on information found in testing and reviews (not
improving development and testing practices) | There may be an improper attitude toward, or
expectations of, testing (not appreciating the value of finding defects during testing)
*Technical issues: Requirements may not be defined well enough | The requirements may not
be met, given existing constraints | The test environment may not be ready on time | Data
conversion, migration planning, and their tool support may be late | Weaknesses in the
development process may impact the consistency or quality of project work products such as
design, code, configuration, test data, and test cases | Poor defect management and similar
problems may result in accumulated defects and other technical debt
*Supplier issues: A third party may fail to deliver a necessary product or service, or go
bankrupt | Contractual issues may cause problems to the project
*Tool support for management of testing and testware: Test management tools and
application lifecycle management tools (ALM) | Requirements management tools (traceability
to test objects) | Defect management tools | Configuration management tools | Continuous
integration tools (D)
*Tool support for static testing: Static analysis tools (D)
*Tool support for test design and implementation: Model-Based testing tools | Test data
preparation tools
*Tool support for test execution and logging: Test execution tools (to run regression tests) |
Coverage tools (requirements coverage, code coverage (D)) | Test harnesses (D)
*Tool support for performance measurement and dynamic analysis: Performance testing tools |
Dynamic analysis tools (D)
*Tool support for specialized testing needs: support specific testing for non-functional charac.
*risks: Expectations for the tool may be unrealistic | The time, cost and effort for the initial
introduction of a tool may be under-estimated | The time and effort needed to achieve
significant and continuing benefits from the tool may be under-estimated | The effort required to
maintain the test work products generated by the tool may be under-estimated | The tool may
be relied on too much | Version control of test work products may be neglected | Relationships
and interoperability issues between critical tools may be neglected | The tool vendor may go out
of business, retire the tool, or sell the tool to a different vendor | The vendor may provide a poor
response for support, upgrades, and defect fixes | An open source project may be suspended | A
new platform or technology may not be supported by the tool | There may be no clear
ownership of the tool (for mentoring, updates, etc.)
613 Special Considerations for Test Execution and Test Management Tools (K1):
*Data-driven test approach: separates out the test inputs and expected results, usually into a
spreadsheet
*Keyword-driven test approach: a generic script processes keywords describing the actions to
be taken
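Two minimal sketches of these approaches (the file name, keywords, and the app object are invented for illustration):

    import csv

    # Data-driven: inputs and expected results live in a spreadsheet/CSV,
    # and one generic script replays every row.
    def run_data_driven(app, path="login_cases.csv"):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                ok = app.login(row["user"], row["password"])
                assert ok == (row["expected"] == "ok")

    # Keyword-driven: a generic script maps action keywords (e.g., from a
    # table authored by non-programmers) onto functions of the test object.
    def run_keyword_driven(app, steps):
        actions = {
            "open":  app.open,
            "type":  app.type,
            "click": app.click,
        }
        for keyword, argument in steps:
            actions[keyword](argument)

    # Example keyword table: [("open", "/login"), ("type", "alice"), ("click", "OK")]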
Test management tools: To produce useful information in a format that fits the needs of the
organization | To maintain consistent traceability to requirements in a requirements
management tool | To link with test object version information in the configuration management tool
621 Main Principles for Tool Selection (K1): Assessment of the maturity of one's own
organization, its strengths and weaknesses | Identification of opportunities for an improved
test process supported by tools | Understanding of the technologies used by the test object(s),
in order to select a tool that is compatible with that technology | Understanding the build and
continuous integration tools already in use within the organization, in order to ensure tool
compatibility and integration | Evaluation of the tool against clear requirements and objective
criteria | Consideration of whether or not the tool is available for a free trial period (and for how
long) | Evaluation of the vendor or support for non-commercial (open source) tools | Identification
of internal requirements for coaching and mentoring in the use of the tool | Evaluation of
training needs, considering the testing (and test automation) skills of those who will be working
directly with the tool(s) | Consideration of pros and cons of various licensing models
(commercial or open source) | Estimation of a cost-benefit ratio based on a business case
622 Pilot Projects for Introducing a Tool into an Organization (K1): Gaining in-depth
knowledge about the tool, understanding both its strengths and weaknesses | Evaluating how
the tool fits with existing processes and practices, and determining what would need to change
| Deciding on standard ways of using, managing, storing, and maintaining the tool and the test
work products (deciding on naming conventions for files and tests, selecting coding standards,
creating libraries and defining the modularity of test suites) | Assessing whether the benefits will
be achieved at reasonable cost | Understanding the metrics that you wish the tool to collect
and report, and configuring the tool to ensure these metrics can be captured and reported
623 Success Factors for Tools (K1): Rolling out the tool to the rest of the organization
incrementally | Adapting and improving processes to fit with the use of the tool | Providing
training, coaching, and mentoring for tool users | Defining guidelines for the use of the tool
(internal standards for automation) | Implementing a way to gather usage information from the
actual use of the tool | Monitoring tool use and benefits | Providing support to the users of a
given tool | Gathering lessons learned from all users