
Software Testing Glossary

February 4, 2011
Version 1.00
Table of Contents
1.0 Introduction
1.1 Purpose
1.2 Terms

REVISION HISTORY

Version Number    Date    Revision Description    Author of Revision

1.0 Introduction
1.1 Purpose
The Software Testing Glossary identifies the terms used throughout the testing effort.

1.2 Terms

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.

Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g., visual, hearing, or cognitive impairments).

Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly
trying the system's functionality. This can include negative testing as well. See also Monkey
Testing.

Agile development: A project is executed as a series of small efforts, each focused on a small set of specific capabilities, with users and developers working together on the interplay of requirements and capabilities. Cycles are short and do not require detailed documentation, requirements, or designs. Users and developers work closely together as prototyping, rapid development, and testing proceed.

Agile Testing: Testing practice for projects using agile methodologies, treating development
as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven
Development.

Alpha Testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Application: Software built to support one or more functions.

Application Programming Interface (API): A formalized set of software calls and routines
that can be referenced by an application program in order to access supporting system or
network services.

Architecture: The fundamental organization of a system in terms of its components, their relationships to each other and to the business and technical environments, as well as the
principles governing its design and evolution. Architecture can include business processes,
hardware, software applications, and databases.

Assessment: Evaluating or estimating the quality of a product, identifying discrepancies between optimal and actual performance, and establishing priorities for action.

Automated Software Quality (ASQ): The use of software tools, such as automated testing
tools, to improve software quality.

Automated Testing: Testing employing software tools which execute tests without manual intervention; this can be applied to GUI, performance, API, and other types of testing. It involves the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
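
For illustration, a minimal sketch of an automated test, assuming Python and its built-in unittest framework; the apply_discount function and its values are hypothetical:

    import unittest

    def apply_discount(price, percent):
        # Hypothetical function under test.
        return round(price * (1 - percent / 100.0), 2)

    class DiscountTest(unittest.TestCase):
        def test_ten_percent_discount(self):
            # The tool compares the actual outcome to the predicted outcome.
            self.assertEqual(apply_discount(100.00, 10), 90.00)

    if __name__ == "__main__":
        unittest.main()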

Baseline: The point at which some deliverable produced during the software engineering
process is put under formal change control.

Benchmark Testing: Tests that use representative sets of programs and data designed to
evaluate the performance of computer hardware and software in a given configuration.

Benefits Realization Test: A test or analysis conducted after an application is moved into
production to determine whether it is likely to meet the originating business case.

Beta Testing: Testing of a release of a software product conducted by customers.

Black Box Testing: Testing based on an analysis of the specification of a piece of software
without reference to its internal workings. The goal is to test how well the component conforms
to the published requirements for the component.

Bottom Up Testing: An approach to integration testing where the lowest level components
are tested first, then used to facilitate the testing of higher level components. The process is
repeated until the component at the top of the hierarchy is tested.

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

Business process: A collection of related activities that produces a specific result (e.g.,
service or product) or meets an objective for a customer or group of customers.

Business process-based testing: Testing based on expected user profiles such as scenarios or use cases. Used in system testing and acceptance testing.

CAST: Computer Aided Software Testing.

Capture/Replay Tool: A test tool that records test input as it is sent to the software under test.
The input cases stored can then be used to reproduce the test at a later time. Most commonly
applied to GUI test tools.

Cause Effect Graph: A graphical representation of inputs and their associated output effects, which can be used to design test cases.

Change management: An approach to transitioning organizations or teams to a new state; this could be new processes and/or systems. Software change management involves the approach to handling modifications to software systems.

Checklists: A series of probing questions about the completeness and attributes of an application system. It both limits the scope of the test and directs the tester to the areas in which there is a high probability of a problem.

Code Complete: A phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

Code Coverage: An analysis method that determines which parts of the software have been
executed (covered) by the test case suite and which parts have not been executed and
therefore may require additional attention.
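
As a sketch of the idea, assuming Python and a hypothetical classify function: the single test below executes only one branch, so a coverage tool would report the other branch as not covered and therefore needing additional attention:

    def classify(value):
        # Hypothetical function with two branches.
        if value < 0:
            return "negative"
        return "non-negative"

    def test_negative_only():
        # Executes only the "value < 0" branch; the "non-negative" return is
        # never run, so coverage of classify() is incomplete.
        assert classify(-3) == "negative"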

Code Inspection: A formal testing technique where the programmer reviews source code with a group of peers who ask questions, analyze the program logic, check the code against a checklist of historically common programming errors, and analyze its compliance with coding standards.

Code Walkthrough: A formal testing technique where source code is traced by a group with a
small set of test cases, while the state of program variables is manually monitored, to analyze
the programmer's logic and assumptions.

Coding: The generation of source code.

CMMI: Capability Maturity Model Integration is a process improvement approach that
provides organizations with the essential elements of effective processes that ultimately improve
their performance.

CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging
the maturity of the software processes of an organization and for identifying the key practices
that are required to increase the maturity of these processes.

Compatibility Testing: Testing to determine whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

Complete Test Set: A test set containing data that causes each element of a pre-specified set of Boolean conditions to be true. In addition, each element of the test set causes at least one condition to be true.

Completeness: The property that all necessary parts of an entity are included. Often, a
product is said to be complete if it has met all requirements.

Completion criteria: Criteria for determining when planned testing is complete, defined in terms of a test measurement technique (e.g. coverage), cost, time, or faults found (number and/or severity). See Exit Criteria.

Component: A minimal software item for which a separate specification is available.

Component Testing: See Unit Testing.

Condition Coverage: A white-box testing technique that measures the number, or percentage, of decision outcomes covered by the test cases designed.

Configuration management: The practice of establishing and maintaining consistency of a product’s or system’s attributes with its requirements and evolving the technical baseline.

Configuration Testing: Testing of an application on all supported hardware and software platforms. This may include various combinations of hardware types, configuration settings, and software versions.

Conversion Testing: Validates the effectiveness of data conversion processes, including field-to-field mapping and data translation.
Correctness: The extent to which software is free from design and coding defects (i.e., fault-
free). It is also the extent to which software meets its specified requirements and user
objectives.

Cost of Quality (COQ): Money spent beyond expected production costs (labor, materials,
equipment) to ensure that the product the customer receives is a quality (defect free) product.
The Cost of Quality includes prevention, appraisal, and correction or repair costs.

COTS: Commercial Off-the-Shelf software. Technology which is ready-made and available for sale, lease, or license to the general public.

Coverage: The degree, expressed as a percentage, to which a specified coverage item (an entity or property used as a basis for testing) has been exercised by a set of tests.

Coverage-Based Analysis: A metric used to show the logic covered during a test session,
providing insight to the extent of testing.

Database: Collection of data or a data store to support one or more functions.

Data Dictionary: A database that contains definitions of all data items defined during analysis.

Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.

Data Driven Testing: Testing in which the action of a test case is parameterized by externally
defined data values, maintained as a file or spreadsheet. It is a common technique in
Automated Testing.
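
A minimal sketch, assuming Python with the pytest framework; the add function is hypothetical, and in practice the rows would typically be loaded from a file or spreadsheet:

    import pytest

    def add(a, b):
        # Hypothetical function under test.
        return a + b

    # Externally defined data values; one test case per row.
    TEST_DATA = [
        (1, 2, 3),
        (0, 0, 0),
        (-5, 5, 0),
    ]

    @pytest.mark.parametrize("a, b, expected", TEST_DATA)
    def test_add(a, b, expected):
        assert add(a, b) == expected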

Data Warehouse: A repository of an organization’s data, typically to support reporting; may include data from multiple functions (e.g., finance, marketing, sales, etc.).

DB2: IBM relational database platform.

Debugging: The process of finding and removing the causes of software failures.

Decision Table: A tool for documenting the unique combinations of conditions and associated results in order to derive unique test cases for validation testing.
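
For illustration, a decision table can be expressed directly as test data; this sketch assumes Python, and the loan-approval rule is hypothetical:

    # Each row is one unique combination of conditions and its expected result:
    # (good_credit, sufficient_income) -> approved
    DECISION_TABLE = [
        (True,  True,  True),
        (True,  False, False),
        (False, True,  False),
        (False, False, False),
    ]

    def approve_loan(good_credit, sufficient_income):
        # Hypothetical rule under test.
        return good_credit and sufficient_income

    def test_decision_table():
        for good_credit, sufficient_income, expected in DECISION_TABLE:
            assert approve_loan(good_credit, sufficient_income) == expected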

Defect: Nonconformance to requirements, functional or program specification.

Defect Tracking Tools: Tools for documenting defects as they are found during testing and
for tracking their status through to resolution.

Design: Definition of the components, modules, interfaces, and data to meet specified
requirements.

Development: The process of creating or modifying systems in accordance with the design.
This involves coding new modules or modifying existing code and creating and modifying the
databases.

Development Test: Testing conducted by the developer. Also known as Unit Test.

Endurance Testing: Checks for memory leaks or other problems that may occur with
prolonged execution.

End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

End-user: An end-user of a computer system is someone who operates the system to carry
out the day-to-day functions.

Entrance Criteria: Required conditions and standards for work product quality that must be
present or met for entry into the next stage of the software development process.

Error: A mistake in the system under test; usually but not always a coding mistake on the part
of the developer. An action that produces an incorrect result.

Error Guessing: Test data selection technique for picking values that seem likely to cause
defects. This technique is based upon the theory that test cases and test data can be
developed based on the intuition and experience of the tester.

Exhaustive Testing: Testing which covers all combinations of input values and preconditions
for an element of the software under test.

Exit Criteria: Standards for work product quality, which block the promotion of incomplete or
defective work products to subsequent stages of the software development process.

Expected outcome: The behavior predicted by the specification of an object under specified
conditions.

Failure: Deviation of the software from its expected delivery or service.

Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.

Flash: A browser plug-in.

Flowchart: A pictorial representation of data flow and computer logic.

Functional Requirement: A requirement that specifies a function that a system or system component must perform.

Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.

Functional Testing: Testing of the functional components of a system based upon the
requirements.

Glass Box Testing: A synonym for White Box Testing.

Gorilla Testing: Testing one particular module or functionality heavily.

Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing
a piece of software against its specification but using some knowledge of its internal workings.

GUI: Graphical User Interface.

High Order Tests: Black-box tests conducted once the software has been integrated.

HTML: Hypertext Markup Language. A web authoring language.

Impact: The probability that a defect will be experienced during normal business functions. Suggested Values: ">50%", ">30%", ">10%", ">1%".

Impact analysis: Assessing the effect of a change to an existing system, usually in maintenance testing, to determine the amount of regression testing to be done.

Incident: Any significant unplanned event that occurs during testing and requires subsequent investigation and/or correction, for example when expected and actual test results differ. Incidents can be raised against documentation as well as code. Incidents are logged when a person other than the author of the product performs the testing.

Independent Verification and Validation (IV&V): The verification and validation of a software
product by an organization that is both technically and managerially separate from the
organization responsible for developing the product.

Informal review: A widely used type of review which is undocumented but useful.

Inspection: A group review quality-improvement process for written material. It consists of two aspects: product (document) improvement and process improvement (of both document production and inspection). An inspection is led by a trained leader or moderator (not the author) and includes defined roles, metrics, rules, checklists, and entry and exit criteria.

Installation Testing: Confirms that the application under test installs, upgrades, and uninstalls correctly on the supported platforms and configurations, and that it is operational after installation.

Integration Testing: Testing of combined parts of an application to determine if they function together correctly; usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

Invalid Input: Test data that lays outside the domain of the function the program represents.

IP Address: Internet Protocol Address.

Isolation testing: Component testing of individual components in isolation from surrounding components, with the surrounding components being simulated by stubs.

Iterative software development process: A process developed in response to the weaknesses of the waterfall model. It starts with initial planning and ends with deployment, with cyclic iterations in between. Iterative development is an essential part of the Rational Unified
Process, Extreme Programming, and agile software development frameworks generally. Also known as incremental development.

Java: An object-oriented programming language.

Life Cycle Testing: The process of verifying the consistency, completeness, and correctness
of software at each stage of the development life cycle.

Load Testing: See Performance Testing.


Maintenance testing: Testing changes (fixes or enhancements) to existing systems. May include analysis of the impact of the change to decide what regression testing should be done.

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code.

Microsoft Access: Windows-based database tool.

Model office: An environment for system or user acceptance testing that is as close to field use as possible.

Negative testing: Testing aimed at showing software does not work. Also known as dirty
testing.
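
A minimal sketch of a negative test, assuming Python with the pytest framework; the set_age function is hypothetical:

    import pytest

    def set_age(age):
        # Hypothetical function that is expected to reject invalid input.
        if age < 0:
            raise ValueError("age must not be negative")
        return age

    def test_rejects_negative_age():
        # The test passes only if the software refuses the invalid value.
        with pytest.raises(ValueError):
            set_age(-1)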

Network Analyzers: Tools used to assist in detecting and diagnosing network problems.

Non-functional system testing: Testing of system requirements that do not relate to functionality, such as performance, usability, and security. These requirements are also known as quality attributes.

N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in
which errors found in test cycle N are resolved and the solution is retested in test cycle N+1.
The cycles are typically repeated until the solution reaches a steady state and there are no
errors. See also Regression Testing.

Operational Test: A US DoD term for testing performed by the end-user on software in its
normal operating environment.

Operational Test and Evaluation: Formal testing conducted prior to deployment to evaluate
the operational effectiveness and suitability of the system with respect to its mission.

Output Forcing: Creating a set of test cases designed to produce a particular output from the system. The focus is on creating the desired output, not on the input that initiates the system response.

Pass/Fail Criteria: Decision rules used to determine whether a software item or feature
passes or fails a test.

Path: A sequence of executable statements of a component, from an entry point to an exit point.

Path Testing: Testing in which all paths in the program source code are tested at least once.

Peer review: A type of review which is documented, has defined fault-detection processes, and includes peers and technical experts but no managers. Also known as a technical review.

Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as Load Testing.

Performance/Timing Analyzer: A tool to measure system performance.

Positive Testing: Testing aimed at showing software works. See also Negative Testing.

Precondition: Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value.
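
In automated tests, preconditions are often established by setup code or fixtures. A minimal sketch, assuming Python with the pytest framework; the session contents are hypothetical:

    import pytest

    @pytest.fixture
    def logged_in_session():
        # Precondition: the component is exercised against an authenticated
        # session in a known state.
        session = {"user": "tester", "authenticated": True}
        yield session
        session.clear()  # restore the environment after the test

    def test_profile_requires_login(logged_in_session):
        assert logged_in_session["authenticated"] is True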

Priority: A measure of the effect of a defect on the ability of a business (either the software producer or the end-user client) to conduct business.

Proof of Correctness: The use of mathematical logic techniques to show that a relationship
between program variables assumed true at program entry implies that another relationship
between program variables holds at program exit.

QTP: Quick Test Professional, a tool which is used to automate functional, system, end-to-end,
and, sometimes, unit tests.

Quality Assurance: All those planned or systematic actions necessary to provide adequate
confidence that a product or service is of the type and quality needed and expected by the
customer.

Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

Quality Center: A tool which is used to manage the testing life-cycle.

Quality Control: The operational techniques and the activities used to fulfill and verify
requirements of quality.

Quality Management: The aspect of the overall management function that determines and
implements the quality policy.

Quality Policy: The overall intentions and direction of an organization as regards quality as
formally expressed by top management.

Quality review committee: A committee established by a professional organization or institution to assess and ensure quality. Unlike a peer review committee, it can function on its own initiative with regard to a broad range of topics.

Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
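
A minimal sketch of the problem, assuming Python's threading module: two threads perform an unsynchronized read-modify-write on a shared counter, so updates can interleave and be lost, and the final value varies from run to run:

    import threading

    counter = 0

    def increment_many(times):
        global counter
        for _ in range(times):
            current = counter      # read the shared value
            counter = current + 1  # write back; another thread may have
                                   # updated counter in between (lost update)

    threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Without a lock the result can be less than 200000; guarding the
    # read-modify-write with threading.Lock() removes the race.
    print(counter)
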
Ramp Testing: Continuously raising an input signal until the system breaks down.

Rational tools: A set of tools created by IBM to support the software development life-cycle.

Rational Unified Process (RUP): An iterative software development process framework. Within each iteration, the tasks are categorized into nine disciplines: Business Modeling, Requirements, Analysis and Design, Implementation, Test, Deployment, Configuration and Change Management, Project Management, and Environment. RUP defines a project life cycle consisting of four phases: Inception, Elaboration, Construction, and Transition.

Recovery Testing: Confirms that the program recovers from expected or unexpected events
without loss of data or functionality. Events can include shortage of disk space, unexpected
loss of communication, or power out conditions.

Regression testing: Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made, and that the modified system still meets its requirements. It is performed whenever the software or its environment is changed.

Release Candidate: A pre-release version that contains the desired functionality of the final version but still needs to be tested for bugs, which ideally should be removed before the final version is released.

Reliability: The probability that software will not cause the failure of a system for a specified time under specified conditions.

Requirement: A clearly articulated and documented need; what a particular product or service should do or support. Requirements can refer to business or technical capabilities, including functions to be performed, security attributes, or performance attributes (e.g., how fast a query should return results).

Retesting: Running a test more than once, often after a defect has been identified and fixed.

Review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, or other interested parties for comment or approval. Types of review include walkthrough, inspection, informal review, and technical or peer review.

Sanity Testing: Brief test of major functional elements of a piece of software to determine if
it’s basically operational. See also Smoke Testing.

Scalability Testing: Performance testing focused on ensuring the application under test
gracefully handles increases in work load.

Security Testing: Testing which confirms that the program can restrict access to authorized
personnel and that the authorized personnel can access the functions available to their
security level.

Section 508 compliance testing: testing to ensure that the system meets the accessibility
and interoperability requirements mandated by Section 508 of the Rehabilitation Act of 1973,
as amended (29 U.S.C. 794d). This act addresses making information systems accessible to
users with disabilities.

Severity: Identifies the criticality of the defect (e.g., critical for outages to minor for cosmetic problems).

Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work
prior to introducing a build to the main testing process. Originated in the hardware testing
practice of turning on a new piece of hardware for the first time and considering it a success if
it does not catch on fire.

Soak Testing: Running a system at high load for a prolonged period of time. For example,
running several times more transactions in an entire day (or night) than would be expected in a
busy day, to identify any performance problems that appear after a large number of
transactions have been executed.

Software Development Lifecycle: SDLC is the process of creating or altering systems, and
the models and methodologies that people use to develop these systems.

Software Requirements Specification: A deliverable that describes all data, functional, and behavioral requirements, all constraints, and all validation requirements for software.

Software Testing: A set of activities conducted with the intent of finding errors in software.

Spiral development: An approach in which each cycle or level in the spiral model includes several activities found in various phases of the waterfall model. The project is executed as a series of waterfall life-cycles, and development is incremental.

SQL: A database query language.

SQL Server: A relational database platform produced by Microsoft.

Standards: The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.

Static Analysis: Analysis of a program carried out without executing the program.

Static Analyzer: A tool that carries out static analysis.

Static testing: Testing of an object without executing it on a computer, i.e., analysis of a program carried out without executing the program. Includes static analysis by a software program and all forms of review.

Storage Testing: Testing that verifies the program under test stores data files in the correct
directories and that it reserves sufficient space to prevent unexpected termination resulting
from lack of space. This is external storage as opposed to internal storage.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits
of its specified requirements to determine the load under which it fails and how. Often this is
performance testing using a very high level of simulated load.

Structural Testing: Testing based on an analysis of internal workings and structure of a piece
of software. See also White Box Testing.

Stub: A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it. Used in integration testing.
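
For illustration, a minimal sketch in Python; the payment gateway and checkout names are hypothetical:

    class PaymentGatewayStub:
        # Skeletal stand-in for a real payment service that is unavailable
        # or too costly to call during testing.
        def charge(self, amount):
            return {"status": "approved", "amount": amount}

    class CheckoutService:
        # Component under test; it depends on a payment gateway.
        def __init__(self, gateway):
            self.gateway = gateway

        def place_order(self, amount):
            return self.gateway.charge(amount)["status"] == "approved"

    def test_checkout_with_stubbed_gateway():
        service = CheckoutService(PaymentGatewayStub())
        assert service.place_order(25.00) is True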

System assurance: The justified confidence that the system functions as intended and is free of exploitable vulnerabilities, whether intentionally or unintentionally designed or inserted as part of the system at any time during the life cycle.

System Testing: Testing that attempts to discover defects that are properties of the entire
system rather than of its individual components. The process of testing an integrated system to
verify that it meets specified requirements. System testing covers both functional and non-
functional system testing.

Technical review: A type of review which is documented, has defined fault-detection processes, and includes peers and technical experts but no managers. Also known as a peer review.

Testability: The degree to which a system or component facilitates the establishment of test
criteria and the performance of tests to determine whether those criteria have been met.

Test Automation: See Automated Testing.

Test Bed: An execution environment configured for testing. It may consist of specific hardware, Operating System, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

Test case: A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test case design technique: A method used to derive or select test cases.

Test condition: Anything that could be tested.

Test control: Actions taken by a test manager, such as reallocating test resources. This may involve changing the test schedule, test environments, number of testers, etc.

Test Data Set: Set of input elements used in the testing process.

Test Director: A tool used to manage the testing life-cycle (a component of Quality Center).

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a large number of tests, often with roughly as many lines of test code as production code.
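
A minimal sketch of the cycle, assuming Python with the pytest framework; the slugify function is hypothetical:

    # Step 1: write the test first; it fails while slugify() does not exist.
    def test_slugify_replaces_spaces_with_hyphens():
        assert slugify("Hello World") == "hello-world"

    # Step 2: write just enough production code to make the test pass.
    def slugify(text):
        return text.strip().lower().replace(" ", "-")

    # Step 3: rerun the tests (now green) and refactor with confidence.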

Test Driver: A program or test tool used to execute a test. Also known as a Test Harness.

Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.

Test First Design: Test-first design is one of the mandatory practices of Extreme
Programming (XP). It requires that programmers do not write any production code until they
have first written a unit test.

Test Harness: A program or test tool used to execute a test. Also known as a Test Driver.

Test Incident Report: A document describing any event during the testing process that requires investigation.

Testing: The process of exercising software to verify that it satisfies specified requirements, to detect faults, and to measure software quality.

Test Log: A chronological record of relevant details about the execution of tests.

Test Plan: A document describing the scope, approach, resources, and schedule of intended
testing activities. It identifies test items, the features to be tested, the testing tasks, who will do
each task, and any risks requiring contingency planning. It also identifies the test planning
process detailing the degree of tester independence, the test environment, the test case
design techniques and test measurement techniques to be used, and the rationale for these
choices.

Test Procedure: A document providing detailed instructions for the execution of one or more
test cases.

Test records: For each test, an unambiguous record of the identities and versions of the component or system under test, the test specification, and the actual outcome.

Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they
are to be executed.

Test Script: Commonly used to refer to the instructions for a particular test that will be carried
out by an automated test tool.

Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results, and execution conditions for the associated tests.

Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may, for example, be several Test Suites for a particular product. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Test Summary Report: A document that describes testing activities and results and evaluates
the corresponding test items.

Test Tools: Computer programs used in the testing of a system, a component of the system,
or its documentation.

Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

Top-down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower-level components being simulated by stubs. Tested components are then used to test lower-level components. The process is repeated until the lowest-level components have been included.

Total Quality Management: A company commitment to develop a process that achieves high-quality products and customer satisfaction.

Traceability Matrix: A document showing the relationship between Test Requirements and
Test Cases.

Usability Testing: Testing the ease with which users can learn and use a product.

Use Case: The specification of tests that are conducted from the end-user perspective. Use
cases tend to focus on operating software as an end-user would conduct their day-to-day
activities.

User: The customer that actually uses the product received.

User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.

Unit Testing: Testing individual programs, modules, or components to demonstrate that the work package executes per specification and to validate the design and technical quality of the application.

Validation: The process of evaluating software at the end of the software development
process to ensure compliance with software requirements. The techniques for validation are
testing, inspection and reviewing.

Valid Input: Test data that lie within the domain of the function represented by the program.

Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection, and reviewing.

Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

Walkthrough: A review of requirements, designs, or code characterized by the author of the material under review guiding the progression of the review.

Waterfall Development: A sequential software development process in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design (validation), Construction, Testing, and Maintenance.

White Box Testing: Testing based on an analysis of internal workings and structure of a piece
of software. It includes techniques such as Branch Testing and Path Testing. Also known as
Structural Testing and Glass Box Testing. Contrast with Black Box Testing.

Workflow Testing: Scripted end-to-end testing which duplicates specific workflows that are
expected to be utilized by the end-user.

