TEST AUTOMATION BODY OF KNOWLEDGE
(TABOK)
GUIDEBOOK
Version 1.1
© 2011
List of Figures

Figure 1-1: SDLC and the Automated Testing Lifecycle Methodology (ATLM)
Figure 1-2: Cumulative Coverage
Figure 1-3: NIST software quality study results
Figure 1-4: Communication Breakdown (unknown author)
Figure 1-5: Quality benefits of discovering defects earlier in the development cycle
Figure 1-6: Escalating costs to repair defects
Figure 1-7: ROI Formula
Figure 4-1: Automation Cost
Figure 4-2: Data-driven Construct Example
Figure 4-3: Functional Decomposition
Figure 5-1: Generic Framework Component Diagram
Figure 5-2: Execution Level File Example
Figure 5-3: Driver Script Example
Figure 5-4: Initialization Script Example
Figure 5-5: Example Test Environment Configuration and Iterations
Figure 5-6: Configuration Script Example
Figure 5-7: Automated Framework Directory Structure Example
Figure 6-1: Automation Criteria Checklist
Figure 6-2: Code-based Interface
Figure 6-3: Functional Decomposition Interface
Figure 6-4: Keyword Driven Interface
Figure 6-5: VNC Illustration
Figure 6-6: Test Execution Domino Effect
Figure 6-7: Parallel Test Execution
Figure 7-1: Quality Attribute Optimization Examples
Figure 8-1: Sample Pseudocode for Data-Driven Invalid Login Test
Figure 8-2: Class, Object, Properties, Methods, Collections Illustration
Figure 9-1: Object Map Example
Figure 9-2: Document Object Model
Figure 9-3: Dynamic Object Illustration
Figure 9-4: Dynamic Object Map
Figure 10-1: Script Error Types
Figure 10-2: Debugging With Breakpoints Example
Figure 10-3: Application Error Scenario
Figure 10-4: Error Simplification
Figure 10-5: Wait Statement
Figure 10-6: Synchronization Statement
Figure 11-1: Popup Error Message
Figure 11-2: Error Handling Development Process
Figure 11-3: In-Script Error Handling Example
Figure 11-4: Passing to Error Handler
Figure 11-5: Error Handler Example
The Test Automation Body of Knowledge (TABOK) is a tool-neutral skill set designed to
help software test automation professionals address automated software testing
challenges. Although the two disciplines are geared to similar aims, automated software
testing is separate from manual software testing and must be treated as such. For this
reason, the TABOK provides engineers with a way to assess, improve, and market their
automated testing skills more effectively than tool-specific benchmarks can alone.
The body of knowledge may also be used by organizations as specific criteria for more
effectively assessing resources and establishing career development tracks. Not every
test automation engineer is required to be an expert in each skill category, but
knowledge of the different skill categories is essential for professional improvement,
growth, and development. That said, we recommend that each automated test effort
include a team of professionals who collectively possess all of the skills, regardless of
how many people make up the team.
This TABOK Guidebook provides guidance in understanding the Test Automation Body
of Knowledge, and may also be used by engineers as a self-study guide for the
Functional Test Automation Professional (F-TAP) Certification. Test
automation is a broad discipline that no single resource can cover exhaustively, so this
manual addresses test automation concepts in a broad sense while providing a deeper
focus on System Functional Test Automation. Although this manual includes key
concepts and workflows relative to the creation and implementation of automated
testing, readers should plan on using several supplemental resources to complement the
concepts relayed in this guidebook. Each major section of this guidebook includes a
sample list of references that may be used to gain greater insight and information on the
topics covered in the respective section. In addition, the References section at the end
of the manual also provides a list of useful and comprehensive references. The reader
is responsible, however, for finding their own additional and relevant references as
deemed necessary. Every automated test tool has its own characteristics, advantages,
disadvantages, and scope, but there are common approaches and concepts that may
be applied in order for any tool to be used reliably and to best advantage; the TABOK
guidebook offers guidance in understanding those approaches and concepts.
The TABOK guidebook assumes that the reader has some background in software or
systems testing and is comfortable with the concepts of testing, its terminology, and
methodologies. If used as a self-study guide for the F-TAP certification, this guidebook
should not be construed as a single resource for all topics that will be addressed on the
certification exam. It is instead a non-mandatory tool to aid in exam preparation that
assumes the certification candidate also has the prerequisite experience and education
in the discipline of software test automation that is necessary for passing the exam.
The ability to understand the role of automated testing in the software development
and testing lifecycle is perhaps the most critical of all since that knowledge guides
For simplicity, the different types of automated testing – functional (regression), unit,
integration, performance, etc. – are often discussed together, but they perform
different functions at different phases of the software development lifecycle. While
you are not expected to be an expert in all test automation types, it is important to
understand basic concepts in each type of automation to provide management with
confidence in your test automation organization.
3. Automation Tools
4. Automation Frameworks
Total automation cost includes both development and maintenance costs. As the
automation framework becomes more defined, scripting (and its cost) increases in
complexity, but over time maintenance efforts (and costs) will decrease. Post-release,
maintenance becomes increasingly important, which requires that the automation
framework likewise mature in order to reduce total automation costs.
Therefore, before defining the framework to support test automation, you must
evaluate the implied and/or stated scope of the organization's automation effort, and
implement it in concert with the requirements and design phase of the software
development lifecycle.
Automation frameworks may range from simple (but unreliable) Record & Playback
processes, to functional decomposition that marries requirements to test scripts that
may be run independently or in defined combinations or sequences in a given test bed,
to more complex, abstracted scripts that may be reused by tests within the same or
different applications.
5. Automation Framework Design Process
Designing an automated test framework is not an exact science. You can definitively
identify the different types of frameworks, but the process for selecting and implementing
a particular framework is harder to pin down. The most important point, however,
is to base a well-considered approach on common, successfully implemented
industry practices and tailor it to your organization. This skill category
addresses developing and executing critical activities including selecting a
framework type, identifying framework components, identifying the framework
directory structure, developing implementation standards, and developing automated
tests.
6. Automated Test Script Concepts
This skill category defines different quality attributes of the automated test suite and
identifies ways of addressing these attributes based on priorities and constraints
surrounding each. Some quality attributes include maintainability, portability,
flexibility, robustness, scalability, reliability, usability, and performance.
8. Programming Concepts
Whether you use a tool with a scripting language, tree structure, and/or keywords,
fundamental programming concepts are necessary to effectively automate tests and
increase testing flexibility to include application-to-system coverage.
9. Automation Objects
Regardless of how well automated tests are planned, designed, and created, bugs
will occur. It can be difficult, in fact, to determine whether the problem is due to the
application under test (AUT), the test script itself, the test environment, or a
combination of these and other factors. Skillful debugging identifies syntax, runtime,
logical, and application errors that may be the root cause(s) of failure so they can be
repaired, re-tested, and ultimately deployed. Successful debugging also benefits the
entire project by helping to avoid schedule delays brought on by unexpected crashes
of the automation framework, unreliable test results, and, at worst, the release of a
poor-quality product that costs much time, money, and credibility to repair.
11. Error Handling
Error handling dictates how the test script responds and reports when the application
under test deviates from anticipated behaviors and outputs. This makes it a critical
component in pinpointing bugs and their remediation while allowing testing to
continue. Well-developed error handling helps to diagnose potential errors, log error
data, identify and report points of failure, and trap critical diagnostic outputs. Error
handling is implemented in a variety of ways within automated test scripts, generally
step-by-step, by component, or at run time.
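As a simple illustration, a minimal Python sketch (click_login_button and the log file name are hypothetical, not a particular tool's API) shows in-script error handling that traps an unexpected event, logs diagnostic data, reports the point of failure, and lets the run continue:

import logging

logging.basicConfig(filename="test_run.log", level=logging.INFO)

def click_login_button():
    # Hypothetical test step that fails unexpectedly at run time.
    raise RuntimeError("Login button not found")

def run_step(step):
    try:
        step()
        logging.info("%s: Pass", step.__name__)
    except Exception as exc:
        # Trap the error, log diagnostic data, and report the point of
        # failure so that subsequent steps and tests can continue.
        logging.error("%s failed: %s", step.__name__, exc)

run_step(click_login_button)
print("Test run continues after the handled error")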
Manual testing, however, is able to leverage one characteristic of the human mind that
is devilishly difficult and expensive to automate: the power of learning and judgment.
Consider that a computer is a small, 1-processor computing element while the human
brain is composed of about 100 billion connected neurons that function as an organic
whole. In a sense, this makes the human mind a system of millions of processors that
have been preprogrammed through years of learning (and, as some might assert,
through thousands of years of evolutionary programming) that work together. In a
fraction of a second, the human brain can recognize patterns that would take hours for a
computer to learn or a test engineer to script, and can then learn from, and make
adjustments, based on those patterns. Also, humans visually inspect applications and
perform numerous verification points whether or not these verification points are
documented as part of a test procedure. While computers are faster at accomplishing
routine, simple tasks, they make adjustments (a form of judgment) in processing
only in the ways they are programmed to adjust. Automated test tools and scripts only
check exactly what they are programmed to check.
Consider this example: A certain AUT mysteriously shuts down during a test; a manual
tester knows instinctively that to continue the test, the application must be re-launched.
The manual test procedure need not be written to include this type of exception
handling instruction. An automated test, however, would fail miserably in this situation
unless it was specifically programmed to handle that situation or state at the exact point
it occurred.
Automated testing is the use of software to test or support the testing of other software
known as an AUT. This includes the use of software to dynamically set up test
preconditions, produce test data, execute test procedures against the AUT, and
dynamically compare actual results to expected results of a test. Also, this may involve
the use of software to dynamically collate test results and produce meaningful test
charts, graphs and reports that effectively relay those results to stakeholders.
Additionally, it entails general support for and implementation of software tools that
integrate with and help to facilitate the testing process. The "general support" view of
test automation – supported by Agile Test Automation Principles[1], as well as many
testing organizations – calls for awareness of all phases of the testing lifecycle, the tools
that support the lifecycle, and the approaches for effectively using these tools.

[1] http://www.satisfice.com/articles/agileauto-paper.pdf
A key difference between manual and automated testing is that a single test engineer
can run many test scripts simultaneously or in sequence, as the test plan demands.
Automated scripts can be executed by a manual command, or launched on a timer or
from a software trigger. Not only can this potentially reduce the load on the test team's
system resources, it also allows more flexibility in the timing and staffing of the test cycle.
Beyond running scripts, automated testing can store results for regression test results
comparison.
The protocols in test planning, design, development, and implementation are, therefore,
very different for each testing discipline. So what does this mean when considering
a test automation approach?
Automated test staff must have a deep knowledge of the full hardware,
development, and software environments as well as all phases of the testing
lifecycle and, thus, the test tools that support each phase.
The human element is still paramount in automated testing: someone must question
the results of tests to assure that they are reliable and actionable.
Automated test staff must have proficiency in the test tools in use, as well as in
the AUT.
Good tools do not replace the ingenuity, skill, and judgment of test professionals.
Nor do they fix poor test processes or decision-making.
In general, implementing test automation will likely result in higher up-front costs
for software testing due to
→ The costs of tools, including their evaluation, purchase, licensing,
installation, and maintenance
→ Increased pre-planning and planning time for the appropriate integration of
automated testing into all phases of the application's lifecycle
→ The ability to develop more sophisticated test scripts that evaluate the
application itself in addition to its integration in the deployment hardware
and software environment
→ Testing the test scripts to assure they fully address requirements
→ Skills of the test professionals
→ Expanded test environment
→ Richer test results to analyze and use
Armed with this information, it is easy to see how developing automated tests to be
implemented by a computer program requires a different set of skills than developing
manual tests to be implemented by a manual tester. See Roles and Responsibilities to
learn more about specific skills required for each role.
Figure 1-1: SDLC and the Automated Testing Lifecycle Methodology (ATLM)[3]

[2] Several methodologies can be classified as the software (or system) development lifecycle. Regardless of whether the organization follows a waterfall, agile, or extreme programming approach, testing touches every point in the application's lifespan.

[3] Dustin, Elfriede, Jeff Rashka, and John Paul. Automated Software Testing: Introduction, Management, and Performance. Boston, MA: Addison-Wesley, 1999. For more information see Appendix E: The TABOK, SDLC and the Automated Testing Lifecycle Methodology.

[4] Your work is not done. Part of implementing the test automation effort is regularly managing those expectations and the occasional nay-saying derisions.
In a well-developed test automation effort, the net effect of these benefits can result in:
Cost Savings – long-term savings through repeatable and reusable tests, reduced
staff load, earlier identification and repair of defects, and reduced rework.
Increased Efficiency – savings through faster test execution time and schedule
reduction.
Increased Software Quality – increased and deeper test coverage throughout
software and hardware components to reduce the risk and cost of a potential
failure reaching production.
These categories are tightly interrelated in that increased efficiency and
increased software quality will ultimately lead to cost savings (when failure costs are
considered), and increased efficiency may lead to increased software quality. Please
see Section 1.3 Automation Return-on-Investment (ROI) for different ways of
quantifying these benefits.
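As a point of reference, a common generic formulation (not necessarily the exact form presented in Figure 1-7: ROI Formula) computes return on investment as

ROI = (Benefits - Costs) / Costs

so that, for example, $150,000 in quantified benefits against $100,000 in automation costs yields an ROI of 0.5, or 50%.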
(from 25 during the first build to 81 by the final build, for an ideal Cumulative Coverage
of 268 tests).
When the allotted testing time is short, some regression testing is often sacrificed,
especially if a substantial amount of new functionality has been included in the build.
Neglecting to regression test the existing functionality along with the new, however,
often poses a greater risk to the quality, integrity, and reliability of the application.
Cutting corners in full regression occurs for many reasons. Sometimes an organization
considers this type of redundancy a wasteful use of time and resources, without
understanding that the best designed and developed functionality can still introduce
defects upon integration. In the current commercial market, defects in established
functionality deployed with the release of a new version can greatly – and adversely –
affect the system developer's (and organization's) reputation and, by extension, profits.
The time and cost to repair the defects and redeploy, as well as the organization's lost
standing in the market, may far outweigh the initial cost of responsible testing.
By running repeatable test scripts, unpredictable defects can be identified and repaired
more quickly. This can increase test coverage over multiple generations of builds,
which can result in a more reliable, stable product that meets requirements while
reducing the risk of failure.
[5] These figures are reported in the thorough study by RTI for the National Institute of Standards and Technology. See Gregory Tassey's The Economic Impacts of Inadequate Infrastructure for Software Testing. Planning Report 02-3. May 2002. Available at http://www.nist.gov/director/planning/upload/report02-3.pdf.

Figure 1-3: NIST software quality study results[6]

[6] Hewlett-Packard. Reducing risk through requirements-driven quality management: An end-to-end approach. 2007. Available at http://viewer.media.bitpipe.com/1000733242_857/1181846419_126/74536mg.pdf.
integrated with the test tool suite. This bridge results in a neat mapping of user goals to
functional requirements, and those functional requirements may be mapped to the code
specifications, code base, and test plans and scripts.
Examination of automated test tools used by testers for automated test development
and implementation provides another example of how tools can bridge communication
gaps. Given that these tools require coding, and a lot of the same skills commonly used
by developers, the testers have an opportunity for greater collaboration with developers
from which a mutual respect for each other can be formed. Tearing down walls of poor
communication ultimately builds efficiency in the development process and team
workings, saves valuable time and resources, and improves product quality.
1.1.2 Misconceptions
Many of the challenges involved in the implementation of a successful automated test
effort have nothing to do with the technical skills needed to get the job done. The challenge
is in getting the job done while effectively managing the unrealistic expectations and
misconceptions held by stakeholders, specifically when it comes to functional test
automation. Leaving these misconceptions unchecked means operating a test
that can help better implement the activities and efforts that are already working. In
some situations, if the processes are strong enough, a tool may not even be necessary.
This misconception also implies another consideration when implementing a new tool:
planning its implementation should include coordination with the organization's business
processes in addition to its application design and development processes.
detailed discussion of the advantages and disadvantages that may occur with Record &
Playback.
team communication is fostering the introduction of defects into the system, so a tool to
help streamline collaboration and communication is desired.
A requirements matrix (such as the Criteria Checklist illustrated in Appendix A: Sample
Evaluation Criteria) is a tool to help the application team (which includes project
managers, requirements analysts, developers, and test engineers) capture the
requirements the tool must meet and identify which tools fit the bill. This matrix also
supports a convincing business case by describing the tools and benefits the
proposed solution will provide, and by demonstrating that due diligence and proper
analysis have driven the decision. See Skill Category 3: Automation Tools for
information on specific features of various tools that support the SDLC.
[7] Be sure to be conservative in ROI estimates, both to prevent the early inception of unrealistic expectations and to allow for escalation in costs.

[8] In reality, however, the tool selection often pre-dates and influences the decision to automate. If this is not the case, an evaluation must take place.
considerations (such as loading the test tool and configuring it for the specific
environment), staffing requirements, tool training, processes, roll-out schedule,
evaluation schedule, and the like. Once the tool is acquired, these plans (vetted and
communicated to all appropriate stakeholders) drive the order of activities in its initial
implementation and the tracking of those activities' status.
Planning should also include
An effective "public relations" campaign within the team and with external
stakeholders (as appropriate) to manage expectations of the new tool's
position in the application environment.
Training for appropriate personnel.
Modifying existing processes to accommodate the new tool.
Updating the system under test to accommodate the tool implementation, such
as adding stubs to the code to make the application automatable.
Automation test engineers are often tasked with reporting ROI to decision-makers, and
without regularly and candidly reporting the unadulterated ROI, the benefits of testing
may go unnoticed and misconceptions about test automation may begin to creep in.
The test professionals must manage expectations for successful automation by
[9] Borland Corporation. Successful Projects Begin with Well-Defined Requirements. E-Project Management Advisory Service Executive Update 2 (7), 2001. Available at http://www.borland.com/resources/en/pdf/solutions/rdm-success-projects-defined-req.pdf.

[10] W. Charles Slavin. Software Peer Reviews: An Executive Overview. Cincinnati SPIN. January 9, 2007. Available at www.cincinnatispin.com/1_2008.ppt. From Grady, Robert B. 1999. "An Economic Release Decision Model: Insights into Software Project Management." In Proceedings of the Applications of Software Measurement Conference, 227–239. Orange Park, FL: Software Quality Engineering.

Figure 1-5: Quality benefits of discovering defects earlier in the development cycle

Figure 1-6: Escalating costs to repair defects[11]

[11] Karl E. Wiegers and Sandra McKinsey. Accelerate Development by Getting Requirements Right. Serena Corporation. Available at http://www.serena.com/docs/repository/products/dimensions/accelerate-developme.pdf.
[12] This approach generally works well with processes – automated testing activities – when those processes can be run in parallel without concern about dependencies between test activities. It does not work well, however, when dividing work across multiple test engineers without regard to the dependencies and order of test protocols. This scenario is referred to as the "mythical man month" and is analogous to assuming that 9 women can be pregnant for 1 month each and produce a baby.
The software testing profession has many distinct and widely accepted testing types,
including unit testing, integration testing, functional testing, and performance testing.
Based on this, it would be reasonable to assert that the automation of a test that fits into
one of those categories would be an automated test by the same name (i.e., a
compatibility test that is automated would fall into the 'automated compatibility test'
category). While this assertion would not be incorrect, we suggest that there is another
way to segment automated tests in a manner that more closely reflects how automated
testing is implicitly segmented within the world of testing. Such segmentation produces
the following types:
Unit Test Automation
Applications typically have multiple units that need to be tested at a single time, so unit
tests are often grouped into suites that can be executed with a single command or
trigger. A test harness – a framework that calls and executes a unit of application
source code outside of the environment or calling context for which it was originally
created – is often used to accumulate tests into test suites so that they can be easily
executed as a batch.
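For illustration, here is a minimal sketch using Python's built-in unittest module (the add function is a hypothetical unit of application code, not an example from the guidebook), with two unit tests accumulated into a suite and executed as a batch by a single command:

import unittest

def add(a, b):
    # Hypothetical unit of application source code under test.
    return a + b

class AddTests(unittest.TestCase):
    def test_add_positive(self):
        self.assertEqual(add(2, 3), 5)    # assertion dynamically flags a failure

    def test_add_negative(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    # The harness accumulates both tests into a suite and runs them as a batch.
    suite = unittest.TestLoader().loadTestsFromTestCase(AddTests)
    unittest.TextTestRunner().run(suite)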
Operating System Layer – Layer in which the operating system functions. Issues
related to disk space, CPU usage, memory, disk I/O, etc., may result in poor
application performance.
Network Layer – Layer in which network communication occurs. Low network
bandwidth may be one cause of poor application performance.
Performance test automation typically starts out measuring response times across the
collection of all layers. Once a bottleneck is identified, layers are stripped out in order to
identify the source of the bottleneck. In addition, the performance test engineer may
employ various monitors at the various levels to help identify specific problem areas.
Performance test automation is often considered synonymous with load test
automation (a.k.a. volume testing) and stress test automation, but there are subtle
differences. Load test automation gradually increases the load on an application up
to the application's stated or implied maximum load to ensure the system performs at
acceptable levels; in this respect, load testing is often a part of the larger performance
test strategy. Stress test automation takes this a step further by determining what load
will actually "break" the application. This is accomplished by gradually increasing the
load on the application beyond the stated or implied maximum load to the point at which
the system no longer functions effectively.
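To make the ramp-up idea concrete, the following minimal Python sketch (the URL, step sizes, and plain HTTP requests are illustrative assumptions, not a prescribed tool or protocol) gradually increases the number of concurrent virtual users and records response times at each step:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://example.com/"  # hypothetical application under test

def timed_request(_):
    start = time.time()
    urlopen(URL).read()          # one simulated user action
    return time.time() - start   # response time in seconds

# Ramp the load up step by step, as a load test would; a stress test would
# keep increasing the steps until the application breaks.
for users in (1, 5, 10, 20):     # illustrative load steps
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_request, range(users)))
    print(f"{users} virtual users: avg response {sum(times) / len(times):.3f}s")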
[13] W3C. Accessible at http://www.w3.org/TR/ws-arch/#whatis.
A key skill in test automation includes understanding, using, and supporting the test
automation tools that support all aspects of the testing lifecycle. It is expected that an
automated test professional have a basic understanding of that wide range of tools,
especially tools that automate processes. Therefore, one must be able to assess tools
for their appropriateness, and know them well enough to implement and customize
them, develop and generate reports, manipulate files, monitor the system, and prepare
data.
Table 3-1 lists some of the basic types of tools that support the testing lifecycle,
along with the lifecycle phase(s) in which each tool primarily operates.
Business / System Modeling Tool Features:

Diagram Support – Provides templates and symbols for a vast collection of diagram types (e.g., block diagram, organizational chart, class diagrams, flow charts, state diagrams, use cases, physical data models, etc.).
Integration – Interfaces and integrates with other SDLC tools (e.g., SCM tools, modeling tools, test tools, etc.).
Model-driven Development – Supports the capture, development, and use of patterns to support automatic code generation.
Reporting – Can generate reports relative to developed diagrams. Supports creation of customized reports.
Security – Allows set up and control of access rights to the modeling tool itself, projects, and artifacts within the tool.
Support for Various Modeling Languages – Supports development of models using standard rules and syntax defined within a specific modeling language (e.g., UML).
Traceability – Establishes links between related artifacts. Allows the user to link diagrams with actual code components.
Requirements Management Tool Features:

Integration – Interfaces and integrates with other SDLC tools (e.g., SCM tools, modeling tools, test tools, etc.).
Import/Export – Supports import of requirements from an external source; exports requirements to source documentation.
Source Documentation Management – Can store and/or link source documentation to one or more requirements stored in the tool.
Customizability – Can add fields and components to the tool that make sense for the current organization or project.
History – Maintains requirements change history.
Reporting – Can generate reports and documentation relative to the existing requirements, requirements prioritization, requirements history, or relation to outside entities such as test scripts.
[14] http://junit.sourceforge.net/

[15] http://en.wikipedia.org/wiki/Unit_test
Automated unit testing may be performed without a framework by simply writing code
within the same development tool that is used to develop the code that needs to be
tested. This test code calls and tests the code units of system and uses assertion and
exceptions to dynamically identify failures. Unit test frameworks allow unit testers to
perform the same activities but offer more advanced features that facilitate the creation,
execution, and results reporting of unit tests.
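For instance, a frameworkless test of a single unit might look like the following minimal Python sketch (multiply is a hypothetical unit under test), in which a bare assertion raises an exception to flag a failure:

def multiply(a, b):
    # Hypothetical unit of application source code under test.
    return a * b

# Frameworkless test: a plain assertion raises an exception on failure.
assert multiply(3, 4) == 12, "multiply(3, 4) should equal 12"
print("multiply test passed")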
Below is a list of some fundamental Unit Testing tool features.
Table 3-5: Unit Testing Framework Features
Test Management Tool Features:

Manual Test Maintenance – Supports authoring manual test cases and procedures within the tool.
Automated Test Maintenance – Supports integration with and storage of automated tests.
Distributed Test Environment – Allows setup and administering of test assignments to remote machines.
Test Control – Supports execution of both manual and automated tests from the test management tool and stores the results. Tests may be executed at the current time or scheduled to run at a later time.
Reporting – Supports development of customized reports that allow for the extraction of information relative to the tests and/or results of test executions.
Import/Export – Supports import of tests from, and export of tests to, source documentation.
Test Organization – Allows the test engineer to arrange tests in a folder structure within the tool, similar to how the tests may be arranged within an operating system file system.
Traceability – Allows establishment of links and dependencies between related entities (e.g., requirements, defects, etc.).
Automatic Email Notification – Allows automatic email of test results to identified application team members.
Integration – Interfaces and integrates with other SDLC tools (e.g., SCM tools, modeling tools, test tools, etc.).
Defect Management – The ability to report and manage defects.
Data Generation Tool Features:

Multiple Formats – Offers the ability to generate data in multiple formats (e.g., flat files).
Data Repository – Supports storage of generated data in a specified database.
Real Data Seeding – Offers the ability to generate data based on real application sample data.
Defect Tracking Tool Features:

Automatic Email Notification – Allows automatic email of test results to identified application team members based on specified defect report modifications (e.g., status changes).
Data Storage – Supports storing multiple pieces of information about the defect, such as time/date of creation, severity, priority, status, summary of problem, problem details, attachments, ticket creator, name or ID of the developer assigned to fix the issue, and estimated time to fix the problem.
Integration – Interfaces and integrates with other SDLC tools (e.g., SCM tools, modeling tools, test tools, etc.).
Process Management Support – Supports enforcing constraints established in the defect tracking process defined by an organization via tool customization. This includes the ability to control who can make changes and the types of changes they can make, as well as the ability to define and govern transitions related to the state currently held by the defect.
Reporting – Provides the ability to generate reports relative to defect statuses and trends.
Security – Allows set up and control of access rights to the tool itself, projects, and artifacts within the tool.
Traceability – Allows establishment of links and dependencies between related entities (e.g., requirements, defects, etc.).
Code Coverage Analyzer Features:

Flexible Coverage Measurements – The ability to measure code coverage in various ways, including: Statement Coverage, Condition Coverage, Path Coverage, Decision Coverage, Module Coverage, Class Coverage, Method Coverage, etc.
Reporting – Provides the ability to generate reports relative to code coverage.
Functional System Test Tool Features:

Customization – Provides easy customization of the tool's features.
Cross Platform Support – Allows the tool to function across different platforms or browsers.
Test Language – Supports the ability to script tests in a standard programming language.
Record & Playback – Supports the ability to dynamically capture actions that are performed manually on an application, and to replicate those actions in the form of code that can be replayed on the application.
Test Control – Allows the user to manage when and by what trigger automated test(s) are run and the results stored.
Distributed Test Execution – Allows setup and administering of test assignments to remote machines.
Test Suite Recovery – Indicates the ability of the tool to recover from unexpected errors.
Integration – Interfaces and integrates with other SDLC tools (e.g., SCM tools, modeling tools, test tools, etc.).
Reporting – Provides the ability to generate reports relative to defect statuses and trends.
Vendor Support – Indicates the availability of technical support from the tool's vendor.
Licensing – The vendor provides different licensing solutions that meet the needs of the customer.
Performance Test Automation Tool Features:

Customization – Provides easy customization of the tool's features.
Dynamic Virtual User Adjustments – Indicates the ability to dynamically adjust the number of virtual users accessing some component or the entire system at any given time.
Cross Platform Support – Allows the tool to function across different platforms or browsers.
Standard Test Protocols – Supports the ability to script tests in one or more standard languages.
Record & Playback – Supports the ability to dynamically capture actions that are performed manually on an application, and to replicate those actions in the form of code that can be replayed on the application.
Test Control – Allows the user to manage when and by what trigger automated test(s) are run and the results stored.
Configurable Think Times – Designates the amount of time each virtual user "waits" before performing the next action.
Distributed Test Execution – Allows setup and administering of test assignments to remote machines.
Session Handling – Tracks performance of session handling techniques such as cookies and URL rewriting.
Test Suite Recovery – Indicates the ability of the tool to recover from unexpected errors.
Integration – Interfaces and integrates with other SDLC tools (e.g., SCM tools, modeling tools, test tools, etc.).
Reporting – Provides the ability to generate reports relative to defect statuses and trends.
Vendor Support – Indicates the availability of technical support from the tool's vendor.
Licensing – The vendor provides different licensing solutions that meet the needs of the customer.
[16] http://opensource.org/docs/osd
Figure 4-1: Automation Cost[17]
[The figure plots Total Automation Cost against Framework Definition (AF): Development Cost rises and Maintenance Cost falls as the framework becomes more defined, with an equilibrium cost point (Ce).]
This skill category addresses the three framework levels, the different types of
frameworks associated with each level, and how your choice of framework may be
affected by the scope that is defined for automation in a given organization. The level of
expertise required for implementing an automation framework may vary depending
upon whether the framework is built in-house or acquired (commercial or open source),
but the same basic concepts and understanding remain important.
[17] Adapted from a diagram in Fewster, Mark, and Dorothy Graham's Software Test Automation: Effective Use of Test Execution Tools, p. 68.
AF = AN + VN + BN + TN + CN + EN + Ti + P + R
Do not assume the factors are numbers to be plugged in; this equation illustrates the
relationship of the framework choice to automation scope. The greater the scope items
– the number of applications, versions, tests to be automated, configurations, environments,
the number and nature of builds, the time period over which automated tests will need
to be supported, the organizational maturity, and the test team's technical level – the more
precisely defined the framework may need to be. It is up to the test team to analyze this
information and then determine the type of architecture needed in order to minimize
automation costs, a choice that should be supported by addressing many if not most of
these factors.
Let us take a moment and examine each of the factors.
Number of applications under test (AUT) (AN)
A software project may consist of more than one AUT. When responsible for automating
tests for several applications, it is important to take into consideration the characteristics
– functionality, environment, users, etc. – that the applications have in common and
how they are related, and design the automated test framework accordingly.
Number of AUT releases and versions (VN)
Automated testing is often introduced into the testing life cycle to verify application
functionality across multiple application versions. It is therefore important to make the
automated tests scalable, flexible, and modular enough to minimize the impact of
changes in application functionality.
Number and nature of AUT builds (BN)
Just as it is important to make the automated test scalable, flexible, and modular
enough to withstand code and environment changes introduced in new application
versions, it is important to make sure the tests can also withstand changes introduced
by multiple builds within a single version.
Number of tests to automate (TN)
The larger the number of tests that will be automated and maintained, the more robust
and flexible the automated test framework has to be. The framework may need to be
flexible enough to quickly group and execute tests by functional area or priority. In
addition, the framework will need to be robust enough to prevent a failure in one test
from negatively impacting the entire suite of tests being executed.
Number of application test configurations (CN)
Theoretically, if an application is required to be compatible with several different
browser/operating system configurations, it should be the same on all of those
browser/operating system configurations. In reality, however, there may be some subtle
Username Password
John johnPassword
Lee leePassword
Mattie mattiePassword
Page – Navigation Path(s)
Main – Main
FAQ – Main > FAQ; Main > Account Services > FAQ; Main > Orders > FAQ
Account Services – Main > Account Services
Orders 1 – Main > Orders > Orders 1
Orders 2 – Main > Orders > Orders 2
This example reveals two navigation paths that are used multiple times: Main > Account Services and Main > Orders.
These paths are fairly basic and could very easily have been identified at the beginning
of the automation process. Building a similar table and using it to point out the
most redundant navigation paths identifies the basic navigation functions that need
to be created for the environment (see the sketch following this list).
Error-handling functions – Functions created to inform the test script (and test
engineer) how to respond to certain unexpected events that may occur during
testing.
Miscellaneous functions – Functions that don't fall under any of the other
categories.
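Picking up the navigation-path example above, a minimal Python sketch (the click helper and page names are hypothetical stand-ins for an automation tool's own actions) shows how each redundant path becomes one reusable navigation function:

def click(link_name):
    # Hypothetical stand-in for the automation tool's click action.
    print(f"clicking '{link_name}'")

def go_to_account_services():
    # Reusable function for the redundant path Main > Account Services.
    click("Main")
    click("Account Services")

def go_to_orders():
    # Reusable function for the redundant path Main > Orders.
    click("Main")
    click("Orders")

# Tests reuse the navigation functions instead of repeating raw steps.
go_to_orders()
click("Orders 1")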
The Functional Decomposition framework can vary from relatively low to high
complexity based on the level at which functions are created. Functions may be created
for simple tasks such as menu clicks, or may be created for complex functional
activities, complex error handling routines, and complex reporting mechanisms. (See
section 6.2.1 for more information.)
change can make multiple scripts susceptible to failures), it also helps make
maintenance a little less complex. With Functional Decomposition, maintenance
is often required for both the framework and specific scripts. While this may
reduce the amount of maintenance required, it also makes maintenance a little
more complex.
the team to support the use of the framework must be increased. While some
standards are automatically imposed with this type of framework, many
standards are not – standards such as how and when certain keywords are
created and used, and how objects are named and identified. This requires
that the test team commit to ensuring that its members (as well as the
development team) are aware of standards and their sources, understand them,
and are able to effectively implement them. Increased documentation will
probably be required to identify framework features, particularly documentation
that chronicles the keywords available for use as part of the framework.
Increased management support – Management support is a challenge for any
automation effort but it is particularly difficult with keyword-driven frameworks.
For this framework, committed management support is imperative to assure that
the time and staffing necessary for creating and maintaining the structures,
documentation, and personnel (both technical and non-technical) are available.
Restrictive for technical staff – For technically adept test team members who are
tasked with day-to-day automation of a software application, keyword-driven
frameworks may be overly restrictive. They may be perceived as an entity that
"ties their hands" into automating in a "standard" way at the expense of
automating in the most efficient way for a particular application, or a particular
feature within an application. Keyword-driven frameworks typically require
increased "public relations" work to sell the approach to both stakeholders and
management.
The process of designing an automated test framework is not an exact science, and it is
therefore difficult to pin down in a way on which most industry experts would agree. The
most important thing, however, at this point in automation history, is to ensure that a
well-thought-out approach based on common, successfully implemented industry practices is
used and honed within a given organization. This skill category identifies an approach
from a high enough level that it fits where the IT industry currently stands relative to
test automation, while remaining low-level enough to be useful for implementation.
This approach involves the following steps:
1. Select a Framework Type
2. Identify Framework Components
3. Identify Framework Directory Structure
4. Develop Implementation Standards
5. Develop Automated Tests (refer to Skill Category 6: Automated Test Script
Concepts)
Prior to the actual design of the automated test framework, it is a good practice to
develop a Test Automation Implementation Plan. This plan will provide guidance for the
creation of the framework (See Appendix F: Test Automation Implementation Plan
Template for an outline of what goes into an implementation plan).
Upon making a determination about the framework that will be used, identifying the
specific components that will compose that framework is the next logical step. Figure
TestScriptName Tier
SystemLogin 1
PlaceAnOrder 1
ViewOrderStatus 3
CancelOrder 3
UpdateOrder 2
A parameterized driver script that reads the Execution Level File described in Figure 5-2
may be structured as illustrated in Figure 5-3.
TestRunLevel = 1
Open Execution Level File
For each Data File row, to end of file (EOF)
    If <Tier> == TestRunLevel Then
        Set Initialization and Configuration
        Call <TestScriptName>
        Evaluate Pass/Fail status of <TestScriptName>
        Implement Error Handling Routine as necessary
    End If
Next Data File row
Call Reporting Utility to generate logs and reports
At the top of the script, a variable by the name of TestRunLevel may be used to set the
desired execution level for the given test run. The driver script then opens the Execution
Level File, reads a row and gets values for the <Tier> and <TestScriptName>
parameters. All scripts that have a Tier (priority) equal to that set at the top of the driver
script in the TestRunLevel variable will be called and executed. All other scripts will
remain unexecuted.
Note: Many organizations use a test management tool to execute test sequences in lieu
of a driver script.
At the beginning of a test run, simply specifying the desired environment variable sets
all of the necessary parameters, which greatly simplifies automated test execution. This
environment variable may be declared and set in the driver script.
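To make this concrete, here is a minimal sketch of such a driver in Python (the CSV file name and format, the run_test stand-in, and the environment handling are illustrative assumptions, not the guidebook's prescribed implementation):

import csv

TEST_RUN_LEVEL = "1"       # desired execution tier for this run
TEST_ENVIRONMENT = "QA"    # environment variable consumed by configuration logic

def run_test(script_name, environment):
    # Hypothetical stand-in for initializing, configuring, and calling a test script.
    print(f"Running {script_name} in {environment}")
    return True  # pretend the script passed

results = {}
with open("execution_level_file.csv", newline="") as f:
    # Assumes columns: TestScriptName, Tier (as in Figure 5-2).
    for row in csv.DictReader(f):
        if row["Tier"] == TEST_RUN_LEVEL:  # only scripts at the selected tier run
            results[row["TestScriptName"]] = run_test(row["TestScriptName"], TEST_ENVIRONMENT)

# Reporting-utility stand-in: summarize pass/fail status per script.
for name, passed in results.items():
    print(f"{name}: {'Pass' if passed else 'Fail'}")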
[Figure 5-7: Automated Framework Directory Structure Example – a directory tree rooted at the Framework Root Directory]
Figure 5-7 illustrates how components may be arranged in the framework. This diagram
shows how the components are physically stored while Figure 5-1: Generic Framework
Component Diagram reveals how the framework components may interact with one
another.
Standards should be created by the automated test team to govern the implementation
of an automated test framework in order to help ensure the success of that framework.
The framework structure is itself a standard, as are the defined component interactions,
but additional supporting standards must still be defined, communicated, respected, and
followed. Without standards, the automated test development will undoubtedly be badly
fragmented, difficult to manage, and unreliable in its effectiveness. Each test will
potentially have different conventions, which means that each test will need to be
maintained differently. Without supporting standards to function as the glue that keeps
the framework together, the framework structure will not be properly enforced, and will
ultimately collapse.
Some of the standards that may need to be considered include:
[18] Tiemann, Michael (2006). An objective definition of open standards. Computer Standards and Interfaces (20): 495–507. Also OMB Circular A-119 (1998) and NTTAA.

[19] It is beyond the scope of this Manual to create a comprehensive discussion of application and system compliance standards; that is its own course of study. But standards are a form of requirements that the test plan must address as diligently as functional, performance, and non-functional requirements. See http://www.whitehouse.gov/omb/assets/memoranda_2010/m10-15.pdf, http://csrc.nist.gov/groups/SMA/fisma/compliance.html, and http://csrc.nist.gov/drivers/documents/FISMA-final.pdf for some examples.
Primary role involved: Test Engineer, Lead Automation Architect, Cross Coverage
Coordinator, Automation Engineer
Primary skills needed: Selecting automation candidates, understanding automated
script elements, constructing an automated test
Selecting tests for test automation involves determining which manual tests should be
automated, and when those tests should be automated.
A more detailed explanation of what should be automated is directly tied to the goals of
the organization. The organizational goals fit into the same categories discussed in
Section 1.3 Automation Return-on-Investment (ROI): risks, costs, and efficiency.
Reducing risks often involves increasing coverage. Several common areas for
automation include:
Automating some subset of regression tests. This will usually free manual testers
to test other parts of the project or even other projects.
Criteria
The test is executed multiple times.
The test is executed on multiple machines, configurations, and/or environments.
The test is not feasible or is extremely tedious to perform manually.
The test is not negatively impacted by no longer being executed manually (does
not require human analysis during run-time).
The test is able to be executed using a consistent procedure.
The test covers relatively stable application functionality.
The test covers a portion of the application that has been deemed automatable.
The estimated ROI from automating the test is positive and within a desirable
range.
The test inputs and outputs are predictable.
The test is non-distributed or the tool is able to handle distributed testing.
This checklist in Figure 6-1 does not present an all-or-nothing proposition, so not all
items need to be checked for test automation to commence. Instead, this checklist is
meant to guide the decision to automate. If one or more of the items in the Automation
Criteria Checklist is checked, then this signals that the test may be a good candidate for
automation.
Automated test design and development are typically most effective when approached
methodically. One way to begin the design process is by developing an algorithm, then
using quality attributes to help determine the level of detail and structure the test should
have. Next, the algorithm can be translated into actual automated test syntax (based on
the automated test interface used), following the team's development standards and
the framework's defined automation modes.
Tests in functional decomposition frameworks are still code-based but as the framework
becomes more defined, the tests become slightly less technical. This is because the
tests are largely created by stacking reusable components.
1 Login(“John”, “Jpass”)
2 Verify_Screen(“Welcome”)
Figure 6-3 reveals how the statements in Figure 6-2 might be written in a functional
decomposition framework. Statements 1 through 3 in Figure 6-2 have been
parameterized and placed in a function called "Login," while steps 4 through 8 have
been parameterized and placed in a function called "Verify_Screen." The functional
decomposition framework test is thus designed to exist in the format shown in Figure 6-3.
Level 3 frameworks typically have tests designed in a less technical format, such as
a table. For example, the keyword equivalent of the statements illustrated in Figure 6-2
might appear as illustrated in Figure 6-4.
ending state, respectively, of the AUT and automation framework at runtime. This helps
to ensure that the test is able to successfully run through multiple iterations, and that –
when run within a batch – one test doesn‘t adversely affect the execution of subsequent
tests. Initialization scripts and parameters (described in Section 5.2.2) typically bring the
environment or overall test run to a controlled, stable point. Initialization conditions at the
script level are normally more specific to the test, such as focusing on initializing
test-specific data.
Cleanup steps are responsible for activities that bring the AUT and framework back to
an initialized state to ensure subsequent tests are not adversely affected and to ensure
the AUT is back to a required state. Cleanup may perform activities such as closing the
AUT and disposing of variables and objects used by the test.
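For example, a minimal sketch using Python's unittest fixtures (FakeApp is a hypothetical stand-in for a driver that controls the AUT) shows initialization and cleanup mapped onto setUp and tearDown:

import unittest

class FakeApp:
    # Hypothetical stand-in for a driver controlling the AUT.
    def login(self, user, password): print(f"login as {user}")
    def logout(self): print("logout")
    def close(self): print("close AUT")

class OrderTests(unittest.TestCase):
    def setUp(self):
        # Initialization: bring the AUT to a known, stable starting state.
        self.app = FakeApp()
        self.app.login("John", "Jpass")

    def test_place_order(self):
        self.assertTrue(True)  # test body runs against the initialized state

    def tearDown(self):
        # Cleanup: return the AUT to an initialized state so subsequent tests
        # are not adversely affected, and dispose of objects used by the test.
        self.app.logout()
        self.app.close()
        self.app = None

if __name__ == "__main__":
    unittest.main()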
Equality Assertions
An equality assertion checks to ensure that some expected result matches an
associated actual result. If the expected result does not match the actual result an error
message is generated.
The structure of an equality assertion may be as follows:
If (expected == actual) Then
    Generate 'Pass' message
Else
    Generate 'Fail' message
End If
This may, for example, be used to check that a data element that actually exists in a
text field in the application matches what is expected to be in the text field.
Inequality Assertions
An inequality assertion checks to ensure that some expected result does not match an
associated actual result. If the expected result does match the actual result an error
message is generated.
The structure of an inequality assertion may be as follows:

If (expected Not = actual) Then
    Generate 'Pass' message
Else
    Generate 'Fail' message
End If
This may, for example, be used to verify that an updated AUT data element does not
still maintain its old value.
Assertion Functions
Since the basic structure of an assertion is reused, it is often useful to place the
assertion in a reusable function. For example, a condition assertion function may
appear as follows:
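Function Assert_Condition(condition)
    If (condition == True) Then
        Generate 'Pass' message
    Else
        Generate 'Fail' message
    End If
End Function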
Invoke_Application(“C:/testapp.exe”)
Mouse_click(33,25,Left)
Type(“John”)
Mouse_click(45,23,Left)
Keyboard_input(<ENTER>)
Bitmap_check(expImage,actualImage)
It also performs much of its verification via bitmap image comparisons. This mode,
which does not take into account objects and their properties (refer to 8.6 for more
information on objects), is normally more volatile than the Context Sensitive approach,
due to the fact that a slight change in location or screen size may cause an analog test
to fail.
Image-based
Image-based automation, often based on Virtual Network Computing (VNC), relies on
image recognition as opposed to object recognition (Context Sensitive) or coordinate
recognition (Content Sensitive). VNC is a platform-independent graphical desktop
sharing client-server system that uses the RFB protocol to allow a computer to be
remotely controlled by another computer. The controlling computer is the client or
viewer, while the computer being controlled is the server, and the server transmits
images to the client.
Automated test tools that use an image-based automation approach typically rely on
VNC and thus follow the two-computer system. The automated tool resides on the client
machine and functions as the VNC client. The server machine on which the AUT is
installed will run a VNC server that communicates with and transmits images to the
VNC client. These tools, therefore, recognize application objects based on analysis of
the transmitted images.
This mode is more closely related to context sensitive automation than content
sensitive, because it is not completely coordinate-based. These images may still be
located if they are moved to a different screen position.
Figure 6-5: VNC Illustration
Once an automated test has been developed, the next step is to organize and prioritize
it within the framework (see Skill Category 5: Automated Test Framework Design). Then
based on the test plan, execute the tests against the AUT and analyze the results that
are reported from the script runs (see Skill Category 12: Automated Test Reporting &
Analysis).
20 RealVNC. Available at http://www.realvnc.com/vnc/how.html
Dividing the load among several machines is a useful approach for reducing the overall
execution time of the automated test suite. For example, Figure 6-7: Parallel Test
Execution illustrates 12 tests that have been divided among 4 machines and executed
in parallel. If each of these tests takes 5 minutes to run, the total serial execution on a
single machine would take
12 x 5 = 60 minutes
Parallel execution as illustrated in the figure divides this 60 minutes across four
machines (3 tests on each machine), resulting in a total elapsed execution time across
all machines of
3 x 5 = 15 minutes
Figure 6-7: Parallel Test Execution illustrates how parallel test execution might be
accomplished. In this illustration, the Controller is responsible for orchestrating the test
execution on all machines. The Execution Machines have the automated test
tool/framework installed on them and are used by the Controller to execute a select list of
Automated Tests.
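In pseudocode, a Controller that implements this division of labor might resemble the following sketch (the machine names and statements are illustrative):

For each Machine in (Machine1, Machine2, Machine3, Machine4)
    Assign the next 3 tests from the test list to Machine
    Start test execution on Machine
Next
Wait until all Machines report completion
Collect and merge the results from each Machine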
Three mechanisms for accomplishing this type of parallel test execution are:
Manually accessing multiple machines
Such execution may, for example, be set to trigger when new or updated code is checked into configuration management, or be scheduled to execute at least once a day. This frequent and automated process helps to identify bugs the moment they enter the system, and because the number of code changes introduced between runs is relatively small, test results analysis may be easier and more effective.
In automated testing, as with testing in general, there are a multitude of “best practices.”
With that said, a “best practice” may not necessarily be a best practice for all
organizations. All best practices need to be evaluated in the context of the environment
in which they are to be applied. The ultimate goal for test automation is to have a quality
set of automated scripts that meet the testing needs of the organization while
realistically working with the constraints inherent in that organization.
Quality Attributes in test automation are those characteristics deemed important for a
particular test automation effort. At first glance, one might be tempted to judge all of the
quality attributes as equally important, but that is as unhelpful as it is unrealistic. Such
an approach will result in a cost-intensive, time-intensive, and resource-intensive
automation effort that will not guarantee increased quality. It will, however, almost
guarantee a failed automation approach. It is necessary to assess the environment, and
make informed, thought-out decisions that result in a series of trade-offs that determine
which quality attributes receive greater focus, and which will receive lesser focus.
7.1.1 Maintainability
Maintainability represents the ease with which the automation framework or scripts can
be modified to correct errors, improve components, or adapt to changes in the AUT. It is
important to understand that maintainability is not just a property of the individual scripts; it is also a property of the automated test framework to which they belong and of the mechanisms that framework provides for accommodating change.
7.1.2 Portability
Porting is the process of adapting the automated test framework (or scripts) so that it
may be implemented in an environment that is different from the one for which it was
originally designed. This environment change may be a change in application
technology, servers, automation script programming language, automated test tools,
and the like. Portability is assessed by weighing the cost of porting against the cost of redevelopment.
7.1.3 Flexibility
Flexibility refers to the testing framework‘s ease of execution. Not all test builds contain
the same types of test protocols, test for the same types of conditions, or evaluate the
same environments or applications. Depending upon the nature of a build, the time
given to test it may vary from a few minutes to a few weeks or more. Due to time or
resource constraints, full regression may not always be an option. A flexible framework makes it easy to select and execute only the subset of tests appropriate for a given build.
7.1.4 Robustness
Robustness indicates the quality of an automated test with respect to its ability to
withstand system changes, both predictable and unpredictable, with minimal interruption
to execution and reporting mechanisms associated with automated test implementation.
The more robust the framework is, the greater the number and types of changes the
framework can effectively handle.
7.1.5 Scalability
A scalable framework – one that can support testing when the size and scope of the
AUT (or component) expands or decreases – allows an automation framework to
seamlessly handle varying degrees of work and/or readily add or subtract framework components.
7.1.6 Reliability
Reliability indicates the ability of the automated test framework to consistently perform
and maintain its intended functions in routine circumstances, as well as hostile or
unexpected circumstances. Just as with the AUT, automated test frameworks are
software products and thus, not immune to defects. A reliable framework has a minimal
number of defects that negatively impact the ability to dependably verify the application
functionality. In addition, a reliable framework offers a high degree of results integrity
over repeated test executions.
7.1.7 Usability
Usability signifies the ease with which test engineers can employ the automated test
framework for their intended purposes. If the framework has a well-defined separation of
roles, usability judges the ease with which each role is able to employ the tasks for
which the role is responsible.
7.1.8 Performance
Just as software performance is treated as a non-functional requirement for
applications, performance standards and requirements of automated tests should also
be carefully considered. Certainly, one of the advantages of automating tests is that the
execution is typically faster than manual execution. If the performance of automated
tests is compromised by such factors as insufficient environment (e.g., network
performance, optimized test scripts, inefficient test protocols, errors in configuration
management or version control), the time saved through test automation may be
drastically reduced.
Table 7-9: Framework Quality Attribute Rankings revisits each of the automation
framework types, and identifies the inherent strengths and weakness of each.
Table 7-9: Framework Quality Attribute Rankings
Keyword-Driven

Strengths:
─ Maintainability
─ Portability
─ Flexibility
─ Robustness
─ Scalability
─ Reliability
─ Usability when:
    ─ organizational processes such as documentation and communication are strong
    ─ automation framework development personnel (lead automation architects) are extremely technical

Weaknesses:
─ Usability when:
    ─ organizational processes are moderate at best
    ─ automation framework development personnel (lead automation architects) are not sufficiently technical
    ─ automated test script development personnel (automation engineers) are relatively technical
─ Performance
7.2.2 Data-driven
Data-driven frameworks tend to be largely based on the Linear framework. Therefore,
they have similar strengths and weaknesses. The main difference is that a modest level
of reuse is introduced by using parameters for script data being stored in an external
file. This distinction tends to not only make Maintainability a little stronger but also to improve the reusability of the scripts.
7.2.4 Keyword-driven
Most of the strengths and weaknesses associated with functional decomposition also apply to
the keyword-driven framework. The primary difference is that the strength of the
usability quality attribute is not hinged on the moderate technical proficiency of all
resources. The technical resources responsible for developing tests can have moderate
technical skills but the resources responsible for maintaining the framework typically
need much stronger technical skills. The keyword-driven framework tends to be more
usable, however, than the functional decomposition framework when there is significant
division of responsibilities and skill levels. When there are resources with strong
technical skills, and some resources with low to moderate technical skills, and all need
to operate in test automation, the keyword-driven framework tends to be more usable.
This is due to the fact that the highly technical resources build and maintain the
framework while the less technical resources implement the framework to create
application specific tests. Without such a division of labor, the keyword-driven
framework can tend toward being overkill, and not very usable.
7.2.5 Model-based
The model-based framework cannot really be discussed in the same terms as the other
framework types because its purpose and objective are completely unique. The scope
of the model-based framework is normally to explore the application, as opposed to just
automating existing manual tests. So when the scope of automation is to introduce
automated exploratory testing, and technical resources are strong, model-based is the most suitable choice.
One of the main challenges in selecting a framework is in identifying what the desired
quality attributes mean and indicate for test automation within a given organization.
Developing a methodical approach for associating automation framework quality
attributes to a particular automated test framework is important for successful
automation implementation within an organization. While Section 7.1 provides a
description of typical quality attributes, and some guidance for how to artfully make real-
time, optimal automation implementation choices in a constrained environment, this
section addresses the actual categorization of quality attributes from a high-level and
how it may be accomplished by reviewing the overall automation scope. In addition, the
use of the quality attributes for selecting a framework type is touched upon.
Figure 7-1: Quality Attribute Optimization Examples
21 Fewster, Mark, and Dorothy Graham. Software Test Automation: Effective Use of Test Execution Tools. Reading, MA: Addison-Wesley, 1999.
At this point in automation history, this process is much less about following a scientific
approach, and more about making educated, analytical decisions based on information
provided. Such analytical decision making may best be illustrated via a scenario.
In this environment, maintainability and portability are ranked high given the fact that
the project will go indefinitely, and tests will be executed on multiple configurations and
environments. Reliability is also going to be ranked high given the fact that the window
of test execution is small (each week for 2 months). Flexibility will be given a medium
ranking, because although 75 tests is not that small of a number, in this environment, it
shouldn‘t be too difficult to pick out tests on a whim that need to be executed in the
event of an abbreviated testing cycle being required. Usability and Performance will
be given a medium ranking, because the test number is moderate, and not expected to
expand too significantly but there is a need to be able to quickly execute and analyze
results given the tight execution window. Scalability will probably get a low ranking,
because there will be little need to regularly add components.
Based on this information, and what we know about framework strengths, a moderately
complex Functional Decomposition framework is recommended. Heavy focus should
be placed on developing strong components for maintenance, portability and reliability.
This section discusses in detail the low-level process knowledge required for
successfully implementing a test automation effort.
Whether using a tool with a scripting language, tree structure and/or keywords,
fundamental programming concepts remain paramount in effectively automating tests
and increasing the effectiveness of the test through increased system coverage and test
flexibility. Concepts such as variables, control flow statements (if..then..else, for..next,
etc.), and modularity are discussed in this category.
An algorithm is a set of rules or steps aimed at solving a problem, and it is at the heart
of computer programming and thus software test automation. Algorithms are the
blueprint for developing an effective automated solution so it is imperative that
automators have a basic understanding of how to create one. The understanding and
development of algorithms has itself been the subject of many books and classes, so it
will obviously not be covered ad nauseam in this section. There is a basic set of steps,
however, that can be useful for developing an effective algorithm for test automation
scripting:
1. Identify the problem – A good understanding of the overall purpose and goal of
the script including its inputs and desired outputs is an important first step.
2. Analyze the problem – The problem must next be logically deconstructed, so that
the solution may later be reconstructed in the form of an automated script. This
involves identifying relationships between the inputs and outputs, understanding
constraints of the target system, automated test framework and automated test
tool. This also involves understanding the differences between how the system is
accessed manually versus how it is accessed via the framework and test tool.
3. Create a high-level set of steps to accomplish the stated goals in the form of a
flowchart (See Figure 12-4: Automated Test Results Analysis for a sample
flowchart) or pseudocode (See Figure 8-1: Sample Pseudocode for Data-Driven
Invalid Login Test for sample pseudocode).
4. Walk through the algorithm using one or more real life scenarios (both negative
and positive), and add additional detail as necessary until the walkthrough
reaches a successful end.
Much of the work involved in developing an algorithm for automating a functional test is
accomplished during the development of manual test procedures. This provides a good
justification for having a well defined set of manual test procedures prior to automating
tests against an application.
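For example, pseudocode for a simple data-driven invalid login test (in the spirit of Figure 8-1) might resemble the following; the steps and names are illustrative:

For each (username, password) in the invalid login data file
    Launch the AUT
    Enter username and password
    Click the Login button
    If the “Invalid Login” error message is displayed Then
        Generate ‘Pass’ message
    Else
        Generate ‘Fail’ message
    End If
    Close the AUT
Next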
8.3 Variables
Short Answer
A variable is a container for storing and reusing information in scripts. Referencing a
variable by name provides the ability to access its value or even change its value.
Variables store information that may be changed dynamically while the program runs, or by the automator at design time.
Referencing Variables
nameCounter = 4
In this example, the number 4 is assigned to the variable called nameCounter.
Whenever the nameCounter variable is referenced by the script, the number 4 is
returned. Therefore, the variable may be used in the same way that the number 4 may
be used.
For example, the nameCounter variable may be used in a multiplication problem as
shown below:
nameCounter * 2
4 * 2
Class: Button
    Properties: Shape, Text
    Methods: Click

Object 1: R_Button
    Shape: Rectangle
    Text: R_Button

Object 2: O_Button
    Shape: Oval
    Text: O_Button

Collection: Buttons (contains Object 1 and Object 2)
Figure 8-2: Button Class, Objects, and Collection
The class in the figure is a Button class and has two properties that define the features
of all buttons: Shape and Text. The two objects (i.e., instances of the class) both have
shape and text properties but the values for shape and text are different for each
button object. One button object (Text: R_Button) is rectangular in shape; the other
button object (Text: O_Button) is oval in shape. Both button objects respond to the
method Click; this is inherited from the overarching class: Button. The two button
objects collectively make up what is called the Buttons collection. The collection allows
the automator to refer to each button based on its position in the group. Therefore,
Object 1 can be referred to in two ways:
Its collection location – Buttons Collection object 1
Its properties – The button that has a rectangular shape, and is labeled
R_Button.
Control Flow functions provide control over the order in which the code is executed.
They also determine whether or not specific blocks of code get executed at all. This is
critical for many functions, such as exception handling. Normally, a script executes
every line in sequence. Often, however, this sequential execution is not desirable.
Sometimes, certain code should only be executed under specific conditions and at other
times some lines of code should be executed multiple times.
The two main categories of constructs that provide the automator with the ability to
control how the code is executed are Branching Constructs (e.g., if-then, case-select)
and Looping Constructs (e.g., while, for).
If-Then Construct
If-Then statements use a Boolean condition to determine what code blocks to
perform. It is structured as follows:
If (condition) Then
<statements>
Else
<statements>
End If
The condition can be any statement that is evaluated as either true or false. For
example, the condition results of 4 > 5 would evaluate to false, because 4 is not greater
than 5. When the condition is true, the first set of statements (those following the Then
keyword) is executed. Otherwise, the next set of statements (those following the Else
keyword) is executed.
Keep in mind that the Else keyword is optional. If the Else is not included, then only
the If-Then statements are executed. When the condition is not true, execution ends
and the control continues to the next line of code after the construct.
The If-Then construct may be further expanded by introducing the ElseIf keyword.
The ElseIf keyword makes it possible to build a nested set of conditions to evaluate.
The structure of the If-Then statement with ElseIf included is as follows:
If (condition) Then
<statements>
ElseIf (condition) Then
<statements>
ElseIf (condition) Then
<statements>
...
Else
<statements>
End If
Only the statements following the first true condition are executed while all other
statements within the construct are skipped. The statements of the final Else will be
executed if none of the conditions are true.
Case statements are typically used in lieu of using an If-Then construct that has
numerous ElseIf branches.
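The exact syntax varies by tool and language, but a Case construct may be structured as follows:

Select Case (expression)
    Case value1
        <statements>
    Case value2
        <statements>
    Case Else
        <statements>
End Select

Only the statements associated with the first value that matches the expression are executed; the Case Else statements are executed when no value matches.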
Looping Constructs
The syntax of each looping construct depends on the tool and language used for test
automation. That said, each may be described in terms of its basic structure.
For Construct
A For loop construct has an explicit loop counter or loop variable and is typically used
when the number of iterations is known before entering the loop. The structure of the
For construct is as follows:
For i = 1 to 20
<statements>
Next
In this example, the variable i takes on the values 1, 2, 3…20, until the loop has been
executed 20 times. When i exceeds 20, the statements following the For loop are
executed.
While Construct
The While construct executes the loop based on a Boolean condition. It is often
structured as follows:
While (condition)
<statements>
Loop
The condition is a statement that is evaluated as true or false. When the condition is
true, the statements within the While construct continue to be executed. Otherwise, the
execution moves on to the statements following the looping construct.
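For example, a While construct may be used to synchronize an automated test with the AUT by polling for a screen to appear; the function names here are illustrative:

waitTime = 0
While (Screen_Exists(“Welcome”) == False And waitTime < 30)
    Wait(1)
    waitTime = waitTime + 1
Loop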
When creating automated scripts, there will often be activities that need to be executed
multiple times in varying locations. These activities may be application agnostic
activities such as calculating the difference between two numbers obtained from the
system at runtime. The activities may also be application specific activities, such as
entering information that will log the user into the application. If the programming
language or tool being used doesn‘t offer a function to accomplish the specific activity at
hand, it normally will provide the capability of creating a custom, user-defined function.
A function is a block of code within a larger script or program that executes a specific
task, but while it is part of the larger script it operates independently of that script. It is
executed not according to where it is located in the script, but rather based on where it
is “called” within a script, and it typically allows for arguments and return values.
Arguments are values that are entered into the function and the function has the liberty
of using and even altering the values. Return values are data that come out of the
function and may be used by the calling script.
Functions are often structured as follows:
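Function AdditionFunction(digit1, digit2)
    retValue = digit1 + digit2
    Return retValue
End Function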
AdditionFunction is the function name, while digit1 and digit2 are the
function arguments. The return value is represented by retValue. This function may
be used by the calling script in the following manner:
sumValue = AdditionFunction(3, 2)
The AdditionFunction will use the number 3 in the variable digit1 and the
number 2 in the variable digit2. It will calculate 2 + 3 and return 5 to the variable
sumValue.
Objects are a central element in automating applications. If the tool or script cannot
access objects, there‘s a good chance that the application cannot be effectively
automated. A lack of proper understanding of objects and object behavior may at times
result in automation issues and failures. This section builds on the foundation of key
terminology and concepts introduced in Section 8.4.
The object-oriented approach to test automation helps increase the reliability and
robustness of automated tests. It also increases the responsibility of the test automator.
Since most application objects have multiple properties that can be used to uniquely
identify an object on the screen, it is the responsibility of the automator to determine
which properties to use for test development. Using the example illustrated in Figure
8-2, the Shape property, the Text property, the Class property, the Collection property,
or some combination of all of these may be used to reference the specific button on the
screen. The key for test automators is analyzing the AUT and determining which
properties are necessary to uniquely identify a particular type of object. Using the wrong
set of properties (or too few properties) will not sufficiently identify a unique object while
using too many properties will make the automated tests too susceptible to application
changes thus decreasing the framework‘s robustness. Whenever the property values of
objects in the application change, the object properties used in the automated script for
identifying application objects must also change. If the object property values are not
consistent between the AUT and the automated script, failure will result when the script
attempts to identify and automate the object in the application. For each type of object
that may be automated, an automator must make an assessment of which properties to
use for uniquely identifying the object.
Some general guidelines for identifying object properties include the following:
Choose object properties that are descriptive. For example, if an object has a
property called ID that equals 5jf4f and a Name property that equals
AddressField, the Name property is a better choice.
Choose object property combinations that are likely to uniquely distinguish the
object from all other objects on the screen.
Choose object property combinations with properties that have as little dynamic
behavior as possible. It is not always possible to escape dynamic behavior and
there are ways to handle object properties with dynamic behavior (see section
9.4 Dynamic Object Behavior). But whenever possible, dynamic object properties
should be avoided.
Choose object property combinations with properties that are not likely to be
affected by cosmetic changes (e.g., object positioning, color, etc.), since cosmetic
changes are likely to occur frequently.
Communicate with development about object naming conventions.
Understanding how developers handle object properties (e.g., how they name
the objects, how often particular object properties are altered, etc.) will often
provide insight into how automators should handle object properties. In addition,
in an effort to make the application more automatable, developers may alter the
way they name objects, how they update object properties, or even how often
they update object properties.
An example of how to choose object properties for identifying objects within an
application may be discussed using the two objects from Figure 8-2. If these two objects
exist on the same screen, the following may hold true:
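Object 1 – Class: Button; Text: R_Button; Shape: Rectangle
Object 2 – Class: Button; Text: O_Button; Shape: Oval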
This property data may be used in an automated script in a statement that resembles
the following:
GetElement(“Class:=Button”, “Text:=O_Button”).Click
This statement gets an element on the screen that has a Class property equal to
Button, and a Text property equal to O_Button, then clicks the element.
An Object Map (also known as a GUI map) is a file that maintains a logical
representation and physical description of each application object that is referenced in
an automated script. Object maps allow for the separation of application object property
data (known as the physical description) from the automated script via a parameter
(known as the logical name). This approach is similar to the data-driven technique (see
Section 4.2.3.1) in that some implementations of the object map occur in a table format.
Object maps are unique, however, in that they deal with object properties that are used
by the script for identifying objects as opposed to data that is entered by the script into
an object.
Most commercial automated test tools have an object map feature to sustain and
configure object property information in a maintainable way. This object map may
maintain and display information in a plain text format as shown in Figure 9-1:
In each entry, the name (e.g., RectangleButton) is the logical name, while the properties within the braces make up the physical description.

RectangleButton
{
    Class: Button
    Text: “R_Button”
    Shape: “Rectangle”
}

OvalButton
{
    Class: Button
    Text: “O_Button”
    Shape: “Oval”
}
Figure 9-1 illustrates how the objects from Figure 8-2 may be displayed in an object
map. The advantage of an object map is that all of the object properties are stored in
one location. Thus, when an object is referenced in a script, instead of identifying the
physical description as in the following statement:
GetElement(“Class:=Button”, “Text:=O_Button”).Click
the logical name may be used instead:
GetElement(“OvalButton”).Click
This separation helps make the framework more robust because it is likely that this
object may be referenced in several locations within a single or several different
automated test scripts. If the physical description is included in every automated test
statement that references the same object and one of the object properties in the
application changes, every one of the statements would need to be altered to keep the
script from failing to recognize the object. By using the logical name, an object property
change would only prompt a single change to the object map. Once the object map is
updated, every statement that references the changed object will be able to successfully
reference the newly altered object.
Object Map Maintenance
Object maps add significant power to the test automation effort but only if the object
maps are properly maintained. Many of the tools that provide an object map feature also
provide the Record & Playback feature, and this normally adds objects to the object
map automatically during recording sessions. While this may be useful, it also increases
the risk of bloating the object map with duplicate objects and garbage information that
cancels out many of the advantages offered by the object map. Thus, if not properly
maintained, the object map may make the automation effort even more cumbersome
and costly. To preserve the benefits of the object map, its maintenance should be
included as a regular part of automation framework maintenance. Maintenance may
include the following:
Configure the object map object identification feature to use the minimum
number of properties for uniquely identifying objects in the application, and select
properties that are less likely to regularly change. This will be most useful during
Record & Playback sessions or during the use of other features that automatically
“learn” object properties into the object map.
During Record & Playback sessions, ensure duplicate and/or other unnecessary
objects are not being added to the object repository.
When a script fails, work to assess whether the failure is due to the ability to
successfully identify an object and whether the issue can be fixed by adjusting
the properties in the object map.
When the application removes objects, remove the corresponding objects from
the object map to reduce clutter.
Figure 9-2: Document Object Model
The pictorial view of the object model reveals that the Window object is at the top of the
hierarchy, which makes the browser Window the parent of the HTML Document. The
HTML Document is, in turn, the parent of the HTML Elements. Understanding this
hierarchy makes it possible to properly refer to and perform actions on an object in the
application. The hierarchy of a statement that performs an action (such as a method
that performs click) on an element may resemble the following:
Window.Document.Element.Method
Not shown in the illustration is the other information that comes along with the DOM.
The DOM also provides textual information about all of the properties, methods, and
collections that may be used on the Window, Document, and Element objects. This
information is useful for performing automated test actions or verification on various
objects in the application, because automating actions against an object often requires
setting the object‘s property values or executing the object‘s methods.
22 Access eLearning. Document Object Model. Available at http://www.accesselearning.net/mod10/10_07.php
But an automated test tool may instead represent the same action with the following
hierarchy:
Window.Element.Method
In some instances, however, the tool still provides the ability to script the application
based on the original application object model. This can be useful, given that the
automated test tool cannot always sufficiently automate some application objects.
Having the option of creating scripts based on the application‘s object model provides a
way of handling these objects. In addition, understanding object models can add
additional scripting powers beyond basic AUT scripting statements. Understanding
object models may allow automation of tedious process tasks including data extraction
and manipulation, file manipulation, report generation, and the like. In addition, powerful
scripts may be written to perform multiple activities with only a few statements.
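For example, a short script based on a browser object model might extract and log the text of every link in a document; the statement syntax here is illustrative:

For each Link in Window.Document.Links
    Log(Link.innerText)
Next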
Many AUTs have objects that are either created or modified at run-time. Therefore, the
properties of those objects are very rarely static.
Dynamic Links
For example, Figure 9-3 reveals a Welcome screen that may be displayed upon logging
into an application. The dynamic links are generated based on the profile associated
with the username of the person who logs into the application. The links Profile, Home,
and Messages are customized to reflect that user, in this case, John.
Assuming that these objects were static, the object map may log these items thus:
ProfileLink
{
Class: Link
innerText: “John Profile”
}
HomeLink
{
Class: Link
innerText: “John Home”
}
MessageLink
{
Class: Link
innerText: “John Messages”
}
The innerText property represents the text of the link shown on the screen and is the
primary property used for identifying the links. The values associated with the
innerText property in the object map for each of these links passes testing when the
logged in user is “John”. If “Sue” logs in, however, the automated tests will fail to
appropriately recognize the link objects, because the links in the AUT will now read,
“Sue Profile,” “Sue Home,” and “Sue Messages” while the object map still looks for
“John Profile,” “John Home,” and “John Messages.”
As previously noted, object properties are essential for automating tests to effectively
communicate with the AUT, so the ability to manage and test dynamic objects is also
essential. Two ways in which dynamic objects may be handled are:
Dynamically construct property values
Use regular expressions
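Dynamically Constructing Property Values

In the Welcome screen example, the dynamic portion of each link is the username, which may be captured in a variable at runtime: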
Username = “John”
The rest of the innerText value is static for each of the links. Therefore, the
innerText property value may be constructed to handle the dynamic nature of these
links in the following manner:
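profileInnerText = Username & “ Profile”
homeInnerText = Username & “ Home”
messageInnerText = Username & “ Messages”

The concatenation syntax (shown here with the & operator) will vary with the scripting language being used.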
These statements allow the object property values used by the automated tests to be as
dynamic as the actual property values in the AUT. Whether these property value
variables are applied to the object map or to the automated test statements will depend
on the automated test tool being used.
Regular Expressions
A regular expression is a pattern that may match many different literal strings. For example, the pattern .*at matches any text ending in “at”, such as “cat” or “format”.
Actual regular expression syntax varies depending on the tool and language used for
test automation. Most commonly, some implementations use the period (.) as a wildcard
to represent any single character while the asterisk (*) is used to represent 0 or more of
the character(s) it follows. Given this, the property value variables in the previous
section may be handled by regular expressions in the following manner:
profileInnerText = “.*Profile”
homeInnerText = “.*Home”
messageInnerText = “.*Messages”
Essentially, the regular expressions ignore the username in the links, and only look for
links that contain the terms “Profile”, “Home” and “Messages,” respectively. Since the
Welcome screen contains only one link each that has these words, the regular
expression statement should work fine.
As discussed earlier, the syntax for regular expressions largely depends on the
automation tool and language being used for test automation but there are some basics
that are fairly common to most tools and languages, including syntax for:
Alternation – deals with choosing between alternatives, similar to the way an Or
statement does in logic. The common syntax is to separate the alternatives with
a vertical bar (|). For example, pin|pen is syntax used to match either “pin” or
“pen”.
Grouping – addresses the assemblage of characters together to define the scope
and precedence of other regular expression operators. Grouping is commonly
accomplished using parentheses. For example, p(i|e)n is syntax used to
match either “pin” or “pen”.
Single Character Match – allows nearly any single character to be represented
using the period (.). For example, p.n is syntax used to match “pin”, “pen”, “pan”,
etc.
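These basics may also be combined. For example, the pattern .*(Profile|Home|Messages) uses both the single character match and alternation to match any of the three Welcome screen links for any user.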
Regardless of how well automated tests are designed and created, problems will occur.
Sometimes the problems are related to the scripts, and sometimes they are related to
the application. Regardless, the root cause is not always simple to find. The inability to
effectively debug scripts can severely delay schedules, and can even bring automation
to a screeching halt.
The most common types of errors, such as those discussed here, account for many of
the bugs or anomalies that appear during automated test execution.
Syntax errors typically occur when an automated test script statement is written
incorrectly. Usually, some element is missing, out of place, or deviating from the
grammar rules that define the language being used. When compiled languages are
used, syntax errors will prevent the successful compilation of the automated script.
When scripting languages are used, syntax errors will prevent the script from executing
any of the lines in the script and will typically result in some language- or tool-specific
syntax error being displayed.
Run-time errors are the result of some improper action taken by the script as opposed
to some improper grammar within the script. These errors will halt the script during
execution, at the point at which the error occurs. For example, an automated test may
have the following statement:
result = integerVar1 / integerVar2
If integerVar2 contains 0 when this statement executes, a divide-by-zero run-time error will halt the script.
Logic errors are the result of automator error. Typically they are not related to any
syntax or run-time problems (although logic errors may contaminate other data used by
the script and result in a run-time error elsewhere in the script). Instead, they simply
result in undesirable and often incorrect results. An example of a logic error may be
seen by examining the ‘=’ operator along with the ‘==’ operator. In many automation
languages, the ‘=’ operator is meant to assign a value to a variable, as in the following
statement:
result = 5
The ‘==’ operator, by contrast, is meant to compare two values, as in the following statement:
If result == 5 then
<PassScript>
End If
In the above statement, the result variable is evaluated to see if the number 5 is
stored in it. If the condition is true, then the script passes. Otherwise the script will fail. A
logic error would occur if the two operators were mixed up to produce the following
statement:
If result = 5 then
<PassScript>
End If
In many languages, this is deemed a perfectly valid statement that results in 5 always
being stored in the result variable. Therefore, even if some number other than 5
existed in the variable prior to the If statement, the script would still pass because the
number 5 gets stored in the variable within the If statement. So instead of getting an
appropriate script failure, the script improperly passes (known as a false positive).
These types of logic errors can be particularly dangerous due to the fact that they have
the appearance of everything being fine, when in actuality they are not. Excess logic
errors are the enemy of automation reliability.
Application errors are a result of the application functioning contrary to what is
expected. These errors are the reason that automated testing is being performed to
begin with. In some situations, application errors may result in run-time errors but in
other situations, there will be no indication of an application error unless a verification
point is specifically used to detect the error.
Figure 10-2 represents an automated test with a breakpoint at step number 7. This
breakpoint allows the simulation of an error because after entering the appropriate data
into the application and clicking the Next button, the script stops, and step number 7 is
not executed; so even though the Confirmation Screen appears in the application, the
screen verification is not performed. While the script is waiting, the automator can
simulate an error condition by manually causing another screen to display in the
application, or by closing the application altogether. Once the error condition has been
simulated, the next step is to restart the script at the point it was stopped, and ensure
the Verification Point fails.
After each automated script has been executed individually, performing the positive and
negative tests, the scripts should then be executed in batch mode as they are likely to
be executed for testing of an application build. Running the scripts in batch often
uncovers issues that aren‘t seen when running the tests individually. This is due to the
fact that one test may modify the state of the AUT or test environment in a way that
negatively impacts the tests that follow.
Given the scenario just identified, names like ‘Christopher Davis’ would cause a
problem. The problem is a direct result of the fact that ‘Christopher’ is 11 characters,
and upon executing the script, the application would drop the last character of the first
name. When the application attempts to match the name ‘Christophe Davis’ with a
name in the database, it will fail, preventing the Confirmation screen from appearing. If
the application is dropping characters, yet still has data in the database longer than the
10 characters, it may not be clear, initially, why the application fails to show the
Confirmation page. Then, if the test script tests varying data values, the error may not
be consistently reproducible, because some of the names may be less than 11
characters, resulting in the display of the Confirmation screen. And even when using a
name longer than 10 characters, the issue is not apparent, because the characters are
not dropped in the front end. If the information is reported to the application developers,
they may attempt to reproduce the error in their environment but the names they
attempt to use are less than 10 characters, which seems to be a perfectly valid test,
because nobody knows that the application is dropping characters. When developers
are unable to reproduce the error, a question arises about whether it is somehow due to
the application or due to the automated test scripts. Pinpointing the cause of the error is
additionally challenging because the error is observed at step 7 due to an error at step
6, based on data entered at step 4.
To truly understand the cause or at least the source of an error, the automator must be
skilled in localizing the error. Techniques for error localization include:
Backtracking
Error Simplification
Binary Search
Hypothesis
Backtracking involves starting from the point at which an error is observed and moving
backwards through the code until the cause of the error becomes more apparent.
Error Simplification extracts portions of complex statements or modular components in
an effort to simplify the statement and pinpoint where the error may be originating.
1 name = nameFunction()
2 address = addressFunction()
3 phone = phoneFunction()
4 inputData(name, address, phone)
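If an error is observed when the data is entered at statement 4, error simplification might temporarily replace the function calls with literal values to determine whether the failure originates in the data-gathering functions or in inputData itself (the literal values shown are illustrative):

1 name = “John Doe”
2 address = “123 Main St.”
3 phone = “555-0100”
4 inputData(name, address, phone)

If the simplified script succeeds, the error likely originates in one of the replaced functions.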
Synchronization
Synchronization errors occur when the automated script and the AUT get out of step with one another. A common remedy is to wait a specified amount of time for the AUT to achieve a specified state before moving to the next test step.
Data
Data errors occur when the test data environment is not as expected, resulting in
changes to expected results. For example, if a refresh of data from production occurs,
the state of data elements may have changed; for example, an insurance policy may have been
canceled or an employee may have been terminated. This may appear to be an
application error when it is not.
Error handling is the automated process of handling the occurrence of some condition
that causes the actual behavior of the application to deviate from the behavior that is
expected by the automated script. These unexpected occurrences may be errors, or
simply some functionality or event that the script was not initially designed to handle.
Whatever the cause, unexpected behavior can wreak havoc on automated test runs by
distorting data, disrupting batch executions, killing test reliability and rendering the
automated test run useless. Test implementation is largely a process of performing
actions, monitoring the system response, and moving forward accordingly. Error
handling provides automated tests with the ability to move forward based on real time
test responses. The effective handling of errors ensures errors are properly logged, and
that the remainder of the automated test run is salvaged if possible, completed in a
timely manner and maintains its integrity. An automated error handling process typically
involves the following activities:
1. Recovery Trigger
2. Error Recovery
3. Error Logging
4. Script Recovery
A Trigger is the error or event that causes the error handling routine to begin its work.
Triggers must be explicitly defined within the automated test framework in order
for the error handling routine to “know” when to begin handling unexpected behavior,
and there must be a way to “capture” the error trigger.
Error Recovery is the process of efficiently handling the error in the most appropriate
manner. For example, if an error negatively affects the application data, the error
recovery might clean up the application data.
Error Logging logs data in a log file, which aids in debugging and test analysis. The
logging data typically addresses the type of error and the recovery activity that resulted.
The error handling component may call a separate Reporting component in the framework to accomplish this.
Error handling is implemented in a variety of ways within automated test scripts, most of
which can be summarized in the following three categories:
Step Implementation
Component Implementation
Run Implementation
Each successive category of error handling provides an increased level of robustness.
Not only will this prevent the current test from successfully logging into the application
but it will also prevent subsequent tests from closing the application, and logging into
the application with correct data. The entire test run will be replete with failures (See
Figure 6-6 for an illustration of cascading errors), and will result in lost testing time and
wasted personnel. This error is very specific to the login process, and must be
anticipated, in order to effectively handle it. An error handling routine would need to be
employed to specifically handle this login error so that it wouldn‘t negatively impact the
remainder of the test run, and to log data that may be used for analyzing the test run
appropriately. An illustration of this implementation is provided in Figure 11-3 and Figure
11-4.
Error handling routines are critical for test automation, particularly for effectively
resolving runtime errors and other unexpected system events, because excessive
runtime errors are the enemy of test automation robustness and reliability. Figure 11-2
illustrates an error handling development process that may be used for implementing a
successful error handling approach. These steps include:
1. Diagnose Potential Errors
2. Define Error Capture Mechanism
3. Create Error Log Data
4. Create Error Handler Routine
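One common error capture mechanism is the try/catch construct, which in many languages may be structured as follows:

Try
    <automated test code>
Catch
    <exception handling code>
End Try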
In this construct, part or all of the automated test script would be included in the try
block of code represented by <automated test code>. In the event that an
exception occurs, the script captures the error then automatically shifts control to the
catch block of code represented by <exception handling code>.
This example has some error handling in steps 4 through 8. It very specifically looks for
the anticipated ―Login Error‖ dialog, and handles it by clicking the OK button, then
aborting the test. The advantage to handling this error is that time is not wasted by
continuing to execute a failed test; the report will be clean, because only the main error
will be logged, and the remaining tests will not be adversely affected.
Another approach to handling this error is to use a separate error handler.
Figure 11-4 reveals how a separate error handler component may be introduced. In this
illustration an error code is received from a verification point (line 4), then that code is
sent to an error handler, illustrated in Figure 11-5, in the event that the code is not a
―Pass‖. If a try/catch construct is used, the if-then statement would be unnecessary. The
error handler would simply be called from within the catch block.
Error recovery involves dealing directly with the error that occurs so that it may be
resolved in a satisfactory manner. Some common approaches to error recovery are:
AUT Cleanup – Resetting AUT initial conditions, or resetting AUT data
AUT Shutdown – Closing and/or restarting the AUT
Script recovery addresses how the automated test run is to proceed. Common
approaches for addressing script recovery are:
Re-execute Step – Re-execute the test step(s) that failed
Skip Step – Skip the failed test step(s) and move on to some other portion of the
test
Re-execute Test – Re-execute the entire test that contained the failed step(s)
Abort Test – Terminate the current failed test and move on to the next test
Abort Test Run – Terminate the entire test run
Test reports and logs are important for communicating the status of an automated test
effort as well as for analyzing and debugging of automated tests. Reports are essential
in the identification of issues and in determining whether these issues are related to
application defects or automation framework problems. In addition, automated testing
depends on test reports to substantiate claims about its effectiveness and value.
This category addresses the types of reports that are commonly generated and how to
effectively produce these reports, so that time can be saved in analysis and reporting.
The ability to show automated test development progress is critical to the success and
survival of an automated testing development effort. These metrics need to first convey
that automated test development tasks are being completed. If several weeks go by
with several man-hours being burned on test automation, stakeholders will reasonably
expect to see increases in the number of completed automated tests. Metrics will also
need to show the impact of test automation on the entire testing effort. The metrics
should be provided in a format that reflects how the data is grouped in the framework
(see section 6.3.1 for more information on test grouping). For example, Figure 12-1:
Automation Development Sample Metrics offers several metrics grouped by Functional
Area.
Figure 12-1: Automation Development Sample Metrics (the column headings include Functional Area; Total Number of Tests; Total Number Automatable; Number Automated; Number to Stay Manual; Number Remaining to be Automated; Automation Completion %; and Automatable Completion %)
This information generally summarizes the entire test run as opposed to providing
information on a test-by-test or step-by-step basis. This bigger picture is normally what
upper-level management is most concerned with. High-level reports should organize their
information in a way that reflects how the data was grouped (see section 6.3.1 for more
information on test grouping).
Low-level reports are sometimes used for relaying information to management but are
usually used for providing detailed information about the test run to automated test
engineers. The following information is often in low-level reports:
Date/Time
Associated Screen
Associated Object
Data Used
Desired Output
Actual Output
Pass/Fail Status
Result Description (detailed description of the passed or failed step)
The information found in low-level logs, unlike high-level reports, is not a summary of
the run. If the log is an ‘error log’ this information is logged for every single error
identified during the test run. If the log is a ‘run log’ then this information is logged for
every single step in the test run.
Many determinations must be made about automated test report production. The first
is a decision about the report format. Automation reports
may be dynamically written in many formats including:
Command prompt
Tool Specific files
Text files
Spreadsheet files
Word Processor files
XML
HTML
12.2.1 Analysis
Analysis is important, but can be time consuming and fairly frustrating for the automated
test team and its stakeholders if there is no identified process for how it is to be
conducted following a test run. A process for results analysis may resemble the one
illustrated in Figure 12-4: Automated Test Results Analysis.
This process begins with a preliminary review of the automated test reports and logs.
Prior to moving into detailed analysis of any potential failures that may be in the report,
it is important to address any immediate reporting needs that may exist. For example,
management may be particularly concerned with a specific area of the application, and
needs to know immediately if there is a definitive positive result for that area of the
application. In such a situation, a quick review of the results and logs will be followed by
an immediate response to management regarding their pressing concerns.
After the immediate reporting needs are addressed, if there are no failures in the run, a
complete final execution report is submitted to management. In the event that errors do
exist, it‘s time to begin going through and taking a closer look at each of the failures.
After picking an error, a determination must be made on whether it is an application or a
script error. This determination is based on the AUT and the level of detail provided in
the log.
For example, a sample log file may contain two lines that represent a single application
failure (note: a high-level report would probably contain only the second of the two lines,
while the low-level log provides both). The failure is an inability to log
into the application. The log‘s first entry reveals that the login failure is probably due to
the script‘s inability to locate the PasswordText textbox. This information in addition to a
quick look at the AUT is enough to make a preliminary determination of the cause of the
error. If, upon manually opening the application and visually inspecting the page for the
PasswordText object, the object exists, this is an indication that something went wrong
during the script execution. Either the object attributes have changed, or there was a
synchronization problem, or some run-time fluke has occurred. If the PasswordText
textbox is missing, then it‘s clear that an application error exists. Refer to Skill Category
10: Debugging Techniques for more information on script debugging and common
automation errors.
If an error is identified as a script error then the script should either be added to the
rerun list – the list of automated tests to re-execute after analysis is complete – or
flagged for post analysis debugging and manual verification. Conversely, if the error is
identified as an application error, steps should be taken to reproduce it manually.
Automation is often used as a scapegoat, so the first thing a software developer will ask
is whether it was done manually or with a script. Having already reproduced it manually
will save a lot of time and effort.
If the failure cannot be reproduced manually then an effort must be made to determine
why it is occurring in the script and not in the application. Eventually, however, if it can‘t
be reproduced, it will need to be flagged for later debugging so that the remaining failures
can be debugged.
For example, several automated scripts may check the username and password field
labels on a Login screen. Therefore, if the label for the password field was inadvertently
changed, many automated scripts will fail. This may be deemed low priority by
developers and not fixed immediately. Therefore, all subsequent executions of the
scripts will yield the same known error.
The continuous appearance of expected failures in the automation execution report
instills a numbness to these errors that eventually leads to these errors being
completely overlooked. Overlooking errors is not a good practice, because it can lead to
mistakenly overlooking new errors that require attention. Therefore, it‘s important to
establish an approach for handling known errors. Some approaches include:
Changing the failure to a warning
Appendices
This section contains sample evaluation criteria that may be used for evaluation of
functional automated testing tools.
A.1. Criteria
The following criteria may be used to assess functional automated test tools:
1. Ease of Use/Learning Curve – The intuitiveness of the tool or the extent to which
specialized or vendor-specific training is required before the tool can be fully utilized.
2. Ease of Customization – The ease with which certain features of the tool can be
customized.
3. Cross Platform Support – The ability of the tool to function across different
platforms/browsers.
4. Test Language Features – The level of support of test language features provided
by the tool.
5. Test Control Features – The measure of error-free control offered by the tool‘s test
control features.
6. Distributed Test Execution – The ability to execute tests across remote locations
from one central location.
7. Test Suite Recovery Logic – A measure of the ability of the tool to recover from
unexpected errors.
8. Tool Integration Capability – The ability of the tool to integrate with other tools.
9. Tool Reporting Capability – How the tool presents test results, considering the
detail, display, and customization of the reports.
10. Vendor Qualifications – The qualifications of the vendor in terms of financial stability,
continued/consistent growth patterns, market share, and longevity.
11. Vendor Support – The amount of technical support available. Also considers the
responsiveness of the vendor's customer service in answering and following up on
questions and problems.
12. Licensing – The extent to which the tool's licensing options meet the needs of the
client.
Each tool that is evaluated will be assessed against the above criteria using a weighted
ranking scale. The ranking is done on a scale from 1 to 5, where 1 indicates that the tool
possesses full and enhanced functionality or that significant benefit was observed in the
area, and 5 indicates that the tool does not provide the function or that there is no
perceived benefit from the functionality/feature.
Following is a chart that presents a generalized interpretation of the rankings for each
function.
Function: Ease of Use
1 – No training required
2 – Some in-house training is required
3 – Vendor-specific or instructor-led training required
4 – Requires continued self-study even after training has been performed
5 – Hard to use even after training

Function: Test Control Features
1 – High level of error-free control
2 – Sufficient level of error-free control
3 – Limited level of error-free control
4 – Extremely limited level of error-free control
5 – No error-free control

Function: Distributed Test Execution
1 – High level of distributed test execution
2 – Sufficient level of distributed test execution
3 – Limited level of distributed test execution
4 – Extremely limited level of distributed test execution
5 – No distributed test execution
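To illustrate how the weighted ranking scale might be applied, the following sketch
combines per-criterion ratings into a single score per tool, where lower is better. The
weights and ratings are illustrative assumptions only; the TABOK does not prescribe
specific weights:

    # Hypothetical sketch: combine per-criterion ratings (1 = best, 5 = worst)
    # into a single weighted score per tool. Weights are illustrative only.
    WEIGHTS = {"Ease of Use": 3, "Test Control Features": 2, "Distributed Test Execution": 1}

    RATINGS = {
        "Tool 1": {"Ease of Use": 2, "Test Control Features": 1, "Distributed Test Execution": 3},
        "Tool 2": {"Ease of Use": 4, "Test Control Features": 2, "Distributed Test Execution": 1},
    }

    def weighted_score(tool):
        """Weighted average of a tool's ratings; lower scores are better."""
        total_weight = sum(WEIGHTS.values())
        return sum(WEIGHTS[c] * RATINGS[tool][c] for c in WEIGHTS) / total_weight

    for tool in sorted(RATINGS):
        print(tool, round(weighted_score(tool), 2))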
Criterion/Feature (rate each item from 1 – 5 for each of Tool 1 and Tool 2)
1. Ease of Use
Record/Playback with minimal coding
Application language easy to understand
Multiple statements can be commented
2. Ease of Customization
Tool bars help customize/reflect any commonly used tool capabilities
Ease of adding or removing fields as and when necessary
Contains an editor with formats and fonts for better readability
3. Cross Platform Support
Supports multiple platforms (e.g., Unix, Windows 7, etc.)
Supports single or multiple browsers (e.g., Firefox, Internet Explorer, Chrome)
Supports multiple technologies (e.g., VB, Java, PowerBuilder, etc.)
4. Test Language Features
Allows add-ins/extensions compatible with third-party tools
Contains a test editor and debugging feature
Complexity of the test scripting language
Robust test scripting language that allows modularity
Test scripting language allows for variable declaration and the capability to pass
parameters between functions
Test script compiler available for debugging errors
Supports interactive debugging by viewing values at run time
Supports data-driven testing
Allows for automatic data generation
Displays start and end time of test execution
Allows for adding comments during recording
Allows for automated or manual synchronization capability
Supports verification of object properties
Supports database verification (e.g., Oracle, SQL, etc.)
Supports text verification
Allows for automatic data retrieval from any data source (e.g., an RDBMS) for
data-driven testing
Allows the use of a common spreadsheet for data-driven testing
Ability to compare the results of different runs of the same test or different tests
Ability to run the same test multiple times and track results
Allows replay in both batch mode and regular mode
Supports variable parameterization
5. Test Control Features
Ability to schedule test execution at predefined times and unattended (less manual
intervention)
6. Distributed Test Execution
Allows for local or remote execution control across networks
7. Test Suite Recovery Logic
Supports unexpected error recovery
8. Tool Integration Capability
Supports integration with pertinent SDLC tools
9. Tool Reporting Capability
Script errors generated can be easily understood and interpreted
Allows error filtering and reporting features
Supports metric collection and analysis
Supports multiple report views
Allows reports to be exported to Notepad, Word, Excel, HTML, or other formats
10. Vendor Qualifications
Financial stability
Continued/consistent growth patterns
Market share
Longevity
11. Vendor Support
Patches provided as and when needed
Upgrades provided on a regular basis
Scripts from previous versions can be used in newer versions
Future updates should not require major rework of existing test scripts
Provides a well-documented, easily understood Help feature
Vendor provides a support website
Phone technical support provided by the vendor as needed
Provided features and functions are supported
12. Licensing
Allows for temporary, floating, local, and server licenses
This checklist can be used for performing peer reviews on automated tests.
Header
Format is as follows:
################################################################
# File:
#
# Created by:
# Modification Date (Last):
# Modified By (Last):
# Purpose:
################################################################
Includes Test Name
Includes Date
Includes Date of Last Revision
Includes Created By
Includes Purpose
Includes Modified By
Constant Declarations
Syntax: static/public constant <CONSTANT_NAME> = <const_value>;
Variable Declarations
Syntax: [private/public]<variable_name> = [<variable_value>];
Array Declarations
Syntax: <array_name>[0]=<value_0>;…<array_name>[n]=<value_n>
User-Defined Functions
Syntax:
[public/private] function<function_name>([In/Out/InOut]<parameter_list>)
{
Variable declarations:
statement_1;
statement_n;
}
Defined after variable declarations
Placed in function library if declared as public
Standard return values
Comments
Start and end with #
If/Else Syntax:
if(<condition>)
{
statement_1;
}
else
{
statement_2;
}
Do Loop Syntax:
Do
{
statement_1;
…………..
statement_n;
}
while(<condition>)
Object Map
Logical names modified for clarity
No duplicate objects
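To make the object map items above concrete, the following sketch (hypothetical
logical names and properties, not tied to any particular tool) represents a map as a
dictionary, which also enforces the "no duplicate objects" rule because keys must be
unique:

    # Hypothetical object map: each unique logical name maps to the physical
    # properties used to locate the object at run time.
    OBJECT_MAP = {
        "LoginPage.UsernameText": {"type": "textbox", "id": "UsernameText"},
        "LoginPage.PasswordText": {"type": "textbox", "id": "PasswordText"},
        "LoginPage.SubmitButton": {"type": "button", "id": "btnSubmit"},
    }

    def describe(logical_name):
        """Look up an object's physical description by its logical name."""
        return OBJECT_MAP[logical_name]

    print(describe("LoginPage.PasswordText"))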
[Status flowchart: a manual test begins in Evaluation status. A decision not to automate
changes its status to 'Do Not Automate'; meeting the automation standards changes its
status to 'Automate'. An issue found during automation moves the test to Issue status,
and resolving the issue changes the status back to 'Automate'. Successful automation
moves the test to Complete status.]
1. When creating a manual test case, the manual tester places it in Evaluation status.
2. A determination should be made on whether or not to automate the test.
a. If the manual tester decides the test shouldn't be automated, it is placed in Do
Not Automate status.
b. If the manual tester decides the test should be automated, move on to the next
step.
3. Once the test is ready for automation, the manual tester places the test in Automate
status.
a. If an issue is found while automating, the automator places the test in Issue
status. The manual tester must work with the automator to resolve the issue.
Once the issue is resolved, the automator places the test back in Automate
status.
4. Upon successful automation of the manual test, the automator places the test in
Complete status.
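The workflow above amounts to a small state machine. As a hypothetical model (not
part of the TABOK), the following sketch encodes the statuses and the transitions the
workflow allows, rejecting any other move:

    # Hypothetical model of the manual-to-automation status workflow.
    ALLOWED_TRANSITIONS = {
        "Evaluation": {"Do Not Automate", "Automate"},
        "Automate": {"Issue", "Complete"},
        "Issue": {"Automate"},
        "Do Not Automate": set(),
        "Complete": set(),
    }

    def change_status(current, new):
        """Move a test to a new status, rejecting transitions the workflow forbids."""
        if new not in ALLOWED_TRANSITIONS[current]:
            raise ValueError(f"Cannot move from {current} to {new}")
        return new

    status = "Evaluation"
    status = change_status(status, "Automate")
    status = change_status(status, "Issue")
    status = change_status(status, "Automate")
    status = change_status(status, "Complete")
    print(status)  # Complete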
In order to calculate the ROI, we must calculate the investment costs and the gain. The
investment costs may be calculated by expressing automation factors in monetary
terms. The automated tool and license cost, training cost, and machine cost are
straightforward, but the other factors will need to be converted.
Automated Test Development Time may be converted to a dollar amount by
multiplying the average hourly automation time per test (1 hour) by the number of
tests (500), then by the Tester Hourly Rate ($60): 1 × 500 × $60 = $30,000.
Automated Test Execution Time doesn't need to be converted to a dollar figure in
this example, because the tests will ideally run unattended on one of the
Automated Test Machines. Therefore, no human loses time in execution.
The Automated Test Analysis Time can be converted to a dollar figure by
multiplying the Test Analysis Time (4 hours per week given that there is a build
once a week) by the timeframe being used for the ROI calculation (6 months or
approximately 24 weeks), then by the Tester Hourly Rate ($60): 4 × 24 × $60 =
$5,760.
The Automated Test Maintenance Time can be converted to a dollar figure by
multiplying the Maintenance Time (8 hours per week) by the timeframe being
used for the ROI calculation (6 months or approximately 24 weeks), then by the
Tester Hourly Rate ($60): 8 × 24 × $60 = $11,520.
That's a lot of money! But before you decide to eliminate test automation from your
project, let us consider the gain. The gain can be calculated thus:
Manual Test Execution/Analysis Time that will no longer exist once the set of
tests has been automated. The Manual Execution/Analysis Time can be
converted to a dollar figure by multiplying the Execution/Analysis Time (10
minutes or .17 hours) by the number of tests (500), then by the timeframe
covered by the ROI calculation (6 months or approximately 24 weeks), and by
the Tester Hourly Rate ($60).
The gain is therefore:
  .17       Execution/Analysis Time (in hours)
× 500       Number of Tests
× 24        Weeks Covered by the ROI Calculation
× $60       Tester Hourly Rate
= $122,400  Gain
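These figures can be verified with a few lines of arithmetic. The sketch below simply
recomputes the example's dollar amounts; the tool, license, training, and machine costs
mentioned above are omitted because the example does not assign them dollar values:

    # Recompute the example's dollar figures (hours and rates from the text).
    tester_hourly_rate = 60      # dollars per hour
    tests = 500
    weeks = 24                   # approximately 6 months

    development = 1 * tests * tester_hourly_rate            # $30,000
    analysis = 4 * weeks * tester_hourly_rate               # $5,760
    maintenance = 8 * weeks * tester_hourly_rate            # $11,520
    time_investment = development + analysis + maintenance  # $47,280

    gain = 0.17 * tests * weeks * tester_hourly_rate        # $122,400
    print(time_investment, gain)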
Once again, in considering the equation illustrated in Figure 1-7, we see that the
investment and gain must be calculated. The investment is derived from calculating the
time investment required for automation development, execution and analysis of 500
tests, and then adding the time investment required for manually executing the
remaining 1000 tests. Calculations are expressed in terms of days rather than hours
because the automated tests ideally operate in 24-hour days while manual tests operate
in 8-hour days. Since test runs can often stop abruptly during overnight runs, however,
it is usually a good practice to reduce the 24-hour day factor to a more conservative
estimate of about 18 hours.
Automated Test Development Time is calculated by multiplying the average
hourly automation time per test (1 hour) by the number of tests (500), then
dividing by 8 hours to convert the figure to days. This equals 62.5 days. (Note:
This portion of the calculation may be omitted after the first run of the automated
tests unless more development is performed on the tests following the initial run.)
Automated Test Execution Time must be calculated in this example because
time is instrumental in determining the test efficiency. This is calculated by
multiplying the Automated Test Execution Time (2 min or .03 hours) by the
number of tests per week (500), by the timeframe being used for the ROI
calculation (6 months or approximately 24 weeks) then dividing by 18 hours to
convert the figure to days: 0.03 × 500 × 24 ÷ 18 = 20 days. Note that this
number will be reduced when tests are split up and executed on different
machines, but for simplicity we will use the single-machine calculation (see
Section 1.3.5 for more information on the significance of running with multiple
machines).
The Automated Test Analysis Time can be calculated by multiplying the Test
Analysis Time (4 hours per week given that there is a build once a week) by the
timeframe being used for the ROI calculation (6 months or approximately 24
weeks), then dividing by 8 hours (since the analysis is still a manual effort) to
convert the figure to days. This equals 12 days.
The Automated Test Maintenance Time is calculated by multiplying the
Maintenance Time (8 hours per week) by the timeframe being used for the ROI calculation
(6 months or approximately 24 weeks), then dividing by 8 hours (since the
maintenance is still a manual effort) to convert the figure to days. This equals 24
days.
The Manual Execution Time is calculated by multiplying the Manual Test
Execution Time (10 min or .17 hours) by the remaining manual tests (1000), then
by the timeframe being used for the ROI calculation (6 months or approximately
24 weeks), then dividing by 8 to convert the figure to days. This equals 510 days.
Note that this number is reduced when tests are split up and executed by
multiple testers but for simplicity we will use the single tester calculation.
The total time investment (in days) can now be calculated thus:
62.5 Automated Test Development Time
20 Automated Test Execution Time
12 Automated Test Analysis Time
24 Automated Test Maintenance Time
+ 510 Manual Execution Time
628.5 Total Time Investment (days)
This figure would certainly decrease if more test engineers supported the effort, or
increase if there were fewer test engineers.
The gain is calculated in terms of the Manual Test Execution/Analysis Time thus:
The Manual Execution/Analysis Time can be converted to days by multiplying the
Execution/Analysis Time (10 minutes or .17 hours) by the total number of tests
(1500), then by the timeframe being used for the ROI calculation (6 months or
approximately 24 weeks), then dividing by 8 hours to convert the figure to days.
This equals 765 days. Note that this number is reduced when tests are divided
among multiple testers (which would have to be done in order to finish execution
within a week). For simplicity, we will use the single tester calculation.
  .17    Execution/Analysis Time (in hours)
× 1,500  Total Number of Tests
× 24     Weeks Covered by the ROI Calculation
÷ 8      Hours per Day
= 765    Gain (days)
Inserting the investment and gain into our formula, the ROI is calculated at
(765 − 628.5) ÷ 628.5 ≈ 21.7%. This means that for each hour invested, 0.217 hours
were saved in execution. After the initial execution, the efficiency percentage increases,
because the automation development investment decreases.
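The same check for the days-based example, as a sketch using the values given above:

    # Days-based example: investment and gain in days, then ROI.
    development = 1 * 500 / 8                 # 62.5 days
    execution = 0.03 * 500 * 24 / 18          # 20.0 days (18-hour automated day)
    analysis = 4 * 24 / 8                     # 12.0 days
    maintenance = 8 * 24 / 8                  # 24.0 days
    manual_remaining = 0.17 * 1000 * 24 / 8   # 510.0 days
    investment = development + execution + analysis + maintenance + manual_remaining

    gain = 0.17 * 1500 * 24 / 8               # 765.0 days
    roi = (gain - investment) / investment
    print(f"{roi:.1%}")  # 21.7%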
The ROI is calculated at 564.2%, indicating a 564.2% increase in quality over a similar
application for which automated testing was not used (Dustin, Rashka, and Paul,
Automated Software Testing, 1999).
The ATLM stages may be mapped to the TABOK skills as illustrated in the following
table:
1. INTRODUCTION
1.1. Assumptions and Constraints
1.2. Scope
1.3. Roles and Responsibilities
2. MANUAL-TO-AUTOMATION PROCEDURES
2.1. Test Selection Criteria
3. AUTOMATION FRAMEWORK DESIGN CONSIDERATIONS
4. AUTOMATION FRAMEWORK DESIGN
4.1. Automation Framework Components
4.2. Automation Framework Directory Structure
4.2.1. Root Directory
4.2.2. Driver Scripts Directory
4.2.3. Test Scripts Directory
4.2.4. Data Directory
4.2.5. Libraries Directory
4.2.6. Object Repository Directory
5. AUTOMATED TEST DEVELOPMENT
5.1. Traceability
5.2. Application Objects & the Object Repository
5.2.1. Object Identification
5.2.2. Object Naming Standards
5.3. Reusable Components
5.4. Test Structure
5.5. Comments
This driver script reads a keyword file like the one illustrated in Figure 6-4.
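The driver script listing itself is not reproduced in this appendix. As a minimal sketch of
the idea (the file format, keyword names, and functions here are assumptions, not the
TABOK's), a driver might read keyword rows and dispatch each row to a matching
function:

    # Minimal keyword-driven driver sketch: reads rows of "keyword,argument"
    # pairs and dispatches each keyword to a handler function.
    import csv

    def launch_app(url):
        print(f"launching {url}")

    def enter_username(name):
        print(f"typing username {name}")

    KEYWORDS = {"LaunchApp": launch_app, "EnterUsername": enter_username}

    def run_keyword_file(path):
        with open(path, newline="") as f:
            for row in csv.reader(f):
                if not row:
                    continue          # skip blank lines
                keyword, arg = row
                KEYWORDS[keyword](arg)  # look up and execute the keyword

    # Example keyword file contents:
    #   LaunchApp,http://example.test/login
    #   EnterUsername,jsmith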
Appendix I: Glossary
Term Definition
Agile Test Automation Principles Principles that define the agile approach to test
automation for medium to large software projects.
Assertion Constructs that provide a mechanism for concisely identifying a
checked condition and presenting an error message if the condition
fails.
AUT Application Under Test. An application that is being tested.
Automatable Something that has the ability to be automated.
Automation scope The features, reach, and deliverables planned for an automation
effort.
Black-box Testing method that verifies functionality with little to no regard for
the internal workings of the code that produces that functionality.
Boolean condition A statement that can only be evaluated to a value of True or False.
Bottom Up Technique A type of integration testing that integrates and tests lower-level
units first. By using this approach, lower-level units are tested early in the development
process and the need for stubs is minimized. The need for drivers to drive the lower-level
units, in the absence of top-level units, increases, however.
Bug See Defect
Build When used as a verb, a build is the process of converting source
code files into standalone software artifact(s) and placing the
artifact(s) on a system. When used as a noun, a build is the product
of the previously mentioned process.
Collections A set of data or objects that are of the same type.
Compiled Languages A type of programming language that is converted to a set of
machine-specific instructions before run time (i.e., at compile time).
Conditionals Coding constructs that alter the flow of script execution or cause a varying
set of script statements to be executed based on the evaluation of a specified Boolean
condition. Also known as branching constructs.
Continuous Integration (CI) A frequent, automated build process that also integrates
automated testing.
Cumulative Coverage Test coverage assessed over a period of time or across multiple
test cycles.
Data-driven Framework A framework built mostly using data-driven scripting concepts.
In this framework, each test case is combined with a related data set and executed using
a reusable set of test logic.
Data-driven Scripting A scripting technique that stores test data separately from the
test script that uses the data. The data may be stored in a flat file, spreadsheet, or some
other data store, and it is used by the test script via parameterization. A single block of
script logic may then be executed within a loop using different data from the data source
on each execution.
Defect An error, bug, flaw, or nonconformance found in a software system that causes
that system to behave in a manner that is contrary to requirements, technical
specifications, service level agreements, or at times the reasonable expectations of the
system's stakeholders.
Distributed Test Execution Executing tests across remote locations from one central
location.
Document Object Model An object model for Internet applications.
Events Repeatable actions that, unlike methods, are called based on an action
performed by a user (e.g., mouse click, scroll, etc.).
Exception A special condition that causes a program's normal flow of execution to
change.
Exploratory Testing Simultaneous gathering of information, test design, and test
execution, resulting in the immediate creation and implementation of tests based on
how the application is responding in real time.
FISMA Federal Information Security Management Act. FISMA requires each federal
agency to develop, document, and implement an agency-wide program to provide
information security for the information and information systems that support the
operations and assets of the agency, including those provided or managed by another
agency, contractor, or other source.
Flowchart A diagram that represents an algorithm or process.
Framework The physical structures used for test creation and implementation, as
well as the logical interactions among those structures. This
definition also includes the set of standards and constraints that are
established to govern the framework's use.
Functional Decomposition The process of producing modular components (i.e.,
user-defined functions) in such a way that automated test scripts can be constructed to
achieve a testing objective by combining these existing components. Modular
components are often created to correspond with application functionality, but many
different types of user-defined functions can be created.
Functions A block of code within a larger script or program that executes a
specific task. While it is part of the larger script, it operates
independent of that script. It is executed not according to where it is
located in the script, but rather based on where it is "called" within a
script and it typically allows for arguments and return values.
Gain The benefit received from an investment. Used in calculating ROI.
GUI Map See Object Map.
Image-based automation An automation approach that communicates with the system
it is automating via recognition of image information as opposed to object information.
Initialization Script A script that sets parameters that are used throughout a test run
and brings the test environment to a controlled, stable state prior to test execution.
Integrated Development Environment (IDE) A software application that provides a
comprehensive set of resources to programmers for editing, debugging, compiling and
building code.
Integration Testing Testing that involves combining individual units of code and
verifying that they properly interface with one another without damaging previously
developed functionality. Also known as interface testing.
Interface Testing See Integration Testing.
ISO International Organization for Standardization. An international standards-setting
organization.
Iterators Coding constructs that provide a mechanism for a single set of
statements to be executed multiple times. Also known as looping
constructs.
Linear Framework An automated test framework that is driven mostly by the use of
Record & Playback. Typically, all components that are executed by a Linear framework
script largely exist within the body of that script.
Load Test Automation A performance testing approach that gradually increases the
load on an application up to the application's stated or implied maximum load to ensure
the system performs at acceptable levels.
Methods Repeatable actions that may be called by an automated script. There
are often two types of methods: functions and sub-procedures.
Model-based Framework A framework that uses descriptions of application features,
typically in the form of state models, as a basis for dynamically creating and
implementing tests on the application.
MTTR Mean time to repair. This is a basic measure of maintainability,
reliability and robustness, and it represents time required to repair a
failed script or component.
Negative Tests Tests that check a system's response to receiving inappropriate and/or
unexpected inputs.
Notation A set of symbols or conventions set aside for a specialized, specific
use.
Object Map A file that maintains a logical representation and physical description
of each application object that is referenced in an automated script.
Also known as a GUI Map.
Object Model An abstract representation of the hierarchical group of related objects
that define an application and work together to complete a set of application functions.
Open Source Definition A set of criteria compiled by the Open Source Initiative used to
determine whether or not a software product can be considered open source.
Parameterization The association of a script variable to an external data source, such
that data is passed directly to that variable from the associated data
source at run time.
Positive Tests Tests that verify the system behaves correctly when given
appropriate and/or expected inputs.
Properties Simple object variables that may be maintained throughout the life of
the object.
Pseudocode An artificial and informal language with some of the same structural
conventions of programming that is written and read by
programmers as an algorithm that will later be translated into actual
code.
Quality Attributes Desirable characteristics of a system.
Regression Retesting a previously tested program following modifications to
either that program or an associated program. The purpose of this
testing is to ensure no new bugs were introduced by the
modifications.
Regular Expression A character expression that is used to match or represent a set of
text strings, without having to explicitly list all of the items that may exist in the set.
Requirements A complete description of the expected behavior and/or attributes of
a system.
Sanity Test A high-level test meant to verify application stability prior to more
rigorous testing.
Scripting Language Also known as interpreted languages, scripting languages are a
type of programming language that is converted at run time to a set of machine-specific
instructions.
Shelfware Software that is developed or acquired but is not being used. It may literally
be stored on a shelf, but doesn't have to be.
Smoke Test See Sanity Test
Software Development Lifecycle (SDLC) A model that defines the structure,
processes, and procedures used for developing software.
Stakeholders A person, group or organization that may be affected by or has an
expressed interest in an activity or project.
Stress Test Automation A performance/load testing approach that involves
determining what load will significantly degrade an application.
String Testing See Integration Testing.
Sub-procedures A block of code within a larger script or program that executes a
specific task; while it is part of the larger script, it operates independent of that script. It
is executed not according to where it is located in the script, but rather based on where
it is "called" within a script, and it typically allows for arguments. Unlike functions,
sub-procedures often do not allow for return values.
SUT System Under Test. A system that is being tested.
Test bed The hardware and software environment that has been established
and configured for testing.
Test Coverage A measure of the portion of the application that has been tested by a
suite of tests. Two types of test coverage include requirements
coverage and code coverage.
Test Fixture The necessary preconditions or the state used as a baseline for
running tests. This term is commonly used in reference to unit test
framework events (i.e. setup and teardown) that establish fixtures for
the unit tests.
Test Harness See framework.
Top Down Technique A type of integration testing. High-level logic and communication
is tested early with this technique, and the need for drivers is minimized. Stubs for
simulation of lower-level units are used while actual lower-level units are tested
relatively late in the development cycle.
Unit The smallest amount of code that can be tested.
Validate Ensure a product or system fulfills its intended purpose.
Variables A variable is a container for storing and reusing information in
scripts.
Verify Ensure a product or system conforms to system specifications.
VNC Virtual Network Computing. A platform-independent graphical
desktop sharing system that uses the remote framebuffer (RFB)
protocol to allow a computer to be remotely controlled by another
computer.
White-box Testing method that verifies the internal structures of an application.
World Wide Web Consortium (W3C) An international community where member
organizations, a full-time staff, and the public work together to develop Web standards.
xUnit Name given to a group of unit test frameworks that all implement the same basic
component architecture.
The following is a list of comprehensive reference material that may be used for
gathering additional information on various topics discussed in the TABOK. For shorter,
more pointed references, see the final subsection of each skill category.
1. Dustin, Elfriede, Jeff Rashka, and John Paul. Automated Software Testing:
Introduction, Management, and Performance. Boston, MA: Addison-Wesley,
1999.
2. Dustin, Elfriede, Thom Garrett, and Bernie Gauf. Implementing Automated Software
Testing. Boston, MA: Pearson Education, 2009.
3. Fewster, Mark, and Dorothy Graham. Software Test Automation: Effective use of
test execution tools. Reading, MA: Addison-Wesley, 1999.
4. Hayes, Linda. The Automated Testing Handbook. Richardson, TX: Software Testing
Institute, 1996.
5. Mosley, Daniel J., and Bruce A. Posey. Just Enough Software Test Automation.
Upper Saddle River, NJ: Prentice Hall, 2002.
6. Various articles, white papers and books indexed at
www.automatedtestinginstitute.com.
7. Various magazine articles published at
www.astmagazine.automatedtestinginstitute.com.