What are some recent major computer system failures caused by software bugs?
In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage
company could proceed; the lawsuit reportedly involved claims that the company was not
fixing system problems that sometimes resulted in failed stock trades, based on the
experiences of 4 plaintiffs during an 8-month period. A previous lower court's ruling that
"...six miscues out of more than 400 trades does not indicate negligence." was
invalidated.
In April of 2003 it was announced that the largest student loan company in the U.S. made
a software error in calculating the monthly payments on 800,000 loans. Although
borrowers were to be notified of an increase in their required payments, the company will
still reportedly lose $8 million in interest. The error was uncovered when borrowers
began reporting inconsistencies in their bills.
News reports in February of 2003 revealed that the U.S. Treasury Department mailed
50,000 Social Security checks without any beneficiary names. A spokesperson indicated
that the missing names were due to an error in a software change. Replacement checks
were subsequently mailed out with the problem corrected, and recipients were then able
to cash their Social Security checks.
In March of 2002 it was reported that software bugs in Britain's national tax system
resulted in more than 100,000 erroneous tax overcharges. The problem was partly
attributed to the difficulty of testing the integration of multiple systems.
A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-
shelf software that had long been used in systems for tracking certain U.S. nuclear
materials. The same software had been recently donated to another country to be used in
tracking their own nuclear materials, and it was not until scientists in that country
discovered the problem, and shared the information, that U.S. officials became aware of
the problems.
According to newspaper stories in mid-2001, a major systems development contractor
was fired and sued over problems with a large retirement plan management system.
According to the reports, the client claimed that system deliveries were late, the software
had excessive defects, and it caused other systems to crash.
In January of 2001 newspapers reported that a major European railroad was hit by the
aftereffects of the Y2K bug. The company found that many of their newer trains would
not run due to their inability to recognize the date '31/12/2000'; the trains were started by
altering the control system's date settings.
News reports in September of 2000 told of a software vendor settling a lawsuit with a
large mortgage lender; the vendor had reportedly delivered an online mortgage
processing system that did not meet specifications, was delivered late, and didn't work.
In early 2000, major problems were reported with a new computer system in a large
suburban U.S. public school district with 100,000+ students; problems included 10,000
erroneous report cards and students left stranded by failed class registration systems; the
district's CIO was fired. The school district decided to reinstate its original 25-year-old
system for at least a year until the bugs were worked out of the new system by the
software vendors.
In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was
believed to be lost in space due to a simple data conversion error. It was determined that
spacecraft software used certain data in English units that should have been in metric
units. Among other tasks, the orbiter was to serve as a communications relay for the Mars
Polar Lander mission, which failed for unknown reasons in December 1999. Several
investigating panels were convened to determine the process failures that allowed the
error to go undetected.
Bugs in software supporting a large commercial high-speed data network affected 70,000
business customers over a period of 8 days in August of 1999. Among those affected was
the electronic trading system of the largest U.S. futures exchange, which was shut down
for most of a week as a result of the outages.
In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite
launch, the costliest unmanned accident in the history of Cape Canaveral launches. The
failure was the latest in a string of launch failures, triggering a complete military and
industry review of U.S. space launch programs, including software integration and testing
processes. Congressional oversight hearings were requested.
A small town in Illinois in the U.S. received an unusually large monthly electric bill of $7
million in March of 1999. This was about 700 times larger than its normal bill. It turned
out to be due to bugs in new software that had been purchased by the local power
company to deal with Y2K software issues.
In early 1999 a major computer game company recalled all copies of a popular new
product due to software problems. The company made a public apology for releasing a
product before it was ready.
The computer system of a major online U.S. stock trading service failed during trading
hours several times over a period of days in February of 1999 according to nationwide
news reports. The problem was reportedly due to bugs in a software upgrade intended to
speed online trade confirmations.
In April of 1998 a major U.S. data communications network failed for 24 hours, crippling
a large part of some U.S. credit card transaction authorization systems as well as other
large U.S. bank, retail, and government data systems. The cause was eventually traced to
a software bug.
January 1998 news reports told of software problems at a major U.S. telecommunications
company that resulted in no charges for long distance calls for a month for 400,000
customers. The problem went undetected until customers called up with questions about
their bills.
In November of 1997 the stock of a major health industry company dropped 60% due to
reports of failures in computer billing systems, problems with a large database
conversion, and inadequate software testing. It was reported that more than $100,000,000
in receivables had to be written off and that multi-million dollar fines were levied on the
company by government agencies.
A retail store chain filed suit in August of 1997 against a transaction processing system
vendor (not a credit card company) due to the software's inability to handle credit cards
with year 2000 expiration dates.
In August of 1997 one of the leading consumer credit reporting companies reportedly
shut down their new public web site after less than two days of operation due to software
problems. The new site allowed web site visitors instant access, for a small fee, to their
personal credit reports. However, a number of initial users ended up viewing each others'
reports instead of their own, resulting in irate customers and nationwide publicity. The
problem was attributed to "...unexpectedly high demand from consumers and faulty
software that routed the files to the wrong computers."
In November of 1996, newspapers reported that software bugs caused the 411 telephone
information system of one of the U.S. RBOC's to fail for most of a day. Most of the 2000
operators had to search through phone books instead of using their 13,000,000-listing
database. The bugs were introduced by new software modifications and the problem
software had been installed on both the production and backup systems. A spokesman for
the software vendor reportedly stated that 'It had nothing to do with the integrity of the
software. It was human error.'
On June 4 1996 the first flight of the European Space Agency's new Ariane 5 rocket
failed shortly after launching, resulting in an estimated uninsured loss of a half billion
dollars. It was reportedly due to an unhandled exception raised during a conversion of a
64-bit floating-point value to a 16-bit signed integer.
Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be
credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The
American Bankers Association claimed it was the largest such error in banking history. A
bank spokesman said the programming errors were corrected and all funds were
recovered.
Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear
war in 1983, according to news reports in early 1999. The software was supposed to filter
out false missile detections caused by Soviet satellites picking up sunlight reflections off
cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on
what he said was a '...funny feeling in my gut', decided the apparent missile attack was a
false alarm. The filtering software code was rewritten.
Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This
is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the
land and employed as a physician to a great lord. The physician was asked which of his
family was the most skillful healer. He replied,
"I tend to the sick and dying with drastic and dramatic treatments, and on occasion
someone is cured and my name gets out among the lords."
"My elder brother cures sickness when it just begins to take root, and his skills are known
among the local peasants and neighbors."
"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes
form. His name is unknown outside our home."
Egos - people tend to make optimistic claims like 'no problem' or 'piece of cake'
instead of:
'that adds a lot of complexity and we could end up making a lot of mistakes'
'we have no idea if we can do that; we'll wing it'
'I can't estimate how long it will take, until I take a close look at it'
'we can't figure out what that old spaghetti code did in the first place'
Poorly documented code - it's tough to maintain and modify code that is badly written or
poorly documented; the result is bugs. In many organizations management provides no
incentive for programmers to document their code or write clear, understandable code. In
fact, it's usually the opposite: they get points mostly for quickly turning out code, and
there's job security if nobody else can understand it ('if it was hard to write, it should be
hard to read').
Software development tools - visual tools, class libraries, compilers, scripting tools, etc.
often introduce their own bugs or are poorly documented, resulting in added bugs.
Where the risk is lower, management and organizational buy-in and QA implementation
may be a slower, step-at-a-time process. QA processes should be balanced with
productivity so as to keep bureaucracy from getting out of hand.
For small groups or projects, a more ad-hoc process may be appropriate, depending on
the type of customers and projects. A lot will depend on team leads or managers,
feedback to developers, and ensuring adequate communications among customers,
managers, developers, and testers.
In all cases the most value for effort will be in requirements management processes, with
a goal of clear, complete, testable requirement specifications or expectations.
What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or
no preparation is usually required.
What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people
including a moderator, reader, and a recorder to take notes. The subject of the inspection
is typically a document such as a requirements spec or a test plan, and the purpose is to
find problems and see what's missing, not to fix anything. Attendees should prepare for
this type of meeting by reading thru the document; most problems will be found during
this preparation. The result of the inspection meeting should be a written report.
Thorough preparation for inspections is difficult, painstaking work, but is one of the most
cost effective methods of ensuring quality. Employees who are most skilled at
inspections are like the 'eldest brother' in the parable in 'Why is it often hard for
management to get serious about quality assurance?'. Their skill may have low visibility
but they are extremely valuable to any software development organization, since bug
prevention is far more cost-effective than bug detection.
Integration testing - testing of combined parts of an application to determine if they
function together correctly. The 'parts' can be code modules, individual applications,
client and server applications on a network, etc. This type of testing is especially relevant
to client/server and distributed systems.
Functional testing - black box type testing geared to functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that the
programmers shouldn't check that their code works before releasing it (which of course
applies to any stage of testing.)
System testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves
testing of a complete application environment in a situation that mimics real-world use,
such as interacting with a database, using network communications, or interacting with
other hardware, applications, or systems if appropriate.
Sanity testing - typically an initial testing effort to determine if a new software version is
performing well enough to accept it for a major testing effort. For example, if the new
software is crashing systems every 5 minutes, bogging down systems to a crawl, or
destroying databases, the software may not be in a 'sane' enough condition to warrant
further testing in its current state.
Regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed, especially
near the end of the development cycle. Automated testing tools can be especially useful
for this type of testing.
Acceptance testing - final testing based on specifications of the end-user or customer, or
based on use by end-users/customers over some limited period of time.
Load testing - testing an application under heavy loads, such as testing of a web site
under a range of loads to determine at what point the system's response time degrades or
fails.
Stress testing - term often used interchangeably with 'load' and 'performance' testing.
Also used to describe such tests as system functional testing while under unusually heavy
loads, heavy repetition of certain actions or inputs, input of large numerical values, large
complex queries to a database system, etc.
Performance testing - term often used interchangeably with 'stress' and 'load' testing.
Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements
documentation or QA or Test Plans.
Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will
depend on the targeted end-user or customer. User interviews, surveys, video recording
of user sessions, and other techniques can be used. Programmers and testers are usually
not appropriate as usability testers.
Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
Recovery testing - testing how well a system recovers from crashes, hardware failures,
or other catastrophic problems.
Security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
Exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test
it.
Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.
User acceptance testing - determining if software is satisfactory to an end-user or
customer.
Comparison testing - comparing software weaknesses and strengths to competing
products.
Alpha testing - testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by end-users
or others, not by programmers or testers.
Beta testing - testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Typically done by end-users or
others, not by programmers or testers.
Mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources.
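As a rough, hand-rolled illustration of the idea (not tied to any particular mutation tool; the function and test data are invented for this example), the sketch below introduces a single deliberate 'mutant' into a small function and checks whether the existing test cases detect it:

# Hand-rolled illustration of mutation testing: introduce a deliberate bug
# (a "mutant") and check whether the existing test cases detect it.

def max_of_two(a, b):
    """Original implementation under test."""
    return a if a >= b else b

def max_of_two_mutant(a, b):
    """Mutant: the comparison operator has been deliberately flipped."""
    return a if a <= b else b

# A small set of test cases: (inputs, expected result).
test_cases = [((3, 5), 5), ((5, 3), 5), ((4, 4), 4)]

def run_suite(func):
    """Return True if every test case passes for the given implementation."""
    return all(func(*args) == expected for args, expected in test_cases)

if __name__ == "__main__":
    assert run_suite(max_of_two), "suite should pass on the original code"
    # If the suite still passes on the mutant, the test data is too weak.
    killed = not run_suite(max_of_two_mutant)
    print("mutant killed by the test suite:", killed)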
What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal
design'. Good internal design is indicated by software code whose overall structure is
clear, understandable, easily modifiable, and maintainable; is robust with sufficient error
handling and status logging capability; and works correctly when implemented. Good
functional design is indicated by an application whose functionality can be traced back to
customer and end-user requirements. (See further discussion of functional and internal
design in 'What's the big deal about requirements?' in FAQ #2.) For
programs that have a user interface, it's often a good idea to assume that the end user will
have little computer knowledge and may not read a user manual or even the on-line help;
some common rules-of-thumb include:
The program should act in a way that least surprises the user
It should always be evident to the user what can be done next and how to exit
The program shouldn't let the users do something stupid without warning them.
Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.
Coverage analyzers - these tools check which parts of the code have been
exercised by a test, and may be oriented to code statement coverage, condition
coverage, path coverage, etc. (a small illustration follows this list).
Web test tools - to check that links are valid, HTML code usage is correct, client-side
and server-side programs work, and that a web site's interactions are secure.
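As a small, tool-neutral illustration of what a coverage analyzer measures (the discount function and the tests are invented for the example), the sketch below records which branches a test run actually exercises:

# Illustration of what a coverage analyzer measures: which statements and
# branches the tests actually exercise.

executed_branches = set()

def discount(total):
    if total > 100:
        executed_branches.add("total > 100")
        return total * 0.9
    else:
        executed_branches.add("total <= 100")
        return total

def test_large_order():
    assert discount(200) == 180.0

def test_small_order():
    assert discount(50) == 50

if __name__ == "__main__":
    test_large_order()          # exercises only the "> 100" branch
    print("branches covered so far:", executed_branches)
    test_small_order()          # now the other branch is exercised too
    print("branches covered after both tests:", executed_branches)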
5. Test the system - The test team develops the test plan following the
requirements. The software is built and installed on the test platform after
developers have completed development and Unit Testing. The testers test the
software by following the test plan.
6. Deploy the system – After the user-acceptance testing and certification of
the software, it is installed on the production platform. Demos and training are
given to the users.
7. Support the system - After the software is in production, the maintenance
phase of the life cycle begins. During this phase the development team works with
the development document staff to modify and enhance the application and
the test team works with the test documentation staff to verify and validate
the changes and enhancements to the application software.
Explain the pre-testing phase, acceptance testing and the testing phase.
Pre-testing Phase:
1. Review the requirements document for the testability: Tester will use the
requirement document to write the test cases.
2. Establishing the hard freeze date: Hard freeze date is a date after which
system test team will not accept any more software and documentation
changes from development team, unless they are fixes of severity 1 MR’s.
The date is scheduled so that the product test team will have time for a final
regression testing cycle.
3. Writing master test plan: It is written by the lead tester or test coordinator.
Master test plan includes entire testing plan, testing resources and testing
strategy.
4. Setting up MR Tool: The MR tool must be set up as soon as you know the
different modules in the product, the developers and testers on the product,
and the hardware platform and operating system on which testing will be done.
This information will be available upon completion of the first draft of the
architecture document. Both testers and developers are trained in how to use
the system.
5. Setting up the test environment: The test environment is set up on separate
machines, databases and networks. This task is performed by the technical
support team. The first time this takes some effort; afterwards the same
environment can be used for later releases.
6. Writing the test plan and test cases: The template and the tool for writing
the test plan, test cases and test procedures are decided. Expected results are
organized in the test plan according to the feature categories specified in the
requirement document. For each feature, positive and negative test cases are
written. Writing the test plan requires a complete understanding of the product
and its interfaces with other systems. After the test plan is completed, a
walkthrough is conducted with the developers and design team members to
baseline the test plan document.
7. Setting up the test automation tool: Plan the strategy for how to automate
the testing and decide which test cases will be executed for regression
testing; not all test cases are executed during regression testing.
8. Identify acceptance test cases: Select the subset of test cases that will be
run on the first day of system test. These tests must pass for the product to
be accepted into system test.
What is the value of a testing group? How do you justify your work and
budget?
All software products contain defects/bugs, despite the best efforts of their
development teams. It is important for an outside party (one who is not the developer)
to test the product from a viewpoint that is more objective and representative of the
product user.
The testing group tests the software from the requirements point of view, i.e., against
what is required by the user. The tester's job is to examine a program to see whether it
fails to do what it is supposed to do, and also whether it does what it is not supposed to do.
What is a master test plan? What does it contain? Who is responsible for
writing it?
OR
What is a test plan? Who is responsible for writing it? What does it contain?
OR
What's a 'test plan'? What did you include in a test plan?
A software project test plan is a document that describes the objectives, scope,
approach, and focus of a software testing effort. The process of preparing a test plan
is a useful way to think through the efforts needed to validate the acceptability of a
software product. The completed document will help people outside the test group
understand the 'why' and 'how' of product validation. It should be thorough enough
to be useful but not so thorough that no one outside the test group will read it. The
following are some of the items that might be included in a test plan, depending on
the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents,
other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-
info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature,
functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error
classes
• Test environment - hardware, operating systems, other required software,
data configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and
production systems and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as
screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by
testers to help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities,
deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
What is MR?
An MR is a Modification Request, also known as a Defect Report: a request to modify the
program so that the program does what it is supposed to do.
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
What knowledge do you require to do white box, integration and black box
testing?
For white box testing you need to understand the internals of the module, such as its
data structures and algorithms, and have access to the source code; for black box
testing you only need to understand the externally visible functionality of the
application. A sketch contrasting the two follows.
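A minimal sketch of the contrast, using an invented shipping-cost function: the black box test is derived only from the stated specification, while the white box test is chosen after reading the code so that the internal branch and its boundary are exercised.

# Contrast between black box and white box testing of the same (invented) function.

def shipping_cost(weight_kg):
    """Spec: orders over 10 kg cost 5.00 per kg, otherwise a flat 20.00."""
    if weight_kg > 10:            # internal branch a white box tester targets
        return weight_kg * 5.0
    return 20.0

# Black box: derived purely from the stated specification.
def test_black_box():
    assert shipping_cost(20) == 100.0
    assert shipping_cost(2) == 20.0

# White box: chosen after reading the code, hitting the boundary of the branch.
def test_white_box_branch_boundary():
    assert shipping_cost(10) == 20.0      # boundary stays on the flat-rate path
    assert shipping_cost(10.5) == 52.5    # just over the boundary takes the other path

if __name__ == "__main__":
    test_black_box()
    test_white_box_branch_boundary()
    print("all tests passed")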
A performance testing tool determines the load a server can handle. It must be able
to simulate many users from one machine, schedule and synchronize different
users, and measure the network load under different numbers of simulated
users.
What is ODBC?
Open Database Connectivity (ODBC) is an open standard application-programming
interface (API) for accessing a database. ODBC is based on Structured Query
Language (SQL) Call-Level Interface. It allows programs to use SQL requests that
will access databases without having to know the proprietary interfaces to the
databases. ODBC handles the SQL request and converts it into a request the
individual database system understands.
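A minimal sketch of an ODBC call from program code, here using the pyodbc library for Python; the DSN name, credentials, table and column names are placeholders invented for the example, and a real environment would supply its own.

# Minimal ODBC sketch using the pyodbc library. The DSN, credentials and table
# are placeholders; adjust them for a real environment.
import pyodbc

def fetch_customers():
    # The connection string references a pre-configured ODBC data source (DSN).
    conn = pyodbc.connect("DSN=SampleDSN;UID=test_user;PWD=test_password")
    try:
        cursor = conn.cursor()
        # The SQL request is passed through ODBC, which translates it for
        # whichever database system backs the data source.
        cursor.execute("SELECT customer_id, customer_name FROM customers")
        return cursor.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for row in fetch_customers():
        print(row.customer_id, row.customer_name)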
Who should test your code?
QA Tester
The Capability Maturity Model Integration (CMMI) provides the guidance for
improving your organization's processes and your ability to manage the
development, acquisition, and maintenance of products and services. CMM
Integration places proven practices into a structure that helps your organization
assess its organizational maturity and process area capability, establish priorities for
improvement, and guide the implementation of these improvements.
The new integrated model (CMMI) uses Process Areas (known as PAs), which are
different from those in the previous model, and covers systems processes as well as
software processes, rather than only software processes as in the SW-CMM.
It covers the whole software lifecycle, starting with testing the project plan and
estimates and ending with testing the effectiveness of the testing process. The book
is packed with checklists, worksheets and N-step procedures for each stage of
testing.
What is the purpose of the testing?
Testing provides information whether or not a certain product meets the
requirements.
Describe some problems that you had with automation testing tools
One of the most common problems with automation tools is object recognition - the tool
may fail to recognize custom or non-standard GUI objects in the application under test.
Have you ever written test cases or did you just execute those written by
others?
Yes, I was involved in preparing and executing test cases throughout the entire project.
Where do you get your expected results?
User requirement document
If you were given a program that will average student grades, what kinds of
inputs would you use?
Name of student, subject, and score - including typical, boundary, and invalid score
values; a sketch of such inputs follows.
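A sketch of such inputs for a hypothetical average_scores function (the function, its 0-100 score range, and the sample data are assumptions made only for the illustration):

# Sketch of input selection for a hypothetical grade-averaging function,
# combining typical values, boundary values, and invalid values.
def average_scores(scores):
    """Hypothetical function under test: average of a list of 0-100 scores."""
    if not scores:
        raise ValueError("no scores supplied")
    if any(s < 0 or s > 100 for s in scores):
        raise ValueError("score out of range")
    return sum(scores) / len(scores)

# Equivalence classes and boundary values a tester might feed in:
typical_inputs  = [[70, 80, 90], [55]]          # normal cases
boundary_inputs = [[0], [100], [0, 100]]        # lowest/highest valid scores
invalid_inputs  = [[], [-1], [101], [50, 999]]  # empty list, out-of-range values

if __name__ == "__main__":
    for case in typical_inputs + boundary_inputs:
        print(case, "->", average_scores(case))
    for case in invalid_inputs:
        try:
            average_scores(case)
        except ValueError as err:
            print(case, "-> rejected:", err)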
What is the exact difference between Integration and System testing, give
me examples with your project?
Integration testing: An orderly progression of testing in which software
components or hardware components, or both are combined and tested until the
entire system has been integrated.
System testing: The Process of testing an integrated hardware and software
system to verify that the system meets its specified requirements.
Validation: typically involves actual testing and takes place after verifications are
completed. The term 'IV & V' refers to Independent Verification and Validation.
What do you mean by “set up the test environment and provide full platform
support”?
We need to provide the following for setting up the environment
1) Required software
2) Required hardware
3) Required testing tools
4) Required test data
After providing these we need to provide support for any problems that occur during
the testing process.
Did you use SQA Manager?
Yes, for creating test plans and for defect reporting/tracking.
You find a bug and the developer says “It’s not possible”; what do you do?
I’ll discuss with him under what conditions (working environment) the bug was
produced. I’ll provide him with more details and the snapshot of the bug.
No matter what shape an object is, applying the circumference method to it will
return the correct results. Polymorphism is considered to be a requirement of any
true object-oriented programming language (OOPL).
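A minimal sketch of that idea: each shape class supplies its own circumference method, and the calling code works unchanged for any of them (the class names are invented for the example).

# Sketch of polymorphism: every shape provides its own circumference() method,
# and the calling code does not care which concrete shape it is given.
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def circumference(self):
        return 2 * math.pi * self.radius

class Square:
    def __init__(self, side):
        self.side = side
    def circumference(self):
        return 4 * self.side

def report(shapes):
    # The same call works for any object that provides circumference().
    for shape in shapes:
        print(type(shape).__name__, round(shape.circumference(), 2))

if __name__ == "__main__":
    report([Circle(1.0), Square(2.5)])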
If you have a shortage of time, how would you prioritize your testing?
Use risk analysis to determine where testing should be focused. Since it's rarely
possible to test every possible aspect of an application, every possible combination of
events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. Considerations can include:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development
cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance
expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?
Performance Testing: Testing whether the system functions are performed in an
acceptable timeframe under simultaneous user load. Timings for both read and
update transactions should be gathered to determine whether response times are
acceptable. This should be done stand-alone and then in a multi-user environment
to determine the transaction throughput.
Security Testing: Testing the system for its security from unauthorized use and
unauthorized data access.
Recovery Testing: Testing a system to see how it responds to errors and abnormal
conditions, such as system crash, loss of device, communications, or power.
Volume Testing: Testing the system to determine if it can correctly process large
volumes of data fed to the system. Systems can often respond unpredictably when
large volumes cause files to overflow and need extensions.
What criteria you will follow to assign severity and due date to the MR?
Defects (MR) are assigned severity as follows:
Critical: show stoppers (the system is unusable)
High: The system is very hard to use, and some cases are prone to escalate into
critical issues if not taken care of.
Medium: The system has a major functional bug that is not critical but needs to be
fixed before the AUT can go to the production environment.
Low: cosmetic (GUI related)
What are the entrance and exit criteria in the system test?
Entrance and exit criteria of each testing phase is written in the master test plan.
Entrance Criteria:
- Integration exit criteria have been successfully met.
- All installation documents are completed.
- All shippable software has been successfully built
- System test plan is baselined by completing the walkthrough of the test plan.
- Test environment should be setup.
- All severity 1 MR’s of integration test phase should be closed.
Exit Criteria:
- All the test cases in the test plan should be executed.
- All MR’s/defects are either closed or deferred.
- Regression testing cycle should be executed after closing the MR’s.
- All documents are reviewed, finalized and signed off.
If there are no requirements, how will you write your test plan?
If there are no requirements we try to gather as much details as possible from:
• Business Analysts
• Developers (If accessible)
• Previous Version documentation (if any)
• Stakeholders (If accessible)
• Prototypes.
The daily build has little value without the smoke test. The smoke test is the sentry
that guards against deteriorating product quality and creeping integration problems.
Without it, the daily build becomes just a time-wasting exercise in ensuring that you
have a clean compile every day.
The smoke test must evolve as the system evolves. At first, the smoke test will
probably test something simple, such as whether the system can say, "Hello, World."
As the system develops, the smoke test will become more thorough. The first test
might take a matter of seconds to run; as the system grows, the smoke test can
grow to 30 minutes, an hour, or more.
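A minimal sketch of what such an early smoke test might look like (the myapp package and its say_hello function are placeholders invented for the example, not a real API):

# Minimal smoke test sketch run against each daily build.
import sys

def smoke_test():
    checks = []

    # 1. Can the application's main package even be imported?
    try:
        import myapp  # placeholder for the real application package
        checks.append(("import application", True))
    except Exception:
        checks.append(("import application", False))
        return checks  # no point continuing if the build does not load

    # 2. Does the most basic end-to-end operation still work?
    try:
        result = myapp.say_hello("World")          # placeholder API
        checks.append(("hello world", result == "Hello, World"))
    except Exception:
        checks.append(("hello world", False))
    return checks

if __name__ == "__main__":
    results = smoke_test()
    for name, passed in results:
        print(("PASS" if passed else "FAIL"), name)
    # A failed smoke test should reject the daily build outright.
    sys.exit(0 if all(passed for _, passed in results) else 1)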
What is pre-condition data?
Data that must be set up in the system before the test is executed.
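A small sketch of the idea, using a unittest fixture to load the pre-condition data before each test runs (the account store and amounts are invented for the example):

# Sketch of pre-condition data: the records a test needs must exist in the
# system before the test steps run. Here a setUp fixture loads them into an
# in-memory store standing in for the real test database.
import unittest

class WithdrawalTest(unittest.TestCase):
    def setUp(self):
        # Pre-condition data: an account with a known balance must exist
        # before the withdrawal scenario can be executed.
        self.accounts = {"ACC-100": 500.0}

    def withdraw(self, account_id, amount):
        # Stand-in for the operation under test.
        if self.accounts[account_id] < amount:
            raise ValueError("insufficient funds")
        self.accounts[account_id] -= amount
        return self.accounts[account_id]

    def test_withdrawal_reduces_balance(self):
        self.assertEqual(self.withdraw("ACC-100", 200.0), 300.0)

if __name__ == "__main__":
    unittest.main()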
What are the best web sites that you frequently visit to upgrade your QA
skills?
http://www.softwareqatest.com/
http://sqp.asq.org/
How do you analyze your test results? What metrics do you try to provide?
OR
How do you view test results?
A test log is created for analyzing the test results. This is a chronological record of the
Test executions and events that happened during testing. It includes the following
sections:
Description: What’s being tested, including Version ID, where testing is being done,
what hardware and all other configuration information.
Activity and Event Entries: What happened including Execution Description: The
procedure used.
Procedure Result: What happened. What did you see and where did you store the
output?
Environment Information: Any changes (hardware substitution) made specifically
for this test.
Unexpected Events: What happened before and after problem/bug occurred.
Incident/Bug Report Identifiers: Problem Report number
If you come onboard, give me a general idea of what your first overall tasks
will be as far as starting a quality effort?
Try to learn about the application, the environment and the prototypes to get a better
understanding of the application and the existing testing efforts.
How do you differentiate the roles of Quality Assurance Manager and
Project Manager?
A quality assurance manager's responsibilities include setting up the standards, the
methodology and the strategies for testing the application and providing guidelines
to the QA team. The project manager is responsible for both testing and development
activities.
A three-tier system is one that has presentation components, business logic and data
access physically running on different platforms. Web applications are perfect for
three-tier architecture, as the presentation layer is necessarily separate, and the
business and data components can be divided up much like a client-server
application.
What is Internet?
The Internet, sometimes called simply "the Net," is a worldwide system of computer
networks - a network of networks in which users at any one computer can, if they
have permission, get information from any other computer. Physically, the Internet
uses a portion of the total resources of the currently existing public
telecommunication networks. Technically, what distinguishes the Internet is its use
of a set of protocols called TCP/IP (for Transmission Control Protocol/Internet
Protocol). Two recent adaptations of Internet technology, the intranet and the
extranet, also make use of the TCP/IP protocol.
What is Intranet?
An intranet is a private network that is contained within an enterprise. It may consist
of many interlinked local area networks and also use leased lines in the Wide Area
Network. The main purpose of an intranet is to share company information and
computing resources among employees. An intranet can also be used to facilitate
working in groups and for teleconferences.
What is Extranet?
An extranet is a private network that uses the Internet protocol and the public
telecommunication system to securely share part of a business's information or
operations with suppliers, vendors, partners, customers, or other businesses. An
extranet can be viewed as part of a company's intranet that is extended to users
outside the company. It has also been described as a "state of mind" in which the
Internet is perceived as a way to do business with other companies as well as to sell
products to customers.
What is an applet?
A program written in the Java(TM) programming language to run within a web
browser compatible with the Java platform, such as HotJava(TM) or Netscape
Navigator(TM).
What is ISO-9000?
A set of international standards for both quality management and quality assurance
that has been adopted by over 90 countries worldwide. The ISO 9000 standards
apply to all types of organizations, large and small, and in many industries. The ISO
9000 series classifies products into generic product categories: hardware, software,
processed materials, and services.
What is QMO?
The QMO is a set of processes and guidelines that software systems projects and
products that are built under a contract (with a customer) must follow to comply with
ISO-9000 standards. ISO-9000 states that the guidelines for software development
must be documented.
What is an Object?
An object is a software bundle of related variables and methods. Software objects
are often used to model real-world objects you find in everyday life
What is class?
A class is a blueprint or prototype that defines the variables and the methods
common to all objects of a certain kind.
What is encapsulation? Give one example.
Encapsulation is the ability to provide a well-defined interface to a set of functions in
a way which hides their internal workings. In object-oriented programming, the
technique of keeping together data structures and the methods (procedures) which
act on them. One example is sketched below.
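One concrete example (the class and its methods are invented for the illustration): a bank account keeps its balance together with the deposit and withdraw methods that act on it, and callers use only that interface.

# Example of encapsulation: the balance is kept together with the methods that
# act on it, and callers use the public interface rather than the internal field.
class BankAccount:
    def __init__(self, opening_balance=0.0):
        self._balance = opening_balance   # internal state, private by convention

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def balance(self):
        # The only sanctioned way to read the internal state.
        return self._balance

if __name__ == "__main__":
    account = BankAccount(100.0)
    account.deposit(50.0)
    account.withdraw(30.0)
    print(account.balance())   # 120.0 - callers never touch _balance directly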
What different type of test cases you wrote in the test plan?
Test cases for interface, functionality, security, load and performance testing.
What development model should programmers and the test group use?
A Development Model, which helps adopt a structured approach in assessment,
design, integration and implementation of a project and in extending relevant
training and support. Each of these stages is necessarily accompanied by client
inputs, checkpoints and reviews to ensure successful systems implementation.
Basically there are many types of development models to support the development
of high-quality software products. The two most widely used models are Waterfall
and Spiral development model.
Waterfall development model encourages the development team to specify the
business functionality of the software prior to developing a system.
Spiral development model combines the waterfall development model and the
prototype approach, which is a series of partial implementations of the product.
What is inspection?
Inspection: An inspection is more formalized than a 'walkthrough', typically with 3-
8 people including a moderator, reader, and a recorder to take notes. The subject of
the inspection is typically a document such as a requirements spec or a test plan,
and the purpose is to find problems and see what's missing, not to fix anything.
Attendees should prepare for this type of meeting by reading thru the document;
most problems will be found during this preparation. The result of the inspection
meeting should be a written report. Thorough preparation for inspections is difficult,
painstaking work, but is one of the most cost effective methods of ensuring quality.
What is Walkthrough?
Walkthrough: A 'walkthrough' is an informal meeting for evaluation or informational
purposes. Little or no preparation is usually required.
What is the role of the test group vis-a vis documentation, tech support, and
so forth?
The test group verifies the documentation through documentation testing, making sure it
is correct, complete and easy to understand. For tech support, the application's help
content is also tested, and once the application is deployed the test group plays an
important role in providing technical support.
How much interaction with users should testers have, and why?
Testers should have good interaction with the users. It is basically the testers who
make sure the user requirements are satisfied and who judge whether the system is
stable and sufficiently free of defects to be released to the users.
How should you learn about problems discovered in the field, and what
should you learn from those problems?
We can learn about problems discovered in the field from customer support reports and
from past experience with similar problems. We need to make sure that such errors or
defects do not occur again, and learn how to handle them more effectively.
What issues come up in test automation, and how do you manage them?
Basically test automation refers to anything that streamlines the testing process,
allowing things to move along more quickly and with less delay. The issues that
come up in test automation include deciding which test cases to automate and which
not to automate. Good candidates are test cases that have to be run often (regression
testing) and tests that are time consuming or need a lot of resources to perform
manually, such as measuring performance and load/stress characteristics. We manage
these issues by providing good documentation and proper scheduling of test cases.
How do you get programmers to build testability support into their code?
Good documentation and modular programming techniques help to test the application
efficiently. When defects are detected, they should be reported in a way that makes them
reproducible, including the input data used and the output results observed for that
defect. One common technique is sketched below.
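One common technique for building testability into code is dependency injection; the sketch below (with invented function names and times) lets a test substitute a fixed clock so that a time-dependent defect can be reproduced exactly:

# Sketch of designing for testability via dependency injection: the caller can
# inject the clock, so tests can substitute a predictable value.
from datetime import datetime

def is_business_hours(now=None, clock=datetime.now):
    """Return True between 09:00 and 17:00; the clock is injectable."""
    current = now if now is not None else clock()
    return 9 <= current.hour < 17

# Production code simply calls is_business_hours() and uses the real clock.
# A test injects a fixed time, so the result is reproducible:
def test_outside_business_hours():
    assert is_business_hours(now=datetime(2024, 1, 15, 20, 0)) is False

def test_inside_business_hours():
    assert is_business_hours(now=datetime(2024, 1, 15, 10, 30)) is True

if __name__ == "__main__":
    test_outside_business_hours()
    test_inside_business_hours()
    print("testability example passed")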
In general how do you see automation fitting into the overall process of
testing?
Automation is used for regression testing (repeatability and reusability of scripts).
Using Automation we can make the testing process faster and reliable.
How does unit testing play a role in the development/software lifecycle?
Software passes through various phases of testing like unit, integration, system,
system integration and user acceptance testing. Unit testing is a white-box testing
done by the developer. The developer fully tests the module that he has developed.
He makes sure that there are no logical errors and the module meets the
requirements.
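A minimal sketch of a developer-written unit test using the standard unittest module (the monthly_payment function is invented for the illustration):

# Minimal example of developer-level unit testing with unittest.
import unittest

def monthly_payment(principal, annual_rate, months):
    """Simple payment split, used only to illustrate the idea."""
    if months <= 0:
        raise ValueError("months must be positive")
    return round((principal * (1 + annual_rate)) / months, 2)

class MonthlyPaymentTest(unittest.TestCase):
    def test_typical_loan(self):
        self.assertEqual(monthly_payment(1200.0, 0.0, 12), 100.0)

    def test_invalid_term_rejected(self):
        with self.assertRaises(ValueError):
            monthly_payment(1200.0, 0.0, 0)

if __name__ == "__main__":
    unittest.main()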
1. Requirements are poorly written when requirements are unclear, incomplete, too general,
or not testable; therefore there will be problems.
2. The schedule is unrealistic if too much work is crammed in too little time.
3. Software testing is inadequate if no one knows whether or not the software is any good
until customers complain or the system crashes.
4. It's extremely common that new features are added after development is underway.
5. Miscommunication either means the developers don't know what is needed, or customers
have unrealistic expectations and therefore problems are guaranteed.
1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and
testable. All players should agree to requirements. Use prototypes to help nail down
requirements.
2. Have schedules that are realistic. Allow adequate time for planning, design, testing, bug
fixing, re-testing, changes and documentation. Personnel should be able to complete the
project without burning out.
3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan
for sufficient time for both testing and bug fixing.
4. Avoid new features. Stick to initial requirements as much as possible. Be prepared to
defend design against changes and additions, once development has begun and be
prepared to explain consequences. If changes are necessary, ensure they're adequately
reflected in related schedule changes. Use prototypes early on so customers'
expectations are clarified and customers can see what to expect; this will minimize
changes later on.
5. Communicate. Require walkthroughs and inspections when appropriate; make extensive
use of e-mail, networked bug-tracking tools, tools of change management. Ensure
documentation is available and up-to-date. Use documentation that is electronic, not
paper. Promote teamwork and cooperation.
Good test engineers have a "test to break" attitude; they take the point of view of the customer,
have a strong desire for quality, and pay attention to detail. Tact and diplomacy are useful in
maintaining a cooperative relationship with developers, as is an ability to communicate with both
technical and non-technical people. Previous software development experience is also helpful as
it provides a deeper understanding of the software development process, gives the test engineer
an appreciation for the developers' point of view and reduces the learning curve in automated test
tool programming.
to enter their previously-assigned password to access the application". Care should be taken to
involve all of a project's significant customers in the requirements process. Customers could be
in-house or external and could include end-users, customer acceptance test engineers, testers,
customer contract officers, customer management, future software maintenance engineers,
salespeople, and anyone who could later derail the project if his/her expectations aren't met; such
people should be included as customers, if possible. In some organizations, requirements may end up in
high-level project plans, functional specification documents, design documents, or other
documents at various levels of detail. No matter what they are called, some type of
documentation with detailed requirements will be needed by test engineers in order to properly
plan and execute tests. Without such documentation there will be no clear-cut way to determine if
a software application is performing correctly.
Please note, the process of developing test cases can help find problems in the requirements or
design of an application, since it requires you to completely think through the operation of the
application. For this reason, it is useful to prepare test cases early in the development cycle, if
possible.
• Ensure the code is well commented and well documented; this makes changes easier for
the developers.
• Use rapid prototyping whenever possible; this will help customers feel sure of their
requirements and minimize changes.
• In the project's initial schedule, allow for some extra time commensurate with probable
changes.
• Move new requirements to a 'Phase 2' version of an application and use the original
requirements for the 'Phase 1' version.
• Negotiate to allow only easily implemented new requirements into the project; move more
difficult, new requirements into future versions of the application.
• Ensure customers and management understand scheduling impacts, inherent risks and
costs of significant requirements changes. Then let management or the customers decide
if the changes are warranted; after all, that's their job.
• Balance the effort put into setting up automated testing with the expected effort required
to redo the automated tests to deal with changes.
• Design some flexibility into automated test scripts;
• Focus initial automated testing on application aspects that are most likely to remain
unchanged;
• Devote appropriate effort to risk analysis of changes, in order to minimize regression-
testing needs;
• Design some flexibility into test cases; this is not easily done; the best bet is to minimize
the detail in the test cases, or set up only higher-level generic-type test plans;
• Focus less on detailed test plans and test cases and more on ad-hoc testing with an
understanding of the added risk this entails.
Q. What is parallel/audit testing?
A: Parallel/audit testing is testing where the user reconciles the output of the new system to the
output of the current system to verify the new system performs the operations correctly.
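A small sketch of that reconciliation step (both 'systems' and the transaction batch are stand-ins invented for the example):

# Sketch of parallel/audit testing: the same input batch is run through the
# current system and the new system, and the outputs are reconciled per record.

def current_system(transactions):
    return {t["id"]: round(t["amount"] * 1.05, 2) for t in transactions}

def new_system(transactions):
    return {t["id"]: round(t["amount"] * 1.05, 2) for t in transactions}

def reconcile(old_results, new_results):
    """Return the ids whose outputs differ between the two systems."""
    mismatches = []
    for key in sorted(set(old_results) | set(new_results)):
        if old_results.get(key) != new_results.get(key):
            mismatches.append(key)
    return mismatches

if __name__ == "__main__":
    batch = [{"id": "T1", "amount": 100.0}, {"id": "T2", "amount": 250.0}]
    diffs = reconcile(current_system(batch), new_system(batch))
    print("records that do not reconcile:", diffs or "none")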
• Assure the successful launch of your product by discovering bugs and design flaws,
before users get discouraged, before shareholders lose their cool and before employees
get bogged down;
• Help the work of your development staff, so the development team can devote its time to
building up your product;
• Promote continual improvement;
• Provide documentation required by FDA, FAA, other regulatory agencies and your
customers;
• Save money by discovering defects 'early' in the design process, before failures occur in
production, or in the field;
• Save the reputation of your company by discovering bugs and design flaws; before bugs
and design flaws damage the reputation of your company.
This methodology can be used and molded to your organization's needs. Rob Davis believes that
using this methodology is important in the development and in ongoing maintenance of his
customers' applications.
• A description of the required hardware and software components, including test tools.
This information comes from the test environment, including test tool data.
• A description of roles and responsibilities of the resources required for the test and
schedule constraints. This information comes from manhours and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes from
requirements, change request, technical and functional design documents.
• Requirements that the system cannot provide, e.g. system limitations.
• An approved and signed off test strategy document, test plan, including test cases.
• Testing issues requiring resolution. Usually this requires additional negotiation at the
project management level.
• Test cases and scenarios are designed to represent both typical and unusual situations
that may occur in the application.
• Test engineers define unit test requirements and unit test cases. Test engineers also
execute unit test cases.
• It is the test team who, with assistance of developers and clients, develops test cases
and scenarios for integration and system testing.
• Test scenarios are executed through the use of test procedures or scripts.
• Test procedures or scripts define a series of steps necessary to perform one or more test
scenarios.
• Test procedures or scripts include the specific data that will be used for testing the
process or transaction.
• Test procedures or scripts may cover multiple test scenarios.
• Test scripts are mapped back to the requirements and traceability matrices are used to
ensure each test is within scope.
• Test data is captured and baselined, prior to testing. This data serves as the foundation
for unit and system testing and used to exercise system functionality in a controlled
environment.
• Some output data is also baselined for future comparison. Baselined data is used to
support future application maintenance via regression testing.
• A pre-test meeting is held to assess the readiness of the application and the environment
and data to be tested. A test readiness document is created to indicate the status of the
entrance criteria of the release.
• Approved documents of test scenarios, test cases, test conditions and test data.
• Reports of software design issues, given to software developers for correction.
• The output from the execution of test procedures is known as test results. Test results
are evaluated by test engineers to determine whether the expected results have been
obtained. All discrepancies/anomalies are logged and discussed with the software team
lead, hardware test lead, programmers, software engineers and documented for further
investigation and resolution. Every company has a different process for logging and
reporting bugs/defects uncovered during testing.
• Pass/fail criteria are used to determine the severity of a problem, and results are
recorded in a test summary report. The severity of a problem, found during system
testing, is defined in accordance to the customer's risk assessment and recorded in their
selected tracking tool.
• Proposed fixes are delivered to the testing environment, based on the severity of the
problem. Fixes are regression tested and flawless fixes are migrated to a new baseline.
Following completion of the test, members of the test team prepare a summary report.
The summary report is reviewed by the Project Manager, Software QA (SWQA) Manager
and/or Test Team Lead.
• After a particular level of testing has been certified, it is the responsibility of the
Configuration Manager to coordinate the migration of the release software components to
the next test level, as documented in the Configuration Management Plan. The software
is only migrated to the production environment after the Project Manager's formal
acceptance.
• The test team reviews test document problems identified during testing, and updates
documents where appropriate.
• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and Detailed Design Documents, i.e. Requirements Document, Software Design
Document.
• Software that has been migrated to the test environment, i.e. unit-tested code, via the
Configuration/Build Manager.
• Test Readiness Document.
• Document Updates.
• Log and summary of the test results. Usually this is part of the Test Report. This needs to
be approved and signed-off with revised testing deliverables.
• Changes to the code, also known as test fixes.
• Test document problems uncovered as a result of testing. Examples are Requirements
document and Design Document problems.
• Reports on software design issues, given to software developers for correction.
Examples are bug reports on code issues.
• Formal record of test incidents, usually part of problem tracking.
Baselined package, also known as tested source and object code, ready for migration to the next
level.
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality, process,
system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data configurations,
interfaces to other systems
• Test environment validity analysis - differences between the test and production systems and their
impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture
software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to help track
the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact
persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.
(See the Bookstore section's 'Software Testing' and 'Software QA' categories for useful books with more
information.)
• A test case is a document that describes an input, action, or event and an expected response, to
determine if a feature of an application is working correctly. A test case should contain particulars
such as test case identifier, test case name, objective, test conditions/setup, input data
requirements, steps, and expected results.
Note that the process of developing test cases can help find problems in the requirements or design of an
application, since it requires completely thinking through the operation of the application. For this reason,
it's useful to prepare test cases early in the development cycle if possible.
36). What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is no easy
solution in this situation, other than:
38). How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration
should be given to the interactions between html pages, TCP/IP communications, Internet connections,
firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and
applications that run on the server side (such as cgi scripts, database interfaces, logging applications,
dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various
versions of each, small but sometimes significant differences between them, variations in connection
speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing
for web sites can become a major ongoing effort. Other considerations might include:
• What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of
performance is required under such loads (such as web server response time, database query
response times). What kinds of tools will be needed for performance testing (such as web load
testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
• Who is the target audience? What kind of browsers will they be using? What kind of connection
speeds will they be using? Are they intra-organization (thus with likely high connection speeds
and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser
types)?
• What kind of performance is expected on the client side (e.g., how fast should pages appear, how
fast should animations, applets, etc. load and run)?
• Will downtime for server and content maintenance/upgrades be allowed? How much?
• What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it
expected to do? How can it be tested?
• How reliable are the site's Internet connections required to be? And how does that affect backup
system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content, and what are the
requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations will be allowed for
targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics throughout a site
or parts of a site?
• How will internal and external links be validated and updated? How often?
• Can testing be done on the production system, or will a separate test system be required? How are
browser caching, variations in browser option settings, dial-up connection variabilities, and
real-world Internet 'traffic congestion' problems to be accounted for in testing?
• How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?
• How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked,
controlled, and tested?
Some sources of site security information include the Usenet newsgroup 'comp.security.announce' and links
concerning web site security in the 'Other Resources' section.
Some usability guidelines to consider - these are subjective and may or may not apply to a given situation
(Note: more information on usability testing issues can be found in articles about web site usability in
the 'Other Resources' section):
• Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger,
provide internal links within the page.
• The page layouts and design elements should be consistent throughout a site, so that it's clear to
the user that they're still within a site.
• Pages should be as browser-independent as possible, or pages should be provided or generated
based on the browser-type.
• All pages should have links external to the page; there should be no dead-end pages.
• The page owner, revision date, and a link to a contact person or organization should be included
on each page.
Many new web site test tools are appearing and more than 230 of them are listed in the 'Web Test Tools'
section.
Extreme Programming (XP) is a software development approach for small teams on risk-
prone projects with unstable requirements. It was created by Kent Beck, who described
the approach in his book 'Extreme Programming Explained' (see the Softwareqatest.com
Books page). Testing ('extreme testing') is a core aspect of Extreme Programming.
Programmers are expected to write unit and functional test code first - before the
application is developed. Test code is under source control along with the rest of the
code. Customers are expected to be an integral part of the project team and to help
develop scenarios for acceptance/black box testing. Acceptance tests are preferably
automated, and are modified and rerun for each of the frequent development iterations.
QA and test personnel are also required to be an integral part of the project team.
Detailed requirements documentation is not used, and frequent re-scheduling, re-
estimating, and re-prioritizing is expected. For more info see the XP-related listings in the
Softwareqatest.com 'Other Resources' section.
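To make the test-first idea concrete, here is a minimal sketch using Python's unittest module. The add_item() function and the shopping-cart scenario are hypothetical illustrations, not part of this FAQ; under XP the test class below would be written (and run, and fail) before add_item() itself exists.

# Minimal test-first sketch; add_item() is a hypothetical function, not from this FAQ.
# In XP the test below is written first and fails until add_item() is implemented.
import unittest

def add_item(cart, item, price):
    # Implementation written after (and driven by) the test, as XP prescribes.
    cart[item] = cart.get(item, 0) + price
    return cart

class CartTest(unittest.TestCase):
    def test_add_item_accumulates_price(self):
        cart = {}
        add_item(cart, "book", 10.0)
        add_item(cart, "book", 5.0)
        self.assertEqual(cart["book"], 15.0)

if __name__ == "__main__":
    unittest.main()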
One of the most common but unfortunate misuses of terminology is treating "load
testing" and "stress testing" as synonymous. The consequence of this confusion is
usually that the system is neither properly "load tested" nor subjected to a
meaningful stress test.
3. A third use of the term is as a test whose objective is to determine the maximum
sustainable load the system can handle.
In this usage, "load testing" is merely testing at the highest transaction arrival rate in
performance testing.
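As an illustration of that third usage, the following Python sketch drives a fixed transaction arrival rate against a target URL and reports response times. The URL, arrival rate, and duration are assumptions for the sake of the example, not values taken from this FAQ.

# Minimal load-driver sketch: fires requests at a fixed arrival rate and
# records response times. URL, rate, and duration are illustrative assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"   # hypothetical system under test
ARRIVAL_RATE = 5              # requests per second (assumed target load)
DURATION_SECONDS = 10

def hit(url):
    start = time.time()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.time() - start

def run():
    futures = []
    with ThreadPoolExecutor(max_workers=20) as pool:
        end = time.time() + DURATION_SECONDS
        while time.time() < end:
            futures.append(pool.submit(hit, URL))
            time.sleep(1.0 / ARRIVAL_RATE)      # hold the arrival rate steady
        timings = [f.result() for f in futures]
    print("max/avg response time: %.3fs / %.3fs"
          % (max(timings), sum(timings) / len(timings)))

if __name__ == "__main__":
    run()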
- WEB TESTING SPECIFICS
- Internet Software - Quality Characteristics
1. Functionality - Verified content
2. Reliability - Security and availability
3. Efficiency - Response Times
4. Usability - High user satisfaction
5. Portability - Platform Independence
- Link Testing
You must ensure that all the hyperlinks are valid
This applies to both internal and external links
Internal links shall be relative, to minimize the overhead and faults when the web site is
moved to the production environment
External links shall be referenced to absolute URLs
External links can change without control - So, automate regression testing
Remember that external non-home-page links are more likely to break
Be careful with links in "What's New" sections; they are likely to become obsolete
Check that content can be accessed by means of: Search engine, Site Map
Check the accuracy of Search Engine results
Check that the web site's Error 404 ("Not Found") response is handled by means of a user-friendly
page
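To automate part of the link checking described above, here is a minimal Python sketch using only the standard library. It fetches one page, extracts the href targets, and flags any that return an HTTP error such as 404; the start page shown is the example URL used elsewhere in this section.

# Minimal link-check sketch: fetch a page, extract hrefs, flag broken links.
import urllib.request
import urllib.error
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(html)
    for href in collector.links:
        target = urljoin(page_url, href)          # resolves relative internal links
        try:
            urllib.request.urlopen(target, timeout=10)
        except urllib.error.HTTPError as err:     # e.g. 404 Not Found
            print("BROKEN", page_url, "->", target, "(HTTP %s)" % err.code)
        except urllib.error.URLError as err:      # unreachable host, bad URL, etc.
            print("BROKEN", page_url, "->", target, "(%s)" % err.reason)

if __name__ == "__main__":
    check_links("http://www.arkinsys.com/")       # example URL from this document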
- Compatibility Testing
Check the site's behavior across the industry-standard browsers. The main issues
involve how the different browsers handle tables, images, caching, and scripting
languages
In cross-browser testing, check for:
Behavior of buttons
Support of JavaScript
Support of tables
Acrobat, Real, Flash behavior
ActiveX control support
Java compatibility
Text size
- Usability Testing
Aspects to be tested with care:
Coherence of look and feel
Navigational aids
User Interactions
Printing
With respect to
Normal behavior
Destructive behavior
Inexperienced users
Usability Tips
• Differentiate visited links from unvisited links
• Never use graphics where HTML text will do
• Make GUI design predictable and consistent
• Check that printed pages fit appropriately on paper pages [consider that many people just
surf and print; check especially the pages for which format is important, e.g. an
application form that can either be filled in on-line or printed, filled in, and faxed]
- Portability Testing
• Check that links to URLs outside the web site are in canonical form (e.g.
http://www.arkinsys.com/)
• Check that links to URLs within the web site are in relative form (e.g.
…./arkinsys/images/images.gif)
- Cookies Testing
A "Cookie" is a small piece of information sent by the web server to store on a web browser. So it can later
be read back from the browser. This is useful for having the browser remember some specific information.
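A minimal Python sketch of basic cookie testing follows; the URL is a hypothetical stand-in, not from this FAQ. It captures whatever cookies the server sets and replays them on a second request, which is exactly the remembering behavior cookie testing has to verify.

# Minimal cookie-inspection sketch; the URL is an illustrative assumption.
import urllib.request
from http.cookiejar import CookieJar

jar = CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# First request: the server may issue Set-Cookie headers, which land in the jar.
opener.open("http://example.com/", timeout=10)
for cookie in jar:
    print("stored cookie:", cookie.name, "=", cookie.value, "expires:", cookie.expires)

# Second request through the same opener sends the stored cookies back, so the
# tester can verify that the site actually remembers the information.
opener.open("http://example.com/", timeout=10)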
• Correct for difficult test data
• Proven correct using mathematical logic
• Obeys specifications for all valid data
• Obeys specifications for likely erroneous input
• Obeys specifications for all possible input
- TEST PLAN
A software project test plan is a document that describes the objectives, scope, approach, and focus of a
software testing effort. The process of preparing a test plan is a useful way to think through the efforts
needed to validate the acceptability of a software product. The completed document will help people
outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to
be useful but not so thorough that no one outside the test group will read it. The following are some of the
items that might be included in a test plan, depending on the particular project:
• Title of the Project
• Identification of document including version numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans,
etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality,
process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
• Test environment setup and configuration issues
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen
capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to
help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel pre-training needs
• Test site/location
• Relevant proprietary, classified, security, and licensing issues.
• Appendix - glossary, acronyms, etc.
- TEST CASES
- What's a 'test case'?
A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly. A test case
should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.
Note that the process of developing test cases can help find problems in the requirements
or design of an application, since it requires completely thinking through the operation
of the application. For this reason, it's useful to prepare test cases early in the
development cycle if possible.
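For illustration only, the following Python sketch records the particulars listed above in a simple structure; the field names and the sample login test case are hypothetical, not a prescribed format.

# Minimal test-case record sketch; field names and values are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: str
    steps: List[str] = field(default_factory=list)
    expected_result: str = ""

login_tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Reject invalid password",
    objective="Verify the login screen rejects a wrong password",
    setup="Test user 'qa_user' exists with a known password",
    input_data="username=qa_user, password=wrong",
    steps=["Open the login page", "Enter the credentials", "Click 'Log in'"],
    expected_result="An 'invalid credentials' message is shown; no session is created",
)
print(login_tc.identifier, "-", login_tc.name)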
- TESTING COVERAGE
• Line coverage. Test every line of code (Or Statement coverage: test every statement).
• Branch coverage. Test every line, and every branch on multi-branch lines.
• N-length sub-path coverage. Test every sub-path through the program of length N. For
example, in a 10,000-line program, test every possible 10-line sequence of execution.
• Path coverage. Test every path through the program, from entry to exit. The number of
paths is impossibly large to test.
• Multicondition or predicate coverage. Force every logical operand to take every
possible value. Two different conditions within the same test may result in the same
branch, and so branch coverage would only require testing one of them. (A small sketch of
line, branch, and multicondition coverage appears at the end of this list.)
• Trigger every assertion check in the program. Use impossible data if necessary.
• Loop coverage. "Detect bugs that exhibit themselves only when a loop is executed more
than once."
• Low Memory Conditions - In the event the application fails due to low memory
conditions, it should fail in a graceful manner. This means that error logs should be
written detailing the conditions that caused the failure, any orders the user may have
open are closed, an error message is given informing the user of the condition and the
actions that will be taken, etc.
• Every module, object, component, tool, subsystem, etc. This seems obvious until you
realize that many programs rely on off-the-shelf components. The programming staff
doesn't have the source code to these components, so measuring line coverage is
impossible. At a minimum (which is what is measured here), you need a list of all these
components and test cases that exercise each one at least once.
• Fuzzy decision coverage. If the program makes heuristically-based or similarity-based
decisions, and uses comparison rules or data sets that evolve over time, check every rule
several times over the course of training.
• Relational coverage. "Checks whether the subsystem has been exercised in a way that
tends to detect off-by-one errors" such as errors caused by using < instead of <=. This
coverage includes:
• Every boundary on every input variable.
• Every boundary on every output variable.
• Every boundary on every variable used in intermediate calculations.
• Data coverage. At least one test case for each data item / variable / field in the program.
• Constraints among variables: Let X and Y be two variables in the program. X and Y
constrain each other if the value of one restricts the values the other can take. For
example, if X is a transaction date and Y is the transaction's confirmation date, Y can't
occur before X.
• Each appearance of a variable. Suppose that you can enter a value for X on three
different data entry screens, the value of X is displayed on another two screens, and it is
printed in five reports. Change X at each data entry screen and check the effect
everywhere else X appears.
• Every type of data sent to every object. A key characteristic of object-oriented
programming is that each object can handle any type of data (integer, real, string, etc.)
that you pass to it. So, pass every conceivable type of data to every object.
• Handling of every potential data conflict. For example, in an appointment-calendaring
program, what happens if the user tries to schedule two appointments at the same date
and time?
• Handling of every error state. Put the program into the error state, check for effects on
the stack, available memory, handling of keyboard input. Failure to handle user errors
well is an important problem, partially because about 90% of industrial accidents are
blamed on human error or risk-taking. Under the legal doctrine of foreseeable misuse,
the manufacturer is liable in negligence if it fails to protect the customer from the
consequences of a reasonably foreseeable misuse of the product.
• Every complexity / maintainability metric against every module, object, subsystem, etc.
There are many such measures. Jones lists 20 of them. People sometimes ask whether
any of these statistics are grounded in a theory of measurement or have practical value.
But it is clear that, in practice, some organizations find them an effective tool for
highlighting code that needs further investigation and might need redesign.
• Conformity of every module, subsystem, etc. against every corporate coding standard.
Several companies believe that it is useful to measure characteristics of the code, such as
total lines per module, ratio of lines of comments to lines of code, frequency of
occurrence of certain types of statements, etc. A module that doesn't fall within the
"normal" range might be summarily rejected (bad idea) or re-examined to see if there's a
better way to design this part of the program.
• Table-driven code. The table is a list of addresses or pointers or names of modules. In a
traditional CASE statement, the program branches to one of several places depending on
the value of an expression. In the table-driven equivalent, the program would branch to
the place specified in, say, location 23 of the table. The table is probably in a separate
data file that can vary from day to day or from installation to installation. By modifying
the table, you can radically change the control flow of the program without recompiling
or relinking the code. Some programs drive a great deal of their control flow this way,
using several tables. Coverage measures? Some examples:
• check that every expression selects the correct table element
• check that the program correctly jumps or calls through every table element
• check that every address or pointer that is available to be loaded into these tables is valid
(no jumps to impossible places in memory, or to a routine whose starting address has
changed)
• check the validity of every table that is loaded at any customer site.
• Every interrupt. An interrupt is a special signal that causes the computer to stop the
program in progress and branch to an interrupt handling routine. Later, the program
restarts from where it was interrupted. Interrupts might be triggered by hardware events
(I/O or signals from the clock that a specified interval has elapsed) or software (such as
error traps). Generate every type of interrupt in every way possible to trigger that
interrupt.
• Every interrupt at every task, module, object, or even every line. The interrupt handling
routine might change state variables, load data, use or shut down a peripheral device, or
affect memory in ways that could be visible to the rest of the program. The interrupt can
happen at any time-between any two lines, or when any module is being executed. The
program may fail if the interrupt is handled at a specific time. (Example: what if the
program branches to handle an interrupt while it's in the middle of writing to the disk
drive?)
• The number of test cases here is huge, but that doesn't mean you don't have to think
about this type of testing. This is path testing through the eyes of the processor (which
asks, "What instruction do I execute next?" and doesn't care whether the instruction
comes from the mainline code or from an interrupt handler) rather than path testing
through the eyes of the reader of the mainline code. Especially in programs that have
global state variables, interrupts at unexpected times can lead to very odd results.
• Every anticipated or potential race. Imagine two events, A and B. Both will occur, but
the program is designed under the assumption that A will always precede B. This sets up
a race between A and B - if B ever precedes A, the program will probably fail. To achieve
race coverage, you must identify every potential race condition and then find ways,
using random data or systematic test case selection, to attempt to drive B to precede A in
each case.
• Races can be subtle. Suppose that you can enter a value for a data item on two different
data entry screens. User 1 begins to edit a record, through the first screen. In the process,
the program locks the record in Table 1. User 2 opens the second screen, which calls up
a record in a different table, Table 2. The program is written to automatically update the
corresponding record in the Table 1 when User 2 finishes data entry. Now, suppose that
User 2 finishes before User 1. Table 2 has been updated, but the attempt to synchronize
Table 1 and Table 2 fails. What happens at the time of failure, or later if the
corresponding records in Table 1 and 2 stay out of synch?
• Every time-slice setting. In some systems, you can control the grain of switching
between tasks or processes. The size of the time quantum that you choose can make race
bugs, time-outs, interrupt-related problems, and other time-related problems more or less
likely. Of course, coverage is a difficult problem here because you aren't just varying
time-slice settings through every possible value. You also have to decide which tests to
run under each setting. Given a planned set of test cases per setting, the coverage
measure looks at the number of settings you've covered.
• Varied levels of background activity. In a multiprocessing system, tie up the processor
with competing, irrelevant background tasks. Look for effects on races and interrupt
handling. Similar to time-slices, your coverage analysis must specify
• categories of levels of background activity (figure out something that makes sense) and
• all timing-sensitive testing opportunities (races, interrupts, etc.).
• Each processor type and speed. Which processor chips do you test under? What tests do
you run under each processor? You are looking for:
• speed effects, like the ones you look for with background activity testing, and
• consequences of processors' different memory management rules, and
• floating point operations, and
• any processor-version-dependent problems that you can learn about.
• Every opportunity for file / record / field locking.
• Every dependency on the locked (or unlocked) state of a file, record or field.
• Every opportunity for contention for devices or resources.
• Performance of every module / task / object. Test the performance of a module then
retest it during the next cycle of testing. If the performance has changed significantly,
you are either looking at the effect of a performance-significant redesign or at a
symptom of a new bug.
• Free memory / available resources / available stack space at every line or on entry into
and exit out of every module or object.
• Execute every line (branch, etc.) under the debug version of the operating system. This
shows illegal or problematic calls to the operating system.
• Vary the location of every file. What happens if you install or move one of the
program's component, control, initialization or data files to a different directory or drive
or to another computer on the network?
• Check the release disks for the presence of every file. It's amazing how often a file
vanishes. If you ship the product on different media, check for all files on all media.
• Every embedded string in the program. Use a utility to locate embedded strings. Then
find a way to make the program display each string.
• Operation of every function / feature / data handling operation under:
• Every program preference setting.
• Every character set, code page setting, or country code setting.
• The presence of every memory resident utility (inits, TSRs).
• Each operating system version.
• Each distinct level of multi-user operation.
• Each network type and version.
• Each level of available RAM.
• Each type / setting of virtual memory management.
• Compatibility with every previous version of the program.
• Ability to read every type of data available in every readable input file format. If a file
format is subject to subtle variations (e.g. CGM) or has several sub-types (e.g. TIFF) or
versions (e.g. dBASE), test each one.
• Write every type of data to every available output file format. Again, beware of subtle
variations in file formats-if you're writing a CGM file, full coverage would require you
to test your program's output's readability by every one of the main programs that read
CGM files.
• Every typeface supplied with the product. Check all characters in all sizes and styles. If
your program adds typefaces to a collection of fonts that are available to several other
programs, check compatibility with the other programs (nonstandard typefaces will
crash some programs).
• Every type of typeface compatible with the program. For example, you might test the
program with (many different) TrueType and Postscript typefaces, and fixed-sized
bitmap fonts.
• Every piece of clip art in the product. Test each with this program. Test each with other
programs that should be able to read this type of art.
• Every sound / animation provided with the product. Play them all under different
device (e.g. sound) drivers / devices. Check compatibility with other programs that
should be able to play this clip-content.
• Every supplied (or constructible) script to drive other machines / software (e.g.
macros) / BBS's and information services (communications scripts).
• All commands available in a supplied communications protocol.
• Recognized characteristics. For example, every speaker's voice characteristics (for
voice recognition software) or writer's handwriting characteristics (handwriting
recognition software) or every typeface (OCR software).
• Every type of keyboard and keyboard driver.
• Every type of pointing device and driver at every resolution level and ballistic setting.
• Every output feature with every sound card and associated drivers.
• Every output feature with every type of printer and associated drivers at every resolution
level.
• Every output feature with every type of video card and associated drivers at every
resolution level.
• Every output feature with every type of terminal and associated protocols.
• Every output feature with every type of video monitor and monitor-specific drivers at
every resolution level.
• Every color shade displayed or printed to every color output device (video card /
monitor / printer / etc.) and associated drivers at every resolution level. And check the
conversion to grey scale or black and white.
• Every color shade readable or scannable from each type of color input device at every
resolution level.
• Every possible feature interaction between video card type and resolution, pointing
device type and resolution, printer type and resolution, and memory level. This may
seem excessively complex, but sometimes crash bugs occur only under the pairing of
specific printer and video drivers at a high resolution setting. Other crashes have required
the pairing of a specific mouse driver and printer driver, of a mouse driver and video
driver, or a combination of mouse driver plus video driver plus ballistic setting.
• Every type of CD-ROM drive, connected to every type of port (serial / parallel / SCSI)
and associated drivers.
• Every type of writable disk drive / port / associated driver. Don't forget the fun you can
have with removable drives or disks.
• Compatibility with every type of disk compression software. Check error handling for
every type of disk error, such as full disk.
• Every voltage level from analog input devices.
• Every voltage level to analog output devices.
• Every type of modem and associated drivers.
• Every FAX command (send and receive operations) for every type of FAX card under
every protocol and driver.
• Every type of connection of the computer to the telephone line (direct, via PBX, etc.;
digital vs. analog connection and signaling); test every phone control command under
every telephone control driver.
• Tolerance of every type of telephone line noise and regional variation (including
variations that are out of spec) in telephone signaling (intensity, frequency, timing, other
characteristics of ring / busy / etc. tones).
• Every variation in telephone dialing plans.
• Every possible keyboard combination. Sometimes you'll find trap doors that the
programmer used as hotkeys to call up debugging tools; these hotkeys may crash a
debuggerless program. Other times, you'll discover an Easter Egg (an undocumented,
probably unauthorized, and possibly embarrassing feature). The broader coverage
measure is every possible keyboard combination at every error message and every
data entry point. You'll often find different bugs when checking different keys in
response to different error messages.
• Recovery from every potential type of equipment failure. Full coverage includes each
type of equipment, each driver, and each error state. For example, test the program's
ability to recover from full disk errors on writable disks. Include floppies, hard drives,
cartridge drives, optical drives, etc. Include the various connections to the drive, such as
IDE, SCSI, MFM, parallel port, and serial connections, because these will probably
involve different drivers.
• Function equivalence. For each mathematical function, check the output against a
known good implementation of the function in a different program. Complete coverage
involves equivalence testing of all testable functions across all possible input values.
• Zero handling. For each mathematical function, test when every input value,
intermediate variable, or output variable is zero or near-zero. Look for severe rounding
errors or divide-by-zero errors.
• Accuracy of every graph, across the full range of graphable values. Include values that
force shifts in the scale.
• Accuracy of every report. Look at the correctness of every value, the formatting of
every page, and the correctness of the selection of records used in each report.
• Accuracy of every message.
• Accuracy of every screen.
• Accuracy of every word and illustration in the manual.
• Accuracy of every fact or statement in every data file provided with the product.
• Accuracy of every word and illustration in the on-line help.
• Every jump, search term, or other means of navigation through the on-line help.
• Check for every type of virus / worm that could ship with the program.
• Every possible kind of security violation of the program, or of the system while using
the program.
• Check for copyright permissions for every statement, picture, sound clip, or other
creation provided with the program.
• Verification of the program against every program requirement and published
specification.
• Verification of the program against user scenarios. Use the program to do real tasks
that are challenging and well-specified. For example, create key reports, pictures, page
layouts, or other documents or events to match ones that have been featured by
competitive programs as interesting output or applications.
• Verification against every regulation (IRS, SEC, FDA, etc.) that applies to the data or
procedures of the program.
• Usability tests of:
• Every feature / function of the program.
• Every part of the manual.
• Every error message.
• Every on-line help topic.
• Every graph or report provided by the program.
• Localizability / localization tests:
• Every string. Check the program's ability to display and use the string if it is modified by
changing the length, using high or low ASCII characters, different capitalization rules,
etc.
• Compatibility with text handling algorithms under other languages (sorting, spell
checking, hyphenating, etc.)
• Every date, number and measure in the program.
• Hardware and drivers, operating system versions, and memory-resident programs that
are popular in other countries.
• Every input format, import format, output format, or export format that would be
commonly used in programs that are popular in other countries.
• Cross-cultural appraisal of the meaning and propriety of every string and graphic
shipped with the program.
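As noted earlier in this list, here is a minimal Python sketch contrasting line, branch, and multicondition coverage. The discount() function is a hypothetical illustration, not from this FAQ.

# Hypothetical discount() function used only to illustrate coverage measures.
def discount(member, total):
    rate = 0.0
    if member and total > 100:      # compound condition under test
        rate = 0.1
    return total * (1 - rate)

# Line/branch coverage: these two calls execute every line and both branches.
assert discount(True, 200) == 180.0     # condition true
assert discount(False, 200) == 200.0    # condition false

# Multicondition coverage additionally forces each operand both ways:
assert discount(True, 50) == 50.0       # member true, total <= 100
assert discount(False, 50) == 50.0      # member false, total <= 100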
- WHAT IF THERE ISN'T ENOUGH TIME FOR THOROUGH
TESTING?
Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every
possible aspect of an application, every possible combination of events, every dependency, or everything
that could go wrong, risk analysis is appropriate to most software development projects. This requires
judgment skills, common sense, and experience. (If warranted, formal methods are also available.)
Considerations can include:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?
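One way to act on the considerations above is a simple likelihood-times-impact ranking, as in the following Python sketch; the feature names and scores are illustrative assumptions, not from this FAQ.

# Minimal risk-ranking sketch; feature names and 1-5 scores are illustrative.
features = [
    # (feature, likelihood of failure 1-5, impact 1-5)
    ("payment processing", 4, 5),
    ("user registration",  2, 4),
    ("report formatting",  3, 2),
    ("help screens",       1, 1),
]

# Focus testing effort on the highest likelihood * impact products first.
for name, likelihood, impact in sorted(features, key=lambda f: f[1] * f[2], reverse=True):
    print("risk %2d  %s" % (likelihood * impact, name))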
- DEFECT REPORTING
The bug needs to be communicated and assigned to developers who can fix it. After the problem is
resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing
to check that fixes didn't create problems elsewhere. The following are items to consider in the tracking
process (a small record sketch follows the list):
Complete information such that developers can understand the bug, get an idea of its severity,
and reproduce it if necessary.
Bug identifier (number, ID, etc.)
Current bug status (e.g., 'Open', 'Closed', etc.)
The application name and version
The function, module, feature, object, screen, etc. where the bug occurred
Environment specifics, system, platform, relevant hardware specifics
Test case name/number/identifier
File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in
finding the cause of the problem
Severity Level
Tester name
Bug reporting date
Name of developer/group/organization the problem is assigned to
Description of fix
Date of fix
Application version that contains the fix
Verification Details
A reporting or tracking process should enable notification of appropriate personnel at various stages.
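For illustration, the following Python sketch shows a defect record that mirrors the tracking items listed above; the field names and sample values are hypothetical, not a prescribed format.

# Minimal defect-record sketch; field names and sample values are illustrative only.
from dataclasses import dataclass

@dataclass
class DefectReport:
    bug_id: str
    status: str            # e.g. 'Open', 'Closed'
    application: str       # application name and version
    location: str          # function, module, feature, screen, etc.
    environment: str       # system, platform, relevant hardware specifics
    test_case: str
    severity: str
    description: str       # steps, error messages, log/file excerpts
    reported_by: str
    reported_on: str

bug = DefectReport(
    bug_id="BUG-1042",
    status="Open",
    application="OrderEntry 2.3",
    location="Checkout screen, 'Submit Order' button",
    environment="Windows client, test database",
    test_case="TC-CHECKOUT-007",
    severity="High",
    description="Clicking 'Submit Order' twice creates duplicate orders; log excerpt attached",
    reported_by="QA tester",
    reported_on="2003-08-15",
)
print(bug.bug_id, bug.severity, "-", bug.location)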
Tip 2: A good spec is a tester's best friend. Get design and functional specifications --
even if you have to whine or threaten. Specs help you determine what is a "real" bug and
what is "by design."
Tip 3: KISS (Keep It Simple, Stupid). If you have a form that's supposed to accept only
integers 1 to 10, you can start your tests by entering NULL, 0, 1, 10, 11, 1000000000, -1,
'a', and a few of the other acceptable values. You needn't enter every possible keystroke
to find a code hole.
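A minimal Python sketch of the boundary tests described in this tip follows; validate_quantity() is a hypothetical stand-in for the form's own check, not part of this FAQ.

# Minimal boundary-value sketch; validate_quantity() is a hypothetical stand-in
# for the form's 1-10 integer check.
import unittest

def validate_quantity(value):
    # Accepts only integers 1..10, as the form in the tip is supposed to.
    return isinstance(value, int) and 1 <= value <= 10

class QuantityBoundaryTest(unittest.TestCase):
    def test_boundary_and_error_values(self):
        cases = {None: False, 0: False, 1: True, 10: True, 11: False,
                 1000000000: False, -1: False, "a": False}
        for value, expected in cases.items():
            self.assertEqual(validate_quantity(value), expected, msg=repr(value))

if __name__ == "__main__":
    unittest.main()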
Tip 4: Expect the unexpected. Before you think the previous tip allows you to cut back
your testing, remember the user. If your audience is non-technical, the possible
keystrokes, browser settings, and other "mystery" influences are unlimited. Think like a
user, and stress the Web site the way a user would.
Tip 5: Use all the automated testing tools you can get your hands on. Many useful tools
are on the Web already. With a little searching, you may be able to find the right ones for
you.