
What is 'Software Quality Assurance'?

Software QA involves the entire software development PROCESS - monitoring and
improving the process, making sure that any agreed-upon standards and procedures are
followed, and ensuring that problems are found and dealt with. It is oriented to
'prevention'.

What is 'Software Testing'?


Testing involves operation of a system or application under controlled conditions and
evaluating the results (eg, 'if the user is in interface A of the application while using
hardware B, and does C, then D should happen'). The controlled conditions should
include both normal and abnormal conditions. Testing should intentionally attempt to
make things go wrong to determine if things happen when they shouldn't or things don't
happen when they should. It is oriented to 'detection'.
Organizations vary considerably in how they assign responsibility for QA and testing.
Sometimes they're the combined responsibility of one group or individual. Also common
are project teams that include a mix of testers and developers who work closely together,
with overall QA processes monitored by project managers. It will depend on what best
fits an organization's size and business structure.

What are some recent major computer system failures caused by software bugs?
In August of 2003 a U.S. court ruled that a lawsuit against a large online brokerage
company could proceed; the lawsuit reportedly involved claims that the company was not
fixing system problems that sometimes resulted in failed stock trades, based on the
experiences of 4 plaintiffs during an 8-month period. A previous lower court's ruling that
"...six miscues out of more than 400 trades does not indicate negligence." was
invalidated.
In April of 2003 it was announced that the largest student loan company in the U.S. made
a software error in calculating the monthly payments on 800,000 loans. Although
borrowers were to be notified of an increase in their required payments, the company will
still reportedly lose $8 million in interest. The error was uncovered when borrowers
began reporting inconsistencies in their bills.
News reports in February of 2003 revealed that the U.S. Treasury Department mailed
50,000 Social Security checks without any beneficiary names. A spokesperson indicated
that the missing names were due to an error in a software change. Replacement checks
were subsequently mailed out with the problem corrected, and recipients were then able
to cash their Social Security checks.
In March of 2002 it was reported that software bugs in Britain's national tax system
resulted in more than 100,000 erroneous tax overcharges. The problem was partly
attributed to the difficulty of testing the integration of multiple systems.
A newspaper columnist reported in July 2001 that a serious flaw was found in off-the-
shelf software that had long been used in systems for tracking certain U.S. nuclear
materials. The same software had been recently donated to another country to be used in
tracking their own nuclear materials, and it was not until scientists in that country
discovered the problem, and shared the information, that U.S. officials became aware of
the problems.
According to newspaper stories in mid-2001, a major systems development contractor
was fired and sued over problems with a large retirement plan management system.
According to the reports, the client claimed that system deliveries were late, the software
had excessive defects, and it caused other systems to crash.
In January of 2001 newspapers reported that a major European railroad was hit by the
aftereffects of the Y2K bug. The company found that many of their newer trains would
not run due to their inability to recognize the date '31/12/2000'; the trains were started by
altering the control system's date settings.
News reports in September of 2000 told of a software vendor settling a lawsuit with a
large mortgage lender; the vendor had reportedly delivered an online mortgage
processing system that did not meet specifications, was delivered late, and didn't work.
In early 2000, major problems were reported with a new computer system in a large
suburban U.S. public school district with 100,000+ students; problems included 10,000
erroneous report cards and students left stranded by failed class registration systems; the
district's CIO was fired. The school district decided to reinstate its original 25-year-old
system for at least a year until the bugs were worked out of the new system by the
software vendors.
In October of 1999 the $125 million NASA Mars Climate Orbiter spacecraft was
believed to be lost in space due to a simple data conversion error. It was determined that
spacecraft software used certain data in English units that should have been in metric
units. Among other tasks, the orbiter was to serve as a communications relay for the Mars
Polar Lander mission, which failed for unknown reasons in December 1999. Several
investigating panels were convened to determine the process failures that allowed the
error to go undetected.
Bugs in software supporting a large commercial high-speed data network affected 70,000
business customers over a period of 8 days in August of 1999. Among those affected was
the electronic trading system of the largest U.S. futures exchange, which was shut down
for most of a week as a result of the outages.
In April of 1999 a software bug caused the failure of a $1.2 billion U.S. military satellite
launch, the costliest unmanned accident in the history of Cape Canaveral launches. The
failure was the latest in a string of launch failures, triggering a complete military and
industry review of U.S. space launch programs, including software integration and testing
processes. Congressional oversight hearings were requested.
A small town in Illinois in the U.S. received an unusually large monthly electric bill of $7
million in March of 1999. This was about 700 times larger than its normal bill. It turned
out to be due to bugs in new software that had been purchased by the local power
company to deal with Y2K software issues.
In early 1999 a major computer game company recalled all copies of a popular new
product due to software problems. The company made a public apology for releasing a
product before it was ready.
The computer system of a major online U.S. stock trading service failed during trading
hours several times over a period of days in February of 1999 according to nationwide
news reports. The problem was reportedly due to bugs in a software upgrade intended to
speed online trade confirmations.
In April of 1998 a major U.S. data communications network failed for 24 hours, crippling
a large part of some U.S. credit card transaction authorization systems as well as other
large U.S. bank, retail, and government data systems. The cause was eventually traced to
a software bug.
January 1998 news reports told of software problems at a major U.S. telecommunications
company that resulted in no charges for long distance calls for a month for 400,000
customers. The problem went undetected until customers called up with questions about
their bills.
In November of 1997 the stock of a major health industry company dropped 60% due to
reports of failures in computer billing systems, problems with a large database
conversion, and inadequate software testing. It was reported that more than $100,000,000
in receivables had to be written off and that multi-million dollar fines were levied on the
company by government agencies.
A retail store chain filed suit in August of 1997 against a transaction processing system
vendor (not a credit card company) due to the software's inability to handle credit cards
with year 2000 expiration dates.
In August of 1997 one of the leading consumer credit reporting companies reportedly
shut down their new public web site after less than two days of operation due to software
problems. The new site allowed web site visitors instant access, for a small fee, to their
personal credit reports. However, a number of initial users ended up viewing each others'
reports instead of their own, resulting in irate customers and nationwide publicity. The
problem was attributed to "...unexpectedly high demand from consumers and faulty
software that routed the files to the wrong computers."
In November of 1996, newspapers reported that software bugs caused the 411 telephone
information system of one of the U.S. RBOC's to fail for most of a day. Most of the 2000
operators had to search through phone books instead of using their 13,000,000-listing
database. The bugs were introduced by new software modifications and the problem
software had been installed on both the production and backup systems. A spokesman for
the software vendor reportedly stated that 'It had nothing to do with the integrity of the
software. It was human error.'
On June 4 1996 the first flight of the European Space Agency's new Ariane 5 rocket
failed shortly after launching, resulting in an estimated uninsured loss of a half billion
dollars. It was reportedly due to the lack of exception handling of a floating-point error in
a conversion from a 64-bit floating-point value to a 16-bit signed integer.
Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be
credited with $924,844,208.32 each in May of 1996, according to newspaper reports. The
American Bankers Association claimed it was the largest such error in banking history. A
bank spokesman said the programming errors were corrected and all funds were
recovered.
Software bugs in a Soviet early-warning monitoring system nearly brought on nuclear
war in 1983, according to news reports in early 1999. The software was supposed to filter
out false missile detections caused by Soviet satellites picking up sunlight reflections off
cloud-tops, but failed to do so. Disaster was averted when a Soviet commander, based on
what he said was a '...funny feeling in my gut', decided the apparent missile attack was a
false alarm. The filtering software code was rewritten.

Why is it often hard for management to get serious about quality assurance?
Solving problems is a high-visibility process; preventing problems is low-visibility. This
is illustrated by an old parable:
In ancient China there was a family of healers, one of whom was known throughout the
land and employed as a physician to a great lord. The physician was asked which of his
family was the most skillful healer. He replied,
"I tend to the sick and dying with drastic and dramatic treatments, and on occasion
someone is cured and my name gets out among the lords."
"My elder brother cures sickness when it just begins to take root, and his skills are known
among the local peasants and neighbors."
"My eldest brother is able to sense the spirit of sickness and eradicate it before it takes
form. His name is unknown outside our home."

Why does software have bugs?


Miscommunication or no communication - as to specifics of what an application should
or shouldn't do (the application's requirements).
Software complexity - the complexity of current software applications can be difficult to
comprehend for anyone without experience in modern-day software development.
Windows-type interfaces, client-server and distributed applications, data
communications, enormous relational databases, and sheer size of applications have all
contributed to the exponential growth in software/system complexity. And the use of
object-oriented techniques can complicate instead of simplify a project unless it is well-
engineered.
Programming errors - programmers, like anyone else, can make mistakes.
Changing requirements - the customer may not understand the effects of changes, or may
understand and request them anyway - redesign, rescheduling of engineers, effects on
other projects, work already completed that may have to be redone or thrown out,
hardware requirements that may be affected, etc. If there are many minor changes or any
major changes, known and unknown dependencies among parts of the project are likely
to interact and cause problems, and the complexity of keeping track of changes may
result in errors. Enthusiasm of engineering staff may be affected. In some fast-changing
business environments, continuously modified requirements may be a fact of life. In this
case, management must understand the resulting risks, and QA and test engineers must
adapt and plan for continuous extensive testing to keep the inevitable bugs from running
out of control.
Time pressures - scheduling of software projects is difficult at best, often requiring a lot
of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
Egos - people prefer to say things like:
'no problem'
'piece of cake'
'I can whip that out in a few hours'
'it should be easy to update that old code'

instead of:
'that adds a lot of complexity and we could end up
making a lot of mistakes'
'we have no idea if we can do that; we'll wing it'
'I can't estimate how long it will take, until I
take a close look at it'
'we can't figure out what that old spaghetti code
did in the first place'

If there are too many unrealistic 'no problem's', the result is bugs.

Poorly documented code - it's tough to maintain and modify code that is badly written or
poorly documented; the result is bugs. In many organizations management provides no
incentive for programmers to document their code or write clear, understandable code. In
fact, it's usually the opposite: they get points mostly for quickly turning out code, and
there's job security if nobody else can understand it ('if it was hard to write, it should be
hard to read').
Software development tools - visual tools, class libraries, compilers, scripting tools, etc.
often introduce their own bugs or are poorly documented, resulting in added bugs.

How can new Software QA processes be introduced in an existing organization?


A lot depends on the size of the organization and the risks involved. For large
organizations with high-risk (in terms of lives or property) projects, serious management
buy-in is required and a formalized QA process is necessary.

Where the risk is lower, management and organizational buy-in and QA implementation
may be a slower, step-at-a-time process. QA processes should be balanced with
productivity so as to keep bureaucracy from getting out of hand.
For small groups or projects, a more ad-hoc process may be appropriate, depending on
the type of customers and projects. A lot will depend on team leads or managers,
feedback to developers, and ensuring adequate communications among customers,
managers, developers, and testers.
In all cases the most value for effort will be in requirements management processes, with
a goal of clear, complete, testable requirement specifications or expectations.

What is verification? Validation?


Verification typically involves reviews and meetings to evaluate documents, plans, code,
requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings. Validation typically involves actual testing and
takes place after verifications are completed. The term 'IV & V' refers to Independent
Verification and Validation.

What is a 'walkthrough'?
A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or
no preparation is usually required.

What's an 'inspection'?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people
including a moderator, reader, and a recorder to take notes. The subject of the inspection
is typically a document such as a requirements spec or a test plan, and the purpose is to
find problems and see what's missing, not to fix anything. Attendees should prepare for
this type of meeting by reading through the document; most problems will be found during
this preparation. The result of the inspection meeting should be a written report.
Thorough preparation for inspections is difficult, painstaking work, but is one of the most
cost effective methods of ensuring quality. Employees who are most skilled at
inspections are like the 'eldest brother' in the parable in 'Why is it often hard for
management to get serious about quality assurance?'. Their skill may have low visibility
but they are extremely valuable to any software development organization, since bug
prevention is far more cost-effective than bug detection.

What kinds of testing should be considered?


Black box testing - not based on any knowledge of internal design or code. Tests are
based on requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code.
Tests are based on coverage of code statements, branches, paths, conditions.
Unit testing - the most 'micro' scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires detailed
knowledge of the internal program design and code. Not always easily done unless the
application has a well-designed architecture with tight code; may require developing test
driver modules or test harnesses.
Incremental integration testing - continuous testing of an application as new
functionality is added; requires that various aspects of an application's functionality be
independent enough to work separately before all parts of the program are completed, or
that test drivers be developed as needed; done by programmers or by testers.

Integration testing - testing of combined parts of an application to determine if they
function together correctly. The 'parts' can be code modules, individual applications,
client and server applications on a network, etc. This type of testing is especially relevant
to client/server and distributed systems.
Functional testing - black box type testing geared to functional requirements of an
application; this type of testing should be done by testers. This doesn't mean that the
programmers shouldn't check that their code works before releasing it (which of course
applies to any stage of testing.)
System testing - black-box type testing that is based on overall requirements
specifications; covers all combined parts of a system.
End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves
testing of a complete application environment in a situation that mimics real-world use,
such as interacting with a database, using network communications, or interacting with
other hardware, applications, or systems if appropriate.
Sanity testing - typically an initial testing effort to determine if a new software version is
performing well enough to accept it for a major testing effort. For example, if the new
software is crashing systems every 5 minutes, bogging down systems to a crawl, or
destroying databases, the software may not be in a 'sane' enough condition to warrant
further testing in its current state.
Regression testing - re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed, especially
near the end of the development cycle. Automated testing tools can be especially useful
for this type of testing.
Acceptance testing - final testing based on specifications of the end-user or customer, or
based on use by end-users/customers over some limited period of time.
Load testing - testing an application under heavy loads, such as testing of a web site
under a range of loads to determine at what point the system's response time degrades or
fails.
Stress testing - term often used interchangeably with 'load' and 'performance' testing.
Also used to describe such tests as system functional testing while under unusually heavy
loads, heavy repetition of certain actions or inputs, input of large numerical values, large
complex queries to a database system, etc.
Performance testing - term often used interchangeably with 'stress' and 'load' testing.
Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements
documentation or QA or Test Plans.
Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will
depend on the targeted end-user or customer. User interviews, surveys, video recording
of user sessions, and other techniques can be used. Programmers and testers are usually
not appropriate as usability testers.
Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
Recovery testing - testing how well a system recovers from crashes, hardware failures,
or other catastrophic problems.
Security testing - testing how well the system protects against unauthorized internal or
external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
Exploratory testing - often taken to mean a creative, informal software test that is not
based on formal test plans or test cases; testers may be learning the software as they test
it.
Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers
have significant understanding of the software before testing it.
User acceptance testing - determining if software is satisfactory to an end-user or
customer.
Comparison testing - comparing software weaknesses and strengths to competing
products.
Alpha testing - testing of an application when development is nearing completion; minor
design changes may still be made as a result of such testing. Typically done by end-users
or others, not by programmers or testers.
Beta testing - testing when development and testing are essentially completed and final
bugs and problems need to be found before final release. Typically done by end-users or
others, not by programmers or testers.
Mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources.
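
As a hedged illustration only (the function, mutant, and test data below are hypothetical), the idea behind mutation testing can be sketched in a few lines of C++: a small 'mutant' change is planted in a copy of the code and the existing test cases are rerun; if no test fails, the test set is too weak.

    #include <cassert>

    // Original function under test (hypothetical).
    bool isAdult(int age) { return age >= 18; }

    // Deliberately planted mutant: '>=' changed to '>'.
    bool isAdult_mutant(int age) { return age > 18; }

    int main() {
        // Weak test data: both versions pass, so this mutant goes undetected,
        // which reveals that the test set misses the boundary value.
        assert(isAdult(30) == true);
        assert(isAdult(10) == false);
        assert(isAdult_mutant(30) == true);
        assert(isAdult_mutant(10) == false);

        // Adding the boundary case 'kills' the mutant: the original passes
        // but the mutated version fails, so the improved test set is useful.
        assert(isAdult(18) == true);
        assert(isAdult_mutant(18) == true);  // fails, i.e. the mutant is detected
        return 0;
    }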

What are 5 common problems in the software development process?


Poor requirements - if requirements are unclear, incomplete, too general, or not testable,
there will be problems.
Unrealistic schedule - if too much work is crammed in too little time, problems are
inevitable.
Inadequate testing - no one will know whether or not the program is any good until the
customer complains or systems crash.
Featuritis (feature creep) - requests to pile on new features after development is underway;
extremely common.
Miscommunication - if developers don't know what's needed or customers have
erroneous expectations, problems are guaranteed.

What are 5 common solutions to software development problems?


Solid requirements - clear, complete, detailed, cohesive, attainable, testable
requirements that are agreed to by all players. Use prototypes to help nail down
requirements.
Realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-
testing, changes, and documentation; personnel should be able to complete the project
without burning out.
Adequate testing - start testing early on, re-test after fixes or changes, plan for adequate
time for testing and bug-fixing.
Stick to initial requirements as much as possible - be prepared to defend against
changes and additions once development has begun, and be prepared to explain
consequences. If changes are necessary, they should be adequately reflected in related
schedule changes. If possible, use rapid prototyping during the design phase so that
customers can see what to expect. This will provide them a higher comfort level with
their requirements decisions and minimize changes later on.
Communication - require walkthroughs and inspections when appropriate; make
extensive use of group communication tools - e-mail, groupware, networked bug-tracking
tools and change management tools, intranet capabilities, etc.; ensure that documentation
is available and up-to-date - preferably electronic, not paper; promote teamwork and
cooperation; use prototypes early on so that customers' expectations are clarified.

What is software 'quality'?


Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously a
subjective term. It will depend on who the 'customer' is and their overall influence in the
scheme of things. A wide-angle view of the 'customers' of a software development
project might include end-users, customer acceptance testers, customer contract officers,
customer management, the development organization's
management/accountants/testers/salespeople, future software maintenance engineers,
stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant
on 'quality' - the accounting department might define quality in terms of profits while an
end-user might define quality as user-friendly and bug-free.

What is 'good code'?


'Good code' is code that works, is bug free, and is readable and maintainable. Some
organizations have coding 'standards' that all developers are supposed to adhere to, but
everyone has different ideas about what's best, or what is too many or too few rules.
There are also various theories and metrics, such as McCabe Complexity metrics. It
should be kept in mind that excessive use of standards and rules can stifle productivity
and creativity. 'Peer reviews', 'buddy checks', code analysis tools, etc. can be used to
check for problems and enforce standards.
For C and C++ coding, here are some typical ideas to consider in setting rules/standards;
these may or may not apply to a particular situation (a brief illustrative sketch follows this list):
Minimize or eliminate use of global variables.
Use descriptive function and method names - use both upper and lower case, avoid
abbreviations, use as many characters as necessary to be adequately descriptive (use of
more than 20 characters is not out of line); be consistent in naming conventions.
Use descriptive variable names - use both upper and lower case, avoid abbreviations, use
as many characters as necessary to be adequately descriptive (use of more than 20
characters is not out of line); be consistent in naming conventions.
Function and method sizes should be minimized; less than 100 lines of code is good, less
than 50 lines is preferable.
Function descriptions should be clearly spelled out in comments preceding a function's
code.
Organize code for readability.
Use whitespace generously - vertically and horizontally
Each line of code should contain 70 characters max.
One code statement per line.
Coding style should be consistent throughout a program (eg, use of brackets, indentations,
naming conventions, etc.)
In adding comments, err on the side of too many rather than too few comments; a
common rule of thumb is that there should be at least as many lines of comments
(including header blocks) as lines of code.
No matter how small, an application should include documentation of the overall
program function and flow (even a few paragraphs is better than nothing); or if possible a
separate flow chart and detailed program documentation.
Make extensive use of error handling procedures and status and error logging.
For C++, to minimize complexity and increase maintainability, avoid too many levels of
inheritance in class hierarchies (relative to the size and complexity of the application).
Minimize use of multiple inheritance, and minimize use of operator overloading (note
that the Java programming language eliminates multiple inheritance and operator
overloading.)
For C++, keep class methods small, less than 50 lines of code per method is preferable.
For C++, make liberal use of exception handlers
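
The fragment below is only a rough sketch of how several of these ideas (descriptive names, a small well-commented function, error handling with exceptions, consistent style) might look in practice; the function and names are hypothetical, not a prescribed standard.

    #include <cstdio>
    #include <stdexcept>
    #include <vector>

    // Computes the average of a list of monthly payments.
    // Throws std::invalid_argument for an empty list so that callers must
    // handle the error instead of silently receiving garbage data.
    double computeAverageMonthlyPayment(const std::vector<double>& monthlyPayments)
    {
        if (monthlyPayments.empty()) {
            throw std::invalid_argument("computeAverageMonthlyPayment: empty input");
        }
        double total = 0.0;
        for (double payment : monthlyPayments) {
            total += payment;
        }
        return total / static_cast<double>(monthlyPayments.size());
    }

    int main()
    {
        try {
            std::vector<double> payments = {250.00, 275.50, 260.25};
            std::printf("Average payment: %.2f\n",
                        computeAverageMonthlyPayment(payments));
        } catch (const std::exception& error) {
            // Error logging rather than silent failure.
            std::fprintf(stderr, "Error: %s\n", error.what());
            return 1;
        }
        return 0;
    }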

What is 'good design'?
'Design' could refer to many things, but often refers to 'functional design' or 'internal
design'. Good internal design is indicated by software code whose overall structure is
clear, understandable, easily modifiable, and maintainable; is robust with sufficient error
handling and status logging capability; and works correctly when implemented. Good
functional design is indicated by an application whose functionality can be traced back to
customer and end-user requirements. (See further discussion of functional and internal
design in 'What's the big deal about requirements?' <qatfaq2.html> in FAQ #2.) For
programs that have a user interface, it's often a good idea to assume that the end user will
have little computer knowledge and may not read a user manual or even the on-line help;
some common rules-of-thumb include:
The program should act in a way that least surprises the user
It should always be evident to the user what can be done next and how to exit
The program shouldn't let the users do something stupid without warning them.

What is SEI? CMM? ISO? IEEE? ANSI? Will it help?


SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the
U.S. Defense Department to help improve software development processes.
CMM = 'Capability Maturity Model' developed by the SEI. It's a model of 5 levels of
organizational 'maturity' that determine effectiveness in delivering quality software. It is
geared to large organizations such as large U.S. Defense Department contractors.
However, many of the QA processes involved are appropriate to any organization, and if
reasonably applied can be helpful. Organizations can receive CMM ratings by
undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by
individuals to successfully complete projects. Few if any processes in place; successes
may not be repeatable.

Level 2 - software project tracking, requirements management, realistic planning, and
configuration management processes are in place; successful practices can be repeated.

Level 3 - standard software development and maintenance processes are integrated
throughout an organization; a Software Engineering Process Group is in place to oversee
software processes, and training programs are used to ensure understanding and
compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project
performance is predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes
and technologies can be predicted and effectively implemented when required.

Perspective on CMM ratings: During 1997-2001, 1018 organizations were assessed.
Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For
ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and
0.4% at 5.) The median size of organizations was 100 software engineering/maintenance
personnel; 32% of organizations were U.S. federal contractors or agencies. For those
rated at Level 1, the most problematical key process area was in Software Quality
Assurance.
• ISO = 'International Organization for Standardization' - The ISO 9001:2000
standard (which replaces the previous standard of 1994) concerns quality systems
that are assessed by outside auditors, and it applies to many kinds of production
and manufacturing organizations, not just software. It covers documentation,
design, development, production, testing, installation, servicing, and other
processes. The full set of standards consists of: (a) Q9001-2000 - Quality
Management Systems: Requirements; (b) Q9000-2000 - Quality Management
Systems: Fundamentals and Vocabulary; (c) Q9004-2000 - Quality Management
Systems: Guidelines for Performance Improvements. To be ISO 9001 certified, a
third-party auditor assesses an organization, and certification is typically good for
about 3 years, after which a complete reassessment is required. Note that ISO
certification does not necessarily indicate quality products - it indicates only that
documented processes are followed. Also see <http://www.iso.ch/> for the latest
information. In the U.S. the standards can be purchased via the ASQ web site at
<http://e-standards.asq.org/>
• IEEE = 'Institute of Electrical and Electronics Engineers' - among other
things, creates standards such as 'IEEE Standard for Software Test
Documentation' (IEEE/ANSI Standard 829), 'IEEE Standard for Software Unit
Testing' (IEEE/ANSI Standard 1008), 'IEEE Standard for Software Quality
Assurance Plans' (IEEE/ANSI Standard 730), and others.
• ANSI = 'American National Standards Institute', the primary industrial
standards body in the U.S.; publishes some software-related standards in
conjunction with the IEEE and ASQ (American Society for Quality).
• Other software development process assessment methods besides CMM and ISO
9000 include SPICE, Trillium, TickIT, and Bootstrap.
• See the 'Other Resources' <qatlnks1.html> section for further information
available on the web.

What is the 'software life cycle'?


The life cycle begins when an application is first conceived and ends when it is no longer
in use. It includes aspects such as initial concept, requirements analysis, functional
design, internal design, documentation planning, test planning, coding, document
preparation, integration, testing, maintenance, updates, retesting, phase-out, and other
aspects.

Will automated testing tools make testing easier?


• Possibly. For small projects, the time needed to learn and implement them may
not be worth it. For larger projects, or ongoing long-term projects, they can be
valuable.
• A common type of automated tool is the 'record/playback' type. For example, a
tester could click through all combinations of menu choices, dialog box choices,
buttons, etc. in an application GUI and have them 'recorded' and the results logged
by a tool. The 'recording' is typically in the form of text based on a scripting
language that is interpretable by the testing tool. If new buttons are added, or
some underlying code in the application is changed, etc. the application can then
be retested by just 'playing back' the 'recorded' actions, and comparing the logging
results to check effects of the changes. The problem with such tools is that if there
are continual changes to the system being tested, the 'recordings' may have to be
changed so much that it becomes very time-consuming to continuously update the
scripts. Additionally, interpretation of results (screens, data, logs, etc.) can be a
difficult task. Note that there are record/playback tools for text-based interfaces
also, and for all types of platforms. (A toy sketch of the record/playback idea
appears after this list.)
• Other automated tools can include:
Code analyzers - monitor code complexity, adherence to standards, etc.

Coverage analyzers - these tools check which parts of the code have been
exercised by a test, and may be oriented to code statement coverage, condition
coverage, path coverage, etc.

Memory analyzers - such as bounds-checkers and leak detectors.

Load/performance test tools - for testing client/server and web applications
under various load levels.

Web test tools - to check that links are valid, HTML code usage is correct, client-
side and server-side programs work, and that a web site's interactions are secure.

Other tools - for test case management, documentation management, bug
reporting, and configuration management.
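
Commercial record/playback tools each use their own scripting languages, so the following is only a toy C++ sketch of the underlying idea: recorded actions are stored as text, replayed against the application, and the resulting log is compared with a saved baseline (the file names and action format are hypothetical).

    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    // Load a 'recorded' script or baseline log: one text line per entry.
    std::vector<std::string> loadLines(const std::string& path) {
        std::vector<std::string> lines;
        std::ifstream in(path);
        std::string line;
        while (std::getline(in, line)) {
            lines.push_back(line);
        }
        return lines;
    }

    // Stand-in for driving the application; a real tool would send the action
    // to the GUI and capture the resulting screen or log output.
    std::string playAction(const std::string& action) {
        return "result-of:" + action;
    }

    int main() {
        std::vector<std::string> actions  = loadLines("recorded_session.txt");
        std::vector<std::string> baseline = loadLines("baseline_results.txt");

        bool passed = true;
        for (std::size_t i = 0; i < actions.size(); ++i) {
            std::string result = playAction(actions[i]);
            // Compare the playback result against the recorded baseline.
            if (i >= baseline.size() || result != baseline[i]) {
                std::cout << "MISMATCH at step " << i << ": " << actions[i] << '\n';
                passed = false;
            }
        }
        std::cout << (passed ? "Playback matched baseline\n" : "Differences found\n");
        return passed ? 0 : 1;
    }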

Explain the software development lifecycle.


There are seven stages of the software development lifecycle
1. Initiate the project – The users identify their Business requirements.
Business Analyst – Collects the requirements from the customer
Engagement Team – Financial Details
BRS (Business Requirement Specification)
2. Define the project – The software development team translates the business
requirements into system specifications and puts them together into a System
Specification Document.
Analysis –
Feasibility Study
Resource Estimation
Effort/Time Estimation
Technology
SRS (Software Requirement Specification)
3. Design the system – The System Architecture Team designs the system and
writes the Functional Design Document. During the design phase, general solutions
are hypothesized and data and process structures are organized.
Chief Architect - the product is divided into different modules based on
classification and functionality.
High Level Design / Low Level Design
TDD (Technical Design Document)
4. Build the system – The system specifications and design documents are
given to the development team, which codes the modules by following the
requirements and design documents.

5. Test the system - The test team develops the test plan following the
requirements. The software is built and installed on the test platform after the
developers have completed development and unit testing. The testers test the
software by following the test plan.
6. Deploy the system – After the user-acceptance testing and certification of
the software, it is installed on the production platform. Demos and training are
given to the users.
7. Support the system - After the software is in production, the maintenance
phase of the life cycle begins. During this phase the development team works with
the development documentation staff to modify and enhance the application, and
the test team works with the test documentation staff to verify and validate
the changes and enhancements to the application software.

What is software 'quality'?


OR
Define software quality for me, as you understand it?
Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable. However, quality is obviously
a subjective term. It will depend on who the 'customer' is and their overall influence
in the scheme of things. Each type of 'customer' will have their own slant on 'quality'
- the accounting department might define quality in terms of profits while an end-
user might define quality as user-friendly and bug-free.

What's the role of documentation in QA?


Critical. (Note that documentation can be electronic, not necessarily paper.) QA
practices should be documented such that they are repeatable. Specifications,
designs, business rules, inspection reports, configurations, code changes, test plans,
test cases, bug reports, user manuals, etc. should all be documented. There should
ideally be a system for easily finding and obtaining documents and determining what
documentation will have a particular piece of information. Change management for
documentation should be used if possible.
At what stage of the SDLC does testing begin in your opinion?
The QA process starts from the second phase of the Software Development Life Cycle,
i.e., Define the Project. Actual product testing is done in the Test the System
phase (Phase 5). During this phase the test team verifies the actual results against
expected results.

Explain the pre testing phase, acceptance testing and testing phase.
Pre testing Phase:
1. Review the requirements document for the testability: Tester will use the
requirement document to write the test cases.
2. Establishing the hard freeze date: The hard freeze date is a date after which
the system test team will not accept any more software and documentation
changes from the development team, unless they are fixes of severity 1 MRs.
The date is scheduled so that the product test team will have time for final
regression.
3. Writing master test plan: It is written by the lead tester or test coordinator.
Master test plan includes entire testing plan, testing resources and testing
strategy.
4. Setting up the MR tool: The MR tool must be set up as soon as you know the
different modules in the product, the developers and testers on the product,
and the hardware platform and operating system on which testing will be done.
This information will be available upon the completion of the first draft of the
architecture document. Both testers and developers are trained in how to use
the system.
5. Setting up the test environment: The test environment is set up on separate
machines, databases and networks. This task is performed by the technical
support team. It takes some time the first time; afterwards the same
environment can be used for later releases.
6. Writing the test plan and test cases: A template and tool are chosen for
writing the test plan, test cases and test procedures. Expected results are
organized in the test plan according to the feature categories specified in the
requirements document. For each feature, positive and negative test cases are
written. Writing the test plan requires a complete understanding of the product
and its interfaces with other systems. After the test plan is completed, a
walkthrough is conducted with the developers and design team members to
baseline the test plan document.
7. Setting up the test automation tool: Plan the test strategy for how to
automate the testing and decide which test cases will be executed for regression
testing; not all test cases will be executed during regression testing.
8. Identify acceptance test cases: Select a subset of test cases that are expected
to pass on the first day of system test. These tests must pass for the product to
be accepted into system test.

Acceptance testing phase:


1. When the product enters system test, check that it has completed integration
testing and meets the integration test exit criteria.
2. Check integration exit criteria and product test entrance criteria in the master
test plan or test strategy documents.
3. Check the integration testing sign off criteria sheet.
4. Coordinate release with product development.
5. Determine how the code will be migrated from the development environment to
the test environment.
6. Installation and acceptance testing.

Product testing phase:


1. Running the tests: Execute the test cases and verify whether the actual
functionality of the application matches the expected results.
2. Initial manual testing is recommended to isolate unexpected system behavior.
Once the application is stable, automated regression tests can be generated.
3. Issue MRs upon detection of bugs.

What is the value of a testing group? How do you justify your work and
budget?
All software products contain defects/bugs, despite the best efforts of their
development teams. It is important for an outside party (one who is not the developer)
to test the product from a viewpoint that is more objective and representative of the
product user.

The testing group tests the software from the requirements point of view, i.e., what is
required by the user. The tester's job is to examine a program and see whether it does
not do what it is supposed to do, and also whether it does what it is not supposed to do.

What is a master test plan? What does it contain? Who is responsible for writing
it?
OR
What is a test plan? Who is responsible for writing it? What does it contain?
OR
What's a 'test plan'? What did you include in a test plan?
A software project test plan is a document that describes the objectives, scope,
approach, and focus of a software testing effort. The process of preparing a test plan
is a useful way to think through the efforts needed to validate the acceptability of a
software product. The completed document will help people outside the test group
understand the 'why' and 'how' of product validation. It should be thorough enough
to be useful but not so thorough that no one outside the test group will read it. The
following are some of the items that might be included in a test plan, depending on
the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents,
other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-
info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature,
functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error
classes
• Test environment - hardware, operating systems, other required software,
data configurations, interfaces to other systems
• Test environment validity analysis - differences between the test and
production systems and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as
screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by
testers to help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities,
deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.

The team lead or a Sr. QA Analyst is responsible for writing this document.


Why is the test plan a controlled document?
Because it controls the entire testing process. Testers have to follow this test plan
during the entire testing process.

What information do you need to formulate the test plan?


The business requirements document is needed to prepare the test plan.

What is an MR?
An MR is a Modification Request, also known as a Defect Report: a request to modify
the program so that it does what it is supposed to do.

Why do you write an MR?


An MR is written for reporting problems/errors or suggestions in the software.

What information does an MR contain?


OR
Describe the basic elements you put in a defect report.
OR
What is the procedure for bug reporting?
The bug needs to be communicated and assigned to developers that can fix it. After
the problem is resolved, fixes should be re-tested, and determinations made
regarding requirements for regression testing to check that fixes didn't create
problems elsewhere. If a problem-tracking system is in place, it should encapsulate
these processes. A variety of commercial problem-tracking/management software
tools are available.
The following are items to consider in the tracking process (a simple data-structure
sketch follows this list):
• Complete information such that developers can understand the bug, get an
idea of its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., 'Released for Retest', 'New', etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case
or if the developer doesn't have easy access to the test case/test script/test
tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that
would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date

• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
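
As a rough sketch only (the field names are illustrative, not a standard), a single defect/MR record in a tracking system could be represented by a data structure such as this:

    #include <string>
    #include <vector>

    // Illustrative defect/MR record; a real tracking tool would add workflow
    // states, history, attachments, and links to test cases and builds.
    struct BugReport {
        std::string id;                      // bug identifier
        std::string status;                  // e.g. "New", "Released for Retest"
        std::string applicationVersion;      // application name/identifier and version
        std::string moduleOrScreen;          // where the bug occurred
        std::string environment;             // OS, platform, relevant hardware
        std::string testCaseId;              // related test case, if any
        std::string summary;                 // one-line description
        std::string description;             // full description
        std::vector<std::string> stepsToReproduce;
        int severity = 3;                    // e.g. 1 (critical) to 5 (low)
        bool reproducible = false;
        std::string reportedBy;
        std::string reportedOn;
        std::string assignedTo;
        std::string fixDescription;
        std::string fixedInVersion;
        std::string retestResult;
    };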

What is White box testing/unit testing?


Unit testing - The most 'micro' scale of testing; to test particular functions or code
modules. Typically done by the programmer and not by testers, as it requires
detailed knowledge of the internal program design and code. Not always easily done
unless the application has a well-designed architecture with tight code; may require
developing test driver modules or test harnesses.
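
A minimal sketch of a hand-rolled test driver is shown below; the function under test and its test cases are hypothetical, and a real project would more likely use an existing unit-test framework.

    #include <iostream>
    #include <string>

    // Hypothetical code module under test.
    int addInterest(int balanceCents, int ratePercent) {
        return balanceCents + (balanceCents * ratePercent) / 100;
    }

    // Tiny test driver: records a pass/fail result for each check.
    static int g_failures = 0;

    void check(bool condition, const std::string& name) {
        if (condition) {
            std::cout << "PASS: " << name << '\n';
        } else {
            ++g_failures;
            std::cout << "FAIL: " << name << '\n';
        }
    }

    int main() {
        check(addInterest(10000, 5) == 10500, "5% interest on 100.00");
        check(addInterest(0, 5) == 0,         "zero balance");
        check(addInterest(10000, 0) == 10000, "zero rate");
        std::cout << g_failures << " failure(s)\n";
        return g_failures == 0 ? 0 : 1;
    }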

What is Black box testing?


Black box testing is also called system testing, which the testers perform. Here the
features and requirements of the product as described in the requirements document
are tested.

What knowledge do you require to do white box, integration and black box
testing?
For white box testing you need to understand the internals of the module, like its data
structures and algorithms, and have access to the source code; for black box testing,
only an understanding of the functionality of the application.

What is Regression testing?


Regression testing: Re-testing after fixes or modifications of the software or its
environment. It can be difficult to determine how much re-testing is needed,
especially near the end of the development cycle. Automated testing tools can be
especially useful for this type of testing.

Why do we do regression testing?


In any application new functionality can be added, so the application has to be
tested to see whether the added functionality has affected the existing
functionality or not. Instead of manually retesting all the existing functionality, the
baseline scripts created for it can be rerun and the results checked.

How do we do regression testing?


Various automated testing tools, such as WinRunner, Rational Robot and SilkTest,
can be used to perform regression testing.

What is Integration testing?


Testing of combined parts of an application to determine if they function together
correctly. The 'parts' can be code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to
client/server and distributed systems.

What is the difference between exception and validation testing?


Validation testing aims to demonstrate that the software functions in a manner that
can be reasonably expected by the customer; it tests the software for conformance to
the Software Requirements Specification.
Exception testing deals with handling exceptions (unexpected events) while the AUT
(application under test) is run. Basically this testing involves how the control flow of
the AUT changes when an exception arises.
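
As a hedged C++ illustration (the function is hypothetical), a validation-style check exercises the normal, specified behavior, while an exception-style check forces the unexpected event and verifies that control transfers to a handler:

    #include <iostream>
    #include <stdexcept>

    // Hypothetical function from the application under test.
    double divide(double numerator, double denominator) {
        if (denominator == 0.0) {
            throw std::runtime_error("division by zero");   // the unexpected event
        }
        return numerator / denominator;
    }

    int main() {
        // Validation-style check: normal input, behavior per the requirements.
        std::cout << "10 / 4 = " << divide(10.0, 4.0) << '\n';

        // Exception-style check: force the unexpected event and verify that
        // control flow moves to the handler instead of crashing the program.
        try {
            divide(10.0, 0.0);
            std::cout << "ERROR: expected an exception\n";
        } catch (const std::runtime_error& error) {
            std::cout << "Handled as expected: " << error.what() << '\n';
        }
        return 0;
    }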

What is the difference between a regression automation tool and a performance
automation tool?
Regression testing tools capture tests and play them back at a later time. The capture
and playback feature is fundamental to regression testing.

Performance testing tools determine the load a server can handle. They must have
features to simulate many users from one machine, to schedule and synchronize
different users, and to measure the network load under different numbers of
simulated users.

What are the roles of glass-box and black-box testing tools?


Glass-box testing, also called white-box testing, refers to testing with detailed
knowledge of the module's internals. These tools therefore concentrate on the
algorithms and data structures used in the development of modules, and they are
more likely to perform testing on individual modules than on the whole application.
Black-box testing tools refer to testing the interface, functionality and performance
of a system module and the whole system.

What was the test team hierarchy?


Project Leader
QA lead
QA Analyst
Tester

Which MR tool did you use to write MRs?


Test Director
Rational ClearQuest.
PVCS Tracker

What are the different automation tools you know?


Automation tools provided by Mercury Interactive - WinRunner, LoadRunner;
Rational – Rational Robot; Segue- SilkTest.

What is the role of a bug tracking system?


A bug tracking system captures, manages and communicates changes, issues and
tasks, providing basic process control to ensure coordination and communication
within and across development and content teams at every step.

What is ODBC?
Open Database Connectivity (ODBC) is an open standard application-programming
interface (API) for accessing a database. ODBC is based on Structured Query
Language (SQL) Call-Level Interface. It allows programs to use SQL requests that
will access databases without having to know the proprietary interfaces to the
databases. ODBC handles the SQL request and converts it into a request the
individual database system understands.
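
A bare-bones sketch of the idea using the standard ODBC C API is shown below; the data source name, credentials, table and query are placeholders, and return-code checking is omitted for brevity.

    #include <sql.h>
    #include <sqlext.h>
    #include <cstdio>

    int main() {
        SQLHENV env = SQL_NULL_HENV;
        SQLHDBC dbc = SQL_NULL_HDBC;
        SQLHSTMT stmt = SQL_NULL_HSTMT;

        // Allocate an environment handle and request ODBC 3.x behavior.
        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
        SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);

        // Connect through the driver manager using a configured data source name.
        SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
        SQLConnect(dbc, (SQLCHAR*)"MyDataSource", SQL_NTS,
                   (SQLCHAR*)"user", SQL_NTS, (SQLCHAR*)"password", SQL_NTS);

        // Send a standard SQL request; the driver translates it for the database.
        SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
        SQLExecDirect(stmt, (SQLCHAR*)"SELECT COUNT(*) FROM customers", SQL_NTS);

        SQLINTEGER count = 0;
        SQLLEN indicator = 0;
        if (SQLFetch(stmt) == SQL_SUCCESS) {
            SQLGetData(stmt, 1, SQL_C_LONG, &count, 0, &indicator);
            std::printf("Row count: %ld\n", (long)count);
        }

        // Release resources in reverse order of allocation.
        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        SQLDisconnect(dbc);
        SQLFreeHandle(SQL_HANDLE_DBC, dbc);
        SQLFreeHandle(SQL_HANDLE_ENV, env);
        return 0;
    }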

Did you ever have problems working with developers?


NO. I had a good rapport with the developers.

Describe your experience with code analyzers?


Code analyzers generally check for bad syntax, logic, and other language-specific
programming errors at the source level. This level of testing is often referred to as
unit testing and server component testing. I used code analyzers as part of white
box testing.

How do you feel about cyclomatic complexity?


Cyclomatic complexity is a measure of the number of linearly independent paths
through a program module; it reflects the number of ways there are to traverse a
piece of code. This determines the minimum number of test cases needed to exercise
all independent paths through the program.
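
For example (a hypothetical function), counting decision points gives a quick estimate: cyclomatic complexity is roughly the number of decisions plus one, and at least that many test cases are needed to cover the independent paths.

    #include <string>

    // Hypothetical function with 4 decision points (two 'if's, one loop
    // condition, and one final 'if'), so its cyclomatic complexity is
    // 4 + 1 = 5; at least five test cases are needed to cover every
    // linearly independent path.
    std::string classifyScores(const int* scores, int count) {
        if (count == 0) {                       // decision 1
            return "empty";
        }
        int failures = 0;
        for (int i = 0; i < count; ++i) {       // decision 2
            if (scores[i] < 60) {               // decision 3
                ++failures;
            }
        }
        if (failures > 0) {                     // decision 4
            return "has failures";
        }
        return "all passed";
    }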

Who should test your code?
QA Tester

How do you survive chaos?


I survive by maintaining my calm and focusing on the work.

What Process/Methodologies are you familiar with?


 Waterfall methodology
 Spiral methodology
[Or talk about Customized methodology of the specific client]

What you will do during the first day of job?


Get acquainted with my team and application

Tell me about the worst boss you’ve ever had.


Fortunately I have always had good bosses; speaking in professional terms, I have no
complaints about my bosses.

What is a successful product?


A bug free product, meeting the expectations of the user would make the product
successful.

What do you like about Windows?


Interface and User friendliness
Windows is one of the best pieces of software I have ever used. It is user friendly and
very easy to learn.

What is good code?


These are some important qualities of good code
Cleanliness: Clean code is easy to read; this lets people read it with minimum effort
so that they can understand it easily.
Consistency: Consistent code makes it easy for people to understand how a
program works; when reading consistent code, one subconsciously forms a number
of assumptions and expectations about how the code works, so it is easier and safer
to make modifications to it.
Extensibility: General-purpose code is easier to reuse and modify than very specific
code with lots of hard coded assumptions. When someone wants to add a new
feature to a program, it will obviously be easier to do so if the code was designed to
be extensible from the beginning.
Correctness: Finally, code that is designed to be correct lets people spend less time
worrying about bugs and more time enhancing the features of a program.
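
As a small hypothetical illustration of the 'extensibility' point above, compare a function with a hard-coded assumption to a more general-purpose version:

    // Hard-coded assumption: only ever applies a fixed 7% sales tax.
    double addSalesTaxFixed(double amount) {
        return amount * 1.07;
    }

    // General-purpose version: the rate is a parameter, so supporting a new
    // tax rate (a likely future feature request) requires no code change here.
    double addSalesTax(double amount, double ratePercent) {
        return amount * (1.0 + ratePercent / 100.0);
    }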

Who are Kent Beck, Dr Grace Hopper, and Dennis Ritchie?


Kent Beck is the author of Extreme Programming Explained and Smalltalk Best
Practice Patterns.
Dr. Grace Murray Hopper was a remarkable woman who grandly rose to the
challenges of programming the first computers. During her lifetime as a leader in the
field of software development concepts, she contributed to the transition from
primitive programming techniques to the use of sophisticated compilers.
Dennis Ritchie created the C programming language.

How will you begin to improve the QA process?


By following defined QA methodologies like waterfall or spiral instead of using ad-hoc
procedures.

What is UML and how it is used for testing?


The Unified Modeling Language (UML) is the industry-standard language for
specifying, visualizing, constructing, and documenting the artifacts of software
systems. It simplifies the complex process of software design, making a "blueprint"
for construction. UML state charts provide a solid basis for test generation in a form
that can be easily manipulated. This technique includes coverage criteria that enable
highly effective tests to be developed. A tool has been developed that uses UML state
charts produced by Rational Software Corporation's Rational Rose tool to generate
test data.

What are CMM and CMMI? What is the difference?


The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging
the maturity of the software processes of an organization and for identifying the key
practices that are required to increase the maturity of these processes.

The Capability Maturity Model Integration (CMMI) provides the guidance for
improving your organization's processes and your ability to manage the
development, acquisition, and maintenance of products and services. CMM
Integration places proven practices into a structure that helps your organization
assess its organizational maturity and process area capability, establish priorities for
improvement, and guide the implementation of these improvements.

The new integrated model (CMMI) uses Process Areas (known as PAs), which differ
from those of the previous model, and it covers systems processes as well as
software processes, rather than only software processes as in the SW-CMM.

Do you have a favorite QA book? Why?


Effective Methods for Software Testing - Perry, William E.

It covers the whole software lifecycle, starting with testing the project plan and
estimates and ending with testing the effectiveness of the testing process. The book
is packed with checklists, worksheets and N-step procedures for each stage of
testing.

When should testing be stopped?


This can be difficult to determine. Many modern software applications are so
complex, and run in such an interdependent environment, that complete testing can
never be done. Common factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends

When do you start developing your automation tests?


First, the application has to be tested manually. Automation development starts once
the manual testing is over and a baseline is established.

What are positive scenarios?


Testing to see whether the application is doing what it is supposed to do.

What are negative scenarios?


Testing to see whether the application is not doing what it is not supposed to do.
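As a small illustration (the age rule and method names below are invented, not taken from any particular application), a positive scenario feeds valid input and expects it to be accepted, while negative scenarios feed invalid input and expect it to be rejected:

    // Hypothetical example of positive vs. negative scenarios for an input validator.
    // Run with: java -ea AgeValidatorTest  (the -ea flag enables assertions)
    public class AgeValidatorTest {

        // Assumed rule for illustration only: an age is valid if it is between 1 and 120.
        static boolean isValidAge(int age) {
            return age >= 1 && age <= 120;
        }

        public static void main(String[] args) {
            // Positive scenario: the application does what it is supposed to do.
            assert isValidAge(30) : "a valid age should be accepted";

            // Negative scenarios: the application must not accept what it should reject.
            assert !isValidAge(0)   : "zero should be rejected";
            assert !isValidAge(-5)  : "a negative age should be rejected";
            assert !isValidAge(500) : "an unrealistic age should be rejected";

            System.out.println("All scenarios passed");
        }
    }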

What is quality assurance?


The set of support activities (including facilitation, training, measurement and
analysis) needed to provide adequate confidence that processes are established and
continuously improved in order to produce products that meet specifications and are
fit for use.

What is the purpose of the testing?
Testing provides information whether or not a certain product meets the
requirements.

What is the difference between QA and testing?


Quality Assurance is the set of activities that are carried out to set standards and to
monitor and improve performance so that the product delivered is as effective and as
reliable as possible. Testing provides information on whether or not a certain product
meets the requirements; it also provides information on where the product fails to
meet the requirements.

What are the benefits of test automation?


1. Fast
2. Reliable
3. Repeatable
4. Programmable
5. Comprehensive
6. Reusable

Describe some problems that you had with automation testing tools
One of the problems with automation tools is object recognition.

Can test automation improve test effectiveness?


Yes, because of the advantages offered by test automation, which includes
repeatability, consistency, portability and extensive reporting features.

What is the main use of test automation?


Regression Testing.

Does automation replace manual testing?


No, it does not. There could be several scenarios that cannot be automated, or that
are simply so complicated that manual testing would be easier and more cost
effective. Furthermore, automation tools have several constraints with regard to the
environment in which they run and the IDEs they support.

How will you choose a tool for test automation?


OR
How we decide which automation tool we are going to use for the
regression testing?
• Based on risk analysis: personnel skills, the company's software resources
• Based on cost analysis
• Comparing the tool's features with the test requirements
• Support for the application's IDE, and support for the application
environment/platform.

What could go wrong with automation testing?


There are several things. For example, script errors can cause a genuine bug to go
undetected, or can report a bug in the application when the bug does not actually exist.

How will you describe testing activities?


Test planning, scripting, execution, defect reporting and tracking, and regression
testing.

What type of scripting techniques for test automation do you know?


Modular tests and data-driven tests

What are good principles for test scripts?


1. Portable
2. Repeatable
3. Reusable
4. Maintainable

What type of document do you need for QA, QC and testing?


Following is the list of documents required by QA and QC teams
Business requirements
SRS
Use cases
Test plan
Test cases

What are the properties of a good requirement?


Understandable, Clear, Concise, Total Coverage of the application

What kinds of testing have you done?


Manual, automation, regression, integration, system, stress, performance, volume,
load, white box, user acceptance, recovery.

Have you ever written test cases or did you just execute those written by
others?
Yes, I was involved in preparing and executing test cases in the entire project.

How do you determine what to test?


Depending upon the User Requirement document.

How do you decide when you have ‘tested enough?’


Using Exit Criteria document we can decide that we have done enough testing.

Realizing you won’t be able to test everything, how do you decide what to test first? OR
What if there isn't enough time for thorough testing?
Use risk analysis to determine where testing should be focused. Since it's rarely
possible to test every possible aspect of an application, every possible combination of
events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. This requires judgment skills,
common sense, and experience. (If warranted, formal methods are also available.)
Considerations can include:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development
cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance
expenses?
• Which parts of the requirements and design are unclear or poorly thought
out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?

Where do you get your expected results?
User requirement document

If automating, what is your process for determining what to automate and in what order? OR
Can you automate all the test scripts? Explain? OR
How do you plan test automation? OR
What criteria do you use when determining when to automate a test or leave it manual?
1. Test that need to be run for every build of the application
2. Tests that use multiple data values for the same actions (data driven tests)
3. Tests that require detailed information from application internals
4. Stress/ load testing

If you were given a program that will average student grades, what kinds of
inputs would you use?
Name of student, Subject, Score
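A hedged sketch of how such inputs might be exercised (the average method and its 0-100 score range are assumptions made for illustration): typical values, boundary values, and invalid values that the program should reject.

    // Hypothetical input set for a grade-averaging program.
    public class GradeAverageInputs {

        static double average(int[] scores) {
            if (scores == null || scores.length == 0) {
                throw new IllegalArgumentException("at least one score is required");
            }
            double sum = 0;
            for (int s : scores) {
                if (s < 0 || s > 100) {                    // assumed valid range
                    throw new IllegalArgumentException("score out of range: " + s);
                }
                sum += s;
            }
            return sum / scores.length;
        }

        public static void main(String[] args) {
            System.out.println(average(new int[] {70, 80, 90})); // typical case: 80.0
            System.out.println(average(new int[] {0, 100}));     // boundary values: 50.0
            // average(new int[] {}) and average(new int[] {-1}) are negative cases
            // that should raise an error rather than return a misleading average.
        }
    }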

What is the exact difference between Integration and System testing, give
me examples with your project?
Integration testing: An orderly progression of testing in which software
components or hardware components, or both are combined and tested until the
entire system has been integrated.
System testing: The Process of testing an integrated hardware and software
system to verify that the system meets its specified requirements.

How do you go about testing a project?


1. Analyze the user requirement documents and other documents like software
specifications, design documents, etc.
2. Write the master test plan, which describes the scope, objective, strategy,
risks/contingencies and resources.
3. Write the system test plan and detailed test cases.
4. Execute test cases manually and compare actual results against expected
results.
5. Identify mismatches and report defects to the development team using a defect
reporting tool.
6. Track defects and perform regression testing to verify that each defect is fixed
and did not disturb other parts of the application.
7. Once all the defects are closed and the application is stabilized, automate the test
scripts for regression and performance testing.

How do you go about testing a web application?


We check for User interface, Functionality, Interface testing, Compatibility,
Load/Stress, and Security.

Difference between Black and White box testing?


Black box testing: Functional testing based on requirements with no knowledge of
the internal program structure or data. Also known as closed-box testing.
White Box testing: Testing approaches that examine the program structure and
derive test data from the program logic.
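For illustration (the discount rule below is invented), the same method can be approached both ways: a black-box test checks the expected output for a given input against the specification alone, while white-box tests are derived from the code so that every branch is exercised.

    // Hypothetical example contrasting black-box and white-box test design.
    public class DiscountTest {

        // Assumed rule for illustration: orders of 100 or more get a 10-unit discount.
        static int price(int amount) {
            if (amount >= 100) {
                return amount - 10;   // branch 1
            }
            return amount;            // branch 2
        }

        public static void main(String[] args) {
            // Black-box: derived from the specification only (input -> expected output).
            System.out.println(price(200) == 190);

            // White-box: derived from the program logic, covering both branches.
            System.out.println(price(100) == 90);   // boundary of branch 1
            System.out.println(price(50) == 50);    // branch 2
        }
    }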

What is configuration management? Tools used?


Configuration management: helps teams control their day-to-day management of
software development activities as software is created, modified, built and delivered.
Comprehensive software configuration management includes version control,
workspace management, build management, and process control to provide better
project control and predictability
What are Individual test case and Workflow test case? Why we do workflow
scenarios
An individual test is one that covers a single feature or requirement. However, it is
important that related sequences of features be tested as well, as these correspond
to units of work that a user will typically perform. It is important for the system
tester to become familiar with what users intend to do with the product and how
they intend to do it. Such testing can reveal errors that might not ordinarily be
caught otherwise. For example, while each operation in a series might produce the
correct result, it is possible that intermediate results get lost or corrupted between
operations.

What testing tools are you familiar with?


TestDirector, WinRunner, LoadRunner, Rational RequisitePro, Rational TestManager,
Rational Robot, Rational ClearQuest and SilkTest.

How did you use automating testing tools in your job?


Automating testing tools are used for preparing and managing regression test scripts
and load and performance tests.

What is data-driven automation?


If we want to perform the same operations with different sets of data, we can create
a data-driven test with a loop. In each iteration the test is driven by a different set of
data. In order for automation to use data to drive the test, we must substitute the
fixed values in the test with variables.
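A minimal, tool-independent sketch of the idea (the login check and the data values are invented for illustration): the same test steps run once per row of data, with the fixed values replaced by variables.

    // Hypothetical data-driven test: one set of actions, many rows of data.
    public class DataDrivenLoginTest {

        // Stand-in for the application under test.
        static boolean login(String user, String password) {
            return "admin".equals(user) && "secret".equals(password);
        }

        public static void main(String[] args) {
            // Each row: user, password, expected result.
            Object[][] testData = {
                {"admin", "secret", true},   // valid credentials
                {"admin", "wrong",  false},  // wrong password
                {"",      "secret", false},  // missing user name
            };

            // The same steps repeat; only the data changes on each iteration.
            for (Object[] row : testData) {
                boolean actual = login((String) row[0], (String) row[1]);
                boolean expected = (Boolean) row[2];
                System.out.println((actual == expected ? "PASS " : "FAIL ") + "user=" + row[0]);
            }
        }
    }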

Describe the difference between validation and verification.


Verification: typically involves reviews and meetings to evaluate documents, plans,
code, requirements, and specifications. This can be done with checklists, issues lists,
walkthroughs, and inspection meetings.

Validation: typically involves actual testing and takes place after verifications are
completed. The term 'IV & V' refers to Independent Verification and Validation

Is coding required in SQA robot?


Yes, to enhance the script for testing the business logic, and when we write
user-defined functions.

What do you mean by “set up the test environment and provide full platform support”?
We need to provide the following for setting up the environment
1) Required software
2) Required hardware
3) Required testing tools
4) Required test data
After providing these we need to provide support for any problems that occur during
the testing process.

If the functionality of an application had an inbuilt bug because of which the test script fails, would you automate the test?
No, we do the automation once the application is tested manually and it is stabilized.
Automation is for regression testing.

What is the bug reporting tool used?


Rational ClearQuest
TestDirector
PVCS Tracker

Did you use SQA Manager?
Yes. For creating test plan and defect reporting/tracking.

You find a bug and the developer says “It’s not possible.” What do you do?
I’ll discuss with him under what conditions (working environment) the bug was
produced. I’ll provide him with more details and the snapshot of the bug.

How do you help the developer to track the faults in the software?


By providing him with details of the defects, which include the environment, test
data, steps followed etc… and helping him to reproduce the defect in his
environment.

Were you able to meet deadlines?


Absolutely.

What is Polymorphism? Give example.


In object-oriented programming, polymorphism refers to a programming language's
ability to process objects differently depending on their data type or class. More
specifically, it is the ability to redefine methods for derived classes.
For example, given a base class shape, polymorphism enables the programmer to
define different circumference methods for any number of derived classes, such as
circles, rectangles and triangles.

No matter what shape an object is, applying the circumference method to it will
return the correct results. Polymorphism is considered to be a requirement of any
true object-oriented programming language (OOPL).
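A short Java sketch of the shape example described above (the class and method names are only illustrative):

    // Illustrative Java version of the shape example: the same call works on any
    // shape, and the right circumference method is chosen at run time.
    abstract class Shape {
        abstract double circumference();
    }

    class Circle extends Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        double circumference() { return 2 * Math.PI * radius; }
    }

    class Rectangle extends Shape {
        private final double width, height;
        Rectangle(double width, double height) { this.width = width; this.height = height; }
        double circumference() { return 2 * (width + height); }
    }

    public class PolymorphismDemo {
        public static void main(String[] args) {
            Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
            for (Shape s : shapes) {
                System.out.println(s.circumference()); // each shape answers in its own way
            }
        }
    }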

What are the different types of MRs?


MR for suggestions,
MR for defect reports,
MR for documentation changes

What are test metrics?


Test metrics contain the following details:
• Total tests
• Tests run
• Tests passed
• Tests failed
• Tests deferred
• Tests passed the first time

What is the use of Metrics?


They provide an accurate measurement of test coverage.

If you have a shortage of time, how would you prioritize your testing?
Use risk analysis to determine where testing should be focused. Since it's rarely
possible to test every possible aspect of an application, every possible combination of
events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. Considerations can include:
• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development
cycle?
• Which parts of the code are most complex, and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance
expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities?
• Which tests will have the best high-risk-coverage to time-required ratio?

What is the impact of environment on the actual results of performance testing?
Environment plays an important role in the results and effectiveness of tests,
particularly in the area of performance testing. Some of the factors will be under our
control, while others will not be. These may involve the DBMS, the operating system
or the network. Some of the items that we cannot control, unless we can secure a
stand-alone environment (which will generally be unrealistic), are:
- Other traffic on the network
- Other processes running on the server
- Other processes running on the DBMS

What is stress testing, performance testing, security testing, recovery testing and volume testing?
Stress testing: Testing whether the system can handle peak usage period loads that
result from a large number of simultaneous users, transactions or devices. Monitoring
should be performed for throughput and system stability.

Performance Testing: Testing whether the system functions are performed in an
acceptable timeframe under simultaneous user load. Timings for both read and
update transactions should be gathered to determine whether they fall within
acceptable limits. This should be done stand-alone and then in a multi-user
environment to determine the transaction throughput.

Security Testing: Testing the system for its security from unauthorized use and
unauthorized data access.

Recovery Testing: Testing a system to see how it responds to errors and abnormal
conditions, such as system crash, loss of device, communications, or power.

Volume Testing: Testing the system to determine if it can correctly process large
volumes of data fed to the system. Systems can often respond unpredictably when
large volumes cause files to overflow and need extensions.

What criteria you will follow to assign severity and due date to the MR?
Defects (MR) are assigned severity as follows:
Critical: show stoppers (the system is unusable)
High: The system is very hard to use and some cases are prone to convert to critical
issues if not taken care of.
Medium: The system functionality has a major bug that is not critical but needs
to be fixed in order for the AUT to go to the production environment.
Low: cosmetic (GUI related)

What is user acceptance testing?


It is also called Beta Testing. Once System Testing is done and the system seems
stable to the developers and testers, system engineers usually invite the end users
of the software to see if they like the software. If the users like the software the
way it is then software will be delivered to the user. Otherwise necessary changes
will be made to the software and software will pass through all phases of testing
again.

What is manual testing and what is automated testing?


Manual testing involves testing of software application by manually performing the
actions on the AUT based on test plans.
Automated testing involves testing of a software application by performing the
actions on the AUT by using automated testing tool (such as WinRunner,
LoadRunner) based on test plans

What are the entrance and exit criteria in the system test?
Entrance and exit criteria of each testing phase are written in the master test plan.

Entrance Criteria:
- Integration exit criteria have been successfully met.
- All installation documents are completed.
- All shippable software has been successfully built.
- The system test plan is baselined by completing the walkthrough of the test plan.
- The test environment should be set up.
- All severity 1 MRs of the integration test phase should be closed.
Exit Criteria:
- All the test cases in the test plan should be executed.
- All MRs/defects are either closed or deferred.
- A regression testing cycle should be executed after closing the MRs.
- All documents are reviewed, finalized and signed off.

If there are no requirements, how will you write your test plan?
If there are no requirements, we try to gather as much detail as possible from:
• Business Analysts
• Developers (if accessible)
• Previous version documentation (if any)
• Stakeholders (if accessible)
• Prototypes

What is smoke testing?


The smoke test should exercise the entire system from end to end. It does not have
to be exhaustive, but it should be capable of exposing major problems. The smoke
test should be thorough enough that if the build passes, you can assume that it is
stable enough to be tested more thoroughly.

The daily build has little value without the smoke test. The smoke test is the sentry
that guards against deteriorating product quality and creeping integration problems.
Without it, the daily build becomes just a time-wasting exercise in ensuring that you
have a clean compile every day.

The smoke test must evolve as the system evolves. At first, the smoke test will
probably test something simple, such as whether the system can say, "Hello, World."
As the system develops, the smoke test will become more thorough. The first test
might take a matter of seconds to run; as the system grows, the smoke test can
grow to 30 minutes, an hour, or more.
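One way a very small smoke test might look in code (a hedged sketch; the three checks are invented stand-ins that a real project would replace with calls against the actual build):

    // Hypothetical smoke test: a few broad end-to-end checks, not an exhaustive suite.
    public class SmokeTest {

        public static void main(String[] args) {
            boolean ok = true;

            ok &= check("application starts",   startApplication());
            ok &= check("login page reachable", openLoginPage());
            ok &= check("database connection",  connectToDatabase());

            System.out.println(ok ? "SMOKE TEST PASSED - build is stable enough to test"
                                  : "SMOKE TEST FAILED - reject the build");
        }

        static boolean check(String name, boolean result) {
            System.out.println((result ? "PASS " : "FAIL ") + name);
            return result;
        }

        // Stand-ins for real checks against the application under test.
        static boolean startApplication()  { return true; }
        static boolean openLoginPage()     { return true; }
        static boolean connectToDatabase() { return true; }
    }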

What is soak testing?


The software system will be run for a total of 14 hours continuously. If the system is
a control system, it will be used to continuously move each of the instrument
mechanisms during this time. Any other system will be expected to perform its
intended function continuously during this period. The software system must not fail
during this period.

What is pre-condition data?
Data that must be set up in the system before test execution.

What are the different documents in QA?


Requirements Document, Test Plan, Test cases, Test Metrics, Task Distribution
Diagrams (Performance), Transaction Mix, User Profiles, Test Log, Test Incident
Report, Test Summary Report

How do you rate yourself in software testing?


Excellent

What are the best web sites that you frequently visit to upgrade your QA
skills?
http://www.softwareqatest.com/
http://sqp.asq.org/

Is defect resolution a technical skill or an interpersonal skill from the QA viewpoint?
It is a combination of both, because it deals with interaction with the developer,
either directly or indirectly, which needs interpersonal skills. It is also based on the
skill of the QA personnel in providing detailed proof to the developer, such as
snapshots and system resource utilization, along with some suggestions that are a
little bit technical.

What is End to End business logic testing?


Testing the integration of all the modules of the AUT.

What is an equivalence class?


A portion of a component's input or output domains for which the component's
behavior is assumed to be the same, based on the component's specification.
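For example (the eligibility rule below is an assumption made for illustration), an input that accepts ages 18 to 65 has three equivalence classes: below 18, 18 to 65, and above 65. One representative value from each class is assumed to behave like every other value in that class.

    // Hypothetical equivalence classes for an input that accepts ages 18-65.
    public class EquivalenceClassDemo {

        static boolean isEligible(int age) {
            return age >= 18 && age <= 65;
        }

        public static void main(String[] args) {
            int below  = 10;  // class 1: age < 18        -> expected to be rejected
            int inside = 40;  // class 2: 18 <= age <= 65 -> expected to be accepted
            int above  = 70;  // class 3: age > 65        -> expected to be rejected

            System.out.println(!isEligible(below));
            System.out.println(isEligible(inside));
            System.out.println(!isEligible(above));
        }
    }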

What is the task bar and where does it reside?


The task bar shows a task button for each open application. At a glance, it shows you
which applications are running; you can switch applications by clicking different task
buttons. In most cases the task bar is located at the very bottom of your desktop or
computer screen. However the task bar can be moved to the sides or top.

How do you analyze your test results? What metrics do you try to provide?
OR
How do you view test results?
Test log is created for analyzing the test results. This is a chronological record of the
Test executions and events that happened during testing. It includes the following
sections:

Description: What’s being tested, including Version ID, where testing is being done,
what hardware and all other configuration information.
Activity and Event Entries: What happened including Execution Description: The
procedure used.
Procedure Result: What happened. What did you see and where did you store the
output?
Environment Information: Any changes (hardware substitution) made specifically
for this test.
Unexpected Events: What happened before and after problem/bug occurred.
Incident/Bug Report Identifiers: Problem Report number

If you come onboard, give me a general idea of what your first overall tasks
will be as far as starting a quality effort?
Try to learn about the application, environment and prototypes to get a better
understanding of the application and the existing testing efforts.
How do you differentiate the roles of Quality Assurance Manager and
Project Manager?
A quality assurance manager's responsibilities include setting up the standards, the
methodology and the strategies for testing the application, and providing guidelines
to the QA team. The project manager is responsible for the testing and development
activities.

What do you like about QA?


QA is a field in which one works with multiple environments and can keep learning
more.

Who in the company is responsible for Quality?


Both development and quality assurance departments are responsible for the final
product quality.

Should we test every possible combination/scenario for a program?


Ideally, yes we should test every possible scenario, but this may not always be
possible. It depends on many factors viz., deadlines, budget, complexity of software
and so on. In such cases, we have to prioritize and thoroughly test the critical areas
of the application.

What is client-server architecture?


In client-server architecture, a client is defined as a requester of services and a server
is defined as the provider of services. Communication takes place in the form of a
request message from the client to the server asking for some work to be done.
Then the server does the work and sends back the reply.

How is an intranet application different from client-server?


Intranet applications are essentially client/server applications - with web servers and
'browser' clients.

What is three-tier and multi-tier architecture?


A design that separates (1) the client, (2) the application, and (3) the data into their
own separate areas, which allows for more scalable, robust solutions.

A three-tier system is one that has presentation components, business logic and data
access physically running on different platforms. Web applications are perfect for
three-tier architecture, as the presentation layer is necessarily separate, and the
business and data components can be divided up much like a client-server
application

What is Internet?
The Internet, sometimes called simply "the Net," is a worldwide system of computer
networks - a network of networks in which users at any one computer can, if they
have permission, get information from any other computer. Physically, the Internet
uses a portion of the total resources of the currently existing public
telecommunication networks. Technically, what distinguishes the Internet is its use
of a set of protocols called TCP/IP (for Transmission Control Protocol/Internet
Protocol). Two recent adaptations of Internet technology, the intranet and the
extranet, also make use of the TCP/IP protocol.

What is Intranet?
An intranet is a private network that is contained within an enterprise. It may consist
of many interlinked local area networks and also use leased lines in the Wide Area
Network. The main purpose of an intranet is to share company information and
computing resources among employees. An intranet can also be used to facilitate
working in groups and for teleconferences.
What is Extranet?
An extranet is a private network that uses the Internet protocol and the public
telecommunication system to securely share part of a business's information or
operations with suppliers, vendors, partners, customers, or other businesses. An
extranet can be viewed as part of a company's intranet that is extended to users
outside the company. It has also been described as a "state of mind" in which the
Internet is perceived as a way to do business with other companies as well as to sell
products to customers.

What is a byte code file?


Machine-independent code generated by the Java(TM) compiler and executed by the
Java interpreter

What is an applet?
A program written in the Java(TM) programming language to run within a web
browser compatible with the Java platform, such as Hot Java(TM) or Netscape
Navigator(TM).

How is an applet different from an application?


An applet is an application program that uses the client's web browser to provide a
user interface, while an application is a standalone computer program with its own user interface.

What is Java Virtual Machine?


Java(TM) Virtual Machine. The part of the Java Runtime Environment responsible
for interpreting bytecodes.

What is ISO-9000?
A set of international standards for both quality management and quality assurance
that has been adopted by over 90 countries worldwide. The ISO 9000 standards
apply to all types of organizations, large and small, and in many industries. The ISO
9000 series classifies products into generic product categories: hardware, software,
processed materials, and services.

What is QMO?
The QMO is a set of processes and guidelines that software systems projects and
products that are built under a contract (with a customer) must follow to comply with
ISO-9000 standards. ISO-9000 states that the guidelines for software development
must be documented.

What is Object Oriented model?


In an Object Oriented model each class is a separate module and has a position in a
class hierarchy. Methods or code in one class can be passed down the hierarchy to a
subclass or inherited from a super class. This is called inheritance.

What is Procedural model?


A term used in contrast to declarative language to describe a language where the
programmer specifies an explicit sequence of steps to follow to produce a result.
Common procedural languages include Basic and C.

What is an Object?
An object is a software bundle of related variables and methods. Software objects
are often used to model real-world objects you find in everyday life

What is class?
A class is a blueprint or prototype that defines the variables and the methods
common to all objects of a certain kind.
What is encapsulation? Give one example.
Encapsulation is the ability to provide a well-defined interface to a set of functions in
a way which hides their internal workings. In object-oriented programming, the
technique of keeping together data structures and the methods (procedures) which
act on them.
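A small Java sketch of the idea (the account class is purely illustrative): the balance field is hidden, and callers can change it only through the methods that enforce the rules.

    // Illustrative example of encapsulation: the data and the methods that act on it
    // are kept together, and the internal state is hidden behind a small interface.
    public class Account {
        private double balance;   // not directly accessible from outside the class

        public void deposit(double amount) {
            if (amount <= 0) {
                throw new IllegalArgumentException("deposit must be positive");
            }
            balance += amount;
        }

        public double getBalance() {
            return balance;
        }

        public static void main(String[] args) {
            Account a = new Account();
            a.deposit(100.0);                  // allowed only through the public interface
            System.out.println(a.getBalance());
            // a.balance = -1;                 // would not compile from another class: the field is private
        }
    }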

What is inheritance? Give example.


In object-oriented programming, the ability to derive new classes from existing
classes. A derived class (or "subclass") inherits the instance variables and methods
of the "base class" (or "superclass"), and may add new instance variables and
methods. New methods may be defined with the same names as those in the base
class, in which case they override the original one.
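A brief Java sketch (the class names are only illustrative): the subclass inherits one method unchanged from the base class and overrides another with the same name.

    // Illustrative example of inheritance and method overriding.
    class Employee {
        String describe() { return "an employee"; }
        double bonus()    { return 100.0; }
    }

    class Manager extends Employee {
        @Override
        String describe() { return "a manager"; }  // overrides the base class method
        // bonus() is inherited unchanged from Employee.
    }

    public class InheritanceDemo {
        public static void main(String[] args) {
            Employee e = new Manager();
            System.out.println(e.describe());  // prints "a manager"
            System.out.println(e.bonus());     // prints 100.0 (inherited)
        }
    }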

What is the difference between web testing and client-server testing?


Web applications are essentially client/server applications - with web servers and
'browser' clients. Consideration should be given to the interactions between html
pages, TCP/IP communications, Internet connections, firewalls, applications that run
in web pages (such as applets, javascript, plug-in applications), and applications that
run on the server side (such as cgi scripts, database interfaces, logging applications,
dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers
and browsers, various versions of each, small but sometimes significant differences
between them, variations in connection speeds, rapidly changing technologies, and
multiple standards and protocols. The end result is that testing for web sites can
become a major ongoing effort.

Is a “Fast database retrieval rate” a testable requirement?


This is not a testable requirement. ‘Fast’ is a subjective term. It could mean different
things depending on a person’s perception. For a requirement to be testable, it
should be quantified and repeatable, so that the actual value could be measured
against the expected value.

What different type of test cases you wrote in the test plan?
Test cases for interface, functionality, security, load and performance testing.

What development model should programmers and the test group use?
A Development Model, which helps adopt a structured approach in assessment,
design, integration and implementation of a project and in extending relevant
training and support. Each of these stages is necessarily accompanied with client
inputs, checkpoints and reviews to ensure successful systems implementation.

Basically, there are many types of development models to support the development
of high-quality software products. The two most widely used are the Waterfall
and Spiral development models.
Waterfall development model encourages the development team to specify the
business functionality of the software prior to developing a system.
Spiral development model combines the waterfall development model and the
prototype approach, which is a series of partial implementations of the product.

A typical project may include some or all of the following phases:


• Requirements analysis
• Functional specifications
• Architectural design
• Detailed design
• Coding
• Unit testing
• Integration testing
• Deployment and
• Maintenance.

What are the key challenges of load testing?


The key challenge to load testing is handling various components from various
vendors.

Have you done exploratory or specification-driven testing?


Yes, specification-driven testing means checking the product’s conformance with
every statement in every spec, requirements document, etc.

What is the role of QA in development project?


Deploy and enforce standards
Continually improve standards, QA process based on previous experiences
Promote effective means for reporting and communication.

How do you promote the concept of phase containment and defect prevention?
Phase Containment refers to detecting and correcting defects in the same phase in
which they’re created.
The purpose of Defect Prevention is to identify the cause of defects and prevent
them from recurring.

What is inspection?
Inspection: An inspection is more formalized than a 'walkthrough', typically with 3-
8 people including a moderator, reader, and a recorder to take notes. The subject of
the inspection is typically a document such as a requirements spec or a test plan,
and the purpose is to find problems and see what's missing, not to fix anything.
Attendees should prepare for this type of meeting by reading through the document;
most problems will be found during this preparation. The result of the inspection
meeting should be a written report. Thorough preparation for inspections is difficult,
painstaking work, but is one of the most cost effective methods of ensuring quality.

What is Walkthrough?
Walkthrough: A 'walkthrough' is an informal meeting for evaluation or informational
purposes. Little or no preparation is usually required.

What if the application has functionality that wasn't in the requirements?


It may take serious effort to determine if an application has significant unexpected or
hidden functionality, and it would indicate deeper problems in the software
development process. If the functionality isn't necessary to the purpose of the
application, it should be removed, as it may have unknown impacts or dependencies
that were not taken into account by the designer or the customer. If not removed,
design information will be needed to determine added testing needs or regression
testing needs. Management should be made aware of any significant added risks as a
result of the unexpected functionality. If the functionality only affects areas such as
minor improvements in the user interface, for example, it may not be a significant
risk.

What are build, version, and release?


Build: An executable version of the software produced from a particular set of source
code and components, typically generated on a regular (for example, daily) basis so
that it can be tested.

Version: An identified instance of the software with a defined set of features; each
new version is distinguished by a version number.

Release: A version of the software that is formally made available to customers or
moved into the production environment.

How would you build a test team?


In a testing team we should have QA Analysts, who analyze the requirements and
develop the testing strategies. The QA Analysts are also involved in writing test plans
and test cases. QA Testers are needed to do the actual manual and automation
testing using the test plan and test cases. A QA lead should be there to look after the
entire testing process.

What do you like about computers?


Computers are fast, reliable and efficient in doing tasks

What is the role of the test group vis-a vis documentation, tech support, and
so forth?
The test group verifies the documentation to make sure it is correct, complete and
easy to understand. Coming to tech support, the application has to be tested for help
content, and once the application is deployed, the test group plays an important role
in providing technical support.

How much interaction with users should testers have, and why?
Testers should have good interaction with the users. It is basically the testers who
make sure the user requirements are satisfied and decide that the system is stable,
the system is error free and can be released for the users.

How should you learn about problems discovered in the field, and what
should you learn from those problems?
We can learn about the problems discovered in the field by the past experiences and
problems encountered specific to this field. We need to make sure that such errors or
defects won’t occur again and how to handle them effectively.

What issues come up in test automation, and how do you manage them?
Basically, test automation refers to anything that streamlines the testing process,
allowing things to move along more quickly and with less delay. The issues that
come up in test automation include deciding which test cases to automate and which
not to. Good candidates are test cases that have to be run often (regression testing),
or that are time consuming and need a lot of resources to run manually, such as
measuring performance and load/stress characteristics. We have to manage them by
providing good documentation and proper scheduling of test cases.

How do you get programmers to build testability support into their code?
Providing good documentation and using modular programming techniques help to test
the application efficiently. When defects are detected, they should be reported in a
proper way so that they are reproducible, along with the input data and the output
results observed for that defect.

Do the words “Prevention” and “Detection” sound familiar? Explain.


These terms are used to contrast two types of quality activities. Prevention refers to those
activities designed to prevent nonconformance in products and services. Detection
refers to those activities designed to detect nonconformances already in products
and services. Another term used to describe this distinction is "designing in quality
vs. inspecting in quality."

In general how do you see automation fitting into the overall process of
testing?
Automation is used for regression testing (repeatability and reusability of scripts).
Using Automation we can make the testing process faster and reliable.

How does unit testing play a role in the development/software lifecycle?
Software passes through various phases of testing like unit, integration, system,
system integration and user acceptance testing. Unit testing is a white-box testing
done by the developer. The developer fully tests the module that he has developed.
He makes sure that there are no logical errors and the module meets the
requirements.

What should Development require of QA?


Development and QA should work together to ensure that their common objective is
reached, viz., release of a high quality software product that meets the end-users
requirements. QA should be involved as early as possible in the software life cycle.
They should test the product from the end-users point of view, report defects, track
defects, ensure defects are fixed and also make suggestions for improvement to the
development team.

What should QA require of Development?


Development and QA should work together to ensure that their common objective is
reached, viz., release of a high quality software product that meets the end-users
requirements. Ideally, the software should be developed keeping in mind the ease of
testing. Development should ensure that the bugs are fixed and testing is kept
informed of any changes in end-user requirements, so that testing can update their
test cases.

How do you introduce a new software QA process?


A: It depends on the size of the organization and the risks involved. For large organizations with
high-risk projects, a serious management buy-in is required and a formalized QA process is
necessary. For medium size organizations with lower risk projects, management and
organizational buy-in and a slower, step-by-step process is required. Generally speaking, QA
processes should be balanced with productivity, in order to keep any bureaucracy from getting
out of hand. For smaller groups or projects, an ad-hoc process is more appropriate. A lot depends
on team leads and managers, feedback to developers and good communication is essential
among customers, managers, developers, test engineers and testers. Regardless the size of the
company, the greatest value for effort is in managing requirement processes, where the goal is
requirements that are clear, complete and
testable.

Q. Give me five common problems that occur during software development.
A: Poorly written requirements, unrealistic schedules, inadequate testing, adding new features
after development is underway and poor communication.

1. Requirements are poorly written when requirements are unclear, incomplete, too general,
or not testable; therefore there will be problems.
2. The schedule is unrealistic if too much work is crammed in too little time.
3. Software testing is inadequate if no one knows whether or not the software is any good
until customers complain or the system crashes.
4. It's extremely common that new features are added after development is underway.
5. Miscommunication either means the developers don't know what is needed, or customers
have unrealistic expectations and therefore problems are guaranteed.

Q. Give me five solutions to problems that occur during software development.
A: Solid requirements, realistic schedules, adequate testing, firm requirements and good
communication.

1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and
testable. All players should agree to requirements. Use prototypes to help nail down
requirements.
2. Have schedules that are realistic. Allow adequate time for planning, design, testing, bug
fixing, re-testing, changes and documentation. Personnel should be able to complete the
project without burning out.
3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan
for sufficient time for both testing and bug fixing.
4. Avoid new features. Stick to initial requirements as much as possible. Be prepared to
defend design against changes and additions, once development has begun and be
prepared to explain consequences. If changes are necessary, ensure they're adequately
reflected in related schedule changes. Use prototypes early on so customers'
expectations are clarified and customers can see what to expect; this will minimize
changes later on.
5. Communicate. Require walkthroughs and inspections when appropriate; make extensive
use of e-mail, networked bug-tracking tools, tools of change management. Ensure
documentation is available and up-to-date. Use documentation that is electronic, not
paper. Promote teamwork and cooperation.

Q. Do automated testing tools make testing easier?


A: Yes and no. For larger projects, or ongoing long-term projects, they can be valuable. But for
small projects, the time needed to learn and implement them is usually not worthwhile. A common
type of automated tool is the record/playback type. For example, a test engineer clicks through all
combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated
testing tool record and log the results. The recording is typically in the form of text, based on a
scripting language that the testing tool can interpret. If a change is made (e.g. new buttons are
added, or some underlying code in the application is changed), the application is then retested by
just playing back the recorded actions and compared to the logged results in order to check
effects of the change. One problem with such tools is that if there are continual changes to the
product being tested, the recordings have to be changed so often that it becomes a very time-
consuming task to continuously update the scripts. Another problem with such tools is the
interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.

Q. What makes a good test engineer?


A: Rob Davis is a good test engineer because he

• Has a "test to break" attitude,


• Takes the point of view of the customer,
• Has a strong desire for quality,
• Has an attention to detail, He's also
• Tactful and diplomatic and
• Has good communication skills, both oral and written. And he
• Has previous software development experience, too.

Good test engineers have a "test to break" attitude, they take the point of view of the customer,
have a strong desire for quality and an attention to detail. Tact and diplomacy are useful in
maintaining a cooperative relationship with developers and an ability to communicate with both
technical and non-technical people. Previous software development experience is also helpful as
it provides a deeper understanding of the software development process, gives the test engineer
an appreciation for the developers' point of view and reduces the learning curve in automated test
tool programming.

Q. What makes a good QA engineer?


A: The same qualities a good test engineer has are useful for a QA engineer. Additionally, Rob
Davis understands the entire software development process and how it fits into the business
approach and the goals of the organization. Rob Davis' communication skills and the ability to
understand various sides of issues are important.
Good QA engineers understand the entire software development process and how it fits into the
business approach and the goals of the organization. Communication skills and the ability to
understand various sides of issues are important.

Q. What makes a good resume?


A: On the subject of resumes, there seems to be an unending discussion of whether you should
or shouldn't have a one-page resume. The followings are some of the comments I have
personally heard: "Well, Joe Blow (car salesman) said I should have a one-page resume." "Well, I
read a book and it said you should have a one page resume." "I can't really go into what I really
did because if I did, it'd take more than one page on my resume." "Gosh, I wish I could put my job
at IBM on my resume but if I did it'd make my resume more than one page, and I was told to
never make the resume more than one page long." "I'm confused, should my resume be more
than one page? I feel like it should, but I don't want to break the rules." Or, here's another
comment, "People just don't read resumes that are longer than one page." I have heard some
more, but we can start with these. So what's the answer? There is no scientific answer about
whether a one-page resume is right or wrong. It all depends on who you are and how much
experience you have. The first thing to look at here is the purpose of a resume. The purpose of a
resume is to get you an interview. If the resume is getting you interviews, then it is considered to
be a good resume. If the resume isn't getting you interviews, then you should change it. The
biggest mistake you can make on your resume is to make it hard to read. Why? Because, for one,
scanners don't like odd resumes. Small fonts can make your resume harder to read. Some
candidates use a 7-point font so they can get the resume onto one page. Big mistake. Two,
resume readers do not like eye strain either. If the resume is mechanically challenging, they just
throw it aside for one that is easier on the eyes. Three, there are lots of resumes out there these
days, and that is also part of the problem. Four, in light of the current scanning scenario, more
than one page is not a deterrent because many will scan your resume into their database. Once
the resume is in there and searchable, you have accomplished one of the goals of resume
distribution. Five, resume readers don't like to guess and most won't call you to clarify what is on
your resume. Generally speaking, your resume should tell your story. If you're a college graduate
looking for your first job, a one-page resume is just fine. If you have a longer story, the resume
needs to be longer. Please put your experience on the resume so resume readers can tell when
and for whom you did what. Short resumes -- for people long on experience -- are not
appropriate. The real audience for these short resumes is people with short attention spans and
low IQs. I assure you that when your resume gets into the right hands, it will be read thoroughly.

Q. What makes a good QA/Test Manager?


A: QA/Test Managers are familiar with the software development process; able to maintain
enthusiasm of their team and promote a positive atmosphere; able to promote teamwork to
increase productivity; able to promote cooperation between Software and Test/QA Engineers,
have the people skills needed to promote improvements in QA processes, have the ability to
withstand pressures and say *no* to other managers when quality is insufficient or QA processes
are not being adhered to; able to communicate with technical and non-technical people; as well
as able to run meetings and keep them focused.

Q. What is the role of documentation in QA?


A: Documentation plays a critical role in QA. QA practices should be documented, so that they
are repeatable. Specifications, designs, business rules, inspection reports, configurations, code
changes, test plans, test cases, bug reports, user manuals should all be documented. Ideally,
there should be a system for easily finding and obtaining documents and determining what
document will have a particular piece of information. Use documentation change management, if
possible.

Q. What about requirements?


A: Requirement specifications are important and one of the most reliable methods of insuring
problems in a complex software project is to have poorly documented requirement specifications.
Requirements are the details describing an application's externally perceived functionality and
properties. Requirements should be clear, complete, reasonably detailed, cohesive, attainable
and testable. A non-testable requirement would be, for example, "user-friendly", which is too
subjective. A testable requirement would be something such as, "the product shall allow the user

to enter their previously-assigned password to access the application". Care should be taken to
involve all of a project's significant customers in the requirements process. Customers could be
in-house or external and could include end-users, customer acceptance test engineers, testers,
customer contract officers, customer management, future software maintenance engineers,
salespeople and anyone who could later derail the project if his/her expectations aren't met; they
should be included as customers, if possible. In some organizations, requirements may end up in
high-level project plans, functional specification documents, design documents, or other
documents at various levels of detail. No matter what they are called, some type of
documentation with detailed requirements will be needed by test engineers in order to properly
plan and execute tests. Without such documentation there will be no clear-cut way to determine if
a software application is performing correctly.

Q. What is a test plan?


A: A software project test plan is a document that describes the objectives, scope, approach and
focus of a software testing effort. The process of preparing a test plan is a useful way to think
through the efforts needed to validate the acceptability of a software product. The completed
document will help people outside the test group understand the why and how of product
validation. It should be thorough enough to be useful, but not so thorough that no one outside the
test group will be able to read it.

Q. What is a test case?


A: A test case is a document that describes an input, action, or event and its expected result, in
order to determine if a feature of an application is working correctly. A test case should contain
particulars such as a...

• Test case identifier;


• Test case name;
• Objective;
• Test conditions/setup;
• Input data requirements/steps, and
• Expected results.

Please note, the process of developing test cases can help find problems in the requirements or
design of an application, since it requires you to completely think through the operation of the
application. For this reason, it is useful to prepare test cases early in the development cycle, if
possible.

Q. What should be done after a bug is found?


A: When a bug is found, it needs to be communicated and assigned to developers that can fix it.
After the problem is resolved, fixes should be re-tested. Additionally, determinations should be
made regarding requirements, software, hardware, safety impact, etc., for regression testing to
check the fixes didn't create other problems elsewhere. If a problem-tracking system is in place, it
should encapsulate these determinations. A variety of commercial, problem-
tracking/management software tools are available. These tools, with the detailed input of software
test engineers, will give the team complete information so developers can understand the bug,
get an idea of its severity, reproduce it and fix it.

Q. What is configuration management?


A: Configuration management (CM) covers the tools and processes used to control, coordinate
and track code, requirements, documentation, problems, change requests, designs, tools,
compilers, libraries, patches, changes made to them and who makes the changes. Rob Davis has
had experience with a full range of CM tools and concepts. Rob Davis can easily adapt to your
software tool and process needs.

Q. What if the software is so buggy it can't be tested at all?


A: In this situation the best bet is to have test engineers go through the process of reporting
whatever bugs or problems initially show up, with the focus being on critical bugs. Since this type
of problem can severely affect schedules and indicates deeper problems in the software
development process, such as insufficient unit testing, insufficient integration testing, poor design,
improper build or release procedures, managers should be notified and provided with some
documentation as evidence of the problem.

Q. How do you know when to stop testing?


A: This can be difficult to determine. Many modern software applications are so complex and run
in such an interdependent environment, that complete testing can never be done. Common
factors in deciding when to stop are...

• Deadlines, e.g. release deadlines, testing deadlines;


• Test cases completed with certain percentage passed;
• Test budget has been depleted;
• Coverage of code, functionality, or requirements reaches a specified point;
• Bug rate falls below a certain level; or
• Beta or alpha testing period ends.

Q. What if there isn't enough time for thorough testing?


A: Since it's rarely possible to test every possible aspect of an application, every possible
combination of events, every dependency, or everything that could go wrong, risk analysis is
appropriate to most software development projects. Use risk analysis to determine where testing
should be focused. This requires judgment skills, common sense and experience. The checklist
should include answers to the following questions:
 Which functionality is most important to the project's intended purpose?
 Which functionality is most visible to the user?
 Which functionality has the largest safety impact?
 Which functionality has the largest financial impact on users?
 Which aspects of the application are most important to the customer?
 Which aspects of the application can be tested early in the development cycle?
 Which parts of the code are most complex and thus most subject to errors?
 Which parts of the application were developed in rush or panic mode?
 Which aspects of similar/related previous projects caused problems?
 Which aspects of similar/related previous projects had large maintenance expenses?
 Which parts of the requirements and design are unclear or poorly thought out?
 What do the developers think are the highest-risk aspects of the application?
 What kinds of problems would cause the worst publicity?
 What kinds of problems would cause the most customer service complaints?
 What kinds of tests could easily cover multiple functionalities?
 Which tests will have the best high-risk-coverage to time-required ratio?
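One common way to turn answers to such a checklist into testing priorities is a simple likelihood-times-impact score per feature. A minimal sketch in Python, with entirely hypothetical feature names and ratings:

# Rate each feature 1 (low) to 5 (high) for likelihood of failure and impact of failure.
features = {
    "checkout/payment":  {"likelihood": 4, "impact": 5},
    "user registration": {"likelihood": 2, "impact": 3},
    "report printing":   {"likelihood": 3, "impact": 2},
}

ranked = sorted(features.items(),
                key=lambda item: item[1]["likelihood"] * item[1]["impact"],
                reverse=True)

for name, rating in ranked:
    print(name, "risk score:", rating["likelihood"] * rating["impact"])
# Features with the highest scores get the earliest and most thorough testing.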

Q. What if the project isn't big enough to justify extensive testing?


A: Consider the impact of project errors, not the size of the project. However, if extensive testing
is still not justified, risk analysis is again needed and the considerations listed under "What if there
isn't enough time for thorough testing?" do apply. The test engineer then should do "ad hoc"
testing, or write up a limited test plan based on the risk analysis.

Q. What can be done if requirements are changing continuously?


A: Work with management early on to understand how requirements might change, so that
alternate test plans and strategies can be worked out in advance. It is helpful if the application's
initial design allows for some adaptability, so that later changes do not require redoing the
application from scratch. Additionally, try to...

• Ensure the code is well commented and well documented; this makes changes easier for
the developers.
• Use rapid prototyping whenever possible; this will help customers feel sure of their
requirements and minimize changes.

• In the project's initial schedule, allow some extra time commensurate with probable
changes.
• Move new requirements to a 'Phase 2' version of an application and use the original
requirements for the 'Phase 1' version.
• Negotiate to allow only easily implemented new requirements into the project; move more
difficult, new requirements into future versions of the application.
• Ensure customers and management understand scheduling impacts, inherent risks and
costs of significant requirements changes. Then let management or the customers decide
if the changes are warranted; after all, that's their job.
• Balance the effort put into setting up automated testing with the expected effort required
to redo them to deal with changes.
• Design some flexibility into automated test scripts;
• Focus initial automated testing on application aspects that are most likely to remain
unchanged;
• Devote appropriate effort to risk analysis of changes, in order to minimize regression-
testing needs;
• Design some flexibility into test cases; this is not easily done; the best bet is to minimize
the detail in the test cases, or set up only higher-level generic-type test plans;
• Focus less on detailed test plans and test cases and more on ad-hoc testing with an
understanding of the added risk this entails.

Q. What if the application has functionality that wasn't in the requirements?


A: It may take serious effort to determine if an application has significant unexpected or hidden
functionality, which would indicate deeper problems in the software development process. If the
functionality isn't necessary to the purpose of the application, it should be removed, as it may
have unknown impacts or dependencies that were not taken into account by the designer or the
customer.
If not removed, design information will be needed to determine added testing needs or regression
testing needs. Management should be made aware of any significant added risks as a result of
the unexpected functionality. If the functionality only affects minor areas, such as small
improvements in the user interface, it may not be a significant risk.

Q. How can software QA processes be implemented without stifling productivity?


A: Implement QA processes slowly over time. Use consensus to reach agreement on processes
and adjust and experiment as an organization grows and matures. Productivity will be improved
instead of stifled. Problem prevention will lessen the need for problem detection. Panics and
burnout will decrease and there will be improved focus and less wasted effort. At the same time,
attempts should be made to keep processes simple and efficient, minimize paperwork, promote
computer-based processes and automated tracking and reporting, minimize time required in
meetings and promote training as part of the QA process. However, no one, especially talented
technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical
scenario would be that more days of planning and development will be needed, but less time will
be required for late-night bug fixing and calming of irate customers.

Q. What if an organization is growing so fast that fixed QA processes are impossible?


A: This is a common problem in the software industry, especially in new technology areas. There
is no easy solution in this situation, other than...
 Hire good people (i.e. hire Rob Davis)
 Ruthlessly prioritize quality issues and maintain focus on the customer;
 Everyone in the organization should be clear on what quality means to the customer.

Q. How is testing affected by object-oriented designs?


A: A well-engineered object-oriented design can make it easier to trace from code to internal
design to functional design to requirements. While there will be little effect on black box testing
(where an understanding of the internal design of the application is unnecessary), white-box
testing can be oriented to the application's objects. If the application was well designed this can
simplify test design.

Q. Why do you recommend that we test during the design phase?


A: Because testing during the design phase can prevent defects later on. We recommend
verifying three things...

1. Verify the design is good, efficient, compact, testable and maintainable.


2. Verify the design meets the requirements and is complete (specifies all relationships
between modules, how to pass data, what happens in exceptional circumstances,
starting state of each module and how to guarantee the state of each module).
3. Verify the design provides enough memory, sufficient I/O devices and a quick enough runtime
for the final product.

Q. What is software quality assurance?


A: Software Quality Assurance (SWQA), when Rob Davis does it, is oriented to *prevention*. It
involves the entire software development process. Prevention is monitoring and improving the
process, making sure any agreed-upon standards and procedures are followed and ensuring
problems are found and dealt with. Software testing, when performed by Rob Davis, is
oriented to *detection*. Testing involves the operation of a system or application under controlled
conditions and evaluating the results. Organizations vary considerably in how they assign
responsibility for QA and testing. Sometimes they are the combined responsibility of one group or
individual. Also common are project teams, which include a mix of test engineers, testers and
developers who work closely together, with overall QA processes monitored by project managers.
It depends on what best fits your organization's size and business structure. Rob Davis can
provide QA and/or SWQA. This document details some aspects of how he can provide software
testing/QA service.

Q. What is quality assurance?


A: Quality Assurance ensures all parties concerned with the project adhere to the process and
procedures, standards and templates and test readiness reviews.
Rob Davis' QA service depends on the customers and projects. A lot will depend on team leads
or managers, feedback to developers and communications among customers, managers,
developers, test engineers and testers.

Q. Process and procedures - why follow them?


A: Detailed and well-written processes and procedures ensure the correct steps are being
executed to facilitate a successful completion of a task. They also ensure a process is repeatable.
Once Rob Davis has learned and reviewed customer's business processes and procedures, he
will follow them. He will also recommend improvements and/or additions.

Q. Standards and templates - what is supposed to be in a document?


A: All documents should be written to a certain standard and template. Standards and templates
maintain document uniformity. They also help readers learn where information is located, making it
easier for users to find what they want. Lastly, with standards and templates, information will not
be accidentally omitted from a document. Once Rob Davis has learned and reviewed your
standards and templates, he will use them. He will also recommend improvements and/or
additions.

Q. What are the different levels of testing?


A: Rob Davis has expertise in testing at all testing levels listed below. At each test level, he
documents the results. Each level of testing is either considered black or white box testing.

Q. What is parallel/audit testing?
A: Parallel/audit testing is testing where the user reconciles the output of the new system to the
output of the current system to verify the new system performs the operations correctly.
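As an illustrative sketch only (it assumes both systems can export comparable results to plain text files with hypothetical names), part of the reconciliation can be automated along these lines in Python:

import difflib

# Compare the current (old) system's output with the new system's output line by line.
with open("old_system_output.txt") as old_file, open("new_system_output.txt") as new_file:
    old_lines = old_file.readlines()
    new_lines = new_file.readlines()

differences = list(difflib.unified_diff(old_lines, new_lines,
                                        fromfile="current system",
                                        tofile="new system"))
if differences:
    print("Outputs do not reconcile; review the differences:")
    print("".join(differences))
else:
    print("Outputs reconcile; the new system matches the current system.")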

Q. What is integration testing?


A: Upon completion of unit testing, integration testing begins. Integration testing is black box
testing. The purpose of integration testing is to ensure distinct components of the application still
work in accordance with customer requirements. Test cases are developed with the express
purpose of exercising the interfaces between the components. This activity is carried out by the
test team.
Integration testing is considered complete when actual results and expected results are either in
line or differences are explainable/acceptable based on client input.

Q. What is system testing?


A: System testing is black box testing, performed by the Test Team, and at the start of the
system testing the complete system is configured in a controlled environment. The purpose of
system testing is to validate an application's accuracy and completeness in performing the
functions as designed. System testing simulates real-life scenarios in a controlled test
environment and tests all functions of the system that are required in real life. System
testing is deemed complete when actual results and expected results are either in line or
differences are explainable or acceptable, based on client input.
Upon completion of integration testing, system testing is started. Before system testing, all unit
and integration test results are reviewed by SWQA to ensure all problems have been resolved.
For a higher level of testing it is important to understand unresolved problems that originate at
unit and integration test levels.

Q. What is security/penetration testing?


A: Security/penetration testing is testing how well the system is protected against unauthorized
internal or external access, or willful damage. This type of testing usually requires sophisticated
testing techniques.

Q. What testing roles are standard on most testing projects?


A: Depending on the organization, the following roles are more or less standard on most testing
projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator,
Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

Q. What is a Test/QA Team Lead?


A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to
management and manages the test team.

Q. What is a Test Engineer?


A: Test Engineers are engineers who specialize in testing. They create test cases, procedures,
scripts and generate data. They execute test procedures and scripts, analyze standards of
measurements, evaluate results of system/integration/regression testing. They also...

• Speed up the work of your development staff;


• Reduce your risk of legal liability;
• Give you the evidence that your software is correct and operates properly;
• Improve problem tracking and reporting;
• Maximize the value of your software;
• Maximize the value of the devices that use it;

• Assure the successful launch of your product by discovering bugs and design flaws,
before users get discouraged, before shareholders lose their cool and before employees
get bogged down;
• Help the work of your development staff, so the development team can devote its time to
building up your product;
• Promote continual improvement;
• Provide documentation required by FDA, FAA, other regulatory agencies and your
customers;
• Save money by discovering defects 'early' in the design process, before failures occur in
production, or in the field;
• Save the reputation of your company by discovering bugs and design flaws; before bugs
and design flaws damage the reputation of your company.

Q. What is a Test Build Manager?


A: Test Build Managers deliver current software versions to the test environment, install the
application's software and apply software patches, to both the application and the operating
system, set-up, maintain and back up test environment hardware. Depending on the project, one
person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a
Test Build Manager.

Q. What is a System Administrator?


A: Test Build Managers, System Administrators, Database Administrators deliver current
software versions to the test environment, install the application's software and apply software
patches, to both the application and the operating system, set-up, maintain and back up test
environment hardware. Depending on the project, one person may wear more than one hat. For
instance, a Test Engineer may also wear the hat of a System Administrator.

Q. What is a Database Administrator?


A: Test Build Managers, System Administrators and Database Administrators deliver current
software versions to the test environment, install the application's software and apply software
patches, to both the application and the operating system, set-up, maintain and back up test
environment hardware. Depending on the project, one person may wear more than one hat. For
instance, a Test Engineer may also wear the hat of a Database Administrator.

Q. What is a Technical Analyst?


A: Technical Analysts perform test assessments and validate system/functional test
requirements. Depending on the project, one person may wear more than one hat. For instance,
Test Engineers may also wear the hat of a Technical Analyst.

Q. What is a Test Configuration Manager?


A: Test Configuration Managers maintain test environments, scripts, software and test data.
Depending on the project, one person may wear more than one hat. For instance, Test Engineers
may also wear the hat of a Test Configuration Manager.

Q. What is a test schedule?


A: The test schedule identifies all tasks required for a successful testing effort,
a schedule of all test activities and resource requirements.

Q. What is software testing methodology?


A: One software testing methodology is the use of a three-step process of...

1. Creating a test strategy;


2. Creating a test plan/design; and
3. Executing tests.

This methodology can be used and molded to your organization's needs. Rob Davis believes that
using this methodology is important in the development and in ongoing maintenance of his
customers' applications.

Q. What is the general testing process?


A: The general testing process is the creation of a test strategy (which sometimes includes the
creation of test cases), creation of a test plan/design (which usually includes test cases and test
procedures) and the execution of tests.

Q. How do you create a test strategy?


A: The test strategy is a formal description of how a software product will be tested. A test
strategy is developed for all levels of testing, as required. The test team analyzes the
requirements, writes the test strategy and reviews the plan with the project team. The test plan
may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria
and risk assessment.

Inputs for this process:

• A description of the required hardware and software components, including test tools.
This information comes from the test environment, including test tool data.
• A description of roles and responsibilities of the resources required for the test, and
schedule constraints. This information comes from man-hour estimates and schedules.
• Testing methodology. This is based on known standards.
• Functional and technical requirements of the application. This information comes from
requirements, change request, technical and functional design documents.
• Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:

• An approved and signed off test strategy document, test plan, including test cases.
• Testing issues requiring resolution. Usually this requires additional negotiation at the
project management level.

Q. How do you create a test plan/design?


A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release
and preparing logical groups of functions that can be further broken into test procedures. Test
procedures define test conditions, data to be used for testing and expected results, including
database updates, file outputs, report results. Generally speaking...

• Test cases and scenarios are designed to represent both typical and unusual situations
that may occur in the application.
• Test engineers define unit test requirements and unit test cases. Test engineers also
execute unit test cases.
• It is the test team who, with assistance of developers and clients, develops test cases
and scenarios for integration and system testing.
• Test scenarios are executed through the use of test procedures or scripts.
• Test procedures or scripts define a series of steps necessary to perform one or more test
scenarios.
• Test procedures or scripts include the specific data that will be used for testing the
process or transaction.
• Test procedures or scripts may cover multiple test scenarios.
• Test scripts are mapped back to the requirements and traceability matrices are used to
ensure each test is within scope.
• Test data is captured and baselined prior to testing. This data serves as the foundation
for unit and system testing and is used to exercise system functionality in a controlled
environment.

• Some output data is also baselined for future comparison. Baselined data is used to
support future application maintenance via regression testing.
• A pre-test meeting is held to assess the readiness of the application and the environment
and data to be tested. A test readiness document is created to indicate the status of the
entrance criteria of the release.

Inputs for this process:

• Approved Test Strategy Document.


• Test tools, or automated test tools, if applicable.
• Previously developed scripts, if applicable.
• Test documentation problems uncovered as a result of testing.
• A good understanding of software complexity and module path coverage, derived from
general and detailed design documents, e.g. software design document, source code and
software complexity data.

Outputs for this process:

• Approved documents of test scenarios, test cases, test conditions and test data.
• Reports of software design issues, given to software developers for correction.

Q. How do you execute tests?


A: Execution of tests is completed by following the test documents in a methodical manner. As
each test procedure is performed, an entry is recorded in a test execution log to note the
execution of the procedure and whether or not the test procedure uncovered any defects.
Checkpoint meetings are held throughout the execution phase. Checkpoint meetings are held
daily, if required, to address and discuss testing issues, status and activities.
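For illustration, a minimal sketch in Python of the kind of entry a test execution log might record per procedure; the fields and file name are hypothetical:

import csv
from datetime import date

# One row per executed test procedure: id, date, tester, result, defect reference.
log_entries = [
    {"procedure_id": "TP-017", "run_date": date.today().isoformat(),
     "tester": "J. Smith", "result": "pass", "defect_id": ""},
    {"procedure_id": "TP-018", "run_date": date.today().isoformat(),
     "tester": "J. Smith", "result": "fail", "defect_id": "BUG-1024"},
]

with open("test_execution_log.csv", "w", newline="") as log_file:
    writer = csv.DictWriter(log_file, fieldnames=log_entries[0].keys())
    writer.writeheader()
    writer.writerows(log_entries)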

• The output from the execution of test procedures is known as test results. Test results
are evaluated by test engineers to determine whether the expected results have been
obtained. All discrepancies/anomalies are logged and discussed with the software team
lead, hardware test lead, programmers, software engineers and documented for further
investigation and resolution. Every company has a different process for logging and
reporting bugs/defects uncovered during testing.
• Pass/fail criteria are used to determine the severity of a problem, and results are
recorded in a test summary report. The severity of a problem found during system
testing is defined in accordance with the customer's risk assessment and recorded in their
selected tracking tool.
• Proposed fixes are delivered to the testing environment, based on the severity of the
problem. Fixes are regression tested and verified fixes are migrated to a new baseline.
Following completion of the test, members of the test team prepare a summary report.
The summary report is reviewed by the Project Manager, Software QA (SWQA) Manager
and/or Test Team Lead.
• After a particular level of testing has been certified, it is the responsibility of the
Configuration Manager to coordinate the migration of the release software components to
the next test level, as documented in the Configuration Management Plan. The software
is only migrated to the production environment after the Project Manager's formal
acceptance.
• The test team reviews test document problems identified during testing, and updates
documents where appropriate.

Inputs for this process:

• Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
• Test tools, including automated test tools, if applicable.
• Developed scripts.
• Changes to the design, i.e. Change Request Documents.
• Test data.
• Availability of the test team and project team.
• General and Detailed Design Documents, i.e. Requirements Document, Software Design
Document.
• Software that has been migrated to the test environment, i.e. unit tested code, via the
Configuration/Build Manager.
• Test Readiness Document.
• Document Updates.

Outputs for this process:

• Log and summary of the test results. Usually this is part of the Test Report. This needs to
be approved and signed-off with revised testing deliverables.
• Changes to the code, also known as test fixes.
• Test document problems uncovered as a result of testing. Examples are Requirements
document and Design Document problems.
• Reports on software design issues, given to software developers for correction.
Examples are bug reports on code issues.
• Formal record of test incidents, usually part of problem tracking.

• Baselined package, also known as tested source and object code, ready for migration to the next
level.

25). What’s a 'test plan'?


A software project test plan is a document that describes the objectives, scope, approach, and focus of a
software testing effort. The process of preparing a test plan is a useful way to think through the efforts
needed to validate the acceptability of a software product. The completed document will help people
outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to
be useful but not so thorough that no one outside the test group will read it. The following are some of the
items that might be included in a test plan, depending on the particular project:

• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality, process,
system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data configurations,
interfaces to other systems
• Test environment validity analysis - differences between the test and production systems and their
impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture
software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to help track
the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact
persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix - glossary, acronyms, etc.

(See the Bookstore section's 'Software Testing' and 'Software QA' categories for useful books with more
information.)

26). What’s a 'test case'?

• A test case is a document that describes an input, action, or event and an expected response, to
determine if a feature of an application is working correctly. A test case should contain particulars
such as test case identifier, test case name, objective, test conditions/setup, input data
requirements, steps, and expected results.

Note that the process of developing test cases can help find problems in the requirements or design of an
application, since it requires completely thinking through the operation of the application. For this reason,
it's useful to prepare test cases early in the development cycle if possible.
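A small, hypothetical example of such a test case, written out as a Python dictionary purely so the particulars can be seen in one place:

login_test_case = {
    "identifier": "TC-LOGIN-003",            # hypothetical naming convention
    "name": "Reject login with an invalid password",
    "objective": "Verify that authentication fails safely for a wrong password",
    "conditions_setup": "User account 'demo' exists; application is at the login screen",
    "input_data": {"username": "demo", "password": "wrong-password"},
    "steps": [
        "Enter the username and password",
        "Click the Login button",
    ],
    "expected_results": [
        "An 'invalid username or password' message is displayed",
        "The user is not logged in and no session is created",
    ],
}
print(login_test_case["identifier"], "-", login_test_case["name"])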

36). What if an organization is growing so fast that fixed QA processes are impossible?
This is a common problem in the software industry, especially in new technology areas. There is no easy
solution in this situation, other than:

• Hire good people


• Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer
• Everyone in the organization should be clear on what 'quality' means to the customer

37). How does a client/server environment affect testing?


Client/server applications can be quite complex due to the multiple dependencies among clients, data
communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited
(as it usually is) the focus should be on integration and system testing. Additionally,
load/stress/performance testing may be useful in determining client/server application limitations and
capabilities. There are commercial tools to assist with such testing. (See the 'Tools' section for web
resources with listings that include these kinds of test tools.)

38). How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration
should be given to the interactions between html pages, TCP/IP communications, Internet connections,
firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and
applications that run on the server side (such as cgi scripts, database interfaces, logging applications,
dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various
versions of each, small but sometimes significant differences between them, variations in connection
speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing
for web sites can become a major ongoing effort. Other considerations might include:

• What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of
performance is required under such loads (such as web server response time, database query
response times). What kinds of tools will be needed for performance testing (such as web load
testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
• Who is the target audience? What kind of browsers will they be using? What kind of connection
speeds will they be using? Are they intra-organization (thus with likely high connection speeds
and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser
types)?
• What kind of performance is expected on the client side (e.g., how fast should pages appear, how
fast should animations, applets, etc. load and run)?
• Will down time for server and content maintenance/upgrades be allowed? how much?
• What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it
expected to do? How can it be tested?
• How reliable are the site's Internet connections required to be? And how does that affect backup
system or redundant connection requirements and testing?
• What processes will be required to manage updates to the web site's content, and what are the
requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
• Which HTML specification will be adhered to? How strictly? What variations will be allowed for
targeted browsers?
• Will there be any standards or requirements for page appearance and/or graphics throughout a site
or parts of a site?
• How will internal and external links be validated and updated? how often?
• Can testing be done on the production system, or will a separate test system be required? How are
browser caching, variations in browser option settings, dial-up connection variability, and real-world
internet 'traffic congestion' problems to be accounted for in testing?
• How extensive or customized are the server logging and reporting requirements; are they
considered an integral part of the system and do they require testing?
• How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked,
controlled, and tested?

Some sources of site security information include the Usenet newsgroup 'comp.security.announce' and links
concerning web site security in the 'Other Resources' section.

Some usability guidelines to consider - these are subjective and may or may not apply to a given situation
(Note: more information on usability testing issues can be found in articles about web site usability in
the 'Other Resources' section):

• Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger,
provide internal links within the page.
• The page layouts and design elements should be consistent throughout a site, so that it's clear to
the user that they're still within a site.
• Pages should be as browser-independent as possible, or pages should be provided or generated
based on the browser-type.
• All pages should have links external to the page; there should be no dead-end pages.
• The page owner, revision date, and a link to a contact person or organization should be included
on each page.

Many new web site test tools are appearing and more than 230 of them are listed in the 'Web Test Tools'
section.

What is Extreme Programming and what's it got to do with testing?

Extreme Programming (XP) is a software development approach for small teams on risk-
prone projects with unstable requirements. It was created by Kent Beck who described
the approach in his book 'Extreme Programming Explained' (See the Softwareqatest.com
Books page.). Testing ('extreme testing') is a core aspect of Extreme Programming.
Programmers are expected to write unit and functional test code first - before the
application is developed. Test code is under source control along with the rest of the
code. Customers are expected to be an integral part of the project team and to help
develop scenarios for acceptance/black box testing. Acceptance tests are preferably
automated, and are modified and rerun for each of the frequent development iterations.
QA and test personnel are also required to be an integral part of the project team.
Detailed requirements documentation is not used, and frequent re-scheduling, re-
estimating, and re-prioritizing is expected. For more info see the XP-related listings in the
Softwareqatest.com 'Other Resources' section.

What's the difference between load and stress testing?

One of the most common, but unfortunate, misuses of terminology is treating "load
testing" and "stress testing" as synonymous. The consequence of this confusion is
usually that the system is neither properly "load tested" nor subjected to a
meaningful stress test.

1. Stress testing is subjecting a system to an unreasonable load while denying it the


resources (e.g., RAM, disc, mips, interrupts, etc.) needed to process that load. The idea is
to stress a system to the breaking point in order to find bugs that will make that break
potentially harmful. The system is not expected to process the overload without adequate
resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing
data). Bugs and failure modes discovered under stress testing may or may not be repaired
depending on the application, the failure mode, consequences, etc.
The load (incoming transaction stream) in stress testing is often deliberately distorted so
as to force the system into resource depletion.

2. Load testing is subjecting a system to a statistically representative (usually) load.


The two main reasons for using such loads are in support of software reliability testing and
in performance testing. The term "load testing" by itself is too vague and imprecise to
warrant use. For example, do you mean "representative load," "overload," "high load,"
etc.? In performance testing, load is varied from a minimum (zero) to the maximum level
the system can sustain without running out of resources or having transactions suffer
(application-specific) excessive delay.

3. A third use of the term is as a test whose objective is to determine the maximum
sustainable load the system can handle.
In this usage, "load testing" is merely testing at the highest transaction arrival rate in
performance testing.
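A very small sketch of a load measurement in Python, purely illustrative: the URL and request counts are hypothetical, and a real load or performance test would normally use purpose-built tooling:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"        # hypothetical system under test
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 10

def one_user(_):
    # Each simulated user issues a series of requests and records response times.
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        with urllib.request.urlopen(URL, timeout=30) as response:
            response.read()
        timings.append(time.time() - start)
    return timings

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    all_timings = [t for user in pool.map(one_user, range(CONCURRENT_USERS)) for t in user]

print("requests:", len(all_timings))
print("average response time: %.3f s" % (sum(all_timings) / len(all_timings)))
print("worst response time:   %.3f s" % max(all_timings))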

- WEB TESTING SPECIFICS
- Internet Software - Quality Characteristics
1. Functionality - Verified content
2. Reliability - Security and availability
3. Efficiency - Response Times
4. Usability - High user satisfaction
5. Portability - Platform Independence

- WWW Project Peculiarities


 Software consists of a large number of components
 The user interface is more complex than many GUI-based client-server applications
 Users may be unknown (no training or user manuals)
 Security threats can come from anywhere
 User load is unpredictable

- Basic HTML Testing


 Check for illegal elements
 Check for illegal attributes
 Check that all tags are closed
 Check for the tags <!DOCTYPE>, <HEAD>, <TITLE>, <BODY>
 Check that all IMG tags have an ALT attribute [ALT text must be descriptive]
 Check for consistency of fonts - colors and font size.
 Check for spelling errors in text and images.
 Check for "nonsense" markup

Example of nonsense markup


<B>Hello</B> may be written as <B>H</B><B>ell</B><B>o</B>
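Several of these checks can be automated. A minimal, illustrative sketch in Python using the standard library's html.parser to flag IMG tags that are missing ALT, WIDTH or HEIGHT attributes (the sample markup is hypothetical):

from html.parser import HTMLParser

class ImgAttributeChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        # Record any IMG tag that lacks ALT, WIDTH or HEIGHT.
        if tag == "img":
            present = {name.lower() for name, _ in attrs}
            for required in ("alt", "width", "height"):
                if required not in present:
                    self.problems.append("IMG missing " + required.upper()
                                         + " at line %d" % self.getpos()[0])

checker = ImgAttributeChecker()
checker.feed('<html><body><img src="fat.gif">'
             '<img src="ok.gif" alt="logo" width="120" height="150"></body></html>')
print(checker.problems)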
- Suggestions for fast loading
 Keep the overall weight (size) of each web page as small as possible
 Don't hit the database on every request; use an alternative where possible.
 Example: If your web application has reports, generate the report content as a static
HTML file at periodic intervals. When the user views the report, show the static
HTML content; there is no need to go to the database and retrieve the data every time
the user hits the report link.
 Cached query - if the data fetched by a query only changes periodically, the query
results can be cached for that period. This avoids unnecessary database
access.
 Every IMG tag must have WIDTH and HEIGHT attributes.
 IMG - Bad Example
<B> Hello </B>
<IMG SRC = "FAT.GIF">
<B> World </B>
 IMG - Good Example
<B> Hello </B>
<IMG SRC = "FAT.GIF" WIDTH ="120" HEIGHT="150" >
<B> World </B>
 All the photographic images must be in "jpg" format
 Computer created images must be in "gif" format
 Background image should be less than 3.5k [the background image should be the same for
all the pages (except for functional reasons)]
 Avoid nested tables.
 Keep table text size to a minimum (e.g. less than 5000 characters)

- Link Testing
 You must ensure that all the hyperlinks are valid
 This applies to both internal and external links
 Internal links shall be relative, to minimize the overhead and faults when the web site is
moved to the production environment
 External links shall be referenced to absolute URLs
 External links can change without control - So, automate regression testing
 Remember that external non-home-page links are more likely to break
 Be careful at links in "What's New" sections. They are likely to become obsolete
 Check that content can be accessed by means of: Search engine, Site Map
 Check the accuracy of Search Engine results
 Check that the web site's 404 error ("Not Found") is handled by means of a user-friendly
page
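A bare-bones illustration of automated link checking in Python; the start URL is hypothetical, and a real link checker would also handle relative URLs, redirects and crawl depth:

import urllib.error
import urllib.request
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect absolute hyperlinks from anchor tags.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.links.append(value)

page_url = "http://localhost:8080/index.html"    # hypothetical page under test
with urllib.request.urlopen(page_url, timeout=30) as response:
    collector = LinkCollector()
    collector.feed(response.read().decode("utf-8", errors="replace"))

for link in collector.links:
    try:
        with urllib.request.urlopen(link, timeout=30) as target:
            print("OK  ", target.status, link)
    except (urllib.error.HTTPError, urllib.error.URLError) as exc:
        print("FAIL", exc, link)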

- Compatibility Testing

 Check the site behavior across the industry-standard browsers. The main issues
involve how differently the browsers handle tables, images, caching and scripting
languages
 In cross-browser testing, check for:
 Behavior of buttons
 Support of Java scripts
 Support of tables
 Acrobat, Real, Flash behavior
 ActiveX control support
 Java compatibility
 Text size

Browser (version)                                     | ActiveX controls | VB Script | JavaScript | Java applets | Dynamic HTML | Frames   | CSS 1.0  | CSS 2.0
Internet Explorer (4.0 and later)                     | Enabled          | Enabled   | Enabled    | Enabled      | Enabled      | Enabled  | Enabled  | Enabled
Internet Explorer (3.0 and later)                     | Enabled          | Enabled   | Enabled    | Enabled      | Disabled     | Enabled  | Enabled  | Disabled
Netscape Navigator (4.0 and later)                    | Disabled         | Disabled  | Enabled    | Enabled      | Enabled      | Enabled  | Enabled  | Enabled
Netscape Navigator (3.0 and later)                    | Disabled         | Disabled  | Enabled    | Enabled      | Disabled     | Enabled  | Disabled | Disabled
Both Internet Explorer and Navigator (4.0 and later)  | Disabled         | Disabled  | Enabled    | Enabled      | Enabled      | Enabled  | Enabled  | Enabled
Both Internet Explorer and Navigator (3.0 and later)  | Disabled         | Disabled  | Enabled    | Enabled      | Disabled     | Enabled  | Disabled | Disabled
Microsoft Web TV (version unavailable)                | Disabled         | Disabled  | Disabled   | Disabled     | Disabled     | Disabled | Disabled | Disabled

- Usability Testing
Aspects to be tested with care:
 Coherence of look and feel
 Navigational aids
 User Interactions
 Printing
With respect to
 Normal behavior
 Destructive behavior
 Inexperienced users

 Usability Tips

• Define categories in terms of user goals


• Name sections carefully
• Think internationally
• Identify the homepage link on every page
• Make sure search is always available
• Test all the browsers your audience will use

• Differentiate visited links from unvisited links
• Never use graphics where HTML text will do
• Make GUI design predictable and consistent
• Check that printed pages fit appropriately on paper pages [consider that many people just
surf and print; check especially the pages for which format is important, e.g. an
application form that can either be filled in on-line or printed/filled/faxed]

- Portability Testing
• Check that links to URLs outside the web site are in canonical (absolute) form (Ex:
http://www.arkinsys.com/)
• Check that links to URLs within the web site are in relative form (e.g.
…./arkinsys/images/images.gif)

- Cookies Testing

What are cookies?

A "Cookie" is a small piece of information sent by the web server to store on a web browser. So it can later
be read back from the browser. This is useful for having the browser remember some specific information.

Why must you test cookies?

• Cookies can expire


• Users can disable them in the browser

How to perform Cookies testing?

• Check the behavior after cookies expiration


• Work with cookies disabled
• Disable cookies mid-way
• Delete web cookies mid-way
• Clear memory and disk cache mid-way
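As a small illustration of the "cookies disabled" case (the URL below is hypothetical, and a real test would assert on application-specific behaviour), requests can be made through a client that rejects every cookie and the results compared with a normal session:

import http.cookiejar
import urllib.request

URL = "http://localhost:8080/account"   # hypothetical page that relies on a session cookie

# Client that accepts cookies (normal session).
jar = http.cookiejar.CookieJar()
with_cookies = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# Client that rejects every cookie (simulates a user with cookies disabled).
strict_jar = http.cookiejar.CookieJar(
    policy=http.cookiejar.DefaultCookiePolicy(allowed_domains=[]))
without_cookies = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(strict_jar))

normal_page = with_cookies.open(URL, timeout=30).read()
no_cookie_page = without_cookies.open(URL, timeout=30).read()

print("cookies stored in normal session:", len(jar))
print("behaviour differs without cookies:", normal_page != no_cookie_page)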

- TESTING - WHEN IS A PROGRAM CORRECT?


There are levels of correctness. We must determine the appropriate level of correctness for each system
because it costs more and more to reach higher levels.
• No syntactic errors
• Compiles with no error messages
• Runs with no error messages
• There exists data which gives correct output
• Gives correct output for required input
• Correct for typical test data

• Correct for difficult test data
• Proven correct using mathematical logic
• Obeys specifications for all valid data
• Obeys specifications for likely erroneous input
• Obeys specifications for all possible input

- TEST PLAN
A software project test plan is a document that describes the objectives, scope, approach, and focus of a
software testing effort. The process of preparing a test plan is a useful way to think through the efforts
needed to validate the acceptability of a software product. The completed document will help people
outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to
be useful but not so thorough that no one outside the test group will read it. The following are some of the
items that might be included in a test plan, depending on the particular project:
• Title of the Project
• Identification of document including version numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans,
etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline - a decomposition of the test approach by test type, feature, functionality,
process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment - hardware, operating systems, other required software, data
configurations, interfaces to other systems
• Test environment setup and configuration issues
• Test data setup requirements
• Database setup requirements

• Outline of system-logging/error-logging/other capabilities, and tools such as screen
capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to
help track the cause or source of bugs
• Test automation - justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution - tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel pre-training needs
• Test site/location
• Relevant proprietary, classified, security, and licensing issues.
• Appendix - glossary, acronyms, etc.

- TEST CASES
- What's a 'test case'?
 A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly. A test case
should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.
 Note that the process of developing test cases can help find problems in the requirements
or design of an application, since it requires completely thinking through the operation
of the application. For this reason, it's useful to prepare test cases early in the
development cycle if possible.

- TESTING COVERAGE
• Line coverage. Test every line of code (Or Statement coverage: test every statement).
• Branch coverage. Test every line, and every branch on multi-branch lines.
• N-length sub-path coverage. Test every sub-path through the program of length N. For
example, in a 10,000-line program, test every possible 10-line sequence of execution.
• Path coverage. Test every path through the program, from entry to exit. The number of
paths is impossibly large to test.
• Multicondition or predicate coverage. Force every logical operand to take every
possible value. Two different conditions within the same test may result in the same
branch, and so branch coverage would only require the testing of one of them.

• Trigger every assertion check in the program. Use impossible data if necessary.
• Loop coverage. "Detect bugs that exhibit themselves only when a loop is executed more
than once."
• Low memory conditions - In the event the application fails due to low memory
conditions, it should fail in a graceful manner. This means error logs should be
written detailing the conditions that caused the failure, any orders the user
may have open should be closed, an error message should be given informing the user of the condition
and the actions that will be taken, etc.
• Every module, object, component, tool, subsystem, etc. This seems obvious until you
realize that many programs rely on off-the-shelf components. The programming staff
doesn't have the source code to these components, so measuring line coverage is
impossible. At a minimum (which is what is measured here), you need a list of all these
components and test cases that exercise each one at least once.
• Fuzzy decision coverage. If the program makes heuristically-based or similarity-based
decisions, and uses comparison rules or data sets that evolve over time, check every rule
several times over the course of training.
• Relational coverage. "Checks whether the subsystem has been exercised in a way that
tends to detect off-by-one errors" such as errors caused by using < instead of <=. This
coverage includes:
• Every boundary on every input variable.
• Every boundary on every output variable.
• Every boundary on every variable used in intermediate calculations.
• Data coverage. At least one test case for each data item / variable / field in the program.
• Constraints among variables: Let X and Y be two variables in the program. X and Y

constrain each other if the value of one restricts the values the other can take. For
example, if X is a transaction date and Y is the transaction's confirmation date, Y can't
occur before X.
• Each appearance of a variable. Suppose that you can enter a value for X on three
different data entry screens, the value of X is displayed on another two screens, and it is
printed in five reports. Change X at each data entry screen and check the effect
everywhere else X appears.
• Every type of data sent to every object. A key characteristic of object-oriented
programming is that each object can handle any type of data (integer, real, string, etc.)
that you pass to it. So, pass every conceivable type of data to every object.
• Handling of every potential data conflict. For example, in an appointment-calendaring
program, what happens if the user tries to schedule two appointments at the same date
and time?
• Handling of every error state. Put the program into the error state, check for effects on
the stack, available memory, handling of keyboard input. Failure to handle user errors
well is an important problem, partially because about 90% of industrial accidents are
blamed on human error or risk-taking. Under the legal doctrine of foreseeable misuse,
the manufacturer is liable in negligence if it fails to protect the customer from the
consequences of a reasonably foreseeable misuse of the product.
• Every complexity / maintainability metric against every module, object, subsystem, etc.
There are many such measures. Jones lists 20 of them. People sometimes ask whether
any of these statistics are grounded in a theory of measurement or have practical value.
But it is clear that, in practice, some organizations find them an effective tool for
highlighting code that needs further investigation and might need redesign.
• Conformity of every module, subsystem, etc. against every corporate coding standard.
Several companies believe that it is useful to measure characteristics of the code, such as
total lines per module, ratio of lines of comments to lines of code, frequency of
occurrence of certain types of statements, etc. A module that doesn't fall within the
"normal" range might be summarily rejected (bad idea) or re-examined to see if there's a
better way to design this part of the program.
• Table-driven code. The table is a list of addresses or pointers or names of modules. In a
traditional CASE statement, the program branches to one of several places depending on
the value of an expression. In the table-driven equivalent, the program would branch to
the place specified in, say, location 23 of the table. The table is probably in a separate
data file that can vary from day to day or from installation to installation. By modifying
the table, you can radically change the control flow of the program without recompiling
or relinking the code. Some programs drive a great deal of their control flow this way,
using several tables. Coverage measures? Some examples:
• check that every expression selects the correct table element
• check that the program correctly jumps or calls through every table element
• check that every address or pointer that is available to be loaded into these tables is valid
(no jumps to impossible places in memory, or to a routine whose starting address has
changed)
• check the validity of every table that is loaded at any customer site.
• Every interrupt. An interrupt is a special signal that causes the computer to stop the
program in progress and branch to an interrupt handling routine. Later, the program
restarts from where it was interrupted. Interrupts might be triggered by hardware events
(I/O or signals from the clock that a specified interval has elapsed) or software (such as
error traps). Generate every type of interrupt in every way possible to trigger that
interrupt.
• Every interrupt at every task, module, object, or even every line. The interrupt handling
routine might change state variables, load data, use or shut down a peripheral device, or
affect memory in ways that could be visible to the rest of the program. The interrupt can
happen at any time-between any two lines, or when any module is being executed. The
program may fail if the interrupt is handled at a specific time. (Example: what if the

program branches to handle an interrupt while it's in the middle of writing to the disk
drive?)
• The number of test cases here is huge, but that doesn't mean you don't have to think
about this type of testing. This is path testing through the eyes of the processor (which
asks, "What instruction do I execute next?" and doesn't care whether the instruction
comes from the mainline code or from an interrupt handler) rather than path testing
through the eyes of the reader of the mainline code. Especially in programs that have
global state variables, interrupts at unexpected times can lead to very odd results.
• Every anticipated or potential race. Imagine two events, A and B. Both will occur, but
the program is designed under the assumption that A will always precede B. This sets up
a race between A and B -if B ever precedes A, the program will probably fail. To achieve
race coverage, you must identify every potential race condition and then find ways,
using random data or systematic test case selection, to attempt to drive B to precede A in
each case.
• Races can be subtle. Suppose that you can enter a value for a data item on two different
data entry screens. User 1 begins to edit a record, through the first screen. In the process,
the program locks the record in Table 1. User 2 opens the second screen, which calls up
a record in a different table, Table 2. The program is written to automatically update the
corresponding record in the Table 1 when User 2 finishes data entry. Now, suppose that
User 2 finishes before User 1. Table 2 has been updated, but the attempt to synchronize
Table 1 and Table 2 fails. What happens at the time of failure, or later if the
corresponding records in Table 1 and 2 stay out of synch?
• Every time-slice setting. In some systems, you can control the grain of switching
between tasks or processes. The size of the time quantum that you choose can make race
bugs, time-outs, interrupt-related problems, and other time-related problems more or less
likely. Of course, coverage is a difficult problem here because you aren't just varying
time-slice settings through every possible value. You also have to decide which tests to
run under each setting. Given a planned set of test cases per setting, the coverage
measure looks at the number of settings you've covered.
• Varied levels of background activity. In a multiprocessing system, tie up the processor
with competing, irrelevant background tasks. Look for effects on races and interrupt
handling. Similar to time-slices, your coverage analysis must specify
• categories of levels of background activity (figure out something that makes sense) and
• all timing-sensitive testing opportunities (races, interrupts, etc.).
• Each processor type and speed. Which processor chips do you test under? What tests do
you run under each processor? You are looking for:
• speed effects, like the ones you look for with background activity testing, and
• consequences of processors' different memory management rules, and
• floating point operations, and
• any processor-version-dependent problems that you can learn about.
• Every opportunity for file / record / field locking.
• Every dependency on the locked (or unlocked) state of a file, record or field.
• Every opportunity for contention for devices or resources.
• Performance of every module / task / object. Test the performance of a module then
retest it during the next cycle of testing. If the performance has changed significantly,
you are either looking at the effect of a performance-significant redesign or at a
symptom of a new bug.
• Free memory / available resources / available stack space at every line or on entry into
and exit out of every module or object.
• Execute every line (branch, etc.) under the debug version of the operating system. This
shows illegal or problematic calls to the operating system.
• Vary the location of every file. What happens if you install or move one of the
program's component, control, initialization or data files to a different directory or drive
or to another computer on the network?
• Check the release disks for the presence of every file. It's amazing how often a file
vanishes. If you ship the product on different media, check for all files on all media.
• Every embedded string in the program. Use a utility to locate embedded strings. Then
find a way to make the program display each string.
• Operation of every function / feature / data handling operation under:
• Every program preference setting.
• Every character set, code page setting, or country code setting.
• The presence of every memory resident utility (inits, TSRs).
• Each operating system version.
• Each distinct level of multi-user operation.
• Each network type and version.
• Each level of available RAM.
• Each type / setting of virtual memory management.
• Compatibility with every previous version of the program.
• Ability to read every type of data available in every readable input file format. If a file
format is subject to subtle variations (e.g. CGM) or has several sub-types (e.g. TIFF) or
versions (e.g. dBASE), test each one.
• Write every type of data to every available output file format. Again, beware of subtle
variations in file formats: if you're writing a CGM file, full coverage would require you
to test whether your program's output can be read by every one of the main programs that read
CGM files.
• Every typeface supplied with the product. Check all characters in all sizes and styles. If
your program adds typefaces to a collection of fonts that are available to several other
programs, check compatibility with the other programs (nonstandard typefaces will
crash some programs).
• Every type of typeface compatible with the program. For example, you might test the
program with (many different) TrueType and PostScript typefaces, and fixed-size
bitmap fonts.
• Every piece of clip art in the product. Test each with this program. Test each with other
programs that should be able to read this type of art.
• Every sound / animation provided with the product. Play them all under different
device (e.g. sound) drivers / devices. Check compatibility with other programs that
should be able to play this clip-content.
• Every supplied (or constructible) script to drive other machines / software (e.g.
macros) / BBS's and information services (communications scripts).
• All commands available in a supplied communications protocol.
• Recognized characteristics. For example, every speaker's voice characteristics (for
voice recognition software) or writer's handwriting characteristics (handwriting
recognition software) or every typeface (OCR software).
• Every type of keyboard and keyboard driver.
• Every type of pointing device and driver at every resolution level and ballistic setting.
• Every output feature with every sound card and associated drivers.
• Every output feature with every type of printer and associated drivers at every resolution
level.
• Every output feature with every type of video card and associated drivers at every
resolution level.
• Every output feature with every type of terminal and associated protocols.
• Every output feature with every type of video monitor and monitor-specific drivers at
every resolution level.
• Every color shade displayed or printed to every color output device (video card /
monitor / printer / etc.) and associated drivers at every resolution level. And check the
conversion to grey scale or black and white.
• Every color shade readable or scannable from each type of color input device at every
resolution level.
• Every possible feature interaction between video card type and resolution, pointing
device type and resolution, printer type and resolution, and memory level. This may
seem excessively complex, but sometimes crash bugs occur only under the pairing of
specific printer and video drivers at a high resolution setting. Other crashes have required
the pairing of a specific mouse and printer driver, the pairing of a mouse and video driver, or a
combination of mouse driver plus video driver plus ballistic setting.
• Every type of CD-ROM drive, connected to every type of port (serial / parallel / SCSI)
and associated drivers.
• Every type of writable disk drive / port / associated driver. Don't forget the fun you can
have with removable drives or disks.
• Compatibility with every type of disk compression software. Check error handling for
every type of disk error, such as full disk.
• Every voltage level from analog input devices.
• Every voltage level to analog output devices.
• Every type of modem and associated drivers.
• Every FAX command (send and receive operations) for every type of FAX card under
every protocol and driver.
• Every type of connection of the computer to the telephone line (direct, via PBX, etc.;
digital vs. analog connection and signaling); test every phone control command under
every telephone control driver.
• Tolerance of every type of telephone line noise and regional variation (including
variations that are out of spec) in telephone signaling (intensity, frequency, timing, other
characteristics of ring / busy / etc. tones).
• Every variation in telephone dialing plans.
• Every possible keyboard combination. Sometimes you'll find trap doors that the
programmer used as hotkeys to call up debugging tools; these hotkeys may crash a
debuggerless program. Other times, you'll discover an Easter Egg (an undocumented,
probably unauthorized, and possibly embarrassing feature). The broader coverage
measure is every possible keyboard combination at every error message and every
data entry point. You'll often find different bugs when checking different keys in
response to different error messages.
• Recovery from every potential type of equipment failure. Full coverage includes each
type of equipment, each driver, and each error state. For example, test the program's
ability to recover from full disk errors on writable disks. Include floppies, hard drives,
cartridge drives, optical drives, etc. Include the various connections to the drive, such as
IDE, SCSI, MFM, parallel port, and serial connections, because these will probably
involve different drivers.
• Function equivalence. For each mathematical function, check the output against a
known good implementation of the function in a different program. Complete coverage
involves equivalence testing of all testable functions across all possible input values. (A
minimal sketch combining this check with the zero-handling check below appears after this list.)
• Zero handling. For each mathematical function, test when every input value,
intermediate variable, or output variable is zero or near-zero. Look for severe rounding
errors or divide-by-zero errors.
• Accuracy of every graph, across the full range of graphable values. Include values that
force shifts in the scale.
• Accuracy of every report. Look at the correctness of every value, the formatting of
every page, and the correctness of the selection of records used in each report.
• Accuracy of every message.
• Accuracy of every screen.
• Accuracy of every word and illustration in the manual.
• Accuracy of every fact or statement in every data file provided with the product.
• Accuracy of every word and illustration in the on-line help.
• Every jump, search term, or other means of navigation through the on-line help.
• Check for every type of virus / worm that could ship with the program.
• Every possible kind of security violation of the program, or of the system while using
the program.
• Check for copyright permissions for every statement, picture, sound clip, or other
creation provided with the program.
• Verification of the program against every program requirement and published
specification.
• Verification of the program against user scenarios. Use the program to do real tasks
that are challenging and well-specified. For example, create key reports, pictures, page
layouts, or other documents to match ones that have been featured by competitive
programs as interesting output or applications.
• Verification against every regulation (IRS, SEC, FDA, etc.) that applies to the data or
procedures of the program.
• Usability tests of:
• Every feature / function of the program.
• Every part of the manual.
• Every error message.
• Every on-line help topic.
• Every graph or report provided by the program.
• Localizability / localization tests:
• Every string. Check the program's ability to display and use each string if it is modified by
changing the length, using high or low ASCII characters, applying different capitalization rules,
etc.
• Compatibility with text handling algorithms under other languages (sorting, spell
checking, hyphenating, etc.)
• Every date, number and measure in the program.
• Hardware and drivers, operating system versions, and memory-resident programs that
are popular in other countries.
• Every input format, import format, output format, or export format that would be
commonly used in programs that are popular in other countries.
• Cross-cultural appraisal of the meaning and propriety of every string and graphic
shipped with the program.
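A minimal sketch of the function-equivalence and zero-handling checks above, in Python. The fast_sqrt function is a hypothetical stand-in for a routine under test; math.sqrt serves as the known good reference implementation:

import math

def fast_sqrt(x):
    # Hypothetical routine under test; stands in for the program's own
    # implementation of the same mathematical function.
    return x ** 0.5

def check_equivalence(under_test, reference, values, tol=1e-9):
    # Compare each output against the known good implementation.
    failures = []
    for v in values:
        expected = reference(v)
        actual = under_test(v)
        if not math.isclose(actual, expected, rel_tol=tol, abs_tol=tol):
            failures.append((v, expected, actual))
    return failures

# Include zero and near-zero inputs to expose severe rounding or
# divide-by-zero errors, plus a spread of ordinary values.
samples = [0.0, 1e-300, 1e-12, 0.5, 1.0, 2.0, 1e6, 1e12]
for value, expected, actual in check_equivalence(fast_sqrt, math.sqrt, samples):
    print("MISMATCH at", value, "- expected", expected, "got", actual)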
- WHAT IF THERE ISN'T ENOUGH TIME FOR THOROUGH
TESTING?
Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every
possible aspect of an application, every possible combination of events, every dependency, or everything
that could go wrong, risk analysis is appropriate to most software development projects. This requires
judgment skills, common sense, and experience. (If warranted, formal methods are also available.)
Considerations can include the following; a minimal prioritization sketch follows the list:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?
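When time is short, one lightweight way to act on these questions is to score each feature area for likelihood of failure and impact of failure, then test in descending order of risk. The Python sketch below shows the idea; the area names and scores are illustrative assumptions, not data from any real project:

# Risk-based test prioritization sketch.  Scores run 1 (low) to 5 (high)
# and would in practice come from answering the questions listed above.
feature_areas = [
    {"name": "payment processing", "likelihood": 4, "impact": 5},
    {"name": "report generation",  "likelihood": 3, "impact": 3},
    {"name": "user preferences",   "likelihood": 2, "impact": 2},
    {"name": "data import",        "likelihood": 5, "impact": 4},
]

for area in feature_areas:
    # Simple multiplicative risk exposure; other weightings are possible.
    area["risk"] = area["likelihood"] * area["impact"]

# Test the highest-risk areas first.
for area in sorted(feature_areas, key=lambda a: a["risk"], reverse=True):
    print(area["risk"], area["name"])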
- DEFECT REPORTING
The bug needs to be communicated and assigned to developers who can fix it. After the problem is
resolved, the fix should be re-tested, and a determination made as to whether regression testing is needed
to check that the fix didn't create problems elsewhere. The following are items to consider in the tracking
process:
Complete information such that developers can understand the bug, get an idea of its severity,
and reproduce it if necessary.
Bug identifier (number, ID, etc.)
Current bug status (e.g., 'Open', 'Closed', etc.)
The application name and version
The function, module, feature, object, screen, etc. where the bug occurred
Environment specifics, system, platform, relevant hardware specifics
Test case name/number/identifier
File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in
finding the cause of the problem
Severity Level
Tester name
Bug reporting date
Name of developer/group/organization the problem is assigned to
Description of fix
Date of fix
Application version that contains the fix
Verification Details
A reporting or tracking process should enable notification of appropriate personnel at various stages.
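As an illustration of how the items above might be captured, here is a sketch of a simple tracking record in Python. The field names are chosen here for illustration only; any real bug-tracking tool will define its own schema:

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class BugReport:
    # Minimal bug-tracking record covering the items listed above.
    bug_id: str                        # bug identifier (number, ID, etc.)
    status: str                        # e.g. 'Open', 'Closed'
    application: str                   # application name and version
    location: str                      # function, module, feature, screen, etc.
    environment: str                   # system, platform, relevant hardware
    test_case: str                     # test case name/number/identifier
    severity: str                      # severity level
    description: str                   # what happened, expected vs. actual
    attachments: List[str] = field(default_factory=list)  # logs, screen shots
    reported_by: str = ""              # tester name
    reported_on: Optional[date] = None # bug reporting date
    assigned_to: str = ""              # developer/group the problem is assigned to
    fix_description: str = ""          # description of fix
    fixed_on: Optional[date] = None    # date of fix
    fixed_in_version: str = ""         # application version that contains the fix
    verification_notes: str = ""       # verification details after re-test

bug = BugReport(
    bug_id="BUG-1042",
    status="Open",
    application="InvoiceApp 2.3.1",
    location="Reports > Monthly summary",
    environment="Windows 11, 8 GB RAM, SQL Server 2019",
    test_case="RPT-017",
    severity="Major",
    description="Totals column omits the final row when the report spans two pages.",
    reported_by="A. Tester",
    reported_on=date(2024, 3, 5),
)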
- TYPES OF AUTOMATED TOOLS


Code analyzers - monitor code complexity, adherence to standards, etc.
Coverage analyzers - these tools check which parts of the code have been exercised by a test, and
may be oriented to code statement coverage, condition coverage, path coverage, etc.
Memory analyzers - such as bounds-checkers and leak detectors.
Load/performance test tools - for testing client/server and web applications under various load
levels.
Web test tools - to check that links are valid, HTML code usage is correct, client-side and
server-side programs work, and a web site's interactions are secure (a minimal link-checker
sketch follows this list).
Other tools - for test case management, documentation management, bug reporting, and
configuration management.
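As a small illustration of the link-validity check mentioned under web test tools, here is a sketch using only the Python standard library. The starting URL is a placeholder, and real web test tools do much more (HTML validation, security checks, and so on):

from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    # Collect href targets from anchor tags on a page.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    # Fetch a page and report the HTTP status (or error) for each link on it.
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    for link in collector.links:
        target = urljoin(page_url, link)
        try:
            with urlopen(target) as response:
                result = response.status
        except (HTTPError, URLError) as exc:
            result = exc
        print(target, "->", result)

# check_links("http://example.com/")   # placeholder starting URL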
- TOP TIPS FOR TESTERS


Tip 1: Play nicely. Be part of the Web team. This keeps you in the loop, and gives you a
voice in debates of functionality versus design. Remember, a tester is really a user
advocate. Protect that position by making friends.
Tip 2: A good spec is a tester's best friend. Get design and functional specifications --
even if you have to whine or threaten. Specs help you determine what is a "real" bug and
what is "by design."
Tip 3: KISS (Keep It Simple, Stupid). If you have a form field that's supposed to accept only
integers 1 to 10, you can start your tests by entering an empty value, 0, 1, 10, 11, 1000000000, -1,
a letter such as 'a', and a few typical in-range values. You needn't enter every possible
keystroke to find a hole in the code.
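A minimal sketch of that boundary-value idea in Python. The accepts_quantity() validator below is hypothetical, standing in for whatever the form actually calls:

import unittest

def accepts_quantity(raw):
    # Hypothetical validator for a form field that accepts only integers 1 to 10.
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return False
    return 1 <= value <= 10

class QuantityFieldBoundaryTests(unittest.TestCase):
    def test_rejects_out_of_range_and_junk_input(self):
        for raw in [None, "", "0", "11", "1000000000", "-1", "a"]:
            self.assertFalse(accepts_quantity(raw), raw)

    def test_accepts_boundaries_and_a_typical_value(self):
        for raw in ["1", "10", "5"]:
            self.assertTrue(accepts_quantity(raw), raw)

if __name__ == "__main__":
    unittest.main()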
Tip 4: Expect the unexpected. Before you think the previous tip allows you to cut back
your testing, remember the user. If your audience is non-technical, the possible
keystrokes, browser settings, and other "mystery" influences are unlimited. Think like a
user, and stress the Web site the way a user would.
Tip 5: Use all the automated testing tools you can get your hands on. Many useful tools
are on the Web already. With a little searching, you may be able to find the right ones for
you.