
A test plan is a systematic approach to testing a system such as a machine or software.

The plan typically contains a detailed understanding of what the eventual workflow will be.

Test plans in software development


In software testing, a test plan gives detailed testing information regarding an
upcoming testing effort, including:

• Scope of testing.
• Schedule.
• Test Deliverables.
• Release Criteria.
• Risks and Contingencies.

Test plan template, based on IEEE 829 format


1. Test Plan Identifier (TPI).
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Entry & Exit Criteria
11. Suspension Criteria and Resumption Requirements
12. Test Deliverables
13. Remaining Test Tasks
14. Environmental Needs
15. Staffing and Training Needs
16. Responsibilities
17. Planning Risks and Contingencies
18. Approvals

Test plan identifier

For example: "Master plan for 3A USB Host Mass Storage Driver TP_3A1.0"
Some type of unique company generated number to identify this test plan, its level
and the level of software that it is related to. Preferably the test plan level will be the
same as the related software level. The number may also identify whether the test
plan is a Master plan, a Level plan, an integration plan or whichever plan level it
represents. This is to assist in coordinating software and testware versions within
configuration management.
Keep in mind that test plans, like other software documentation, are dynamic in nature
and must be kept up to date; therefore, they will have revision numbers.
You may want to include author and contact information, including the revision history,
as part of either the identifier section or the introduction.

References
List all documents that support this test plan.
Documents that are referenced include:

• Project Plan.
• System Requirements specifications.
• High Level design document.
• Detailed design document.
• Development and Test process standards.
• Methodology.
• Low Level design.

Introduction

State the purpose of the Plan, possibly identifying the level of the plan (master etc.).
This is essentially the executive summary part of the plan.
You may want to include any references to other plans, documents or items that
contain information relevant to this project/process.
Identify the objective or scope of the plan in relation to the Software Project Plan
it relates to. Other items may include resource and budget constraints, the scope of
the testing effort, how testing relates to other evaluation activities
(Analysis & Reviews), and possibly the process to be used for change control and for
communication and coordination of key activities.
As this is the "Executive Summary", keep the information brief and to the point.
The intention of the project should also be stated.

Test items (functions)

These are things you intend to test within the scope of this test plan. Essentially,
something you will test, a list of what is to be tested. This can be developed from the
software application inventories as well as other sources of documentation and
information.
This can be controlled through a local Configuration Management (CM) process if you have
one. This information includes version numbers and configuration requirements where
needed (especially if multiple versions of the product are supported). It may also
include key delivery schedule issues for critical elements.
This section can be oriented to the level of the test plan. For higher levels it may be
by application or functional area, for lower levels it may be by program, unit, module
or build.

Software risk issues

Identify what software is to be tested and what the critical areas are, such as:

1. Delivery of a third party product.
2. New version of interfacing software.
3. Ability to use and understand a new package/tool, etc.
4. Extremely complex functions.
5. Modifications to components with a past history of failure.
6. Poorly documented modules or change requests.

There are some inherent software risks such as complexity; these need to be
identified.

1. Safety.
2. Multiple interfaces.
3. Impacts on Client.
4. Government regulations and rules.

Another key area of risk is a misunderstanding of the original requirements. This can
occur at the management, user and developer levels. Be aware of vague or unclear
requirements and requirements that cannot be tested.
The past history of defects (bugs) discovered during unit testing will help identify
potential areas within the software that are risky. If unit testing discovered a
large number of defects, or a tendency towards defects in a particular area of the
software, this is an indication of potential future problems. It is the nature of
defects to cluster and clump together; if an area was defect-ridden earlier, it will
most likely continue to be defect-prone.
One good approach to defining where the risks are is to hold several brainstorming
sessions.

• Start with ideas such as, "What worries me about this project/application?"

Features to be tested

This is a listing of what is to be tested from the user's viewpoint of what the system
does. This is not a technical description of the software, but a USER'S view of the
functions.
Set the level of risk for each feature. Use a simple rating scale such as High,
Medium and Low (H, M, L); these levels are understandable to a user. You should be
prepared to discuss why a particular level was chosen.
Sections 4 and 6 are very similar, and the only true difference is the point of view.
Section 4 is a technical type description including version numbers and other
technical information and Section 6 is from the User’s viewpoint. Users do not
understand technical software terminology; they understand functions and processes
as they relate to their jobs.

Features not to be tested

This is a listing of what is 'not' to be tested from both the user's viewpoint of what the
system does and a configuration management/version control view. This is not a
technical description of the software, but a user's view of the functions.
Identify why the feature is not to be tested; there can be any number of reasons:

• Not to be included in this release of the Software.
• Low risk; has been used before and was considered stable.
• Will be released but not tested or documented as a functional part of the
release of this version of the software.
Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be
tested are directly affected by the levels of acceptable risk within the project, and
what does not get tested affects the level of risk of the project.

Approach (strategy)

This is your overall test strategy for this test plan; it should be appropriate to the
level of the plan (master, acceptance, etc.) and should be in agreement with all
higher and lower levels of plans. Overall rules and processes should be identified. It is
important to understand what is necessary in a test plan before trying to create
your own strategy; seek guidance from practitioners experienced in this area rather
than teaching yourself this important step in isolation.

• Are any special tools to be used and what are they?
• Will the tool require special training?
• What metrics will be collected?
• Which level is each metric to be collected at?
• How is Configuration Management to be handled?
• How many different configurations will be tested?
• Hardware
• Software
• Combinations of HW, SW and other vendor packages
• What levels of regression testing will be done and how much at each test
level?
• Will regression testing be based on severity of defects detected?
• How will elements in the requirements and design that do not make sense or
are untestable be processed?

If this is a master test plan the overall project testing approach and coverage
requirements must also be identified.
Specify if there are special requirements for the testing.

• Only the full component will be tested.
• A specified segment or grouping of features/components must be tested
together.

Other information that may be useful in setting the approach is:

• MTBF, Mean Time Between Failures - if this is a valid measurement for the test
involved and if the data is available.
• SRE, Software Reliability Engineering - if this methodology is in use and if the
information is available.

How will meetings and other organizational processes be handled?

Item pass/fail criteria

Specify the criteria to be used to determine whether each test item has passed or
failed, including show-stopper issues. Show-stopper severity requires definition
within each testing context.

Entry & exit criteria

Specify the criteria to be used to start testing and how you know when to stop the
testing process.

Suspension criteria & resumption requirements

Suspension criteria specify the criteria to be used to suspend all or a portion of the
testing activities while resumption criteria specify when testing can resume after it
has been suspended.

• Unavailability of external dependent systems during execution.
• When a defect is introduced that prevents any further testing.
• A critical path deadline is missed, so that the client will not accept delivery
even if all testing is completed.
• A specific holiday shuts down both development and testing.

System Integration Testing in the Integration environment may be resumed under
the following circumstances:

• When the external dependent systems become available again.
• When a fix is successfully implemented and the Testing Team is notified to
continue testing.
• The contract is renegotiated with the client to extend delivery.
• The holiday period ends.

Suspension criteria assume that testing cannot go forward and that going backward
is also not possible. A failed build would not suffice, as you could generally continue
to use the previous build. Most major or critical defects would also not constitute
suspension criteria, as other areas of the system could continue to be tested.

Test deliverables

List the documents, reports, and charts that will be presented to stakeholders on a
regular basis during testing and when testing has been completed.

Remaining test tasks

If this is a multi-phase process, or if the application is to be released in increments,
there may be parts of the application that this plan does not address. These areas
need to be identified to avoid any confusion should defects be reported back on
those future functions. This will also allow the users and testers to avoid incomplete
functions and prevent waste of resources chasing non-defects.
If the project is being developed as a multi-party process, this plan may only cover a
portion of the total functions/features. This status needs to be identified so that those
other areas have plans developed for them and to avoid wasting resources tracking
defects that do not relate to this plan.
When a third party is developing the software, this section may contain descriptions
of those test tasks belonging to both the internal groups and the external groups.

Environmental needs
Are there any special requirements for this test plan, such as:

• Special hardware such as simulators, static generators, etc.
• How will test data be provided? Are there special collection requirements or
specific ranges of data that must be provided?
• How much testing will be done on each component of a multi-part feature?
• Special power requirements.
• Specific versions of other supporting software.
• Restricted use of the system during testing.

Staffing and training needs

• Training on the application/system.
• Training for any test tools to be used.
The Test Items and Responsibilities sections affect this section; they determine what
is to be tested and who is responsible for the testing and training.

Responsibilities

Who is in charge?
Do not put people in charge of the test plan who have never produced anything
resembling one before; this is vital, as they will learn nothing from it and the
testing effort will fail.
This issue includes all areas of the plan. Here are some examples:

• Setting risks.
• Selecting features to be tested and not tested.
• Setting overall strategy for this level of plan.
• Ensuring all required elements are in place for testing.
• Providing for resolution of scheduling conflicts, especially, if testing is done on
the production system.
• Who provides the required training?
• Who makes the critical go/no go decisions for items not covered in the test
plans?
• Who is responsible for each risk?

Planning risks and contingencies

What are the overall risks to the project with an emphasis on the testing process?

• Lack of personnel resources when testing is to begin.
• Lack of availability of required hardware, software, data or tools.
• Late delivery of the software, hardware or tools.
• Delays in training on the application and/or tools.
• Changes to the original requirements or designs.
• Complexities involved in testing the applications.

Specify what will be done for various events. For example: requirements definition
will be complete by January 1, 20XX, and, if the requirements change after that date,
the following actions will be taken:

• The test schedule and development schedule will move out an appropriate
number of days. This rarely occurs, as most projects tend to have fixed
delivery dates.
• The number of tests performed will be reduced.
• The number of acceptable defects will be increased.
o These two items could lower the overall quality of the delivered
product.
• Resources will be added to the test team.
• The test team will work overtime (this could affect team morale).
• The scope of the plan may be changed.
• There may be some optimization of resources. This should be avoided, if
possible, for obvious reasons.

Management is usually reluctant to accept scenarios such as the one above even
though they have seen it happen in the past.
The important thing to remember is that, if you do nothing at all, the usual result is
that testing is cut back or omitted completely, neither of which should be an
acceptable option.

Approvals

Who can approve the process as complete and allow the project to proceed to the
next level (depending on the level of the plan)?
At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience is:

• The audience for a unit test level plan is different from that of an integration,
system or master level plan.
• The levels and type of knowledge at the various levels will be different as well.
• Programmers are very technical but may not have a clear understanding of
the overall business process driving the project.
• Users may have varying levels of business acumen and very little technical
skill.
• Always be wary of users who claim high levels of technical skills and
programmers that claim to fully understand the business process. These types
of individuals can cause more harm than good if they do not have the skills
they believe they possess.

Glossary

Used to define terms and acronyms used in the document, and testing in general, to
eliminate confusion and promote consistent communications.
Regression Testing
Regression testing is any type of software testing which seeks to uncover
regression bugs. Regression bugs occur whenever software functionality that
previously worked as desired, stops working or no longer works in the same way that
was previously planned. Typically regression bugs occur as an unintended
consequence of program changes.

Background
Experience has shown that as software is developed, this kind of reemergence of
faults is quite common. Sometimes it occurs because a fix gets lost through poor
revision control practices (or simple human error in revision control), but often a fix
for a problem will be "fragile" in that it fixes the problem in the narrow case where it
was first observed but not in more general cases which may arise over the lifetime of
the software. Finally, it has often been the case that when some feature is
redesigned, the same mistakes will be made in the redesign that were made in the
original implementation of the feature.
Therefore, in most software development situations it is considered good practice
that when a bug is located and fixed, a test that exposes the bug is recorded and
regularly retested after subsequent changes to the program. Although this may be
done through manual testing procedures using programming techniques, it is often
done using automated testing tools. Such a test suite contains software tools that
allow the testing environment to execute all the regression test cases automatically;
some projects even set up automated systems to automatically re-run all regression
tests at specified intervals and report any regressions. Common strategies are to run
such a system after every successful compile (for small projects), every night, or
once a week. Those strategies can be automated by an external tool, such as
BuildBot.
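
As a minimal sketch of this practice, the hypothetical example below records a unittest case that pins a previously fixed off-by-one defect, so an automated runner (nightly, or after every successful build) can re-execute it and catch any regression; the function, the defect, and the names are invented for illustration.

import unittest

def count_days_inclusive(start_day, end_day):
    # Hypothetical function whose original version dropped the last day
    # (an off-by-one defect that was later fixed).
    return end_day - start_day + 1

class TestCountDaysRegression(unittest.TestCase):
    # Regression test recorded at the time the defect was fixed.
    def test_inclusive_range_is_not_truncated(self):
        # The defective version returned 4 here; this test pins the fix.
        self.assertEqual(count_days_inclusive(1, 5), 5)

if __name__ == "__main__":
    unittest.main()

Kept in the regression suite, this test fails immediately if a later change reintroduces the truncation.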
Regression testing is an integral part of the extreme programming software
development method. In this method, design documents are replaced by extensive,
repeatable, and automated testing of the entire software package at every stage in
the software development cycle.

Regression test generation


Effective regression tests generate sufficient code execution coverage to exercise all
meaningful code branches. Therefore, software testing is a combinatorial problem.
However, in practice many combinations are unreachable so the problem size is
greatly reduced. Every boolean decision statement requires at least two tests: one
with an outcome of "true" and one with an outcome of "false". As a result, for every
line of code written, programmers often need 3 to 5 lines of test code.
Traditionally, in the corporate world, regression testing has been performed by a
software quality assurance team after the development team has completed work.
However, defects found at this stage are the most costly to fix. This problem is being
addressed by the rise of developer testing. Although developers have always written
test cases as part of the development cycle, these test cases have generally been
either functional tests or unit tests that verify only intended outcomes. Developer
testing compels a developer to focus on unit testing and to include both positive and
negative test cases.
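
As a hedged illustration of exercising both outcomes of a boolean decision with positive and negative test cases, the sketch below tests a hypothetical validate_age function; the name and the accepted range are assumptions made for the example.

import unittest

def validate_age(age):
    # Hypothetical routine with a single boolean decision statement.
    if 0 <= age <= 120:   # needs one test where this is true and one where it is false
        return True
    return False

class TestValidateAge(unittest.TestCase):
    def test_positive_valid_age_is_accepted(self):
        # Positive test case: the decision evaluates to true.
        self.assertTrue(validate_age(35))

    def test_negative_invalid_age_is_rejected(self):
        # Negative test case: the decision evaluates to false.
        self.assertFalse(validate_age(-1))

if __name__ == "__main__":
    unittest.main()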
When regression test generation is supported by a sustainable process for ensuring
that test case failures are reviewed daily and addressed immediately, the end result
is a regression suite that evolves with the application, and becomes more robust and
more intelligent each day. [3] If such a process is not implemented and ingrained into
the team's workflow, the application may evolve out of sync with the generated test
suite, increasing false positives and reducing the effectiveness of the test suite.

Types of regression
• Local - changes introduce new bugs.
• Unmasked - changes unmask previously existing bugs.
• Remote - changes to one part break another part of the program. For example,
Module A writes to a database and Module B reads from it; if changes to what
Module A writes to the database break Module B, that is a remote regression.

Test case
A test case in software engineering is a set of conditions or variables under which a
tester will determine whether a requirement or use case for an application is partially
or fully satisfied. It may take many test cases to determine that a requirement is fully
satisfied.

Test cases are often incorrectly referred to as test scripts. Test scripts are lines of
code used mainly in automation tools.

Formal requirement-based test cases


In order to fully test that all the requirements of an application are met, there must
be at least one test case for each requirement unless a requirement has sub-
requirements. In that situation, each sub-requirement must have at least one test
case. This is frequently done using a traceability matrix. Some methodologies, like
RUP, recommend creating at least two test cases for each requirement: one should
perform positive testing of the requirement and the other should perform negative
testing. Written test cases should include a description of the functionality to be
tested, and the preparation required to ensure that the test can be conducted.

What characterizes a formal, written test case is that there is a known input and an
expected output, which is worked out before the test is executed. The known input
should test a precondition and the expected output should test a postcondition.
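
A minimal sketch of such a pair of formal, requirement-based test cases, assuming a hypothetical requirement (labelled REQ-07 here purely for illustration) that withdrawals reduce an account balance and that overdrafts are refused; the Account class and all values are invented.

import unittest

class Account:
    # Hypothetical system under test for the assumed requirement REQ-07.
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class TestRequirementReq07(unittest.TestCase):
    def test_positive_withdraw_reduces_balance(self):
        account = Account(balance=100)          # known input (precondition)
        account.withdraw(40)
        self.assertEqual(account.balance, 60)   # expected output (postcondition)

    def test_negative_overdraft_is_refused(self):
        account = Account(balance=100)          # known input (precondition)
        with self.assertRaises(ValueError):     # expected output: request refused
            account.withdraw(150)

if __name__ == "__main__":
    unittest.main()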

Informal requirement-based test cases


For application without formal requirements, test cases can be written based on the
accepted normal operation of programs of a similar class. In some schools of testing,
test cases are not written at all but the activities and results are reported after the
tests have been run.

Typical test case format


Test Cases usually have the following components.

• Test Case Summary
• Configuration
• Initial Condition
• Steps to run the test case
• Expected behavior/outcome
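
As one way to capture these components in a machine-readable form, the sketch below models a written test case as a small Python data structure; the field names simply mirror the list above and the sample values are assumptions for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class WrittenTestCase:
    # A written test case with the components listed above.
    summary: str
    configuration: str
    initial_condition: str
    steps: List[str] = field(default_factory=list)
    expected_outcome: str = ""

login_case = WrittenTestCase(
    summary="Valid user can log in",
    configuration="Build 1.2 on the staging environment",
    initial_condition="User account 'demo' exists and is active",
    steps=["Open the login page", "Enter valid credentials", "Press Login"],
    expected_outcome="The dashboard page is displayed",
)
print(login_case.summary)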

Test suite
In software engineering, a test suite, also known as a validation suite, is a collection
of test cases that are intended to be used as input to a software program to show
that it has some specified set of behaviors. Test suites are used to group similar test
cases together. A system might, for example, have a smoke test suite that consists only
of smoke tests, or a test suite for some specific functionality in the system.
A test suite often contains detailed instructions or goals for each collection of test
cases and information on the system configuration to be used during testing. A group
of test cases may also contain prerequisite states or steps, and descriptions of the
following tests.
Collections of test cases are sometimes incorrectly termed a test plan. They may also
be called a test script, or even a test scenario.
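
As a minimal sketch of grouping similar test cases together, the example below builds a smoke test suite with Python's unittest; the test classes are hypothetical placeholders standing in for real smoke tests.

import unittest

class SmokeLoginTests(unittest.TestCase):
    def test_application_starts(self):
        self.assertTrue(True)   # placeholder smoke check

class SmokeSearchTests(unittest.TestCase):
    def test_search_page_loads(self):
        self.assertTrue(True)   # placeholder smoke check

def smoke_suite():
    # Group only the smoke-level test cases into one suite.
    suite = unittest.TestSuite()
    loader = unittest.defaultTestLoader
    suite.addTests(loader.loadTestsFromTestCase(SmokeLoginTests))
    suite.addTests(loader.loadTestsFromTestCase(SmokeSearchTests))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(smoke_suite())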

Different types of test suites


An executable test suite is a test suite that is ready to be executed. This usually
means that there exists a test harness that is integrated with the suite and such that
the test suite and the test harness together can work on a sufficiently detailed level
to correctly communicate with the system under test (SUT).
A test suite for a primality testing subroutine might consist of a list of numbers
and their primality (prime or composite), along with a testing subroutine. The testing
subroutine would supply each number in the list to the primality tester, and verify
that the result of each test is correct.
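
That paragraph can be sketched directly in code: a list of numbers paired with their known primality is supplied to the primality tester, and each result is verified; the is_prime implementation shown is only an assumed example of the subroutine under test.

def is_prime(n):
    # Simple primality tester (the subroutine under test).
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# The suite: numbers paired with their expected primality.
PRIMALITY_CASES = [(2, True), (3, True), (4, False), (17, True), (21, False)]

def run_primality_suite():
    # Supply each number to the tester and verify the result.
    for number, expected in PRIMALITY_CASES:
        assert is_prime(number) == expected, f"{number}: expected {expected}"
    print("all primality cases passed")

if __name__ == "__main__":
    run_primality_suite()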
The counterpart of an executable test suite is an abstract test suite. However, the
terms test suite and test plan are often used with roughly the same meaning as
executable and abstract test suite, respectively.

Test script
A test script in Software Testing is a set of instructions that will be performed on the
System Under Test to test that the system functions as expected. These steps can be
executed manually or automatically.
There are various means for executing test scripts.

• Short program written in a programming language, used to test part of the
functionality of a software system. Test scripts written as a short program can
either be written using a special manual functional GUI test tool (like HP
QuickTest Professional or Rational Software) or in a well-known programming
language (such as C++, C#, Tcl, Expect, Java, PHP, Perl, Python, or more
recently, Ruby).
• Extensively parameterized short programs a.k.a. Data-driven testing
• Reusable steps created in a table, a.k.a. keyword-driven or table-driven
testing
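
A hedged example of the first kind of test script, a short program testing part of a system's functionality, written here in a data-driven style so that the same steps run over a parameter table; the apply_discount function and its rates are invented for illustration.

def apply_discount(price, customer_type):
    # Hypothetical piece of functionality exercised by the script.
    rates = {"regular": 0.0, "member": 0.10, "vip": 0.20}
    return round(price * (1 - rates.get(customer_type, 0.0)), 2)

# Data-driven table: each row is one execution of the same scripted steps.
CASES = [
    ("regular", 100.0, 100.0),
    ("member", 100.0, 90.0),
    ("vip", 100.0, 80.0),
]

if __name__ == "__main__":
    for customer_type, price, expected in CASES:
        actual = apply_discount(price, customer_type)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: {customer_type} {price} -> {actual} (expected {expected})")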

Automated Testing
Automated testing has a major advantage in that these types of tests may be
executed 24/7 without the need for a continuous presence of people. Another
advantage over manual testing is that it is easily repeatable, and thus it is favoured
when doing regression testing. This however is not always the case as automated
tests may be poorly written and can break during playback. Since most systems are
designed with human interaction in mind, it is good practice that a human tests the
system at some point. It is worth considering automating tests if they are to be
executed several times, for example as part of regression testing. However one
shouldn't fall into the trap of spending more time automating a test than it would
take to simply execute it manually, unless it is planned to be executed several times.

Use case
A use case is a description of a system's behaviour as it responds to a request that
originates from outside of that system.

The use case technique is used in software and systems engineering to capture the
functional requirements of a system. Use cases describe the interaction between a
primary actor—the initiator of the interaction—and the system itself, represented as
a sequence of simple steps. Actors are someone or something that exists outside the
system under study and that takes part in a sequence of activities in a dialogue with
the system to achieve some goal: they may be end users, other systems, or
hardware devices. Each use case is a complete series of events, described from the
point of view of the actor.

Each use case focuses on describing how to achieve a goal or task. For most software
projects this means that multiple, perhaps dozens, of use cases are needed to define
the scope of the new system.

Use cases should not be confused with the features of the system under
consideration. A use case may be related to one or more features, and a feature may be
related to one or more use cases.

A use case defines the interactions between external actors and the system under
consideration to accomplish a goal. An actor specifies a role played by a person or
thing when interacting with the system. The same person using the system may be
represented as two different actors because they are playing different roles. For
example, "Joe" could be playing the role of a Customer when using an Automated
Teller Machine to withdraw cash, or playing the role of a Bank Teller when using the
system to restock the cash drawer.

Use cases treat the system as a black box, and the interactions with the system,
including system responses, are perceived as from outside the system. This is a
deliberate policy, because it forces the author to focus on what the system must do,
not how it is to be done, and avoids the trap of making assumptions about how the
functionality will be accomplished.

Use cases may be described at the abstract level (business use case, sometimes
called essential use case), or at the system level (system use case). The difference
between these is the scope.

• The business use case is described in technology-free terminology which
treats the business process as a black box and describes the business process
that is used by its business actors (people or systems external to the
business) to achieve their goals (e.g., manual payment processing, expense
report approval, manage corporate real estate). The business use case will
describe a process that provides value to the business actor, and it describes
what the process does. Business Process Mapping is another method for this
level of business description.
• The system use cases are normally described at the system functionality
level (for example, create voucher) and specify the function or service the
system provides for the user. A system use case will describe what the actor
achieves interacting with the system. For this reason it is recommended that a
system use case specification begin with a verb (e.g., create voucher, select
payments, exclude payment, cancel voucher.) Generally, the actor could be a
human user or another system interacting with the system being defined.
A use case should:

• Describe what the system shall do for the actor to achieve a particular goal.
• Include no implementation-specific language.
• Be at the appropriate level of detail.
• Not include detail regarding user interfaces and screens. This is done in user-
interface design.

Traceability matrix
In a software development process, a traceability matrix is a table that correlates
any two baselined documents that require a many-to-many relationship to
determine the completeness of the relationship. It is often used with high-level
requirements (sometimes known as marketing requirements) and detailed
requirements of the software product to the matching parts of high-level design,
detailed design, test plan, and test cases.
Common usage is to take the identifier for each of the items of one document and
place them in the left column. The identifiers for the other document are placed
across the top row. When an item in the left column is related to an item across the
top, a mark is placed in the intersecting cell. The number of relationships is added
up for each row and each column. This value indicates the mapping of the two items.
Zero values indicate that no relationship exists. It must be determined if one must be
made. Large values imply that the item is too complex and should be simplified.
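
The usage described above can be sketched as a small script that cross-tabulates requirement identifiers against test case identifiers and totals each row and column; the identifiers and relationships are assumptions for illustration.

# Hypothetical relationships: requirement id -> test case ids that cover it.
coverage = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-2"],
    "REQ-3": [],             # zero relationships: decide whether a test must be made
}
test_ids = ["TC-1", "TC-2", "TC-3"]

print("\t".join(["Req/Test"] + test_ids + ["Total"]))
column_totals = {tc: 0 for tc in test_ids}
for req, covered in coverage.items():
    row = [req]
    for tc in test_ids:
        hit = tc in covered
        row.append("X" if hit else "")
        column_totals[tc] += int(hit)
    row.append(str(len(covered)))
    print("\t".join(row))
print("\t".join(["Total"] + [str(column_totals[tc]) for tc in test_ids]))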

Functional requirements
In software engineering, a functional requirement defines a function of a software
system or its component. A function is described as a set of inputs, the behavior, and
outputs.

Functional requirements may be calculations, technical details, data manipulation
and processing, and other specific functionality that show how a use case is to be
fulfilled. They are supported by non-functional requirements, which impose
constraints on the design or implementation (such as performance requirements,
security, or reliability).

As defined in requirements engineering, functional requirements specify particular
behaviors of a system. This should be contrasted with non-functional requirements,
which specify overall characteristics such as cost and reliability.

Typically, a requirements analyst generates functional requirements after building
use cases. However, this may have exceptions since software development is an
iterative process and sometimes certain requirements are conceived prior to the
definition of the use cases. Both artifacts (use cases documents and requirements
documents) complement each other in a bidirectional process.

Process
A typical functional requirement will contain a unique name and number, a brief
summary, and a rationale. This information is used to help the reader understand
why the requirement is needed, and to track the requirement through the
development of the system.
The core of the requirement is the description of the required behavior, which must
be a clear and readable description of the required behavior. This behavior may
come from organizational or business rules, or it may be discovered through
elicitation sessions with users, stakeholders, and other experts within the
organization. Many requirements will be uncovered during the use case
development. When this happens, the requirements analyst should create a
placeholder requirement with a name and summary, and research the details later,
to be filled in when they are better known.
Software requirements must be clear, correct, unambiguous, specific, and verifiable.
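
A small sketch of what such a requirement record might look like as a data structure, using the fields named above (unique identifier, summary, rationale, and a behavior description filled in later); the sample content is assumed.

from dataclasses import dataclass

@dataclass
class FunctionalRequirement:
    # Placeholder record; details are researched and filled in later.
    identifier: str           # unique name and number
    summary: str              # brief summary
    rationale: str            # why the requirement is needed
    description: str = "TBD"  # the required behavior, elaborated when known

fr = FunctionalRequirement(
    identifier="FR-042",
    summary="System sends a confirmation email after checkout",
    rationale="Customers need a purchase record for support and returns",
)
print(fr.identifier, "-", fr.summary)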

SOME IMPORTANT KEYS
What is Software Quality Assurance?

Quality Assurance makes sure the project will be completed based on the previously
agreed specifications, standards and functionality required without defects and
possible problems. It monitors and tries to improve the development process from
the beginning of the project to ensure this. It is oriented to "prevention".

When should QA testing start in a project? Why?

QA is involved in the project from the beginning. This helps the teams communicate
and understand the problems and concerns, and also gives time to set up the testing
environment and configuration. On the other hand, actual testing starts after the test
plans are written, reviewed and approved based on the design documentation.

What is Software Testing?

Software testing is oriented to "detection". It is examining a system or an application
under controlled conditions, intentionally making things go wrong to determine whether
things happen when they should not, or do not happen when they should.

What is Software Quality?

Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements and/or expectations, and is maintainable.

What is Verification and Validation?


Verification is a preventive mechanism to detect possible failures before testing
begins. It involves reviews, meetings, inspections, and the evaluation of documents,
plans, code, and specifications. Validation occurs after verification and is the actual
testing to find defects against the functionality or the specifications.

What is a Test Plan?

Test Plan is a document that describes the objectives, scope, approach, and focus of
a software testing effort.

What is a Test Case?

A test case is a document that describes an input, action, or event and an expected
response, to determine if a feature of an application is working correctly. A test case
should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data requirements, steps, and expected results.

What is Good Code?

Good code is code that works according to the requirements and is bug-free, readable,
expandable in the future, and easily maintainable.

What is Good Design?

In good design, the overall structure is clear, understandable, easily modifiable, and
maintainable; it works correctly when implemented, and its functionality can be traced
back to customer and end-user requirements.

Who is a Good Test Engineer?

A good test engineer has the ability to think the unthinkable, a test-to-break
attitude, a strong desire for quality, and attention to detail.

What is a Walkthrough?

A walkthrough is a quick, informal meeting held for evaluation purposes.

What is the Software Life Cycle?

The Software Life Cycle begins when an application is first conceived and ends when
it is no longer in use. It includes aspects such as initial concept, requirements
analysis, functional design, internal design, documentation planning, test planning,
coding, document preparation, integration, testing, maintenance, updates, retesting,
phase-out, and other aspects.

What is an Inspection?

The purpose of an inspection is to find defects and problems, mostly in documents
such as test plans, specifications, test cases, and code. It helps to find and report
problems, not to fix them. It is one of the most cost-effective methods of ensuring
software quality. Many people can join an inspection, but normally one moderator, one
reader and one note taker are mandatory.

What are the benefits of Automated Testing?

It is very valuable for long-term and ongoing projects. You can automate some or all
of the tests that need to be run repeatedly or are difficult to test manually. It saves
time and effort, and it makes testing possible outside working hours and overnight.
Automated tests can be used by different people many times in the future. In this way,
you also standardize the testing process and can depend on the results.

What do you imagine are the main problems of working in a geographically
distributed team?

The main problem is communication. Getting to know the team members and sharing as
much information as possible whenever needed is very valuable for solving problems and
concerns. On the other hand, increasing communication as much as possible and setting
up meetings help to reduce miscommunication problems.

What are the common problems in Software Development Process?

Poor requirements, unrealistic schedules, inadequate testing, miscommunication, and
additional requirement changes after development begins.

What are Test Types?

· Black box testing - You don't need to know the internal design or have deep knowledge about
the code to conduct this test. It's mainly based on functionality, specifications, and requirements.

· White box testing - This test is based on knowledge of the internal design and code. Tests are
based on code statements, coding styles, etc.

· unit testing - the most 'micro' scale of testing; to test particular functions or code modules.
Typically done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. Not always easily done unless the application has a well-
designed architecture with tight code; may require developing test driver modules or test
harnesses.

· incremental integration testing - continuous testing of an application as new functionality is
added; requires that various aspects of an application's functionality be independent enough to
work separately before all parts of the program are completed, or that test drivers be developed
as needed; done by programmers or by testers.

· integration testing - testing of combined parts of an application to determine if they function
together correctly. The 'parts' can be code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to client/server and
distributed systems.
· functional testing - black-box type testing geared to functional requirements of an application;
this type of testing should be done by testers. This doesn't mean that the programmers shouldn't
check that their code works before releasing it (which of course applies to any stage of testing.)

· system testing - black-box type testing that is based on overall requirements specifications;
covers all combined parts of a system.

· end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing
of a complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.

· sanity testing or smoke testing - typically an initial testing effort to determine if a new software
version is performing well enough to accept it for a major testing effort. For example, if the new
software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting
databases, the software may not be in a 'sane' enough condition to warrant further testing in its
current state.

· regression testing - re-testing after fixes or modifications of the software or its environment. It
can be difficult to determine how much re-testing is needed, especially near the end of the
development cycle. Automated testing tools can be especially useful for this type of testing.

· acceptance testing - final testing based on specifications of the end-user or customer, or based
on use by end-users/customers over some limited period of time.

· load testing - testing an application under heavy loads, such as testing of a web site under a
range of loads to determine at what point the system's response time degrades or fails.

Load Tests are end-to-end performance tests under anticipated production load. The
objective of such tests is to determine the response times for various time-critical
transactions and business processes and to ensure that they are within documented
expectations (or Service Level Agreements - SLAs). Load tests also measure the
capability of an application to function correctly under load, by measuring transaction
pass/fail/error rates. An important variation of the load test is the Network
Sensitivity Test, which incorporates WAN segments into a load test, as most
applications are deployed beyond a single LAN.
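
As a hedged, minimal sketch of the idea (not a substitute for a dedicated load-testing tool), the script below fires a batch of concurrent requests at a placeholder URL and reports pass/fail counts against a response-time expectation; the URL, request count, and threshold are all assumptions for the example.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"   # placeholder system under test
REQUESTS = 50                    # assumed anticipated load for the example
THRESHOLD_SECONDS = 2.0          # assumed response-time expectation (SLA)

def timed_request(_):
    # Issue one request and record whether it succeeded within the threshold.
    start = time.time()
    try:
        with urllib.request.urlopen(URL, timeout=THRESHOLD_SECONDS) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.time() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=10) as pool:
        results = list(pool.map(timed_request, range(REQUESTS)))
    passed = sum(1 for ok, elapsed in results if ok and elapsed <= THRESHOLD_SECONDS)
    slowest = max(elapsed for _, elapsed in results)
    print(f"passed {passed}/{REQUESTS}; slowest response {slowest:.2f}s")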

· stress testing - term often used interchangeably with 'load' and 'performance' testing. Also
used to describe such tests as system functional testing while under unusually heavy loads,
heavy repetition of certain actions or inputs, input of large numerical values, large complex
queries to a database system, etc.

Stress testing refers to a type of testing that is so harsh it is expected to push
the program to failure. For example, we might flood a web application with data,
connections, and so on until it finally crashes. The fact of the crash might be
unremarkable; the consequences of the crash, what else fails, what data are
corrupted and so forth, are the results of interest for the stress tester.

· performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally
'performance' testing (and any other 'type' of testing) is defined in requirements documentation or
QA or Test Plans.
· usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on
the targeted end-user or customer. User interviews, surveys, video recording of user sessions,
and other techniques can be used. Programmers and testers are usually not appropriate as
usability testers.

· install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

· recovery testing - testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.

· failover testing - typically used interchangeably with 'recovery testing'.

· security testing - testing how well the system protects against unauthorized internal or external
access, willful damage, etc; may require sophisticated testing techniques.

· compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.

· exploratory testing - often taken to mean a creative, informal software test that is not based on
formal test plans or test cases; testers may be learning the software as they test it.

· ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have
significant understanding of the software before testing it.

· context-driven testing - testing driven by an understanding of the environment, culture, and
intended use of software. For example, the testing approach for life-critical medical equipment
software would be completely different than that for a low-cost computer game.

· user acceptance testing - determining if software is satisfactory to an end-user or customer.

· comparison testing - comparing software weaknesses and strengths to competing products.

· alpha testing - testing of an application when development is nearing completion; minor design
changes may still be made as a result of such testing. Typically done by end-users or others, not
by programmers or testers.

· beta testing - testing when development and testing are essentially completed and final bugs
and problems need to be found before final release. Typically done by end-users or others, not by
programmers or testers.

· mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected. Proper implementation requires large
computational resources.

Quality Control (QC) vs. Quality Assurance (QA)


Testing is a Quality Control function. What is Quality Control? QC includes any activity that
examines products to determine if they meet their specifications. As this definition applies to
software, the "specification" may be either written specs, or needs as defined by the customer or
end users. Not only is testing a QC activity, but so are walkthroughs, reviews, or inspections of
work products like requirements, designs, code and documents. If none of these activities is a
QA activity, then what do we do in our organization that is QA? Unfortunately, most of us are
doing little or no Quality Assurance. QA includes any activity that focuses on ensuring that the
needed levels of quality are achieved. In essence, if QC is about detecting defects, then QA is
about avoiding them! QA activities include identifying problems so steps can be taken in the
future to avoid those same problems. It includes engineering the processes that are used by the
team (analysts, developers, testers, writers) so that high quality products can be built efficiently.
And it includes ensuring that each team member is performing his or her job consistently and
producing consistently good results.
