
Aim and Objective of the Subject

Aim

 Finding defects introduced by the programmer while developing the software.
 Gaining confidence in, and providing information about, the level of quality.
 Preventing defects.
 Making sure that the end result meets the business and user requirements.
 Ensuring that the software satisfies the BRS (Business Requirement Specification) and the SRS (System Requirement Specification).
 Gaining the confidence of customers by providing them a quality product.

Objective

The student should be made to:

 Expose the criteria for test cases.


 Learn the design of test cases.
 Be familiar with test management and test automation techniques.
 Be exposed to test metrics and measurements.

Francis Xavier Engineering College, Tirunelveli –3
Department of Information Technology
Detailed Lesson Plan
Name of the Subject & Code: SOFTWARE TESTING (IT 6004)
Text Book
1. Srinivasan Desikan and Gopalaswamy Ramesh, “Software Testing – Principles and
Practices”, Pearson Education, 2006.
2. Ron Patton, “Software Testing”, Second Edition, Sams Publishing, Pearson Education,
2007.

| Sl. No | Unit | Topic / Portions to be Covered | Hours Req/Planned | Cum. Hrs | Books Referred |
|---|---|---|---|---|---|
| | | UNIT I – INTRODUCTION | | | |
| 1 | 1 | Testing as an Engineering Activity | 1 | 1 | T1 |
| 2 | 1 | Testing as a Process, Testing axioms | 1 | 2 | T1 |
| 3 | 1 | Basic Definitions | 1 | 3 | T1 |
| 4 | 1 | Software Testing Principles | 1 | 4 | T1 |
| 5 | 1 | The Tester’s Role in a Software Development Organization | 1 | 5 | T1 |
| 6 | 1 | Origins of Defects, Cost of Defects | 1 | 6 | T1 |
| 7 | 1 | Defect Classes, The Defect Repository and Test Design, Defect Examples | 1 | 7 | T1 |
| 8 | 1 | Developer/Tester Support of Developing a Defect Repository | 1 | 8 | T1 |
| 9 | 1 | Defect Prevention strategies | 1 | 9 | T1 |
| | | UNIT II – TEST CASE DESIGN | | | |
| 10 | 2 | Test Case Design Strategies | 1 | 10 | T1 |
| 11 | 2 | Using Black Box Approach to Test Case Design, Random Testing, Requirements based testing | 1 | 11 | T1 |
| 12 | 2 | Boundary Value Analysis | 1 | 12 | T1 |
| 13 | 2 | Equivalence Class Partitioning, state based testing | 1 | 13 | T2 |
| 14 | 2 | Cause-effect graphing, compatibility testing | 1 | 14 | T1 |
| 15 | 2 | User documentation testing, domain testing | 1 | 15 | T1 |
| 16 | 2 | Using White Box Approach to Test design, Test Adequacy Criteria, static testing vs. structural testing | 1 | 16 | T1 |
| 17 | 2 | Code functional testing, Coverage and Control Flow Graphs, Covering Code Logic | 1 | 17 | T1 |
| 18 | 2 | Paths, code complexity testing, Evaluating Test Adequacy Criteria | 1 | 18 | T1 |
| | | UNIT III – LEVELS OF TESTING | | | |
| 19 | 3 | The Need for Levels of Testing – Unit Test | 1 | 19 | T1 |
| 20 | 3 | Unit Test Planning, Designing the Unit Tests, The Test Harness, Running the Unit tests and Recording results | 1 | 20 | T1 |
| 21 | 3 | Integration tests, Designing Integration Tests, Integration Test Planning | 1 | 21 | T1 |
| 22 | 3 | Scenario testing, defect bash elimination | 1 | 22 | T1 |
| 23 | 3 | System Testing – Acceptance testing, performance testing | 1 | 23 | T2 |
| 24 | 3 | Regression testing, Internationalization testing | 1 | 24 | T1 |
| 25 | 3 | Ad-hoc testing, Alpha, Beta Tests, Testing OO systems, usability and accessibility testing | 1 | 25 | T2 |
| 26 | 3 | Configuration testing, Compatibility testing | 1 | 26 | T2 |
| 27 | 3 | Testing the documentation, Website testing | 1 | 27 | T2 |
| | | UNIT IV – TEST MANAGEMENT | | | |
| 28 | 4 | People and organizational issues in testing, organization structures for testing teams | 1 | 28 | T2 |
| 29 | 4 | Testing services, Test Planning | 1 | 29 | T1 |
| 30 | 4 | Test Plan Components | 1 | 30 | T1 |
| 31 | 4 | Test Plan Attachments | 1 | 31 | T1 |
| 32 | 4 | Locating Test Items, test management | 1 | 32 | T1 |
| 33 | 4 | Test process, Reporting Test Results | 1 | 33 | T1 |
| 34 | 4 | The role of three groups in Test Planning and Policy Development | 1 | 34 | T1 |
| 35 | 4 | Introducing the test specialist | 1 | 35 | T1 |
| 36 | 4 | Skills needed by a test specialist, Building a Testing Group | 1 | 36 | T1 |
| | | UNIT V – TEST AUTOMATION | | | |
| 37 | 5 | Software test automation | 1 | 37 | T2 |
| 38 | 5 | Skill needed for automation | 1 | 38 | T2 |
| 39 | 5 | Scope of automation | 1 | 39 | T2 |
| 40 | 5 | Design and architecture for automation | 1 | 40 | T2 |
| 41 | 5 | Requirements for a test tool | 1 | 41 | T2 |
| 42 | 5 | Challenges in automation | 1 | 42 | T2 |
| 43 | 5 | Test metrics and measurements | 1 | 43 | T2 |
| 44 | 5 | Project, progress metrics | 1 | 44 | T2 |
| 45 | 5 | Productivity metrics | 1 | 45 | T2 |
UNIT: 1
INTRODUCTION

Testing as an Engineering Activity – Testing as a Process – Testing axioms – Basic


definitions – Software Testing Principles – The Tester’s Role in a Software Development
Organization – Origins of Defects – Cost of defects – Defect Classes – The Defect
Repository and Test Design – Defect Examples – Developer/Tester Support of
Developing a Defect Repository – Defect Prevention strategies.

PART- A

1) Differentiate between verification and validation. (Nov/Dec 2009, May/June 2012)

| Verification | Validation |
|---|---|
| 1. It is the process of evaluating a software system to determine whether the product of a given development phase satisfies the conditions imposed at the start of that phase. | 1. It is the process of evaluating a software system during or at the end of the development cycle in order to determine whether it satisfies the specified requirements. |
| 2. Verification is usually associated with activities such as inspections and reviews of the software deliverables. | 2. Validation is usually associated with traditional execution-based testing, i.e., exercising the code with test cases. |

2) Define the term Testing. (May/Jun 2014)

Testing is generally described as a group of procedures carried out to evaluate some aspect of a piece of software. Testing can be described as a process used for revealing defects in software, and for establishing that the software has attained a specified degree of quality with respect to selected attributes.

3) Differentiate between testing and debugging. ( Nov/Dec 2008,May/Jun 2013)

| Testing | Debugging |
|---|---|
| Testing is a dual-purpose process: to reveal defects, and to evaluate quality attributes. | Debugging, or fault localization, is the process of: locating the fault or defect, repairing the code, and retesting the code. |

4) List the Quality Attributes. (Apr/May 2015)

 Correctness
 Reliability
 Usability
 Integrity
 Portability
 Maintainability
 Interoperability

5) Define the term Debugging or fault localization. (May/Jun 2013)


Debugging or fault localization is the process of
 Locating the fault or defect
 Repairing the code, and
 Retesting the code.

6) Compare Error, Faults (Defects) and failures.(Nov 2015,2014,May 14,17, Nov 16)

An error is a mistake, misconception, or misunderstanding on the part of a software developer.
A fault is introduced into the software as the result of an error. It is an anomaly in the software that may cause it to behave incorrectly, and not according to its specification.
A failure is the inability of a software system or component to perform its required functions within specified performance requirements.

7) Define Test Cases. (May/Jun 2016,17,Nov/Dec 2015)

A test case, in a practical sense, is a test-related item which contains the following
information.
 A set of test inputs.
 Execution conditions.
 Expected outputs.

8) Define process in the context of software quality. (Nov/Dec 2009, May/Jun 2016, Nov/Dec 2014, Nov/Dec 2013, May/Jun 2012)

Process, in the software engineering domain, is a set of methods, practices, standards, documents, activities, policies, and procedures that software engineers use to develop and maintain a software system and its associated artifacts, such as project and test plans, design documents, code, and manuals.

9) List the sources of defects (origins of defects), or list the classification of defects. (May/June 2009, May 17)
 Education
 Communication
 Oversight
 Transcription
 Process

10) Define Software Quality. (May/Jun 2016)

Quality relates to the degree to which a system, system component, or process meets specified requirements.

11)Define Test Oracle, Test Bed. (Apr/May 2015)

Test Bed: It is a platform for experimentation of large development projects. Test beds


allow for rigorous, transparent, and replicable testing of scientific theories, computational
tools, and new technologies.
Test Oracle: Test Oracle is a document, or a piece of software that allows tester to
determine whether a test has been passed or failed.

12) Define Software Testing. (May/Jun 2014, Nov 16)

Software testing is the process used to reveal defects and to establish that the software has attained a specified degree of quality.

PART- B

1) What is the role of process in software quality? (Nov/Dec 2014, 2013, May/Jun 2014)

 The software engineering process is the set of methods, practices, standards, documents, activities, policies, and procedures that software engineers use to develop and maintain a software system and its associated artifacts, such as project and test plans, design documents, code, and manuals.
 Types of manuals, namely
 user manual
 troubleshoot manual
 administration manual
 Quality factors involved in high quality software products:
 usability
 reliability
 testability
 maintainability
 Different models are used in the software engineering process, such as
 CMM
 SPICE
 TMM
Testing as a Process: The software development process includes several sub-processes, namely:
 Requirement/analysis process
 Product specification process
 Design process
 Implementation process

 The testing process further involves two processes, namely verification and validation.
 The technical aspects of testing relate to the techniques, methods, measurements, and tools used to ensure that the software under test is as defect-free and reliable as possible.
 The testing itself is related to two processes called verification and validation.
Validation: It is the process of evaluating a software system during or at the end of the development cycle in order to determine whether it satisfies the specified requirements.
Verification: It is the process of evaluating a software system to determine whether the product of a given development phase satisfies the conditions imposed at the start of that phase.
Software testing: Testing is generally described as a group of procedures carried out to evaluate some aspect of a piece of software.
Purpose of the testing process: Testing is a dual-purpose process, namely to reveal defects and to evaluate the quality attributes of the software.
[Fig: Example processes embedded in the software development process - the requirements/analysis, product specification, and design processes feed the software development process, which embeds the testing process with its verification and validation sub-processes.]
Debugging: It is the process of locating the fault or defect, repairing the code, and retesting the code.

2) List the different Software Testing Principles and explain. (May/Jun 16, 17, 13, Apr/May 2015, Nov/Dec 2014, 16)

 Testing principles are important to test specialists because they provide the
foundation for developing testing knowledge and acquiring testing skills.
 They also provide guidance for defining testing activities as performed in the practice of a test specialist. A principle can be defined as:
 a general or fundamental law, doctrine, or assumption;
 a rule or code of conduct;
 the laws or facts of nature underlying the working of an artificial device.
The principles as stated below relate only to execution-based testing.
Principle-1: Testing is the process of exercising a software component using a selected set of test cases, with the intent of
 revealing defects, and
 evaluating quality.
 Software engineers have made great progress in developing methods to prevent and eliminate defects. However, defects do occur, and they have a negative impact on software quality. This principle supports testing as an execution-based activity to detect defects.
 The term defect, as used in this and in subsequent principles, represents any deviation in the software that has a negative impact on its functionality, performance, reliability, security, or other specified quality attributes.
Principle-2: When the test objective is to detect defects, then a good test case is one that has a high probability of revealing an as yet undetected defect.
 The goal for the test is to prove/disprove the hypothesis, that is, to determine whether the specific defect is present/absent.
 A tester can justify the expenditure of resources by careful test design so that principle 2 is supported.

Principle-3: Test results should be inspected meticulously.
 Testers need to carefully inspect and interpret test results. Several erroneous and costly scenarios may occur if care is not taken.
Example: A failure may be overlooked, and the test may be granted a pass status when in reality the software has failed the test. Testing may then continue based on erroneous test results. The defect may be revealed at some later stage of testing, but in that case it may be more costly and difficult to locate and repair.
Principle-4: A test case must contain the expected output or result.
 The test case is of no value unless there is an explicit statement of the expected
outputs or results.
Example:
A specific variable value must be observed or a certain panel button that must light
up.
Principle-5: Test cases should be developed for both valid and invalid input
conditions.
 The tester must not assume that the software under test will always be provided
with valid inputs.
 Inputs may be incorrect for several reasons.
Example:
Software users may have misunderstandings, or lack information, about the nature of the inputs. They often make typographical errors even when complete/correct information is available. Devices may also provide invalid inputs due to erroneous conditions and malfunctions.
Principle-6: The probability of the existence of additional defects in a software component is proportional to the number of defects already detected in that component.
Example:

If there are two components A and B, and testers have found 20 defects in A and 3 defects in B, then the probability of the existence of additional defects in A is higher than in B.
Principle-7: Testing should be carried out by a group that is independent of the
development group.
 Testers must realize that
1. developers have a great deal of pride in their work, and
2. on a practical level it may be difficult for them to conceptualize where defects could be found.
Principle-8: Tests must be repeatable and reusable
 This principle calls for experiments in the testing domain to require recording of
the exact condition of the test, any special events that occurred, equipment used,
and a careful accounting of the results.
 This information is invaluable to the developers when the code is returned for debugging, so that they can duplicate the test conditions.
Principle-9: Testing should be planned.
 A test plan should be developed for each level of testing, and the objectives for each level should be described in the associated plan.
 The objectives should be stated as quantitatively as possible, so that progress against the plan's precisely specified objectives can be measured.
Principle-10: Testing activities should be integrated into the software life cycle.
 It is no longer feasible to postpone testing activities until after the code has been
written.
 Test planning activities should be integrated into the software life cycle, starting as early as the requirements analysis phase, and should continue throughout the software life cycle in parallel with development activities.
Principle-11: Testing is a creative and challenging task.
Difficulties and challenges for the tester include:

 A tester needs to have comprehensive knowledge of the software engineering
discipline.
 A tester needs to have knowledge from both experience and education as to how
software is specified, designed and developed.
 A tester needs to be able to manage many details.
 A tester needs to have knowledge of fault types and where faults of a certain type might occur in code constructs.
 A tester needs to reason like a scientist and propose hypotheses that relate to the presence of specific types of defects.
 A tester needs to design and record test procedures for running the tests.
 A tester needs to plan for testing and allocate the proper resources.

3) Explain in detail about the Origins of defects. (May/Jun 2016,2014,2013,17)

Origins of Defects
 Defects have detrimental effects on software users, and software engineers work very hard to produce high-quality software with a low number of defects.
 But even under the best of development circumstances errors are made, resulting in defects being injected into the software during the phases of the software life cycle.

[Fig: Origins of defects - defect sources (lack of education, poor communication, oversight, transcription, immature process) impact software artifacts, producing errors, faults, defects, and failures; the impact from the user's view is poor-quality software and user dissatisfaction.]

 Testers, like doctors, need to have knowledge about possible defects in order to develop defect hypotheses. They use the hypotheses to:
 Design test cases
 Design test procedure
 Assemble test sets
 Select the testing levels appropriate for the tests
 Evaluate the results of the test.
A successful testing experiment will prove that the hypothesis is true, that is, that the hypothesized defect was present. Then the software can be repaired.
Developer/Tester Support for Developing a Defect Repository
There are many benefits in developing a defect repository to store defect information. As software engineers and test specialists we should follow the example of engineers in other disciplines who have realized the usefulness of defect data. A requirement for repository development should be a part of testing and/or debugging policy statements. The defect data is useful for test planning, a TMM level 2 maturity goal.
It helps you to select applicable testing techniques, design (and reuse) the test cases you need, and allocate the amount of resources you will need to devote to detecting and removing these defects. This in turn will allow you to estimate testing schedules and costs. The defect data can support debugging activities as well.
A defect repository can help to support the achievement and continuous implementation of several TMM maturity goals, including controlling and monitoring of test, software quality evaluation and control, test measurement, and test process improvement.

[Figure: Defect repository - the defect repository supports test planning and test case development, controlling and monitoring, defect prevention, quality evaluation and control, test measurement, and test process improvement.]

4) Explain in detail about Defect Classes, Defect Repository and Test Design. (Nov/Dec 15, 16, Apr/May 2015)
 Defects can be classified in many ways. It is important for an organization to adopt a single classification scheme and apply it to all projects. No matter which classification scheme is selected, some defects will fit into more than one class or category. Because of this problem, developers, testers, and SQA staff should try to be as consistent as possible when recording defect data.
 Defect classes are classified into four types, namely:
 requirement/specification defect class
 design defect class
 coding defect class
 testing defect class
Requirement/Specification Defect Class:

 Functional description defects - the overall description of what the product does, and how it should behave, is incorrect, ambiguous, and/or incomplete.
 Feature defects - defects in the distinguishing characteristics of a software component or system.
 Feature interaction defects - these are due to an incorrect description of how the features should interact.
 Interface description defects - these occur in the description of how the target software is to interface with external software, hardware, and users.

[Fig: Defect classes and the defect repository - requirement/specification defect classes (functional description; features and feature interaction; interface description), design defect classes (algorithm and processing; control, logic, and sequence; data; module interface description; external interface description), coding defect classes (algorithm and processing; control, logic, and sequence; data flow; module interface; code documentation; external hardware/software interface), and test defect classes (test harness; test design; test procedure) all feed the defect repository, which records the defect class, severity, and occurrences.]


Design defects:
 Algorithm and processing defects - these occur when the processing steps in the algorithm, as described by the pseudo code, are incorrect.
 Control, logic, and sequence defects - control defects occur when the logic flow in the pseudo code is not correct.
 Data defects - these are associated with incorrect design of data structures.
 Module interface description defects - these include incorrect, missing, and/or inconsistent descriptions of parameter types.
 Functional description defects - these include incorrect, missing, and/or unclear descriptions of design elements. These defects are best detected during a design review.
 External interface description defects - these are derived from incorrect design descriptions for interfaces with COTS components, external software systems, databases, and hardware devices.
Coding Defects
 Algorithmic and processing defects - adding levels of programming detail to the design, code-related algorithmic and processing defects now include unchecked overflow and underflow conditions, comparing inappropriate data types, converting one data type to another, incorrect ordering of arithmetic operators, misuse or omission of parentheses, precision loss, and incorrect use of signs.
 Control, logic, and sequence defects - on the coding level these would include incorrect expression of case statements and incorrect iteration of loops.
 Typographical defects - these are syntax errors.
 Initialization defects - these occur when initialization statements are omitted or are incorrect. This may occur because of misunderstandings or lack of communication between programmers and/or programmers and designers, or carelessness in the programming environment.
 Data flow defects - these occur when the code does not respect the reasonable operational sequences that data should flow through (for example, a variable being used before it is initialized).
 Data defects - these are indicated by incorrect implementation of data structures.
 Module interface defects - as in the case of module design elements, interface defects in the code may be due to using incorrect or inconsistent parameter types or an incorrect number of parameters.
 Code documentation defects - when the documentation does not reflect what the program actually does, or is incomplete or ambiguous, this is called a code documentation defect.
 External hardware/software interface defects - these defects arise from problems related to system calls, links to databases, input/output sequences, memory usage, interrupts and exception handling, data exchange with hardware, protocols, formats, interfaces with build files, and timing sequences.
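Two of these coding defect classes can be shown concretely; the snippets below are hypothetical illustrations (not from the text), written so that each deliberately contains the defect its comment names:

def average(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)   # algorithmic/processing defect: unchecked
                                 # division by zero when values is empty

def report(flag):
    if flag:
        msg = "ok"               # initialization defect: msg is assigned on
                                 # only one path...
    return msg                   # ...so this read raises UnboundLocalError
                                 # when flag is False (a data flow defect)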
Testing Defects
Defects are not confined to code and its related artifacts. Test plans, test cases, test harnesses, and test procedures can also contain defects. Defects in test plans are best detected using review techniques.
 Test case design and test procedure defects - these would encompass incorrect, incomplete, missing, or inappropriate test cases and test procedures.
 Test harness defects - a test harness, also known as an automated test framework, is mostly used by developers. A test harness provides stubs and drivers, which are small programs that interact with the software under test and replicate the missing items.

5) Describe the various factors to measure the software quality.

Software Quality
 Quality relates to the degree to which a system, system component, or process
meets specified requirements.
 Quality relates to the degree to which a system, system component, or process
meets customer or user needs, or expectations. In order to determine whether a system, system component, or process is of high quality we use what are called
quality attributes. These are characteristics that reflect quality. For software
artifacts we can measure the degree to which they possess a given quality attribute
with quality metrics.
Quality metrics
 A metric is a quantitative measure of the degree to which a system, system
component, or process possesses a given attribute.
 There are product and process metrics. A very commonly used example of a
software product metric is software size, usually measured in lines of code (LOC).
Two examples of commonly used process metrics are costs and time required for a
given task. Quality metrics are a special kind of metric.
A quality metric is a quantitative measurement of the degree to which an item
possesses a given quality attribute.
Many different quality attributes have been described for software.
 Correctness is the degree to which the system performs its intended function.
 Reliability is the degree to which the software is expected to perform its required
functions under stated conditions for a stated period of time.
 Usability relates to the degree of effort needed to learn, operate, prepare input, and
interpret output of the software
 Integrity relates to the system’s ability to withstand both intentional and
accidental attacks
 Portability relates to the ability of the software to be transferred from one
environment to another
 Maintainability is the effort needed to make changes in the software
 Interoperability is the effort needed to link or couple one system to another.
Another quality attribute that should be mentioned here is testability. This attribute
is of more interest to developers/testers than to clients. It can be expressed in the
following two ways:
 The amount of effort needed to test the software to ensure it performs as specified;
 The ability of the software to reveal defects under testing conditions (some
software is designed in such a way that defects are well hidden during ordinary
testing conditions).
Testers must work with analysts, designers, and developers throughout the software life cycle to ensure that testability issues are addressed.
Software Quality Assurance Group
The software quality assurance (SQA) group in an organization has ties to quality
issues. The group serves as the customers’ representative and advocate. Their
responsibility is to look after the customers’ interests.
The software quality assurance (SQA) group is a team of people with the
necessary training and skills to ensure that all necessary actions are taken during the
development process so that the resulting software conforms to established technical
requirements.
Reviews
In contrast to dynamic execution-based testing techniques that can be used to
detect defects and evaluate software quality, reviews are a type of static testing technique
that can be used to evaluate the quality of a software artifact such as a requirements
document, a test plan, a design document, a code component. Reviews are also a tool that
can be applied to revealing defects in these types of documents.
Definition: A review is a group meeting whose purpose is to evaluate a software artifact
or a set of software artifacts.

UNIT II

TEST CASE DESIGN

Test case Design Strategies – Using Black Box Approach to Test Case Design – Random
Testing –Requirements based testing – Boundary Value Analysis – Equivalence Class
Partitioning – State based testing – Cause-effect graphing – Compatibility testing – user
documentation testing – domain testing – Using White Box Approach to Test design –
Test Adequacy Criteria – static testing vs. structural testing – code functional testing –
Coverage and Control Flow Graphs – Covering Code Logic – Paths – code complexity
testing – Evaluating Test Adequacy Criteria.
PART A
1. What is the need for code functional testing to design test cases?

Functional testing is a testing technique that is used to test the features and functionality of the system or software; it should cover all the scenarios, including failure paths and boundary cases.
2. Compare black box and white box testing.(May/Jun 2016,2013,17,Nov/Dec 2015)

| Black box testing | White box testing |
|---|---|
| In black box testing, the tester has no knowledge of the software's inner structure (i.e., how it works); the tester only has knowledge of what it does (focus only on inputs and outputs). | The white box approach focuses on the inner structure of the software to be tested. |
| The black box approach is usually applied to large pieces of software. | The white box approach is usually applied to small pieces of software. |
| Black box testing is sometimes called functional or specification testing. | White box testing is sometimes called clear or glass box testing. |

3. Draw the tester’s view of black box and white box testing.

| Test Strategy | Tester’s View |
|---|---|
| Black box | Inputs → Outputs (no knowledge of the inner structure; focus only on inputs and outputs) |
| White box | Focuses on the inner structure of the software |

4. List the Knowledge Sources & Methods of black box and white box testing.

| Test Strategy | Knowledge Sources | Methods |
|---|---|---|
| Black box | 1. Requirements document 2. Specifications 3. Domain knowledge 4. Defect analysis data | 1. Equivalence class partitioning (ECP) 2. Boundary value analysis (BVA) 3. State transition testing (STT) 4. Cause-and-effect graphing 5. Error guessing |
| White box | 1. High-level design 2. Detailed design 3. Control flow graphs 4. Cyclomatic complexity | 1. Statement testing 2. Branch testing 3. Path testing 4. Data flow testing 5. Mutation testing 6. Loop testing |

5. Define Control Flow Graph. (Nov/Dec 2013)


A control flow graph (CFG) is a representation, using graph notation, of all paths that might be traversed through a program during its execution.

6. How is static testing different from structural testing? (Nov/Dec 2014)

In static testing, the code is not executed; rather, the code, requirement documents, and design documents are manually checked for errors. Hence the name "static".
Structural testing, also known as glass box testing or white box testing is an
approach where the tests are derived from the knowledge of the software's structure or
internal implementation.

7. What are the factors affecting less than 100% degree of coverage?

 The nature of the unit


 Some statements/branches may not be reachable.
 The unit may be simple, and not mission- or safety-critical, and so complete coverage is thought to be unnecessary.
 The lack of resources
 The time set aside for testing is not adequate to achieve complete coverage
for all of the units.
 There is a lack of tools to support complete coverage
 Other project related issues such as timing, scheduling and marketing constraints.

8. List the various iterations of Loop testing.


 Zero iteration of the loop
 One iteration of the loop
 Two iterations of the loop

 K iterations of the loop, where k < n
 n-1 iterations of the loop
 n iterations of the loop
 n+1 iterations of the loop
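As a small sketch of how these iteration counts translate into test inputs (the helper below is hypothetical, assuming a loop whose bound n is known):

def loop_test_iteration_counts(n):
    # Candidate iteration counts for a loop bounded by n:
    # zero, one, two, a typical k < n, then n-1, n, and n+1 iterations.
    k = n // 2  # any interior value with 2 < k < n works
    return [0, 1, 2, k, n - 1, n, n + 1]

# Example: for a loop with bound n = 10 this yields [0, 1, 2, 5, 9, 10, 11];
# each count is then used to build a concrete input that drives the loop
# that many times.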

9. What are the basic primes for all structured program. (May/Jun 2013)

 Sequential ( e.g., Assignment statements)


 Condition (e.g., if/then/else statements)
 Iteration (e.g., while, for loops)
[Fig: The graphical representation of these three primes - sequence (one statement following another), condition (a true/false branch that rejoins), and iteration (a loop with a true/false exit).]

10. Write the formula for cyclomatic complexity. (Nov/Dec 14,16)

The complexity value is usually calculated from the control flow graph G by the formula V(G) = E - N + 2, where E is the number of edges in the control flow graph and N is the number of nodes.
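As a quick illustrative sketch (not from the text), the formula can be applied mechanically to an edge list; the helper function below is hypothetical:

def cyclomatic_complexity(edges):
    # V(G) = E - N + 2, where E is the number of edges and N is the
    # number of distinct nodes appearing in the edge list.
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Example: a graph with one if/else decision:
# 1 -> 2, 1 -> 3, 2 -> 4, 3 -> 4 gives V(G) = 4 - 4 + 2 = 2.
print(cyclomatic_complexity([(1, 2), (1, 3), (2, 4), (3, 4)]))  # prints 2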
11. Define: Test Adequacy Criteria. (Apr/May 2015)

A software test adequacy criterion is a predicate that defines what properties of a program must be exercised to constitute a thorough test.

12. What is positive and negative testing? (May/Jun 2014)

Positive testing is that testing which attempts to show that a given module of an
application does what it is supposed to do.
Negative testing is that testing which attempts to show that the module does not
do anything that it is not supposed to do.

13. Write the samples of cause-and-effect notations. (May/June 15)

The standard notations express logical relationships between causes and effects: identity (an effect occurs if its cause occurs), NOT (an effect occurs if its cause does not occur), AND (^: an effect occurs only if all of its causes occur), and OR (an effect occurs if any of its causes occur). See the cause-and-effect graph notations in Part B, Question 3.
PART- B

1. Explain in detail about the Test case design strategies.

The smart tester must understand the functionality, the input/output domain, and the environment of use of the code being tested. For certain types of testing, the tester must also understand in detail how the code is constructed.
Roles of a smart tester:
 Reveal defects.
 Evaluate software performance, usability, and reliability.
 Understand the functionality, input/output domain, and environment of use of the code being tested.
Test Case Design Strategies and Techniques

| Test Strategy | Tester's View | Knowledge Sources | Techniques / Methods |
|---|---|---|---|
| Black-box testing (not code-based; sometimes called functional testing) | Inputs, Outputs | Requirements document, specifications, user manual, models, domain knowledge, defect analysis data, intuition, experience | Equivalence class partitioning, boundary value analysis, cause-effect graphing, error guessing, random testing, state-transition testing, scenario-based testing |
| White-box testing (also called code-based or structural testing) | Program code | Control flow graphs, data flow graphs, cyclomatic complexity, high-level design, detailed design | Control flow testing/coverage: statement coverage, branch (decision) coverage, condition coverage, branch-and-condition coverage, modified condition/decision coverage, multiple condition coverage, independent path coverage, path coverage; data flow testing/coverage; class testing/coverage; mutation testing |

Figure: Two basic testing strategies
Using the Black Box Approach to Test Case Design
 A black box test strategy considers only inputs and outputs as a basis for designing test cases.
 Exhaustively testing every possible input is prohibitively expensive even if the target software is a simple software unit. The goal for the smart tester is to effectively use the resources available by developing a set of test cases that gives the maximum yield of defects for the time and effort spent.
 To help achieve this goal using the black box approach we can select from several methods:
 Random Testing
 Equivalence Class Partitioning
 Boundary Value Analysis
 Other black box test design approaches
 Cause-and-Effect Graphing
 State Transition Testing
 Error Guessing

2. Explain the different types of black box testing. (May/Jun 17, 14, Nov/Dec 14, 16)

Random Testing
 Each software module or system has an input domain from which test input
data is selected.
 If a tester randomly selects inputs from the domain, this is called random
testing.
 Example: if the valid input domain for a module is all positive integers between 1 and 100, a tester using this approach would randomly, or unsystematically, select values from within that domain; for example, the values 55, 24, and 3 might be chosen.
Issues in Random Testing:
 Are the three values adequate to show that the module meets its specification
when the tests are run?
 Should additional or fewer values be used to make the most effective use of
resources?
 Are there any input values, other than those selected, more likely to reveal
defects? For example, should positive integers at the beginning or end of the
domain be specifically selected as inputs?
 Should any values outside the valid domain be used as test inputs? For
example, should test data include floating point values, negative values, or
integer values greater than 100?
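As a minimal sketch of random testing over the 1-100 domain from the example above (module_under_test is a hypothetical stand-in for the unit being tested):

import random

def random_test(module_under_test, trials=3, lo=1, hi=100, seed=42):
    # Unsystematically sample test inputs from the valid domain and
    # record each (input, output) pair for later inspection.
    rng = random.Random(seed)      # seeded so the test run is repeatable
    results = []
    for _ in range(trials):
        x = rng.randint(lo, hi)    # random value from the input domain
        results.append((x, module_under_test(x)))
    return results

Seeding the generator also serves Principle-8 (tests must be repeatable): the same inputs can be regenerated when a failure needs to be reproduced.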

Equivalence Class Partitioning
If a tester is viewing the software-under-test as a black box with well-defined inputs and
outputs, a good approach to selecting test inputs is to use a method called equivalence
class partitioning.
 Equivalence class partitioning results in a partitioning of the input domain of the software-under-test.
 It eliminates the need for exhaustive testing, which is not feasible.
 It guides a tester in selecting a subset of test inputs with a high probability of
detecting a defect.
 It allows a tester to cover a larger domain of inputs/outputs with a smaller
subset selected from an equivalence class. Most equivalence class partitioning
takes place for the input domain.
 The tester must consider both valid and invalid equivalence classes. Invalid
classes represent erroneous or unexpected inputs.
 Equivalence classes may also be selected for output conditions.
 The derivation of input or outputs equivalence classes is a heuristic process.
List of conditions
 ‘‘If an input condition for the software-under-test is specified as a range of values,
select one valid equivalence class that covers the allowed range and two invalid
equivalence classes, one outside each end of the range.’’
For example, suppose the specification for a module says that an input, the length
of a widget in millimeters, lies in the range 1–499; then select one valid
equivalence class that includes all values from 1 to 499. Select a second
equivalence class that consists of all values less than 1, and a third equivalence
class that consists of all values greater than 499.
 ‘‘If an input condition for the software-under-test is specified as a number of values, then select one valid equivalence class that includes the allowed number of values and two invalid equivalence classes that are outside each end of the allowed number.’’
 ‘‘If an input condition for the software-under-test is specified as a set of valid
input values, then select one valid equivalence class that contains all the members
of the set and one invalid equivalence class for any value outside the set.’’
 ‘‘If an input condition for the software-under-test is specified as a “must be”
condition, select one valid equivalence class to represent the “must be” condition
and one invalid class that does not include the “must be” condition.’’
 ‘‘If the input specification or any other information leads to the belief that an
element in an equivalence class is not handled in an identical way by the software-
under-test, then the class should be further partitioned into smaller equivalence
classes.’’
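A brief sketch of the first rule applied to the 1-499 widget-length example above (the class labels and representative values are illustrative assumptions):

# Equivalence classes for "widget length in millimeters, range 1..499":
ECP_CLASSES = {
    "valid_in_range":      lambda x: 1 <= x <= 499,  # one valid class
    "invalid_below_range": lambda x: x < 1,          # first invalid class
    "invalid_above_range": lambda x: x > 499,        # second invalid class
}

# One representative test input per class is usually selected, e.g.:
representatives = {"valid_in_range": 250,
                   "invalid_below_range": 0,
                   "invalid_above_range": 500}

Selecting one representative per class is what makes the partitioning pay off: any member of a class is assumed to be handled the same way as the others.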
Boundary Value Analysis

[Figure: Boundaries of an equivalence partition]


 If an input condition for the software-under-test is specified as a range of values,
develop valid test cases for the ends of the range, and invalid test cases for
possibilities just above and below the ends of the range.
 For example if a specification states that an input value for a module must lie in
the range between -1.0 and +1.0, valid tests that include values for ends of the
range, as well as invalid test cases for values just above and below the ends,
should be included. This would result in input values of -1.0, -1.1, and 1.0, 1.1.
 If an input condition for the software-under-test is specified as a number of values, develop valid test cases for the minimum and maximum numbers as well as invalid test cases that include one lesser and one greater than the maximum and minimum.
 For example, for the real-estate module mentioned previously that specified a
house can have one to four owners, tests that include 0,1 owners and 4,5 owners
would be developed.
 If the input or output of the software-under-test is an ordered set, such as a table or
a linear list, develop tests that focus on the first and last elements of the set.
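A minimal sketch collecting the boundary values from the two examples above (values taken directly from the text):

# Boundary value analysis for an input range of -1.0 .. +1.0:
valid_boundary_inputs   = [-1.0, 1.0]   # exact ends of the range
invalid_boundary_inputs = [-1.1, 1.1]   # just below and just above the ends

# For the one-to-four-owners example: 1 and 4 are valid boundaries,
# 0 and 5 are the invalid neighbours.
owner_test_values = [0, 1, 4, 5]

These lists feed directly into test cases whose expected outputs are "accepted" for the valid boundaries and "rejected" for the invalid ones.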

3. Explain the other black box test design approaches in detail. (May/Jun 2016)

The steps in developing test cases with a cause-and-effect graph are as follows:
 The tester must decompose the specification of a complex software component
into lower-level units.
 For each specification unit, the tester needs to identify causes and their effects. A
cause is a distinct input condition or an equivalence class of input conditions.
 An effect is an output condition or a system transformation. Putting together a
table of causes and effects helps the tester to record the necessary details.
 Nodes in the graph are causes and effects.
 Causes are placed on the left side of the graph and effects on the right. Logical
relationships are expressed using standard logical operators such as AND, OR, and
NOT, and are associated with arcs.
 The graph may be annotated with constraints that describe combinations of causes
and/or effects that are not possible due to environmental or syntactic constraints.
1. The graph is then converted to a decision table.
2. The columns in the decision table are transformed into test cases.
Example
The following example illustrates the application of this technique. Suppose we have a specification for a module that allows a user to perform a search for a character in an existing string. The specification states that the user must input the length of the string and the character to search for.

[Fig: Samples of cause-and-effect graph notations - an identity arc means effect 2 occurs if cause 1 occurs; an AND node (^) means effect 3 occurs if both causes 1 and 2 are present; a NOT arc means effect 2 occurs if cause 1 does not occur.]
 If the string length is out-of-range an error message will appear. If the character
appears in the string, its position will be reported. If the character is not in the
string the message “not found” will be output.

The input conditions or causes are as follows:


C1: Positive integer from 1 to 80
C2: Character to search for is in string

The output conditions, or effects are:


E1: Integer out of range
E2: Position of character in string
E3: Character not found
The rules or relationships can be described as follows:

If C1 and C2, then E2.
If C1 and not C2, then E3.
If not C1, then E1.
 Based on the causes, effects, and their relationships, a cause-and-effect graph to represent
this information is shown in the following Figure.

[Figure: Cause-and-effect graph for the character search example - NOT C1 produces E1; C1 AND C2 produce E2; C1 AND NOT C2 produce E3.]

 The decision table reflects the rules and the graph, and shows the effects for all possible combinations of causes. Columns list each combination of causes, and each column represents a test case. Given n causes this could lead to a decision table with 2^n entries, thus indicating a possible need for many test cases.

Table: Decision table for the character search example

| Test | Input: Length | Input: Character to search for | Output |
|---|---|---|---|
| T1 | 5 | C | 3 |
| T2 | 5 | W | Not found |
| T3 | 90 | – | Integer out of range |

| | T1 | T2 | T3 |
|---|---|---|---|
| C1 | 1 | 1 | 0 |
| C2 | 1 | 0 | – |
| E1 | 0 | 0 | 1 |
| E2 | 1 | 0 | 0 |
| E3 | 0 | 1 | 0 |
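A compact sketch tying the specification to the three decision-table tests (the function name, signature, and exact message strings are assumptions, not from the text):

def search(length, string, char):
    # C1: length must be a positive integer from 1 to 80.
    if not (1 <= length <= 80):
        return "integer out of range"           # E1
    # C2: is the character in the string?
    pos = string.find(char)
    if pos >= 0:
        return pos + 1                          # E2: 1-based position
    return "not found"                          # E3

# The three decision-table columns as concrete test cases:
assert search(5, "ABCDE", "C") == 3                    # T1: C1 and C2 -> E2
assert search(5, "ABCDE", "W") == "not found"          # T2: C1, not C2 -> E3
assert search(90, "", "X") == "integer out of range"   # T3: not C1 -> E1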
4. Explain in detail about the types of white box testing & additional white box test design approaches. (Nov/Dec 16, May 17)

Decision (Or) Branch Coverage


 Execute every branch of a program: each possible outcome of each decision occurs at least once.
 Example:
o simple decision: IF b THEN s1 ELSE s2
o multiple decision: CASE x OF 1: ... 2: ... 3: ... END (each alternative must be taken at least once)
 Stronger than statement coverage
o IF THEN without ELSE – if the condition is always true in the tests executed, all the statements are executed, but branch coverage is not achieved
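A tiny sketch of the IF-THEN-without-ELSE point above (clamp is a hypothetical example function, not from the text):

def clamp(x, limit):
    if x > limit:        # decision with no ELSE branch
        x = limit
    return x

# clamp(15, 10) alone executes every statement (statement coverage),
# but only the True outcome of the decision. Branch coverage also
# needs a test where the condition is False:
assert clamp(15, 10) == 10   # True branch
assert clamp(5, 10) == 5     # False branch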

Data Flow Testing
 Add data flow information to the control flow graph
o statements that write variables (a value is assigned or changed)
o statements that read variables
 Generate test cases that exercise all write-read pairs of statements for each variable
 Several variants of this technique
Example
1 PROGRAM sum ( maxint, N : INT )
2 INT result := 0 ; i := 0 ;
3 IF N < 0
4 THEN N := - N ;
5 WHILE ( i < N ) AND ( result <= maxint )
6 DO i := i + 1 ;
7 result := result + i ;
8 OD;
9 IF result <= maxint
10 THEN OUTPUT ( result )
11 ELSE OUTPUT ( “too large” )
12 END.

Write-read pairs for the variable result: result is written at lines 2 and 7, and read at lines 5, 7, 9, and 10, giving the pairs (2,5), (2,7), (2,9), (2,10), (7,5), (7,7), (7,9), and (7,10).
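As a hedged sketch, two tests that together exercise write-read pairs ending at the read on line 10 (a Python transcription of PROGRAM sum; the function name is an assumption):

def sum_program(maxint, N):
    result, i = 0, 0                   # line 2: write result
    if N < 0:
        N = -N
    while i < N and result <= maxint:  # line 5: read result
        i += 1
        result = result + i            # line 7: write (and read) result
    if result <= maxint:               # line 9: read result
        return result                  # line 10: read result
    return "too large"

assert sum_program(100, 0) == 0   # pair (2,10): the loop body never writes result
assert sum_program(100, 3) == 6   # pair (7,10): line 7 writes the value read at line 10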

Control Flow Testing/Coverage


Logic Elements Considered For Coverage And Control Flow Graph In White Box
Test Design
 Program Statements
 Decision/Branch
 Conditions
 Combination of Decisions & Conditions
 Paths
Control flow analysis
1 PROGRAM sum ( maxint, N : INT )
2 INT result := 0 ; i := 0 ;
3 IF N < 0
4 THEN N := - N ;
5 WHILE ( i < N ) AND ( result <= maxint )
6 DO i := i + 1 ;
7 result := result + i ;
8 OD;
9 IF result <= maxint
10 THEN OUTPUT ( result )
11 ELSE OUTPUT ( “too large” )
12 END.

[Figure: Control flow graph for PROGRAM sum - Start; result := 0; i := 0; decision N < 0 (yes: N := -N); loop decision (i < N) and (result <= maxint) (yes: i := i + 1; result := result + i, then back to the loop decision); decision result <= maxint (yes: output(result); no: output(“too large”)); Exit.]

Cyclomatic Complexity (Or) Independent Path (Or) Basis Path Coverage


 Obtain a maximal set of linearly independent paths (also called a basis of
independent paths)
o If each path is represented as a vector with the number of times that each
edge of the control flow graph is traversed, the paths are linearly
independent if it is not possible to express one of them as a linear
combination of the others
 Generate a test case for each independent path
 The number of linearly independent paths is given by the McCabe's cyclomatic
complexity of the program
o Number of edges - Number of nodes + 2 in the control flow graph
o Measures the structural complexity of the program
 Problem: some paths may be impossible to execute
 Also called structured testing (see McCabe for details)
 McCabe's argument: this approach produces a number of test cases that is
proportional to the complexity of the program (as measured by the cyclomatic
complexity), which, in turn, is related to the number of defects expected.
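As a worked sketch on the PROGRAM sum graph above (the node and edge counts are read off the reconstructed figure, so treat them as an illustration): counting Start and Exit, the graph has 10 nodes and 12 edges, so V(G) = 12 - 10 + 2 = 4. A basis set of four linearly independent paths can therefore be chosen, for example one through each outcome of the N < 0 decision, one that executes the loop body, and one for each outcome of the final result <= maxint decision.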

5. How to evaluate test adequacy criteria? Explain. (May/June 2013, 17)

 The goal for white box testing is to ensure that the internal components of a
program are working properly. A common focus is on structural elements such as
statements and branches. The tester develops test cases that exercise these
structural elements to determine if defects exist in the program structure.
 The application scope of adequacy criteria also includes:
 helping testers to select properties of a program to focus on during test;
 helping testers to select a test data set for a program based on the selected
properties;
 supporting testers with the development of quantitative objectives for
testing;
 indicating to testers whether or not testing can be stopped for that program.
 A test data set is statement, or branch, adequate if a test set T for program P
causes all the statements, or branches, to be executed respectively.
 “A selection criteria can be used for selecting the test cases or for checking
whether or not a selected test suite is adequate, that is, to decide whether or not the
testing can be stopped”
 Adequacy criteria - Criteria to decide if a given test suite is adequate, i.e., to give
us “enough” confidence that “most” of the defects are revealed
 In practice, reduced to coverage criteria
 Coverage criteria
 Requirements/specification coverage
 At least one test case for each requirement
 Cover all statements in a formal specification
 Model coverage
 State-transition coverage, Use-case and scenario coverage
 Code coverage
 Statement coverage, Data flow coverage
 Fault coverage

Testers can use the axioms to
 recognize both strong and weak adequacy criteria; a tester may decide to use a
weak criterion, but should be aware of its weakness with respect to the properties
described by the axioms; focus attention on the properties that an effective test
data adequacy criterion should exhibit;
 select an appropriate criterion for the item under test;
 Stimulate thought for the development of new criteria; the axioms are the
framework with which to evaluate these new criteria.
The axioms are based on the following set of assumptions:
 programs are written in a structured programming language;
 programs are SESE (single entry/single exit);
 All input statements appear at the beginning of the program;
 All output statements appear at the end of the program.
The axioms/properties described by Weyuker are the following:
 Applicability Property
 Non-exhaustive Applicability Property
 Monotonicity Property
 Inadequate Empty Set Property
 Antiextensionality Property
 General Multiple Change Property
 Antidecomposition Property
 Anticomposition Property
 Renaming Property
 Complexity Property

6. Compare functional and structural testing with their advantages and disadvantages. (Nov/Dec 2015, Nov/Dec 2014)

 Both structural and functional techniques are used to ensure adequate testing.
 Structural analysis tests for errors introduced during the coding of the program.
 Functional analysis tests for errors introduced while implementing the requirements and design specifications.
 Functional testing is concerned with the results, but not with the processing that produces them.
 Structural testing is concerned with both the results and the process.
 Structural testing is used in all the phases where design, requirements, and algorithms are discussed.
 The main objective of structural testing is to ensure that the functionality is working and that the product is technically good enough to implement in the real environment.
 Functional testing is sometimes called black box testing; no knowledge of the coding of the program is needed.
 Structural testing is sometimes called white box testing because knowledge of the code is essential; the tester must understand code written by other developers.
The various types of structural testing are:
 Stress Testing
 Execution Testing
 Operations Testing
 Recovery Testing
 Compliance Testing
 Security Testing
Static testing is a software testing method that involves examination of
the program's code and its associated documentation but does not require the program be
executed. Dynamic testing, the other main category of software testing methods, involves interaction with the program while it runs. The two methods are frequently used together
to try to ensure the functionality of a program.
 Static testing may be conducted manually or through the use of
various software testing tools. Specific types of static software
testing include code analysis, inspection, code reviews and
walkthroughs.

7. Explain the following testing concepts: (Nov/Dec 2015)

1) Dynamic versus static testing


| Static Testing | Dynamic Testing |
|---|---|
| 1. Static testing is white box testing done at an early stage of the development life cycle. It is more cost-effective than dynamic testing. | 1. Dynamic testing, on the other hand, is done at a later stage of the development life cycle. |
| 2. Static testing achieves more statement coverage than dynamic testing, in a shorter time. | 2. Dynamic testing achieves less statement coverage because it covers a limited area of the code. |
| 3. It is done before code deployment. | 3. It is done after code deployment. |
| 4. It is performed in the verification stage. | 4. It is done in the validation stage. |
| 5. This type of testing is done without executing the code. | 5. This type of testing is done by executing the code. |
| 6. Static testing gives an assessment of the code as well as the documentation. | 6. Dynamic testing reveals the bottlenecks of the software system. |
| 7. In static testing a checklist is prepared for the testing process. | 7. In dynamic testing the test cases are executed. |
| 8. Static testing methods include walkthroughs and code reviews. | 8. Dynamic testing involves functional and non-functional testing. |
2) Manual versus Automatic Testing
Executing the test cases manually without any tool support is known as manual testing. Taking tool support and executing the test cases by using an automation tool is known as automation testing.
Following table shows the difference between manual testing and
automation testing.
| Manual Testing | Automation Testing |
|---|---|
| 1. Time-consuming and tedious: since test cases are executed by human resources, it is very slow and tedious. | 1. Fast: automation runs test cases significantly faster than human resources. |
| 2. Huge investment in human resources: as test cases need to be executed manually, more testers are required. | 2. Less investment in human resources: test cases are executed using an automation tool, so fewer testers are required. |
| 3. Less reliable: manual testing is less reliable as tests may not be performed with precision each time, because of human errors. | 3. More reliable: automation tests perform precisely the same operations each time they are run. |
| 4. Non-programmable: no programming can be done to write sophisticated tests which fetch hidden information. | 4. Programmable: testers can program sophisticated tests to bring out hidden information. |

UNIT – III

LEVELS OF TESTING

The need for Levels of Testing – Unit Test – Unit Test Planning – Designing the Unit
Tests – The Test Harness – Running the Unit tests and Recording results – Integration
tests – Designing Integration Tests – Integration Test Planning – Scenario testing –
Defect bash elimination System Testing – Acceptance testing – Performance testing –
Regression Testing –Internationalization testing – Ad-hoc testing – Alpha, Beta Tests –
Testing OO systems – Usability and Accessibility testing – Configuration testing –
Compatibility testing – Testing the documentation –Website testing.

PART- A

1. Define Unit Test and characterize the unit test. (May/Jun 2012)
In a unit test a single component is tested. A unit is the smallest possible testable software component. It can be characterized in several ways; a unit in a typical procedure-oriented software system:
 performs a single cohesive function;
 can be compiled separately;
 contains code that can fit on a single page or a screen.

2. Define Alpha and Beta Test.(May/Jun 2016,May/Jun 2014, Nov / Dec 16)
 In an alpha test, users exercise the software at the developer's site and note the problems they find.
 In a beta test, users exercise the software under real-world conditions at their own site and report defects to the developing organization.

3. What are the approaches are used to develop the software?
There are two major approaches to software development
 Bottom-Up
 Top Down

4. List the issues of class testing.


Issue 1: Adequately testing classes
Issue 2: Observation of object states and state changes
Issue 3: The retesting of classes - I
Issue 4: The retesting of classes - II

5. Define test Harness. (May/Jun 2013,17)


The auxiliary code developed to support the testing of units and components is called a test harness. The harness consists of drivers that call the target code and stubs that represent the modules it calls.
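A minimal sketch of a harness (the unit under test process_order and the stubbed module fetch_price are hypothetical names):

def fetch_price_stub(item_id):
    # Stub: stands in for the real pricing module the unit calls,
    # returning a fixed, predictable value.
    return 10.0

def process_order(item_id, qty, price_lookup):
    # Unit under test: computes total cost via an injected price-lookup module.
    return qty * price_lookup(item_id)

def driver():
    # Driver: calls the target code with test inputs and checks the results.
    assert process_order("A1", 3, fetch_price_stub) == 30.0
    print("unit test passed")

driver()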

6. Define Test incident report.


The tester must determine from the test whether the unit has passed or failed the
test. If the test is failed, the nature of the problem should be recorded in what is
sometimes called a test incident report.

7. Write the activities followed in internationalization testing. (Apr/May 2015)


 Enable the code
 Enable the testing
 Locale Testing
 I18n testing
 Fake language testing
 Language Testing

 Localization Testing
 Release

8. Write the different Levels of Software Testing. (Nov/Dec 2015,May/Jun 2013)


 Unit testing
 Integration testing
 System testing
 Acceptance testing

9. List out the various types of system testing. (Nov/Dec 2014,16)


 Functional test
 Performance test
 Stress test
 Configuration test
 Security test
 Recovery test

10. Define regression testing. (May/Jun 2016)


Regression testing is the process of testing changes to computer programs to make
sure that the older programming still works with the new changes. Regression testing is a
normal part of the program development process and, in larger companies, is done by
code testing specialists
11. Define Ad-hoc testing. (Nov/ Dec 2015,May/Jun 2014)
Ad hoc testing is an informal and improvisational approach to assessing the
viability of a product. An ad hoc test is usually only conducted once unless a defect is
found. Commonly used in software development, ad hoc testing is performed without a
plan of action and any actions taken are not typically documented.

12. Write the purpose of Defect Bash testing. (Apr/May 2015)
It is an ad hoc testing approach in which people performing different roles in an organization test the product together at the same time. The testing by the participants during a defect bash is not based on written test cases; what is to be tested is left to each individual's decision and creativity. This is usually done when the software is close to being ready to release.

13. Mention any two needs for levels of testing.


There are generally four recognized levels of tests: unit testing, integration testing,
component interface testing, and system testing. Tests are frequently grouped by where
they are added in the software development process, or by the level of specificity of the
test. The main levels are unit-, integration-, and system testing that are distinguished by
the test target without implying a specific process model. Other test levels are classified
by the testing objective.

PART – B

1. Explain in detail about the need for levels of testing. (May/Jun 2014, Nov/Dec 2014)
Different Levels of Software Testing
 Unit testing
 Integration testing
 System testing
 Acceptance testing
The Need for Level of Software Testing
 Unit test - Individual component.
 A unit is the smallest possible testable software component.
It can be characterized in several ways. For example, a unit in a typical procedure-
oriented software system:
 performs a single cohesive function;
 can be compiled separately;
 is a task in a work breakdown structure (from the manager’s point of view);
 contains code that can fit on a single page or screen.
 Integration test - component groups.
 Integration test for procedural code has two major goals
 To detect defects that occur on the interfaces of units.
 To assemble the individual units into working subsystems and finally a
complete system that is ready for system test.
 System test - system as a whole.
 Types of system test are
 Functional test
 Performance test

 Stress test
 Configuration test
 Security test
 Recovery test
 Acceptance test - system as a whole with customer requirements.
 For tailor-made software (customized software):
 acceptance tests – performed by users/customers
 much in common with system test
 For packaged software (market-made software):
 alpha testing – on the developers site
 beta testing – on a user site

2. Explain in detail about unit testing. (May/Jun 2016, 2013)


 A unit is the smallest possible testable software component.
 It can be characterized in several ways. For example, a unit in a typical procedure-
oriented software system:
 Performs a single cohesive function;
 Can be compiled separately;
 Is a task in a work breakdown structure (from the manager’s point of view);
 Contains code that can fit on a single page or screen.
Some components suitable for unit test:
 Procedures and functions
 Classes/objects and methods
 Procedure-sized reusable components (small-sized COTS components or components from an in-house reuse library)
Fig. Some components suitable for unit test
The principal goal for unit testing
 The principal goal for unit testing is to ensure that each individual software unit is
functioning according to its specification. Good testing practice calls for unit tests
that are planned and public.
 Planning includes designing tests to reveal defects such as functional description
defects, algorithmic defects, data defects, and control logic and sequence defects.
The unit should be tested by an independent tester (someone other than the
developer) and the test results and defects found should be recorded as a part of
the unit history.
 To prepare for unit test the developer/tester must perform several tasks. These are:
 plan the general approach to unit testing;
 design the test cases, and test procedures (these will be attached to the test
plan);
 define relationships between the tests;
 prepare the auxiliary code necessary for unit test.
The Task Required for Preparing Unit Test by the Developer/Tester
To prepare for unit test, the developer/tester must perform several tasks. They are
 Plan the general approach to unit testing.
 Design the test cases, and test procedures.
 Define the relationship between the tests.
 Prepare the support code necessary for unit test.
The Tasks Required for Planning of a Unit Test
 Describe unit test approach and risks.
 Identify unit features to be tested.
 Add levels of detail to the plan.
The Components Suitable for Conducting the Unit Test
 Procedures and functions
 Classes/objects and methods
 Procedure-sized reusable components
Designing the Unit Tests
 It is important to specify
 the test cases (including input data, and expected outputs for each
test case)
 the test procedures (steps required to run the tests).
 Developers/testers should also describe the relationships between the tests.
 Test suites can be defined that bind related tests together as a group.
 All of this test design information is attached to the unit test plan.
 Test cases, test procedures, and test suites may be reused from past projects if the
organization has been careful to store them so that they are easily retrievable and
reusable.
 We design test cases for functions and procedures. They are also useful for
designing tests for the individual methods (member functions) contained in a class.
This approach gives the tester the opportunity to exercise logic structures and/or
data flow sequences, or to use mutation analysis, all with the goal of evaluating the
structural integrity of the unit.
 In the case of a smaller-sized COTS component selected for unit testing, a black
box test design approach may be the only option. It should be mentioned that for
units that perform mission/safety/business critical functions, it is often useful and
prudent to design stress, security, and performance tests at the unit level if
possible.
The Test Harness
The auxiliary code developed to support testing of units and components is called a test harness. The harness consists of drivers that call the target code and stubs that represent modules it calls.
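A minimal sketch of a harness in Python, assuming a hypothetical unit compute_total() whose collaborator is injected so that a stub can stand in for the real module:

# Stub: simulates the tax-lookup module the unit normally calls.
def tax_rate_stub(region):
    return {"EU": 0.20, "US": 0.07}.get(region, 0.0)

# Unit under test: calls whatever lookup function it is given.
def compute_total(price, region, tax_lookup):
    return round(price * (1 + tax_lookup(region)), 2)

# Driver: calls the target code with test inputs and checks the results.
def driver():
    assert compute_total(100.0, "EU", tax_rate_stub) == 120.0
    assert compute_total(100.0, "XX", tax_rate_stub) == 100.0

driver()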

Fig. The test harness (the driver calls the unit under test, passing parameters; the unit under test calls Stub 1 and Stub 2, which acknowledge the calls)
Running the Unit Tests and Recording Results
Unit tests can begin when
 the units becomes available from the developers (an estimation of
availability is part of the test plan),
 the test cases have been designed and reviewed, and
 the test harness, and any other supplemental supporting tools, are available.
TABLE- Summary work sheet for unit test results

When a unit fails a test there may be several reasons for the failure. The most likely
reason for the failure is a fault in the unit implementation (the code). Other likely causes
that need to be carefully investigated by the tester are the following:

• a fault in the test case specification (the input or the output was not specified
correctly);
• a fault in test procedure execution (the test should be rerun);
• a fault in the test environment (perhaps a database was not set up properly);
• a fault in the unit design (the code correctly adheres to the design specification,
but the latter is incorrect).
The causes of the failure should be recorded in a test summary report, which is a
summary of testing activities for all the units covered by the unit test plan.

3. Explain in detail about Unit Test Planning. (Nov/Dec 2015, May/Jun 2013)
 A general unit test plan should be prepared. It may be prepared as a component of
the master test plan or as a stand-alone plan.
 It should be developed in conjunction with the master test plan and the project
plan for each project.
 Documents that provide inputs for the unit test plan are the project plan, as well
the requirements, specification, and design documents that describe the target
units.
 Components of a unit test plan are described in detail in the IEEE Standard for
Software Unit Testing.
Phase 1: Describe Unit Test Approach and Risks
In this phase of unit testing planning the general approach to unit testing is outlined. The
test planner:
 identifies test risks;
 describes techniques to be used for designing the test cases for the
units;
 describes techniques to be used for data validation and recording of
test results;

 describes the requirements for test harnesses and other software that
interfaces with the units to be tested, for example, any special
objects needed for testing object-oriented units.
Phase 2: Identify Unit Features to be tested
 This phase requires information from the unit specification and detailed design
description.
 The planner determines which features of each unit will be tested
 for example: functions, performance requirements, states, and state transitions,
control structures, messages, and data flow patterns.
Phase 3: Add Levels of Detail to the Plan
 In this phase the planner refines the plan as produced in the previous two
phases. The planner adds new details to the approach, resource, and scheduling
portions of the unit test plan.
 As an example, existing test cases that can be reused for this project can be
identified in this phase.
 Unit availability and integration scheduling information should be included in
the revised version of the test plan.
 The planner must be sure to include a description of how test results will be
recorded.
Unit Testing of Classes/Objects:
Unit testing on object oriented systems
 Testing levels in object oriented systems
 operations associated with objects
 usually not tested in isolation because of encapsulation and dimension (too
small)
 classes -> unit testing
 clusters of cooperating objects -> integration testing
 the complete OO system -> system testing

 Complete test coverage of a class involves
 Testing all operations associated with an object
 Setting and interrogating all object attributes
 Exercising the object in all possible states
 Inheritance makes it more difficult to design object class tests as the information to
be tested is not localised
Challenges/issues of Class Testing
Some of these issues are described below:
 Issue 1: Adequately Testing Classes
The potentially high costs for testing each individual method in a class have
been described. These high costs will be particularly apparent when there are
many methods in a class; the numbers can reach as high as 20 to 30. Finally, a
tester might use a combination of approaches, testing some of the critical
methods on an individual basis as units, and then testing the class as a whole.
 Issue 2: Observation of Object States and State Changes
Methods may not return a specific value to a caller. They may instead change
the state of an object. The state of an object is represented by a specific set of
values for its attributes or state variables.
 Issue 3: Encapsulation
– Difficult to obtain a snapshot of a class without building extra methods
which display the classes’ state
 Issue 4 :Inheritance
– Each new context of use (subclass) requires re-testing because a method
may be implemented differently (polymorphism).
– Other unaltered methods within the subclass may use the redefined
method and need to be tested

 Issue 5:White box tests
– Basis path, condition, data flow and loop tests can all be applied to
individual methods within a class.

4. Explain in detail about Integration Test. (May/Jun 16,Apr/May 15, Nov 16)

The Major Goals of Integration Test


 To detect defects that occur on the interfaces of units.
 To assemble the individual units into working subsystems and finally a
complete system that is ready for system test.
Cluster Test Plan Used In Integration Testing For OO Systems
 A cluster consists of classes that are related, for example, they may work
together (cooperate) to support a required functionality for the complete system.
 The cluster test plan includes the following items:
 A natural language description of the function of the cluster to be tested;
 A list of the classes in the cluster;
 A list of the clusters this cluster depends on;
 A set of cluster test cases.
Design an Integration Test
 Testing of groups of components integrated to create a sub-system
 Usually the responsibility of an independent testing team (except sometimes in
small projects)
 Integration testing should be black-box testing with tests derived from the
specification
 A principal goal is to detect defects that occur on the interfaces of units
 Main difficulty is localising errors
Test drivers and stubs
 Auxiliary code developed to support testing

 Test drivers
 call the target code
 simulate calling units or a user
 where test procedures and test cases are coded (for automatic test case
execution) or a user interface is created (for manual test case execution)
 Test stubs
 simulate called units
 simulate modules/units/systems called by the target code
Incremental integration testing

Approaches to integration testing


 Top-down testing
– Start with high-level system and integrate from the top-down replacing
individual components by stubs where appropriate
 Bottom-up testing
– Integrate individual components in levels until the complete system is
created
 In practice, most integration involves a combination of these strategies
 Appropriate for systems with a hierarchical control structure
– Usually the case in procedural-oriented systems

– Object-oriented systems may not have such a hierarchical control structure
Top-down integration testing

Bottom-up integration testing

Advantages and disadvantages


 Architectural validation: Top-down integration testing is better at discovering
errors in the system architecture
 System demonstration: Top-down integration testing allows a limited
demonstration at an early stage in the development
 Test implementation: Often easier with bottom-up integration testing
 Test observation: Problems with both approaches. Extra code may be required
to observe tests
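A small sketch of incremental integration in Python may make the stub and driver roles concrete. The units report() and parse() are hypothetical; in top-down style, the low-level unit starts as a stub and is later replaced by the real implementation, with the same interface test re-run at each step.

# Step 1: integrate the high-level unit against a stub of the lower unit.
def parse_stub(raw):
    return ["fixed", "response"]      # canned answer, no real parsing

# Step 2: the real lower-level unit replaces the stub.
def parse_real(raw):
    return [token for token in raw.split(",") if token]

def report(raw, parser):              # high-level unit under integration
    return " | ".join(parser(raw))

def interface_test(parser):
    assert isinstance(report("a,b", parser), str)

interface_test(parse_stub)            # top level tested in isolation
interface_test(parse_real)            # retested after integration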

Integration Test Planning
 Planning can begin when high-level design is complete so that the system
architecture is defined.
 Other documents relevant to integration test planning are the requirements
document, the user manual, and usage scenarios.
 These documents contain structure charts, state charts, data dictionaries, cross-
reference tables, module interface descriptions, data flow descriptions,
messages and event descriptions, all necessary to plan integration tests.

5. Explain the different types of system testing with examples. (May/Jun 2013)

Fig. Types of system tests


Functional Testing
 Ensure that the behavior of the system adheres to the requirements
specification
 Black-box in nature

 Equivalence class partitioning, boundary-value analysis and state-based testing
are valuable techniques
 Document and track test coverage with a (tests to requirements) traceability
matrix
 A defined and documented form should be used for recording test results from
functional and other system tests
 Failures should be reported in test incident reports
 Useful for developers (together with test logs)
 Useful for managers for progress tracking and quality assurance
purposes
 The tests should focus on the following goals.
 All types or classes of legal inputs must be accepted by the software.
 All classes of illegal inputs must be rejected (however, the system
should remain available).
 All possible classes of system output must be exercised and examined.
 All effective system states and state transitions must be exercised and
examined.
 All functions must be exercised.
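As a hedged illustration of the black-box techniques named above, consider a hypothetical rule that ages 18 to 60 are accepted; equivalence class partitioning gives one representative per class and boundary value analysis adds the edges:

def accepts(age):                     # hypothetical unit under test
    return 18 <= age <= 60

equivalence_cases = [(10, False), (30, True), (70, False)]
boundary_cases = [(17, False), (18, True), (60, True), (61, False)]

for age, expected in equivalence_cases + boundary_cases:
    assert accepts(age) == expected, age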
Performance Testing
 Goals:
 See if the software meets the performance requirements
 See whether there are any hardware or software factors that impact on the
system's performance
 Provide valuable information to tune the system
 Predict the system's future performance levels
 Results of performance test should be quantified, and the corresponding
environmental conditions should be recorded
 Resources usually needed

 a source of transactions to drive the experiments, typically a load
generator
 an experimental test bed that includes hardware and software the system
under test interacts with
 instrumentation of probes that help to collect the performance data
(event logging, counting, sampling, memory allocation counters, etc.)
 a set of tools to collect, store, process and interpret data from probes
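A toy sketch of a load generator and probe in Python; service() merely stands in for the system under test, which in a real performance test would be the deployed system driven over its external interface.

import time

def service():
    time.sleep(0.001)                 # stand-in for one transaction

def measure(transactions):
    # Drive the experiment and return throughput in transactions/second.
    start = time.perf_counter()
    for _ in range(transactions):
        service()
    return transactions / (time.perf_counter() - start)

print("throughput:", round(measure(200)), "tx/s")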
Configuration Testing
 Configuration testing allows developers/testers to evaluate system performance
and availability when hardware exchanges and reconfigurations occur.
 Configuration testing also requires many resources including the multiple
hardware devices used for the tests. If a system does not have specific
requirements for device configuration changes then large-scale configuration
testing is not essential.
 Several types of operations should be performed during configuration test.
Some sample operations for testers are
(i) rotate and permutate the positions of devices to ensure physical/logical
device permutations work for each device (e.g., if there are two printers A and
B, exchange their positions);
(ii) induce malfunctions in each device, to see if the system properly handles
the malfunction;
(iii) induce multiple device malfunctions to see how the system reacts. These
operations will help to reveal problems (defects) relating to hardware/ software
interactions when hardware exchanges, and reconfigurations occur.
The Objectives of Configuration Testing
 Show that all the configuration changing commands and menus work properly.
 Show that all the interchangeable devices are really interchangeable, and that
they each enter the proper state for the specified conditions.

 Show that the system's performance level is maintained when devices are
interchanged, or when they fail.
Security Testing
 Evaluates system characteristics that relate to the availability, integrity and
confidentiality of system data and services
 Computer software and data can be compromised by
 criminals intent on doing damage, stealing data and information, causing
denial of service, invading privacy
 errors on the part of honest developers/maintainers (and users?) who
modify, destroy, or compromise data because of misinformation,
misunderstandings, and/or lack of knowledge
 Both can be perpetrated by those inside and outside of an organization
 Attacks can be random or systematic. Damage can be done through various means
such as:
 Viruses
 Trojan horses
 Trap doors
 illicit channels.
 The effects of security breaches could be extensive and can cause:
 Loss of information
 corruption of information
 Misinformation
 privacy violations
 Denial of service
 Other Areas to focus on Security Testing: password checking, legal and illegal
entry with passwords, password expiration, encryption, browsing, trap doors,
viruses.
 Usually the responsibility of a security specialist

Recovery Testing: Subject a system to losses of resources in order to determine if it can
recover properly from these losses
 Especially important for transaction systems
 Example: loss of a device during a transaction
 Tests would determine if the system could return to a well-known state, and that
no transactions have been compromised
 Systems with automated recovery are designed for this purpose
 Areas to focus [Beizer] on Recovery Testing:
 Restart : the ability of the system to restart properly on the last checkpoint
after a loss of a device
 Switchover : the ability of the system to switch to a new processor, as a
result of a command or a detection of a faulty processor by a monitor
 In each of these testing situations all transactions and processes must be carefully
examined to detect:
 loss of transactions;
 merging of transactions;
 incorrect transactions;
 an unnecessary duplication of a transaction.
A good way to expose such problems is to perform recovery testing under a stressful
load. Transaction inaccuracies and system crashes are likely to occur with the result
that defects and design flaws will be revealed.
Acceptance Test, Alpha and Beta Testing
 For tailor-made software (customized software):
 acceptance tests are performed by users/customers
 much in common with system test
 For packaged software (market-made software):
 alpha testing will be conducted on the developers site
 beta testing will be conducted on a user site

6. Discuss the levels of testing adapted to test OO systems (Apr/May 2015)
The shift from traditional to object-oriented environment involves looking at and
reconsidering old strategies and methods for testing the software. The traditional
programming consists of procedures operating on data, while the object-oriented
paradigm focuses on objects that are instances of classes. In object-oriented (OO)
paradigm, software engineers identify and specify the objects and services provided by
each object. In addition, interaction of any two objects and constraints on each identified
object are also determined. The main advantages of OO paradigm include increased
reusability, reliability, interoperability, and extendibility.
With the adoption of OO paradigm, almost all the phases of software development have
changed in their approach, environments, and tools. Though OO paradigm helps make
the designing and development of software easier, it may pose new kinds of problems.
Thus, testing of software developed using OO paradigm has to deal with the new
problems also. Note that object-oriented testing can be used to test the object-oriented
software as well as conventional software.
OO program should be tested at different levels to uncover all the errors. At the
algorithmic level, each module (or method) of every class in the program should be tested
in isolation. For this, white-box testing can be applied easily. As classes form the main
unit of object-oriented program, testing of classes is the main concern while testing an
OO program. At the class level, every class should be tested as an individual entity. At
this level, programmers who are involved in the development of class conduct the testing.
Test cases can be drawn from requirements specifications, models, and the language
used. In addition, testing methods such as boundary value analysis are
extensively used. After performing the testing at class level, cluster-level testing should be
performed. As classes are collaborated (or integrated) to form a small subsystem (also
known as cluster), testing each cluster individually is necessary. At this level, focus is on
testing the components that execute concurrently as well as on the interclass interaction.
Hence, testing at this level may be viewed as integration testing where units to be
integrated are classes. Once all the clusters in the system are tested, system level testing
begins. At this level, interaction among clusters is tested.
Usually, there is a misconception that if individual classes are well designed and have
proved to work in isolation, then there is no need to test the interactions between two or
more classes when they are integrated. However, this is not true because sometimes there
can be errors, which can be detected only through integration of classes. Also, it is
possible that if a class does not contain a bug, it may still be used in a wrong way by
another class, leading to system failure.
Developing Test Cases in Object-oriented Testing
The methods used to design test cases in OO testing are based on the conventional
methods. However, these test cases should encompass special features so that they can be
used in the object-oriented environment. The points that should be noted while
developing test cases in an object-oriented environment are listed below.
1. It should be explicitly specified with each test case which class it should test.
2. Purpose of each test case should be mentioned.
3. External conditions that should exist while conducting a test should be clearly
stated with each test case.
4. All the states of object that is to be tested should be specified.
5. Instructions to understand and conduct the test cases should be provided with each
test case.
Object-oriented Testing Methods
As many organizations are currently using or targeting to switch to the OO paradigm, the
importance of OO software testing is increasing. The methods used for performing
object-oriented testing are discussed in this section.

State-based Testing
State-based testing is used to verify whether the methods (a procedure that is executed by
an object) of a class are interacting properly with each other. This testing seeks to
exercise the transitions among the states of objects based upon the identified inputs.
For this testing, finite-state machine (FSM) or state-transition diagram representing the
possible states of the object and how state transition occurs is built. In addition, state-
based testing generates test cases, which check whether the method is able to change the
state of object as expected. If any method of the class does not change the object state as
expected, the method is said to contain errors.
To perform state-based testing, a number of steps are followed, which are listed below.
1. Derive a new class from an existing class with some additional features, which are
used to examine and set the state of the object.
2. Next, the test driver is written. This test driver contains a main program to create
an object, send messages to set the state of the object, send messages to invoke methods
of the class that is being tested and send messages to check the final state of the object.
3. Finally, stubs are written. These stubs call the untested methods.
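A minimal sketch of these steps in Python, for a hypothetical Account class with states OPEN and CLOSED (step 3, writing stubs for untested methods, is not needed in this tiny example):

class Account:                         # class under test
    def __init__(self):
        self.state = "OPEN"
    def close(self):
        if self.state == "OPEN":
            self.state = "CLOSED"

class TestableAccount(Account):        # step 1: derived class to set/inspect state
    def set_state(self, state):
        self.state = state
    def get_state(self):
        return self.state

def driver():                          # step 2: create, set state, invoke, check
    acct = TestableAccount()
    acct.set_state("OPEN")
    acct.close()
    assert acct.get_state() == "CLOSED"
    acct.close()                       # a second close must not change the state
    assert acct.get_state() == "CLOSED"

driver()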
Fault-based Testing
Fault-based testing is used to determine or uncover a set of plausible faults. In other
words, the focus of tester in this testing is to detect the presence of possible faults. Fault-
based testing starts by examining the analysis and design models of OO software as these

models may provide an idea of problems in the implementation of software. With the
knowledge of system under test and experience in the application domain, tester designs
test cases where each test case targets to uncover some particular faults.
The effectiveness of this testing depends highly on tester experience in application
domain and the system under test. This is because if the tester fails to perceive real faults in the
system as plausible, testing may leave many faults undetected. However, examining
analysis and design models may enable tester to detect large number of errors with less
effort. As testing only proves the existence and not the absence of errors, this testing
approach is considered to be an effective method and hence is often used when security
or safety of a system is to be tested.
Integration testing applied for OO software targets to uncover the possible faults in both
operation calls and various types of messages (like a message sent to invoke an object).
These faults may be unexpected outputs, incorrect messages or operations, and incorrect
invocation. The faults can be recognized by determining the behavior of all operations
performed to invoke the methods of a class.
Scenario-based Testing
Scenario-based testing is used to detect errors that are caused due to incorrect
specifications and improper interactions among various segments of the software.
Incorrect interactions often lead to incorrect outputs that can cause malfunctioning of
some segments of the software. The use of scenarios in testing is a common way of
describing how a user might accomplish a task or achieve a goal within a specific context
or environment. Note that these scenarios are more context- and user specific instead of
being product-specific. Generally, the structure of a scenario includes the following
points.
1. A condition under which the scenario runs.
2. A goal to achieve, which can also be a name of the scenario.
3. A set of steps of actions.
4. An end condition at which the goal is achieved.
5. A possible set of extensions written as scenario fragments.
Scenario-based testing combines all the classes that support a use-case (scenarios are
subset of use-cases) and executes a test case to test them. Execution of all the test cases
ensures that all methods in all the classes are executed at least once during testing.
However, testing all the objects (present in the classes combined together) collectively is
difficult. Thus, rather than testing all objects collectively, they are tested using either a
top-down or bottom-up integration approach.
This testing is considered to be the most effective method as scenarios can be organized
in such a manner that the most likely scenarios are tested first with unusual or exceptional
scenarios considered later in the testing process. This satisfies a fundamental principle of
testing that most testing effort should be devoted to those paths of the system that are
mostly used.
Challenges in Testing Object-oriented Programs
Traditional testing methods are not directly applicable to OO programs as they involve
OO concepts including encapsulation, inheritance, and polymorphism. These concepts
lead to issues, which are yet to be resolved. Some of these issues are listed below.
1. Encapsulation of attributes and methods in class may create obstacles while
testing. As methods are invoked through the object of corresponding class, testing cannot
be accomplished without object. In addition, the state of object at the time of invocation
of method affects its behavior. Hence, testing depends not only on the object but on the
state of object also, which is very difficult to acquire.
2. Inheritance and polymorphism also introduce problems that are not found in
traditional software. Test cases designed for a base class are not always applicable to a
derived class (especially when the derived class is used in a different context). Thus, most testing
methods require some kind of adaptation in order to function properly in an OO
environment.

7. a. Consider the following set of requirements for the triangle problem:


R1: If x < y + z and y < x + z and z < x + y then it is a triangle
R2: If x ≠ y and x ≠ z and y ≠ z then it is a scalene triangle
R3: If x = y or x = z or y = z then it is an isosceles triangle
R4: If x = y and y = z and z = x then it is an equilateral triangle
R5: If x > y + z or y > x + z or z > x + y then it is impossible to construct
a triangle. Now, consider the following causes and effects for the triangle
problem:
Causes (inputs) :
 C1 : Side “x” is less than sum of “y” and “z”
 C2 : Side “y” is less than sum of “x” and “z”
 C3 : Side “z” is less then sum of “x” and “y”
 C4 : Side “x” is equal to side “y”
 C5 : Side “x” is equal to side “z”
 C6 : Side “y” is equal to side “z”
Effects:
 E1 : Not a triangle
 E2 : Scalene triangle
 E3 : Isosceles triangle
 E4 : Equilateral triangle
 E5 : Impossible
What is a cause-effect graph? Model a cause-effect graph for the above.
April / May 16

A cause-effect graph graphically shows the connection between a given outcome and all
the factors that influence that outcome. Cause-effect graphing is a black-box testing
technique. It is also known as an Ishikawa diagram (after its inventor, Kaoru Ishikawa) or
a fishbone diagram because of the way it looks. It was originally used for hardware
testing but has been adapted to software testing, where it usually tests the external
behavior of a system. It is a technique that aids in choosing test cases by logically
relating causes (inputs) to effects (outputs). A "cause" stands for a distinct input
condition that brings about an internal change in the system. An "effect" represents an
output condition, a system transformation, or a state resulting from a combination of causes.
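The cause-effect relations for the triangle problem can be sketched as Boolean expressions; a drawn graph connects C1-C6 to E1-E5 through the same AND/OR/NOT nodes. The Python rendering below is illustrative, not the required drawn graph:

def triangle_effect(x, y, z):
    c1, c2, c3 = x < y + z, y < x + z, z < x + y    # causes C1-C3
    c4, c5, c6 = x == y, x == z, y == z             # causes C4-C6
    if not (c1 and c2 and c3):
        return "E1/E5: not a triangle / impossible"
    if c4 and c5 and c6:
        return "E4: equilateral triangle"
    if c4 or c5 or c6:
        return "E3: isosceles triangle"
    return "E2: scalene triangle"

print(triangle_effect(3, 3, 3))   # -> E4: equilateral triangle
print(triangle_effect(3, 4, 5))   # -> E2: scalene triangle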

7b. Consider the following fragment of code :


i = 0;
while (i < n - 1) do
    j = i + 1;
    while (j < n) do
        if A[i] < A[j] then
            swap(A[i], A[j]);
    end do;
    i = i + 1;
end do;
Identify bug (s) if any in the above program segment, modify the code if you have
identified bug (s). Construct a control flow graph and compute Cyclomatic
complexity. May / June 2016

Bug: the inner loop never increments j, so it never terminates. The fix is to add a
statement j = j + 1 at the end of the inner loop body (and, if an ascending sort is
intended, the comparison should be A[i] > A[j]).

Control Flow Graph
Cyclomatic complexity
 V(G) = E - N + 2 = 9 - 7 + 2 = 4
 V(G) = P + 1 = 3 + 1 = 4 (the condition nodes are nodes 1, 2, and 3)
 Basis set - a set of linearly independent execution paths of the program
 1, 7
 1, 2, 6, 1, 7
 1, 2, 3, 4, 5, 2, 6, 1, 7
 1, 2, 3, 5, 2, 6, 1, 7

UNIT IV
TEST MANAGEMENT

People and organizational issues in testing – Organization structures for testing teams –
testing services –Test Planning – Test Plan Components – Test Plan Attachments –
Locating Test Items –test management – test process – Reporting Test Results – The role
of three groups in Test Planning and Policy Development – Introducing the test specialist
– Skills needed by a test specialist – Building a Testing Group.

PART- A

1. Define Goal and Policy.


A goal can be described as a statement of intent or a statement of an
accomplishment that an individual or an organization wants to achieve.
A Policy can be defined as a high-level statement of principle or course of action
that is used to govern a set of activities in an organization.

2. Write the business impacts of globalization. (Nov/Dec 2013)


Since markets are global, the requirements that a product must satisfy keep
increasing; hence it becomes very difficult to meet all the requirements.

3. Define Milestones.
Milestones are tangible events that are expected to occur at a certain time in the
project's lifetime. Managers use them to determine project status.

4. Define a Work Breakdown Structure.(WBS) (Apr/May 2015)


A Work Breakdown Structure (WBS) is a hierarchical or treelike representation of
all the tasks that are required to complete a project.

 The first is the "80 hour rule" which means that no single activity or group of
activities at the lowest level of detail of the WBS to produce a single
deliverable should be more than 80 hours of effort.
 The second rule of thumb is that no activity or group of activities at the lowest
level of detail of the WBS should be longer than a single reporting period.
Thus if the project team is reporting progress monthly, then no single activity
or series of activities should be longer than one month.
 The last heuristic is the "if it makes sense" rule. Applying this rule of thumb,
one can apply "common sense" when creating the duration of a single activity
or group of activities necessary to produce a deliverable defined by the WBS.

5. Write the approaches to test cost Estimation?


 The COCOMO model and heuristics
 Use of test cost drivers
 Test tasks
 Tester/developer ratios
 Expert judgment

6. What is the function of Test Item Transmittal Report or Locating Test Items?
(May/Jun 2013)
Suppose a tester is ready to run tests on an item on the date described in the test plan. The
tester needs to be able to locate the item and have knowledge of its current status. This is the
function of the Test Item Transmittal Report. Each Test Item Transmittal Report has a
unique identifier.

7. Define Test incident Report
The tester should record in a test incident report (sometimes called a problem
report) any event that occurs during the execution of the tests that is unexpected,
unexplainable, and that requires a follow-up investigation.

8. Define Test Log. (Nov/Dec 2015)


The Test log should be prepared by the person executing the tests. It is a diary of
the events that take place during the test. It supports the concept of a test as a repeatable
experiment.

9. List out any two organizational issues in testing. (Nov/Dec 2013)


 lack of independence
 unclear testing responsibilities

10. What are the skills needed by a test specialist?


(May/Jun16, Apr/May15,Nov/Dec 14)
 Personal and managerial Skills
 Technical Skills

11. List the organization structure of testing teams.(May/Jun 2016)

Project Manager

Test Manager Development Manager

Tester Programmers

12. Write the Components of test plan. (Nov/Dec 2014)
a. Test plan identifier
b. Introduction
c. Item to be tested
d. Features to be tested
e. Approach
f. Pass/fail criteria

13. What role do user/clients play in the development of test plan for the
projects? (Nov/Dec 2015)

a. Provide an overview of the test plan.


b. Specify the goals/objectives.
c. Specify any constraints.

14. Differentiate effort and schedule. (April / May 2015)


Scheduling of testing activities is dependent on dates for the completion and
delivery of software items to testing. Prepare a schedule showing testing activities
with estimated dates and revise as necessary during iteration and stage level planning. 
Test effort refers to the expenses for (still to come) tests. There is a relation
with test costs and failure costs (direct, indirect, costs for fault correction). Some
factors which influence test effort are: maturity of the software development
process, quality and testability of the test object, test infrastructure, skills of staff
members, quality goals, and test strategy.

PART- B

1. Explain in detail about Test planning

 A plan can be defined in the following way: a plan is a document that provides a
framework or approach for achieving a set of goals.
 A plan also contains milestones.
 Milestones are tangible events that are expected to occur at a certain time in the
project’s lifetime. Managers use them to determine project status.
 Tracking the actual occurrence of the milestone events allows a manager to
determine if the project is progressing as planned. Finally, a plan should assess the risks
involved in carrying out the project.
 Test plans for software projects are very complex and detailed documents. The
planner usually includes the following essential high-level items.
 Overall test objectives. As testers, why are we testing, what is to be achieved
by the tests, and what are the risks associated with testing this product?
 What to test (scope of the tests). What items, features, procedures, functions,
objects, clusters, and subsystems will be tested?
 Who will test. Who are the personnel responsible for the tests?
 How to test. What strategies, methods, hardware, software tools, and
techniques are going to be applied? What test documents and deliverables should
be produced?
 When to test. What are the schedules for tests? What items need to be
available?
 When to stop testing. It is not economically feasible or practical to plan to test
until all defects have been revealed.

 All of the quality and testing plans should also be coordinated with the overall
software project plan.
 A sample plan hierarchy is shown in the following Figure. At the top of the plan
hierarchy there may be a software quality assurance plan.
 This plan gives an overview of all verification and validation activities for the
project, as well as details related to other quality issues such as audits, standards,
configuration control, and supplier control.

Software quality assurance (V & V ) plan

Master test plan Review plan: Inspections and Walkthroughs

Unit Test plan Integration test plan System test plan Acceptance test plan

Figure: A Hierarchy of Test plans

 Below that in the plan hierarchy there may be a master test plan that includes an
overall description of all execution-based testing for the software system.
 A master verification plan for reviews inspections/walkthroughs would also fit in
at this level.
 The master test plan itself may be a component of the overall project plan or exist
as a separate document.

2. Briefly Explain about the Test Plan Components. (May/Jun 2016, Nov / Dec 16)
 Test plan identifier
 Can serve to identify it as a configuration item
 Introduction (why)
 Overall description of the project, the software system being developed or
maintained, and the software items and/or features to be tested
 Overall description of testing goals (objectives) and the testing approaches
to be used
 References to related or supporting documents
 Test items (what)
 List the items to be tested: procedures, classes, modules, libraries,
components, subsystems, systems, etc.
 Include references to documents where these items and their behaviors are
described (requirements and design documents, user manuals, etc.)
 List also items that will not be tested
 Features to be tested (what)
 Features are distinguishing characteristics (functionalities, quality
attributes). They are closely related to the way we describe software in
terms of its functional and quality requirements
 Identify all software features and combinations of software features to be
tested. Identify the test design specification associated with each feature
and each combination of features.
 Features not to be tested (what)
 Identify all features and significant combinations of features that will not be
tested and the reasons.

 Approach (how)

 Description of test activities, so that major testing tasks and task durations
can be identified
 For each feature or combination of features, the approach that will be taken
to ensure that each is adequately tested
 Tools and techniques
 Expectations for test completeness (such as degree of code coverage for
white box tests)
 Testing constraints, such as time and budget limitations
 Stop-test criteria
 Item pass-fail criteria
 Given a test item and a test case, the tester must have a set of criteria to
decide whether the test has been passed or failed upon execution
 The test plan should provide a general description of these criteria
 Failures to a certain severity level may be accepted
 Suspension criteria and resumption requirements
 Specify the criteria used to suspend all or a portion of the testing activity on
the test items associated with this plan
 Specify the testing activities that must be repeated, when testing is resumed
 Testing is done in cycles: test - (suspend) - fix - (resume) test - fix - ...
 Tests may be suspended when a certain number of critical defects has been
observed
 Test deliverables
 Test documents (possibly a subset of the ones described in the IEEE
standard)
 Test harness (drivers, stubs, tools developed especially for this project, etc.)
 Testing Tasks
 Identify all test-related tasks, inter-task dependencies and special skills
required

 Environmental needs
 Software and hardware needs for the testing effort
 Responsibilities
 Roles and responsibilities to be fulfilled
 Actual staff involved
 Staffing and training needs
 Description of staff and skills needed to carry out test-related
responsibilities
 Scheduling
 Task durations and calendar
 Milestones
 Schedules for use of staff and other resources (tools, laboratories, etc.)
 Risks and contingencies
 Risks should be (i) identified, (ii) evaluated in terms of their probability of
occurrence, (iii) prioritized, and (iv) contingency plans should be developed
that can be activated if the risk occurs
 Example of a risk: some test items not delivered on time to the testers
 Example of a contingency plan: flexibility in resource allocation so that
testers and equipment can operate beyond normal working hours (to
recover from delivery delays)
 Testing costs (not included in the IEEE standard)
 Kinds of costs:
 costs of planning and designing the tests
 costs of acquiring the hardware and software necessary
 costs of executing the tests
 costs of recording and analyzing test results
 tear-down costs to restore the environment
 Cost estimation may be based on:

 Models (such as COCOMO for project costs) and heuristics (such as
50% of project costs)
 Test tasks and WBS
 Developer/tester ratio (such as 1 tester to 2 developers)
 Test impact items (such as number of procedures) and test cost
drivers (or factors, such as KLOC)
 Expert judgment (Delphi)
 Approvals
 Dates and signatures of those that must approve the test plan
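As a rough numeric illustration of two of the cost-estimation heuristics listed above (all figures are invented for the example):

project_cost = 200000.0        # total estimated project cost
test_cost_share = 0.5          # heuristic: testing is ~50% of project cost
developers = 8
testers_per_developer = 1 / 2  # heuristic: 1 tester for every 2 developers

print("test budget:", project_cost * test_cost_share)                # 100000.0
print("testers needed:", round(developers * testers_per_developer))  # 4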

3. Explain in detail about Test Plan Attachments.

Test Design Specifications:


 A test design specification describes how a group of features and/or test items is
tested by a set of test cases and test procedures.
 May include a (test case to) features/requirements traceability matrix
Contents:
 Test Design Specification Identifier
 Features to be tested
 Test items and features covered by this document
 Approach refinements
 Test techniques
 Test case identification
 Feature pass/fail criteria
Test Case Specifications Contents:
 Test case specification identifier
 Test items
 List of items and features to be tested by this test case
 Input specifications
 Output specifications
 Environmental needs
 Special procedural requirements
 Intercase dependencies
Test Procedure Specifications
 A procedure in general is a sequence of steps required to carry out a
specific task.
Contents:
 Test procedure specification identifier
 Purpose
 Specific requirements
 Procedure steps
 Log, set up, proceed, measure, shut down, restart, stop, wrap up,
contingencies
Locating Test Items: Test Item Transmittal Report
Accompanies a set of test items that are delivered for testing.
Contents
 Transmittal report identifier
 Transmitted items
 version/revision level
 references to the items documentation and the test plan related to the
transmitted items
 persons responsible for the items
 Location
 Status
 deviations from documentation, from previous transmissions or from
test plan
 incident reports that are expected to be resolved

 pending modifications to documentation
 Approvals

4. How to Report Test Results? Explain.

 The test plan and its attachments are test-related documents that are prepared prior
to test execution. There are additional documents related to testing that are
prepared during and after execution of the tests.
 The IEEE Standard for Software Test Documentation describes the following
documents:
Test Log
 Records detailed results of test execution
Contents
 Test log identifier
 Description
 Identify the items being tested including their version/revision levels
 Identify the attributes of the environments in which the testing is
conducted
 Activity and event entries
 Execution description
 Procedure results
 Environmental information
 Anomalous events
 Incident report identifiers
Test Incident Report
Also called a problem report
Contents:
 Test incident report identifier

 Summary
 Summarize the incident
 Identify the test items involved indicating their version/revision level
 References to the appropriate test procedure specification, test case
specification, and test log
 Incident description
 inputs, expected results, actual results, anomalies, date and time,
procedure step, environment, attempts to repeat, testers, observers
 any information useful for reproducing and repairing
 Impact
 If known, indicate what impact this incident will have on test plans,
test design specifications, test procedure specifications, or test case
specifications
 severity rating

FIG. Test-related documents as recommended by IEEE


Test Summary Report
Contents:
Test summary report identifier
 Summary
 Summarize the evaluation of the test items
 Identify the items tested, indicating the environment in which the
testing activities took place
 Variances
 of the test items from their original design specifications
 Comprehensiveness assessment
 Evaluate the comprehensiveness of the testing process against the
comprehensiveness criteria specified in the test plan if the plan exists
 Identify features or feature combinations that were not sufficiently
tested and explain the reasons
 Summary of results
 Summarize the results of testing
 Identify all resolved incidents and summarize their resolutions
 Identify all unresolved incidents.
 Evaluation
 Provide an overall evaluation of each test item including its
limitations
 This evaluation shall be based upon the test results and the item level
pass/fail criteria
 An estimate of failure risk may be included
 Summary of activities
 Summarize the major testing activities and events
 Summarize resource consumption data, e.g., total staffing level, total
machine time, and total elapsed time used for each of the major
testing activities

5. What are the skills Needed by test specialist and Explain.(Nov/Dec 2015,16)

Skills Needed by a Test Specialist

Given the nature of technical and managerial responsibilities assigned to the tester that
are listed in Section 8.0, many managerial and personal skills are necessary for success in
the area of work. On the personal and managerial level a test specialist must have:

 organizational, and planning skills;


 the ability to keep track of, and pay attention to, details;
 the determination to discover and solve problems;
 the ability to work with others and be able to resolve conflicts;
 the ability to mentor and train others;
 the ability to work with users and clients;
 strong written and oral communication skills;
 the ability to work in a variety of environments;
 the ability to think creatively
The first three skills are necessary because testing is detail and problem oriented. In
addition, testing involves policymaking, knowledge of different types of application
areas, planning, and the ability to organize and monitor information, tasks, and people.
Testing also requires interactions with many other engineering professionals such as
project managers, developers, analysts, process personnel, and software quality assurance
staff. Test professionals often interact with clients to prepare certain types of tests, for
example acceptance tests. Testers also have to prepare test-related documents and make
presentations. Training and mentoring of new hires to the testing group is also a part of
the tester’s job. In addition, test specialists must be creative, imaginative, and experiment
oriented. They need to be able to visualize the many ways that a software item should be
tested, and make hypotheses about the different types of defects that could occur and the
different ways the software could fail. On the technical level testers need to have:
 an education that includes an understanding of general software engineering
principles, practices, and methodologies;
 strong coding skills and an understanding of code structure and behavior;
 a good understanding of testing principles and practices;
 a good understanding of basic testing strategies, methods, and techniques;
 the ability and experience to plan, design, and execute test cases and test
procedures on multiple levels (unit, integration, etc.);
 a knowledge of process issues;
 knowledge of how networks, databases, and operating systems are organized and
how they work;
 a knowledge of configuration management;
 a knowledge of test-related documents and the role each document plays in the
testing process;
 the ability to define, collect, and analyze test-related measurements;
 the ability, training, and motivation to work with testing tools and equipment;
 a knowledge of quality issues.

6. With neat diagram explain in detail about Organization in test team.


(Nov/Dec 14,Apr/May 2015,Nov/Dec 13,16)
DIMENSIONS OF ORGANIZATION STRUCTURES
Organization structures directly relate to some of the people issues discussed in the
previous chapter. In addition, the study of organization structures is important from the
point of view of effectiveness because an appropriately designed organization structure
can provide accountability to results. This accountability can promote better teamwork
among the different constituents and create better focus in the work. In addition,
organization structures provide a road map for the team members to envision their career
paths.
The organization structures are based on two dimensions.
 The first dimension is organization type and
 The second dimension is on geographic distribution.
The organization is broadly classified into two categories:
 product organizations and
 services organizations.
Product organizations produce software products and have a “womb to tomb” (or
conception through design, development, testing and maintenance to product
obsolescence) responsibility for the entire product. Testing happens to be one of the
phases or groups that are within the organization.
Service organizations do not have complete product responsibility. In the testing
context, they are external organizations that provide testing services to other
organizations that require them. In essence, testing services are out-sourced to such
organizations. Such testing services organizations provide specialized and committed
personnel for testing. They also undertake specialized and niche areas such as
performance testing, internationalization testing, and so on.
A second factor that plays a significant role in deciding organization structures is
the geographic distribution of teams. Product or service organizations that are involved in
testing can be either single-site or multi-site. In a single-site team, all members are
located at one place while in a multi-site team, the team is scattered across multiple
locations. Multi-site teams introduce cultural and other factors that influence organization
structures.

STRUCTURES IN SINGLE-PRODUCT COMPANIES


Product companies in general have a high-level organization structure similar to the one
shown in the figure below:
 

Figure - Organization structure of a multi-product company.
The CTO's office sets the high-level technology directions for the company. A business
unit is in charge of each product that the company produces. (Sometimes the business
unit may also handle related products to form a product line.) A product business unit is
organized into a product management group and a product delivery group. The product
management group has the responsibility of merging the CTO's directions with specific
market needs to come out with a product road map. The product delivery group is
responsible for delivering the product and handles both the development and testing
functions. We use the term "project manager" to denote the head of this group. Sometimes the term
“development manager” or “delivery manager” is also used.
The figure above shows a typical multi-product organization. The internal organization of
the delivery teams varies with different scenarios for single-and multi-product
companies, as we will discuss below.
Testing Team Structures for Single-Product Companies
Most product companies start with a single product. During the initial stages of evolution,
the organization does not work with many formalized processes. The product delivery
team members distribute their time among multiple tasks and often wear multiple hats.
All the engineers report into the project manager who is in charge of the entire project,
with very little distinction between testing function and development functions. Thus,
there is only a very thin line separating the "development team" and the "testing team."

The model in the figure given below is applicable in situations where the product is in the
early stages of evolution. A project manager handles part or all of a product.
 

Figure Typical organization structures in early stages of a product.


This model offers some advantages that are well suited to small organizations.
 Exploits the rear-loading nature of testing activities
 Enables engineers to gain experience in all aspects of the life cycle
 Is amenable to the fact that the organization mostly has only informal processes
 Some defects may be detected early
However, the model also has disadvantages.
 Accountability for testing and quality reduces
 Developers in general do not like testing, and hence the effectiveness of testing
suffers
 Schedule pressures generally compromise testing
 Developers may not be able to carry out the different types of tests
As the product matures and the processes evolve, a homogeneous single-product
organization doing both development and testing, splits into two distinct groups, one for
development and one for testing. These two teams are considered as peer teams and both
report to the project manager in charge of the entire product. In this model, some of the
disadvantages of the previous model are done away with.
 

Figure - Separate groups for testing and development.
1. There is clear accountability for testing and development. The results and the
expectations from the two teams can be more clearly set and demarcated.
2. Testing provides an external perspective. Since the testing and development teams
are logically separated, there is not likely to be as much bias as in the previous
case for the testers to prove that the product works. This external perspective can
lead to uncovering more defects in the product.
3. Takes into account the different skill sets required for testing. As we have seen in
the earlier chapters, the skill sets required for testing functions are quite different
from that required for development functions. This model recognizes the
difference in skill sets and proactively addresses the same.
There are certain precautions that must be taken to make this model effective. First, the
project manager should not buckle under pressure and ignore the findings and
recommendations of the testing team by releasing a product that fails the test criteria.
Second, the project manager must ensure that the development and testing teams do not
view each other as adversaries. This will erode the teamwork between the teams and
ultimately affect the timeliness and quality of the product. Third, the testing team must
participate in the project decision making and scheduling right from the start so that they
do not come in at the “crunch time” of the project and face unrealistic schedules or
expectations.
Component-Wise Testing Teams: Even if a company produces only one product, the
product is made up of a number of components that fit together as a whole. In order to
provide better accountability, each component may be developed and tested by separate

teams and all the components integrated by a single integration test team reporting to the
project manager. The structure of each of the component teams can be either a coalesced
development-testing team (as in the first model above) or a team with distinct
responsibilities for testing and development. This is because not all components are of the
same complexity, nor are all components at the same level of maturity. Hence, an
informal mix-and-match of the different organization structures for the different
components, with a central authority to ensure overall quality will be more effective. The
figure given below depicts this model.

Figure - Component-wise organization.


STRUCTURES FOR MULTI-PRODUCT COMPANIES
When a company becomes successful as a single-product company, it may decide to
diversify into other products. In such a case, each of the products is considered as a
separate business unit, responsible for all activities of a product. In addition, as before,
there will be common roles like the CTO.
The organization of test teams in multi-product companies is dictated largely by
the following factors.
 How tightly coupled the products are in terms of technology   
 Dependence among various products   
 How synchronous are the release cycles of products   
 Customer base for each product and similarity among customer bases for various
products   
Taking these factors into account, the possible organization structures include:
1. A central “test think-tank/brain trust” team, which formulates the test strategy for
the organization
2. One test team for all the products
3. Different test teams for each product (or related products)
4. Different test teams for different types of tests
5. A hybrid of all the above models
Testing Teams as Part of “CTO's Office”
In a number of situations, the participation of the testing teams comes later in the product
life cycle while the design and development teams get to participate early. However,
testability of a product is as important (if not more important) as its development. Hence,
it makes sense to assign the same level of importance to testing as to development. One
way to accomplish this is to have a testing team report directly to the CTO as a peer to
the design and development teams. The advantages that this model brings to the table are
as follows.
1. Developing a product architecture that is testable or suitable for testing. For
example, the non-functional test requirements are better addressed during
architecture and design; by associating the testing team with the CTO, there is a
better chance that the product design will keep the testing requirements in mind.
2. Testing team will have better product and technology skills. These skills can be
built upfront during the product life cycle. In fact, the testing team can even make
valuable contributions to product and technology choices.
3. The testing team can get a clear understanding of what the design and architecture are
built for and plan their tests accordingly.
4. The technical road map for product development and test suite development will
be in better sync.
5. In the case of a multi-product company, the CTO's team can leverage and optimize
the experiences of testing across the various product organizations/business units
in the company.
6. The CTO's team can evolve a consistent, cost-effective strategy for test
automation.
7. As the architecture and testing responsibilities are with the same person, that is the
CTO, the end-to-end objectives of architecture such as performance, load
conditions, availability requirements, and so on can be met without any ambiguity
and planned upfront.
In this model, the CTO handles only the architecture and test teams. The actual
development team working on the product code can report to a different person, who has
operational responsibilities for the code. This ensures the independence of the testing team.
This group reporting to the CTO addresses issues that have organization-wide
ramifications and need proactive planning. A reason for making them report to the CTO
is that this team is likely to be cross-divisional, and cross-functional. This reporting
structure increases the credibility and authority of the team. Thus, their decisions are
likely to be accepted with fewer questions by the rest of the organization, without much
of a “this decision does not apply to my product as it was decided by someone else” kind
of objection.
This structure also addresses career path issues of some of the top test engineers.
Oftentimes, people perceive a plateau in the testing profession and harbor a
misconception that in order to move ahead in their career, they have to go into
development. This model, wherein a testing role reports to the CTO and has high
visibility, gives them a good target to aim for.
In order that such a team reporting to the CTO be effective,
1. It should be small in number;
2. It should be a team of equals or at most very few hierarchies;
3. It should have organization-wide representation;
4. It should have decision-making and enforcing authority and not just be a
recommending committee; and
5. It should be involved in periodic reviews to ensure that the operations are in line
with the strategy.
Single Test Team for All Products
It may be possible to carry over the single-testing-team model of a single-
product company to a multi-product company. Earlier in this section, we discussed
some criteria of how to organize testing teams. Based on those criteria, a single testing
team for all the products would be possible when the line between the products is
somewhat thin.
This model is similar to the case of a single-product team divided into multiple
components and each of the components being developed by an independent team. The
one major difference between the two is that in the earlier model, the project manager to
whom the testing team reports has direct delivery responsibilities, whereas in the case of
a multi-product company, since different groups/individuals have delivery
responsibilities for different products, the single testing team must necessarily report to a
different level. There are two possibilities.
1. The single testing team can form a “testing business unit” and report into this unit.
This is similar to the “testing services” model to be discussed in the next section.
2. The testing team can be made to report to the “CTO think-tank” discussed earlier.
This may make the implementation of standards and procedures somewhat easier
but may dilute the function of the CTO think-tank to be less strategic and more
operational.
Testing Teams Organized by Product
In a multi-product company, when the products are fairly independent of one another,
having a single testing team may not be very natural. Accountability, decision making,
and scheduling may all become issues with the single testing team. The most natural and
effective way to organize the teams is to assign complete responsibility of all aspects of a
product to the corresponding business unit and let the business unit head figure out how
to organize the testing and development teams. This is very similar to the multi-
component testing teams model.
Depending on the level of integration required among the products, there may be a need for
a central integration testing team. This team handles all the issues pertaining to the
integration of the multiple products. Such an integration team should be cross-product
and hence ideally report into the CTO think-tank.
Separate Testing Teams for Different Phases of Testing
Testing is not a single, homogeneous activity, because:
 There are different types of testing that need to be done—such as black box
testing, system testing, performance testing, integration testing,
internationalization testing, and so on.
 The skill sets required for performing each of these different test types are quite
different from each other. For example, for white box testing, an intimate
knowledge of the program code and programming language are needed. For black
box testing, knowledge of external functionality is needed.
 Each of these different types of tests may be carried out at different points in time.
For example, within internationalization testing, certain activities (such as
enabling testing) are carried out early in the cycle and fake language testing is
done before the product is localized.
As a result of these factors, it is common to split the testing function into different types
and phases of testing. Since the nature of the different types of tests is different
because the people who can ascertain or be directly concerned with the specific types of
tests are different, the people performing the different types of tests may end up reporting
into different groups.
Such an organization based on the testing types presents several advantages.
1. People with appropriate skill sets are used to perform a given type of test.
2. Defects can get detected better and closer to the point of injection.
3. This organization is in line with the V model and hence can lead to effective
distribution of test resources.
The challenge to watch out for is that the test responsibilities are now distributed and
hence it may seem that there is no single point of accountability for testing. The key to
address this challenge is to define objectively the metrics for each of the phases or groups
and track them to completion.
Hybrid Models
The above models are not mutually exclusive or disjoint models. In practice, a
combination of all these models is used, and the models chosen change from time to
time, depending on the needs of the project. For example, during the crunch time of a
project, when a product is near delivery, a multi-component team may act like a single-
component team. During debugging situations, when a problem to do with the integration
of multiple products comes up, the different product teams may work as a single team
and report to the CTO/CEO for the duration of that debugging situation. The various
organization structures presented above can be viewed as simply building blocks that can
be put together in various permutations and combinations, depending on the need of the
situation. The main aim of such hybrid organization structures should be effectiveness
without losing sight of accountability.
7. Explain in detail the role of the three critical groups. (May/Jun 2016, Nov/Dec 2015, Nov/Dec 2016, Nov/Dec 2014, May/Jun 2013)
 In the TMM framework, three groups were identified as critical players in the testing
process. These groups are managers, developers/testers, and users/clients. In
TMM terminology they are called the three critical views (CV).
 At each TMM level the three groups play specific roles in support of the maturity
goals at that level. Critical group participation for all three TMM level 2 maturity
goals is summarized in the following Figure:
Managers:
 Task forces, policies, standards
 Planning
 Resource allocation
 Support for education and training
 Interact with users and clients
Developers/Testers:
 Apply black box and white box methods
 Assist with test planning
 Test at all levels
 Train and mentor
 Participate in task forces
 Interact with users and clients
Users/Clients:
 Specify requirements clearly
 Support with operational profile
 Participate in usability test
 Participate in acceptance test planning
Achievement of the TMM level 2 maturity goals allows the organization to proceed to the TMM level 3 goals.

FIG. Reaching TMM level 2: summary of critical group roles


 Each group views the testing process from a different perspective that is related to
their particular goals, needs, and requirements.
 The manager’s view involves commitment and support for those activities and
tasks related to improving testing process quality.
 The developer/tester’s view encompasses the technical activities and tasks that
when applied, constitute best testing practices.
 The user/client view is defined as a cooperating or supporting view. The
developers/testers work with client/user groups on quality-related activities and
tasks that concern user-oriented needs. The focus is on soliciting client/user
support, consensus, and participation in activities such as requirements analysis,
usability testing, and acceptance test planning.
For the TMM maturity goal, “Develop Testing and Debugging Goals,” the TMM
recommends that project and upper management:
 Provide access to existing organizational goal/policy statements and sample testing
policies from other sources. These serve as policy models for the testing and
debugging domains.
101
 Provide adequate resources and funding to form the committees (team or task
force) on testing and debugging. Committee makeup is managerial, with technical
staff serving as co-members.
 Support the recommendations and policies of the committee by:
 distributing testing/debugging goal/policy documents to project managers,
developers, and other interested staff,
 appointing a permanent team to oversee compliance and policy change
making.
 Ensure that the necessary training, education, and tools to carry out defined
testing/debugging goals are made available.
 Assign responsibilities for testing and debugging.
The activities, tasks, and responsibilities for the developers/testers include:
 Working with management to develop testing and debugging policies and goals.
 Participating in the teams that oversee policy compliance and change management.
 Familiarizing themselves with the approved set of testing/debugging goals and
policies, keeping up-to-date with revisions, and making suggestions for changes
when appropriate.
 When developing test plans, setting testing goals for each project at each level of
test that reflect organizational testing goals and policies.
 Carrying out testing activities that are in compliance with organizational policies.
Users and clients play an indirect role in the formation of an organization’s testing goals
and policies, since these goals and policies reflect the organization's efforts to ensure
customer/client/user satisfaction. Feedback from these groups and from the marketplace
in general has an influence on the nature of organizational testing goals and policies.
Successful organizations are sensitive to customer/client/user needs.
UNIT V
TEST AUTOMATION

Software test automation – skill needed for automation – scope of automation – design
and architecture for automation – requirements for a test tool – challenges in automation
– Test metrics and measurements – project, progress and productivity metrics.

PART – A
1. Define: Test automation.
Developing software in order to test the software is termed test automation.

2. List the types of test cases.


 Manual
 Automated

3. Define SCM. (May/Jun 2013)


Software Configuration Management is a set of activities carried out for identifying,
organizing and controlling changes throughout the lifecycle of computer software.

4. What is the scope of automation? (May/Jun 2016, Nov/Dec 2013, 2014)
 Identifying testing types which are amenable to automation
 Automating areas less prone to change
 Automating tests that pertain to standards
 Management aspects in automation
 Return on investment
5. What are test data generators?
Test data generators are automation scripts that supply test data, to increase the coverage
of permutations and combinations of inputs, and the expected outputs for result comparison.

6. What are the disadvantages of first generation automation?
 Scripts hold hard coded values
 Test maintenance cost is maximized

7. Define: Test measurement process.
The test measurement process is an integral part of tracking. It first collects the data,
which is then analyzed to draw suitable conclusions.
8. Write the need of TMM. (Nov/Dec 2015)
Its aim is to be used in a similar way to CMM, that is, to provide a framework for
assessing the maturity of the test processes in an organisation, and thus provide targets for
improving maturity.
The five levels in the Testing Maturity Model are:
Level 1 – Initial: At this level an organisation is using ad hoc methods for testing, so
results are not repeatable and there is no quality standard.
Level 2 – Definition: At this level testing is defined as a process, so there might be test
strategies, test plans, and test cases based on requirements. Testing does not start until
products are completed, so the aim of testing is to compare products against requirements.
Level 3 – Integration: At this level testing is integrated into the software life cycle, e.g.
the V-model. The need for testing is based on risk management, and the testing is carried
out with some independence from the development area.
Level 4 – Management and measurement: At this level testing activities take place at all
stages of the life cycle, including reviews of requirements and designs. Quality criteria
are agreed for all products of an organisation (internal and external).
Level 5 – Optimization: At this level the testing process itself is tested and improved at
each iteration. This is typically achieved with tool support, and it also introduces aims
such as defect prevention through the life cycle, rather than defect detection (zero defects).

9. What is a test framework?
A test framework is a module which depicts two things:
 What to execute
 How the execution is to be done
10. Name any two test metrics. (May/Jun 2016)
 Project metrics
 Productivity metrics.
11. Differentiate inspection from walkthroughs.
An inspection is a formal, moderated review of a work product against defined criteria.
A walkthrough is an informal meeting in which the author leads the participants through
the details of the product.
12. Write the types of Review. (Nov/Dec 2015,Apr/May 2015,Nov/Dec 2014,2013)
 Inspection
 Walkthrough
 Desk checking
13. Differentiate milestone and deliverables. (Nov / Dec 16)
Test deliverables are the artifacts which are given to the stakeholders of a software
project during the software development lifecycle. There are different test deliverables
at every phase of the software development lifecycle. Milestones are often new
releases of the software. Each new release may have many new features (i.e.,
deliverables) within it.

PART- B

1. Briefly explain software test automation and the skills needed for automation.
(May/Jun 2016, Nov/Dec 2015, 2017)

Test Automation: Automate the running of most of the test cases that are repetitive in
nature. Developing software to test the software is called test automation.
 Automation saves time, as software can execute test cases faster than humans do.
 Test automation can free the test engineers from mundane tasks and make them
focus on more creative tasks.
 Automated tests can be more reliable.
 Automation helps in immediate testing.
 Automation can protect an organization against attrition of test engineers.
 Test automation opens up opportunities for better utilization of global resources.
 Certain types of testing cannot be executed without automation.
 Automation means end-to-end, not test execution alone.
Terms Used in Automation: A test case is a set of sequential steps to execute a
test, operating on a set of predefined inputs to produce certain expected outputs. There
are two types of test cases:
 automated (executed using automation)
 manual (executed manually)
Skills Needed for Automation
The automation of testing is broadly classified into three generations.

First generation: Record and playback
Record and playback avoids the repetitive nature of executing tests. Almost all the test
tools available in the market have the record and playback feature. A test engineer
records the sequence of actions by keyboard characters or mouse clicks and those
recorded scripts are played back later, in the same order as they were recorded. When
there is frequent change, the record and playback generation of test automation tools may
not be very effective.
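As an illustration (a minimal sketch, not any specific vendor tool), a recorded script can be thought of as a stored list of UI actions that a player later re-executes in the captured order; the action names, targets, and the ui driver object below are all hypothetical:

# A recorded session: each step is (action, target, value); names are illustrative.
recorded_script = [
    ("click", "menu_file", None),
    ("click", "menu_open", None),
    ("type", "filename_box", "report.txt"),
    ("click", "ok_button", None),
]

def play_back(script, ui):
    # Re-execute the recorded steps in the same order they were captured.
    for action, target, value in script:
        if action == "click":
            ui.click(target)
        elif action == "type":
            ui.type_text(target, value)

Note that if the UI changes (say, ok_button is renamed), every recorded script that references it must be re-recorded, which is why this generation copes poorly with frequent change.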
Second generation: Data-driven. This method helps in developing test scripts that
generate the set of input conditions and corresponding expected outputs. This enables the
tests to be repeated for different input and output conditions. This generation of
automation focuses on input and output conditions, using the black box testing approach.
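A minimal data-driven sketch in Python: the same test logic is repeated over a table of (input, expected output) rows. The discount() function under test is hypothetical:

# Hypothetical function under test: 10% discount for amounts of 1000 or more.
def discount(amount):
    return amount * 0.9 if amount >= 1000 else amount

# Data table: each row is (input, expected output).
test_data = [
    (500, 500),        # below threshold: no discount
    (1000, 900.0),     # at threshold: 10% off
    (2000, 1800.0),    # above threshold: 10% off
]

for amount, expected in test_data:
    actual = discount(amount)
    assert actual == expected, f"discount({amount}): got {actual}, expected {expected}"
print("all data-driven cases passed")

Adding a new test condition is now a one-line change to the data table, with no change to the test logic.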
Third generation: Action-driven. This technique enables even a layman to create automated
tests; no input and expected output conditions are required for running the tests. All
actions that appear on the application are automatically tested, based on a generic set of
controls defined for automation. The input and output conditions are automatically
generated and used. The scenarios for test execution can be dynamically changed using the
test framework available in this approach of automation. Hence, automation in the third
generation involves two major aspects: test case automation and framework design.
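A sketch of the action (keyword) driven idea, with hypothetical keywords and a hypothetical app adapter object: non-programmers author rows of keywords, and the framework maps each keyword to the code that performs it.

# Keyword table authored without programming; keywords are illustrative.
test_steps = [
    ("open_app", "calculator"),
    ("press", "7"),
    ("press", "+"),
    ("press", "2"),
    ("verify", "9"),
]

def run(steps, app):
    # Map each keyword to the handler that knows how to perform it.
    handlers = {
        "open_app": app.open,
        "press": app.press,
        "verify": app.verify_display,
    }
    for keyword, argument in steps:
        handlers[keyword](argument)

Here app is assumed to expose open, press, and verify_display methods; the framework, not the test author, supplies them.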

2. Explain about design and architecture for automation. (Nov/Dec 2015, Apr/May 2015, Nov/Dec 2014, May/Jun 2014, Nov/Dec 2013)

Design and architecture is an important aspect of automation. As in product
development, the design has to represent all requirements in modules and in the
interactions between modules. In integration testing, both internal interfaces and external
interfaces have to be captured by the design and architecture. The architecture has two
major heads: a test infrastructure that covers a test case database (TCDB) and a defect
database or defect repository. Using this infrastructure, the test framework provides a
backbone that ties together the selection and execution of test cases.
External modules: There are two modules that are external to automation:
TCDB and defect DB. Manual test cases do not need any interaction between the
framework and TCDB. Test engineers submit the defects for manual test cases. For
automated test cases, the framework can automatically submit the defects to the defect
DB during execution. These external modules can be accessed by any module in
automation framework.
Scenario and configuration file modules: Scenarios are information on how to execute
a particular test case. A configuration file contains a set of variables that are used in
automation. A configuration file is important for running the test cases for various
execution conditions and for running the tests for various input and output conditions and
states. The values of variables in this configuration file can be changed dynamically to
achieve different execution input, output and state conditions.
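As an illustration, a configuration file could be a simple INI file whose variables the framework reads at run time; the section and variable names here are hypothetical:

# config.ini (hypothetical contents):
#   [execution]
#   platform = linux
#   build = daily
#   timeout_seconds = 300

import configparser

config = configparser.ConfigParser()
config.read("config.ini")
# Changing these values re-runs the same test cases under different
# execution, input/output, and state conditions without editing test code.
platform = config.get("execution", "platform", fallback="linux")
timeout = config.getint("execution", "timeout_seconds", fallback=300)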
Test cases and test framework modules: A test case is an object for execution for other
modules in the architecture and does not represent any interaction by itself. A test
framework is a module that combines “what to execute” and “how it has to be
executed.” The test framework is considered the core of automation design. It can be

Tools and results modules: When a test framework performs its operations, there are a
set of tools that may be required. For example, when test cases are stored as source code
files in TCDB, they need to be extracted and compiled by build tools. In order to run the
compiled code, certain runtime tools and utilities may be required. The results that come
out of the test must be stored for future analysis. The history of all the previous tests run
should be recorded and kept as archives. These results help the test engineer to compare
the current test run with previous test runs. The audit of all tests that are run and the
related information are stored in the module of automation. This can also help in
selecting test cases for regression runs.
Report generator and reports/metrics modules: Once the results of a test run are
available, the next step is to prepare the test reports and metrics. Preparing reports is a
complex work and hence it should be part of the automation design. The periodicity of
the reports is different, such as daily, weekly, monthly, and milestone reports. Having
reports of different levels of detail can address the needs of multiple constituents and thus
provide significant returns. The module that takes the necessary inputs and prepares a
formatted report is called a report generator. Once the results are available, the report
generator can generate metrics. All the reports and metrics that are generated are stored in
the reports/metrics module of automation for future use and analysis.
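A small sketch of a report generator that turns stored results into a formatted report and one simple metric (pass rate); the stored result format is hypothetical:

# Results as stored by the framework: (test name, status) pairs.
results = [("test_login", "PASS"), ("test_search", "FAIL"), ("test_logout", "PASS")]

def generate_report(results, period="daily"):
    passed = sum(1 for _, status in results if status == "PASS")
    pass_rate = 100.0 * passed / len(results)
    lines = [f"{period.title()} test report", "-" * 24]
    lines += [f"{name:<15} {status}" for name, status in results]
    lines.append(f"Pass rate: {pass_rate:.1f}% ({passed}/{len(results)})")
    return "\n".join(lines)

print(generate_report(results))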

3. Explain in detail about requirements for a test tool and challenges in automation.
 No hard coding in the test suite.
 Test case/suite expandability.
 Reuse of code for different types of testing, test cases.
 Automatic setup and cleanup.
 Independent test cases.
 Test case dependency
 Insulating test cases during execution
 Coding standards and directory structure.

 Selective execution of test cases.
 Random execution of test cases.
 Parallel execution of test cases.
 Looping the test cases
 Grouping of test scenarios
 Test case execution based on previous results.
 Remote execution of test cases.
 Automatic archival of test data.
 Reporting scheme.
 Independent of languages
 Portability to different platforms.
Process Model for Automation : The work on automation can go simultaneously with
product development and can overlap with multiple releases of the product. One specific
requirement for automation is that the delivery of the automated tests should be done
before the test execution phase so that the deliverables from automation effort can be
utilized for the current release of the product. Test automation life cycle activities bear a
strong similarity to product development activities. Just as product requirements need to
be gathered on the product side, automation requirements too need to be gathered.
Similarly, just as product planning, design and coding are done, so also during test
automation are automation planning, design and coding. After introducing testing
activities for both the product and automation, the above figure includes two parallel sets
of activities for development and testing separately.
Selecting a test tool: Having identified the requirements of what to automate, a related
question is the choice of an appropriate tool for automation. Selecting the test tool is an
important aspect of test automation for several reasons given below:
1. Free tools are not well supported and get phased out soon.
2. Developing in-house tools take time.
3. Test tools sold by vendors are expensive.
4. Test tools require strong training.
5. Test tools generally do not meet all the requirements for automation.
6. Not all test tools run on all platforms.
For all the above strong reasons, adequate focus needs to be provided for selecting
the right tool for automation.
Criteria for selecting test tools: Categories for classifying the criteria are
1. Meeting requirements
2. Technology expectations
3. Training/skills and
4. Management aspects.
Meeting requirements: There are plenty of tools available in the market, but they do not
meet all the requirements of a given product. Evaluating different tools for different
requirements involves significant effort, money and time. Secondly, test tools are usually
one generation behind and may not provide backward or forward go through the same
amount of evaluation for new requirements. Finally, a number of test tools cannot
differentiate between a product failure and a test failure. So the test tool must have some
intelligence to proactively find out the changes that happened in the product and
accordingly analyze the results.
Technology expectations
 Extensibility and customization are important expectations of a test tool.
 A good number of test tools require their libraries to be linked with product
binaries. Test tools are not 100% cross-platform. When there is an impact analysis
of the product on the network, the first suspect is the test tool and it is uninstalled
when such analysis starts.
Training skills: Test tools expect the users to learn new language/scripts and may not
use standard languages/scripts. This increases skill requirements for automation and
increases the need for a learning curve inside the organization.
Management aspects
 Test tools require system upgrades.
 Migration to other test tools is difficult.
 Deploying a test tool requires huge planning and effort.
Steps for tool selection and deployment
1. Identify your test suite requirements among the generic requirements discussed.
2. Make sure experiences discussed in previous sections are taken care of.
3. Collect the experiences of other organizations which used similar test tools.
4. Keep a checklist of questions to be asked to the vendors on cost/effort/support.
5. Identify list of tools that meet the above requirements.
6. Evaluate and shortlist one/set of tools and train all test developers on the tool.
7. Deploy the tool across the teams after training all potential users of the tool.
Challenges in Automation
The most important challenge of automation is the management commitment.
Automation takes time and effort and pays off in the long run. Management should have
patience and persist with automation. Successful test automation endeavors are
characterized by unflinching management commitment, a clear vision of the goals, and
tracking of progress with respect to the long-term vision.

4. Explain in detail about the terms used in automation and the scope of automation. (May/Jun 2014)
Terms used in automation : A test case is a set of sequential steps to execute a test
operating on a set of predefined inputs to produce certain expected outputs. There are two
types of test cases namely automated and manual.
A test case can be documented as a set of simple steps, or it could be an assertion
statement or a set of assertions. An example of an assertion is “Opening a file which is
already opened should fail.”
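As a sketch of that assertion in code (using a hypothetical Resource class that forbids double-opens, since real file-open semantics vary by operating system):

# Hypothetical resource that must not be opened twice.
class Resource:
    def __init__(self):
        self.is_open = False
    def open(self):
        if self.is_open:
            raise RuntimeError("already open")
        self.is_open = True

# The test case as an assertion: the second open must fail.
resource = Resource()
resource.open()
try:
    resource.open()
    assert False, "second open unexpectedly succeeded"
except RuntimeError:
    pass  # expected: opening an already-opened resource fails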
Scope of Automation: The specific requirements can vary from product to product, from
situation to situation, from time to time. The following gives some generic tips for
identifying the scope of automation.
Identifying the types of testing amenable to automation: Stress, reliability, scalability,
and performance testing. These types of testing
require the test cases to be run from a large number of different machines for an extended
period of time, such as 24 hours, 48 hours, and so on. Test cases belonging to these
testing types become the first candidates for automation.
Regression tests: Regression tests are repetitive in nature. Given the repetitive nature of
the test cases, automation will save significant time and effort in the long run.
Functional tests: These kinds of tests may require a complex set up and thus require
specialized skills, which may not be available on an ongoing basis. Automating these tests
once, using the expert skills, can enable less-skilled people to run them on
an ongoing basis.
Automating areas less prone to change: User interfaces normally go through
significant changes during a project. To avoid rework on automated test cases, proper
analysis has to be done to find out the areas of changes to user interfaces, and automate
only those areas that will go through relatively less change. The non-user interface
portions of the product can be automated first. This enables the non-GUI portions of the
automation to be reused even when GUI goes through changes.
Automate tests that pertain to standards: One of the tests that products may have to
undergo is compliance to standards. For example, a product providing a JDBC interface
should satisfy the standard JDBC tests. Automating for standards provides a dual
advantage. Test suites developed for standards are not only used for product testing but
can also be sold as test tools in the market. Testing for standards has certain legal
requirements. To certify the software, a test suite is developed and handed over to
different companies. This is called certification testing and requires perfectly compliant
results every time the tests are executed.
Management aspects in automation: Prior to starting automation, adequate effort has to
be spent to obtain management commitment. The automated test cases need to be
maintained till the product reaches obsolescence. Since automation involves effort over
an extended period of time, management permissions are only given in phases and part
by part. It is important to automate the critical and basic functionalities of a product first.
To achieve this, all test cases need to be prioritized
as high, medium, and low, based on customer expectations. Automation should start with the
high-priority test cases and then move on to the medium- and low-priority requirements.

5. What are metrics? Explain their types. (Nov/Dec 2015, Apr/May 2015, 2017, May/Jun 2013)

 The measurement of key parameters is an integral part of tracking.


 Measurements first entail collecting a set of data. But, raw data by itself
may not throw light on why a particular event has happened.
 The collected data have to be analyzed in totality to draw the
appropriate conclusions.
Need for Metrics in Testing: Since testing is the penultimate phase before product release,
it is essential to measure the progress of testing and product quality. Tracking test
progress and product quality can give a good idea about the release—whether it will be
met on time with known quality. Measuring and producing metrics to determine
the progress of testing is thus very important.
Knowing only how much testing got completed does not answer the question on
when the testing will get completed and when the product will be ready for release.
To answer these questions, one needs to know how much more time is needed for
testing. To judge the remaining days needed for testing, two data points are needed—
remaining test cases yet to be executed and how many test cases can be executed per
elapsed day. The test cases that can be executed per person day are calculated based on
a measure called test case execution productivity. This productivity number is derived
from the previous test cycles and is given by:
Test case execution productivity = (total test cases executed) / (total execution effort in person days)
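For example (illustrative numbers only): if 400 test cases remain to be executed and the previous cycle showed a productivity of 20 test cases per person day, then roughly 400 / 20 = 20 person days of test execution effort remain.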
Types of Metrics
Metrics can be classified into project metrics, progress metrics, and productivity metrics.
Project Metrics: A typical project starts with requirements gathering and ends
with product release. All the phases that fall in between these points need to be
planned and tracked. In the planning cycle, the scope of the project is finalized. The
project scope gets translated to size estimates, which specify the quantum of work to
be done. This size estimate gets translated to effort estimate for each of the phases and
activities by using the available productivity data. This initial effort is
called the baselined effort.
As the project progresses and if the scope of the project changes or if the
available productivity numbers are not correct, then the effort estimates are re-
evaluated, and this re-evaluated effort estimate is called the revised effort. The
estimates can change based on the frequency of changing requirements and other
parameters that impact the effort.
Progress Metrics: Any project needs to be tracked from two angles. One is how well the
project is doing with respect to effort and schedule. This is the angle we have been
looking at so far in this chapter. The other equally important angle is to find out how
well the product is meeting the quality requirements for the release. There is no point in
producing a release on time and within the effort estimate but with a lot of defects,
causing the product to be unusable. One of the main objectives of testing is to find as
many defects as possible before any customer finds them. The number of defects that are
found in the product is one of the main indicators of quality. Hence in this section, we
will look at progress metrics that reflect the defects (and hence the quality) of a
product.
Productivity Metrics: Productivity metrics combine several measurements and
parameters with effort spent on the product. They help in finding out the capability of
the team as well as for other purposes, such as
 Estimating for the new release.
 Finding out how well the team is progressing, understanding the
reasons for (both positive and negative) variations in results.
 Estimating the number of defects that can be found.
 Estimating the cost involved in the release.
6. Write the technological developments that cause organizations to revise their
approach to testing; also write the criteria and methods involved while establishing
a testing policy. (Nov / Dec 2015)

Criteria for a Testing Policy


A testing policy involves the following four criteria:
 Definition of testing. A brief but clear definition of testing.
 Testing system. The method through which testing will be achieved and
enforced.
 Evaluation. How information services management will measure and evaluate
testing.
 Standards. The standards against which testing will be measured.
Good testing does not just happen, it must be planned; and a testing policy should be the
cornerstone of that plan. A good practice is for management to establish the testing
policy for the IT department, have all members of IT management sign that policy as
their endorsement and intention to enforce that testing policy, and then prominently
display that endorsed policy where everyone in the IT department can see it.
IT management normally assumes that their staff understands the testing function and
what management wants from testing. Exactly the opposite is typically true. Testing
often is not clearly defined, nor is management’s intent made known regarding their
desire for the type and extent of testing.
IT departments frequently adopt testing tools such as a test data generator, make the
system programmer/analyst aware of those testing tools, and then leave it to the
discretion of the staff how testing is to occur and to what extent. In fact, many “anti-
testing” messages may be indirectly transmitted from management to staff. For example,
pressure to get projects done on time and within budget is an anti-testing message from
management. The message says, “I don’t care how you get the system done, but get it
done on time and within budget,” which translates to the average systems analyst/
programmer as “Get it in on time even if it isn’t tested.”
Methods for Establishing a Testing Policy
The following three methods can be used to establish a testing policy:

1. Management directive. One or more senior IT managers write the policy. They
determine what they want from testing, document that into a policy, and issue it to
the department. This is an economical and effective method to write a testing
policy; the potential disadvantage is that it is not an organizational policy, but
rather the policy of IT management.
2. Information services consensus policy. IT management convenes a group of the
more senior and respected individuals in the department to jointly develop a
policy. While senior management must have the responsibility for accepting and
issuing the policy, the development of the policy is representative of the thinking
of all the IT department, rather than just senior management. The advantage of this
approach is that it involves the key members of the IT department. Because of this
participation, staff is encouraged to follow the policy. The disadvantage is that it is
an IT policy and not an organizational policy.
3. Users’ meeting. Key members of user management meet in conjunction with the
IT department to jointly develop a testing policy. Again, IT management has the
final responsibility for the policy, but the actual policy is developed using people
from all major areas of the organization. The advantage of this approach is that it
is a true organizational policy and involves all of those areas with an interest in
testing. The disadvantage is that it takes time to follow this approach, and a policy
might be developed that the IT department is obligated to accept because it is a
consensus policy and not the type of policy that IT itself would have written.

7. a. Discuss the different test process activities of software testing in detail.
(Apr/May 15)
Testing is a process rather than a single activity. This process starts with test planning,
then designing test cases, preparing for execution and evaluating status, till the test
closure. So, we can divide the activities within the fundamental test process into the
following basic steps:
1) Planning and Control
2) Analysis and Design
3) Implementation and Execution
4) Evaluating exit criteria and Reporting
5) Test Closure activities
1) Planning and Control:
Test planning has the following major tasks:
i.  To determine the scope and risks and identify the objectives of testing.
ii. To determine the test approach.
iii. To implement the test policy and/or the test strategy
iv. To determine the required test resources like people, test environments, PCs,
etc.
v. To schedule test analysis and design tasks, test implementation, execution and
evaluation.
vi. To determine the exit criteria, we need to set criteria such as coverage
criteria.
 Test control has the following major tasks:
o To measure and analyze the results of reviews and testing.
o To monitor and document progress, test coverage and exit criteria.
o To provide information on testing.
o To initiate corrective actions.
o To make decisions.
2)  Analysis and Design:
Test analysis and Test Design has the following major tasks:
o To review the test basis. 
o To identify test conditions.
o To design the tests.
o To evaluate testability of the requirements and system.
o To design the test environment set-up and identify and required
infrastructure and tools.
3)  Implementation and Execution:
During test implementation and execution, we turn the test conditions into test cases and
procedures and other testware, such as scripts for automation, and set up the test
environment and any other test infrastructure.
Test implementation has the following major tasks:
 To develop and prioritize our test cases by using techniques and create test
data for those tests.
  To create test suites from the test cases for efficient test execution.
  To implement and verify the environment.
Test execution has the following major tasks:
o To execute test suites and individual test cases following the test
procedures.
o  To re-execute the tests that previously failed in order to confirm a fix. This
is known as confirmation testing or re-testing.
o To log the outcome of the test execution and record the identities and
versions of the software under test. The test log is used for the audit trail.
o To compare actual results with expected results.
o Where there are differences between actual and expected results, report the
discrepancies as incidents.

4)  Evaluating Exit criteria and Reporting:
Based on the risk assessment of the project we will set the criteria for each test level
against which we will measure the “enough testing”. These criteria vary from project to
project and are known as exit criteria.
Exit criteria come into the picture when:
— Maximum test cases are executed with a certain pass percentage.
— The bug rate falls below a certain level.
— The deadlines are reached.
Evaluating exit criteria has the following major tasks:
o To check the test logs against the exit criteria specified in test planning.
o To assess if more tests are needed or if the exit criteria specified should be
changed.
o To write a test summary report for stakeholders.
5)  Test Closure activities:
Test closure activities are done when the software is delivered. Testing can also be closed
for other reasons, such as:
 When all the information needed for the testing has been gathered.
 When a project is cancelled.
 When some target is achieved.
 When a maintenance release or update is done.
Test closure activities have the following major tasks:
o To check which planned deliverables are actually delivered and to ensure
that all incident reports have been resolved.
o To finalize and archive testware such as scripts, test environments, etc. for
later reuse.
o To hand over the testware to the maintenance organization, which will give
support to the software.

o To evaluate how the testing went and learn lessons for future releases and
projects.

7 b. Write short notes on website testing. (Nov / Dec 16)


Website testing is the name given to software testing that focuses on web applications.
Complete testing of a web-based system before going live can help address issues before the
system is revealed to the public.
Website testing checklists:
1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing
1) Functionality Testing: Test for – all the links in web pages, database connection, forms used
for submitting or getting information from user in the web pages, Cookie testing etc.
Check all the links:
 Test the outgoing links from all the pages to the specific domain under test.
 Test all internal links.
 Test links jumping on the same pages.
 Test links used to send email to admin or other users from web pages.
 Test to check if there are any orphan pages.
 Finally, link checking includes checking for broken links in all the above-mentioned links; a small automated sketch follows.
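A minimal broken-link checker sketch using only the Python standard library (the base URL is a placeholder):

from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    # Collects the href of every anchor tag on a page.
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [value for name, value in attrs if name == "href"]

base = "https://example.com/"  # placeholder site under test
collector = LinkCollector()
collector.feed(urlopen(base).read().decode("utf-8", errors="ignore"))

for link in collector.links:
    url = urljoin(base, link)  # resolves relative and internal links
    try:
        urlopen(url)  # a 4xx/5xx response raises HTTPError
    except (HTTPError, URLError) as err:
        print("BROKEN:", url, err)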
Test forms in all pages:
Forms are the integral part of any website. Forms are used for receiving information from users
and to interact with them. So what should be checked on these forms?
 First check all the validations on each field.
 Check for default values of the fields.
 Check wrong inputs to the fields in the forms.
 Check options to create, delete, view, or modify forms, if any.
Cookies Testing:
Cookies are small files stored on the user machine. These are basically used to maintain the
session- mainly the login sessions. Test the application by enabling or disabling the cookies in
your browser options. Test if the cookies are encrypted before writing to user machine. If you are
testing the session cookies (i.e. cookies that expire after the session ends) check for login
sessions and user stats after session ends. Check effect on application security by deleting the
cookies.
Validate your HTML/CSS:
If you are optimizing your site for Search engines then HTML/CSS validation is the most
important one. Mainly validate the site for HTML syntax errors. Check if the site is crawlable
by different search engines.
Database testing: Data consistency is also very important in web application. Check for data
integrity and errors while you edit, delete, modify the forms or do any DB related functionality.
Check if all the database queries are executing correctly, and that data is retrieved and
updated correctly. Database testing could also cover load on the DB; we address this under
web load and performance testing below.
2) Usability Testing:
Test for navigation:
Navigation means how a user surfs the web pages, uses different controls like buttons and
boxes, or how the user uses the links on the pages to surf different pages.
Usability testing includes the following:
 Website should be easy to use.
 Instructions provided should be very clear.
 Check if the instructions provided are perfect to satisfy its purpose.
 Main menu should be provided on each page.
 It should be consistent enough.
Content checking:
Content should be logical and easy to understand. Check for spelling errors. Usage of dark
colours annoys users, and they should not be part of the site theme. You can follow the
commonly accepted standards for colours, fonts, frames, etc. that are used for web page and
content building.
Content should be meaningful. All the anchor text links should be working properly. Images
should be placed properly with proper sizes.
These are some of basic important standards that should be followed in web development. The
task is to validate all for UI testing.
3) Interface Testing: The main interfaces are:
 Web server and application server interface
 Application server and Database server interface.
Check if all the interactions between these servers are executed and errors are handled properly.
If database or web server returns any error message for any query by application server then
application server should catch and display these error messages appropriately to the users.
Check what happens if user interrupts any transaction in-between? Check what happens if
connection to the web server is reset in between?
4) Compatibility Testing:
Compatibility of your website is a very important testing aspect. See which compatibility test to
be executed:
 Browser compatibility
 Operating system compatibility
 Mobile browsing
 Printing options
Browser compatibility:
This is often the most influential aspect of website testing.
Some applications are very dependent on browsers. Different browsers have different
configurations and settings that your web page should be compatible with. Your website code
should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI
functionality, performing security checks or validations, then give more stress to browser
compatibility testing of your web application. Test the web application on different browsers like
Internet explorer, Firefox, Netscape navigator, AOL, Safari, Opera browsers with different
versions.
OS compatibility: Some functionality in your web application may not be compatible
with all operating systems. All new technologies used in web development, like graphic designs and
interface calls like different APIs, may not be available in all operating systems. Hence test your
web application on different operating systems like Windows, Unix, MAC, Linux, and Solaris with
different OS flavors.
Mobile browsing: Test your web pages on mobile browsers. Compatibility issues may be there
on mobile devices as well.
Printing options: If you are giving page-printing options then make sure fonts, page alignment,
page graphics, etc., are getting printed properly. Pages should fit the paper size or the
size mentioned in the printing option.
5) Performance testing:
A web application should sustain heavy load. Web performance testing should include:
 Web Load Testing
 Web Stress Testing
Test application performance on different internet connection speed.
Web load testing: Test whether many users can access or request the same page concurrently.
Can the system sustain peak load times? The site should handle many simultaneous user
requests, large input data from users, simultaneous connections to the DB, heavy load on
specific pages, etc.
Web stress testing: Generally, stress means stretching the system beyond its specified limits.
Web stress testing is performed to break the site by applying stress, and it is checked how
the system reacts to stress and how it recovers from crashes. Stress is generally applied to
input fields, login and sign-up areas.
In web performance testing website functionality on different operating systems and different
hardware platforms is checked for software and hardware memory leakage errors.
6) Security Testing:
Following are some of the test cases for web security testing:
 Test by pasting internal URL directly onto the browser address bar without login. Internal
pages should not open.
 If you are logged in using username and password and browsing internal pages then try
changing URL options directly. I.e. If you are checking some publisher site statistics with
publisher site ID= 123. Try directly changing the URL site ID parameter to different site
ID which is not related to the logged in user. Access should be denied for this user to
view others stats.
 Try some invalid inputs in input fields like login username, password, and input text
boxes. Check the system's reaction to all invalid inputs.
 Web directories or files should not be accessible directly unless they are given download
option.
 Test the CAPTCHA against automated script logins.
 Test if SSL is used for security measures. If used, a proper message should be displayed
when the user switches from non-secure http:// pages to secure https:// pages and vice versa.
 All transactions, error messages, security breach attempts should get logged in log files
somewhere on the web server.
7. c. With examples explain the following black box testing techniques:
 Requirements based testing
 Positive and negative testing
 State based testing
 User documentation and compatibility testing

Requirements based testing


Requirements based testing is a testing approach in which test cases, conditions and data are
derived from requirements. It includes functional tests and also non-functional attributes such as
performance, reliability or usability.
Stages in Requirements based Testing:
 Defining Test Completion Criteria - Testing is completed only when all the functional
and non-functional testing is complete.
 Design Test Cases - A Test case has five parameters namely the initial state or
precondition, data setup, the inputs, expected outcomes and actual outcomes.
 Execute Tests - Execute the test cases against the system under test and document the
results.
 Verify Test Results - Verify if the expected and actual results match each other.
 Verify Test Coverage - Verify if the tests cover both functional and non-functional
aspects of the requirement.
 Track and Manage Defects - Any defects detected during the testing process goes
through the defect life cycle and are tracked to resolution. Defect Statistics are
maintained which will give us the overall status of the project.
Requirements Testing process:
 Testing must be carried out in a timely manner.
 Testing process should add value to the software life cycle, hence it needs to be effective.
 Testing the system exhaustively is impossible hence the testing process needs to be
efficient as well.
 Testing must provide the overall status of the project, hence it should be manageable.
Positive Testing: When the tester tests the application from a positive point of view, it is
known as positive testing. Testing the application with valid input and data is known as
positive testing. A positive test is designed to check that the application works correctly;
the aim of the tester is to pass the application. This is sometimes called clean testing,
that is, “test to pass.”
Negative Testing: When the tester tests the application from a negative point of view, it is
known as negative testing. Testing the application with invalid input and data is known as
negative testing.
Example of positive testing is given below:
Suppose the length of the password is defined in the requirements as 6 to 20 characters.
Whenever we check the application by giving alphanumeric input of between 6 and 20
characters in the password field, it is positive testing, because we test the application
with valid data/input.
Example of negative testing is given below:
A phone number field should accept only numbers, not alphabets or special characters. If we
type alphabets and special characters into the phone number field to check whether it
accepts them, it is negative testing. A small test sketch covering both cases follows.
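A minimal sketch of one positive and one negative case for the password rule above, using Python's unittest; the validator itself is a hypothetical stand-in for the application:

import unittest

def is_valid_password(password):
    # Hypothetical rule from the requirements: 6 to 20 alphanumeric characters.
    return 6 <= len(password) <= 20 and password.isalnum()

class PasswordTests(unittest.TestCase):
    def test_positive_valid_length(self):
        # Positive test: valid input should be accepted ("test to pass").
        self.assertTrue(is_valid_password("abc123xy"))
    def test_negative_too_short(self):
        # Negative test: invalid input should be rejected.
        self.assertFalse(is_valid_password("ab1"))

if __name__ == "__main__":
    unittest.main()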
State Based Testing:
State based means the change from one state to another. State based testing is useful for
generating test cases for state machines, as they have dynamic behavior (multiple states).
We can explain this using a state transition diagram, which is a graphic representation of a
state machine.
For example, consider the behavior of a mixer grinder. The state transitions will be like:
 switch on -- turn towards 1 then 2 then 3 then turn backwards to 2 then 1 then off
 switch on - directly turn backwards to 3 then turn towards to off then turn towards 1 then
2 then 3 then turn backwards to 2 then 1 then off
Each position represents a state of the machine. In this way we can draw the state transition
diagram; a small sketch in code follows the steps below. Valid test cases can be generated by:
 Start from the start state
 Choose a path that leads to the next state
 If you encounter an invalid input in a given state, generate an error condition test case
 Repeat the process till you reach the final state
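A minimal sketch of the mixer grinder as a state machine in Python, under the simplifying assumption that only adjacent-speed transitions are valid:

# Valid transitions: state -> set of states reachable in one step.
transitions = {
    "off": {"1"},
    "1": {"off", "2"},
    "2": {"1", "3"},
    "3": {"2"},
}

def check_path(path):
    # Walk a test path; flag the step where an invalid transition occurs.
    for current, nxt in zip(path, path[1:]):
        if nxt not in transitions[current]:
            return f"invalid transition {current} -> {nxt}: error-condition test case"
    return "valid path: positive test case"

print(check_path(["off", "1", "2", "3", "2", "1", "off"]))  # a valid path
print(check_path(["off", "3"]))  # invalid input in the "off" state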

User documentation and compatibility


Documentation Testing involves testing of the documented artifacts that are usually developed
before or during the testing of Software.
Documentation for Software testing helps in estimating the testing effort required, test coverage,
requirement tracking/tracing, etc. This section includes the description of some commonly used
documented artifacts related to Software development and testing, such as:
 Test Plan
 Requirements
 Test Cases
 Traceability Matrix
Compatibility testing is a non-functional testing conducted on the application to evaluate the
application's compatibility within different environments. It can be of two types - forward
compatibility testing and backward compatibility testing.
 Operating system compatibility testing - Linux, Mac OS, Windows
 Database compatibility testing - Oracle, SQL Server
 Browser compatibility testing - IE, Chrome, Firefox
 Other System Software - Web server, networking/ messaging tool, etc.
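In practice the same functional checks are executed over a matrix of such environments. Below
is a minimal sketch using pytest parameterization; the browser/OS lists and the
check_homepage_renders() helper are hypothetical placeholders (a real suite would drive an
actual browser, e.g. via Selenium).

# Minimal sketch of a browser/OS compatibility matrix with pytest.
import itertools
import pytest

BROWSERS = ["IE", "Chrome", "Firefox"]
OPERATING_SYSTEMS = ["Linux", "Mac OS", "Windows"]

def check_homepage_renders(browser, os_name):
    # Placeholder for a real environment-specific check.
    return True

@pytest.mark.parametrize(
    "browser,os_name",
    list(itertools.product(BROWSERS, OPERATING_SYSTEMS)))
def test_homepage_compatibility(browser, os_name):
    # The same check must pass in every browser/OS combination.
    assert check_homepage_renders(browser, os_name)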
Industrial / practical connectivity of the subject
There are several important trends in the software testing world that will alter the landscape
that testers find themselves in today:
 Mobile application testing (rapid growth in mobile testing)
 Cloud-based testing (virtualization and cloud computing)
 Testing in the Agile development environment
 Context-driven testing
 Security testing
 Crowdsourced testing
 Proliferation of open-source testing tools
 Tester certification.
QUESTION BANK
Question Paper Code : 51745
B.E./B.Tech. DEGREE EXAMINATION, MAY/JUNE 2016
Seventh Semester
Computer Science and Engineering
IT 2032/IT 702/10177 ITE 24/10144 CSE 15 – SOFTWARE TESTING
(Common to Information Technology)
(Regulations 2008/2010)
(Common to PTIT 2032/10144 CSE 15 – Software Testing for B.E. (Part-Time)
Sixth Semester Computer Science and Engineering – Regulations 2009/2010)
Time : Three Hours Maximum : 100 Marks
Answer All Questions.
PART A - (10 x 2 = 20 Marks)
1. Define software quality. [Page No 9]
2. What is a test case? Give an example. [Page No 8]
3. State the difference between white-box testing and black-box testing.[Page No 23]
4. What is boundary value analysis? Give an example. [Page No 32]
5. Define regression testing. [Page No 48]
6. What is alpha testing? [Page No 51]
7. List the organization structures for testing teams. [Page No 76]
8. What are the skills needed by a test specialist? [Page No 76]
9. State the advantages of using automated tools for software testing.[Page No 105]
10. What is a metric? Give examples for software metrics. [Page No 113]
PART B - (5 x 16 = 80 Marks)
11. (a) “Principles play an important role in all engineering disciplines and are usually
introduced as part of an educational background in each branch of engineering”.
List and discuss the software testing principles related to execution-based testing.
[Page No 12] (16)
OR
(b) What is a defect? List the origins of defects and discuss the developer/tester
support for developing a defect repository. [Page No 15]
12. (a) Consider the following set of requirements for the triangle problem:
R1: If x < y + z and y < x + z and z < x + y then it is a triangle
R2: If x ≠ y and x ≠ z and y ≠ z then it is a scalene triangle
R3: If x = y or x = z or y = z then it is an isosceles triangle
R4: If x = y and y = z and z = x then it is an equilateral triangle
R5: If x > y + z or y > x + z or z > x + y then it is impossible to construct a
triangle. Now, consider the following causes and effects for the triangle problem:
Causes (inputs):
 C1 : Side “x” is less than sum of “y” and “z”
 C2 : Side “y” is less than sum of “x” and “z”
 C3 : Side “z” is less than sum of “x” and “y”
 C4 : Side “x” is equal to side “y”
 C5 : Side “x” is equal to side “z”
 C6 : Side “y” is equal to side “z”
Effects:
 E1 : Not a triangle
 E2 : Scalene triangle
 E3 : Isosceles triangle
 E4 : Equilateral triangle
 E5 : Impossible
What is a cause-effect graph? Model a cause-effect graph for the above.
[Page No 70] (16)
OR
(b) Consider the following fragment of code :
i = 0;
while (i < n - 1) do
    j = i + 1;
    while (j < n) do
        if A[i] < A[j] then
            swap(A[i], A[j]);
    end do;
    i = i + 1;
end do;
Identify bug(s), if any, in the above program segment, and modify the code if you have
identified bug(s). Construct a control flow graph and compute the cyclomatic
complexity. [Page No 72] (16)
13. (a) What is unit testing? Explain with an example the process of designing the
unit tests, running the unit tests and recording results. [Page No 51] (16)
OR
(b) What is integration testing? Explain with examples the different types of
integration testing. [Page No 58] (16)
14. (a) What is a test plan? List and explain the test plan components. [Page No 78]
(16)
OR
(b) Explain the role played by the managers, developers/testers, and users/clients
in testing planning and test policy development. [Page No 99] (16)
15. (a) What is software test automation? State the major objectives of software test
automation and discuss the same. [Page No 105] (16)
OR
(b) Discuss with diagrammatic illustration the testing maturity model.
[Out of syllabus as per Regulation 2013] (16)
__________________
Question Paper Code : 21745
B.E./B.Tech. DEGREE EXAMINATION, NOVEMBER/DECEMBER 2015
Seventh Semester
Computer Science and Engineering
IT 2032/IT 702/10177 ITE 24/10144 CSE 15 – SOFTWARE TESTING
(Common to Information Technology)
(Regulations 2008/2010)
(Common to PTIT 2032/10144 CSE 15 – Software Testing for B.E. (Part-Time)
Sixth Semester Computer Science and Engineering – Regulations 2009/2010)
Time : Three Hours Maximum : 100 Marks
Answer All Questions.
PART A - (10 x 2 = 20 Marks)
1. What information would a test case contain? [Page No 8]
2. Define the following terms: Error, Fault and Failure. [Page No 8]
3. Write the samples of cause and effect notations. [Page No 27]
4. How does the black box testing strategy differ from white box? [Page No 23]
5. Write the levels of testing. [Page No 50]
6. Why should we use Ad-hoc testing? [Page No 48]
7. What role do users/clients play in the development of a test plan for a project?
[Page No 77]
8. Define Test log. [Page No 76]
9. Write the need of the testing maturity model. [Page No 103]
10. Write the types of reviews. [Page No 104]
PART B – (5 x 16 = 80 Marks)
11. (a) Write the technological developments that cause organizations to revise their
approach to testing; also write the criteria and methods involved while
establishing a testing policy. [Page No 115] (16)
Or
(b) Explain the four steps involved in developing a test strategy, and with an
example create a sample test strategy. (16)
12. (a) Compare functional and structural testing with its advantages and
disadvantages. [Page No 42] (16)
Or
(b) (i) Draw the flowchart for testing technique/tool selection process. (8)
(ii) Explain the following testing concepts: [Page No 44]
(1) Dynamic versus static testing (4)
(2) Manual versus automated testing. (4)
13. (a) Write the importance of security testing. What are the consequences of
security breaches? Also write the various areas which have to be focused on
during security testing. [Page No 64] (16)
Or
(b) Explain the phases involved in unit test planning and how you will design
the unit tests. [Page No 51] (16)
14. (a) Write the various personal, managerial and technical skills needed by a
test specialist. [Page No 88] (16)
Or
(b) Write the essential high level items that are included during test planning;
also write the hierarchy of test plans. [Page No 78] (16)
15. (a) Explain SCM and its activities. [Not in 2013 regulation] (16)
Or
(b) Explain the five steps in software quality metrics methodology adopted
from IEEE standard. [Page No 21] (16)
____________________
Question Paper Code : 71745
B.E./B.Tech. DEGREE EXAMINATION, APRIL/MAY 2015
Seventh Semester
Computer Science and Engineering
IT 2032/IT 702/10177 ITE 24/10144 CSE 15 – SOFTWARE TESTING
(Common to Information Technology)
(Regulations 2008/2010)
(Common to PTIT 2032/10144 CSE 15 – Software Testing for B.E. (Part-Time)
Fifth / Sixth Semester Computer Science and Engineering – Regulations 2009/2010)
Time : Three Hours Maximum : 100 Marks
Answer All Questions.
PART A - (10 x 2 = 20 Marks)
1. Define Test Oracle and Test Bed. [Page No 9]
2. Mention the quality attributes of software. [Page No 07]
3. Define Test Adequacy Criteria. [Page No 27]
4. Draw the notations used in cause effect graph. [Page No 27]
5. State the purpose of Defect Bash testing. [Page No 49]
6. Write the major activities followed in internationalization testing. [Page No 47]
7. List down the skills needed by a test specialist. [Page No 76]
8. List the internal and external dependencies for executing WBS. [Page No 74]
9. Write the different types of reviews practiced by the software industry. [Page No 104]
10. Differentiate effort and schedule. [Page No 77]
PART B – (5 x 16 = 80 Marks)
11. (a) Discuss different testing principles being followed in Software Testing.
[Page No 12]
Or
(b) Describe the defect classes in detail with example. [Page No 17]
12. (a) Define White Box testing. Draw the CFG for the program P. Identify distinct
paths and calculate the cyclomatic complexity of P. Write suitable test
cases to satisfy all distinct paths. [Page No 36]
Program P
1. begin
2.   int num, product;
3.   bool done;
4.   product = 1;
5.   input(done);
6.   while (!done) {
7.     input(num);
8.     if (num > 0)
9.       product = product * num;
10.    input(done);
11.  }
12.  output(product);
13. end.
Or
(b) Consider an application App that takes two inputs, name and age, where
name is a nonempty string containing at most 20 alphabetic characters and age is
an integer that must satisfy the constraint 0 < age < 80. The App is required to
display an error message if the input value provided for age is out of range. The
application truncates any name that is more than 20 characters in length and
generates an error message if an empty string is supplied for name. Construct test
data for App using the following techniques: [Page No 30]
(i) uni-dimensional equivalence partitioning
(ii) multi-dimensional equivalence partitioning
(iii) boundary value analysis technique.
13. (a) Discuss in detail the different types of integration testing. [Page No 58]
Or
(b) Discuss the levels of testing adapted to test OO systems. [Page No 66]
14. (a) Discuss the roles and responsibilities of a testing services organization with a
suitable organization structure. [Page No 89]
Or
(b) Discuss the different test process activities of software testing in detail.
[Page No 117]
15. (a) Elaborate on the different types of software metrics and measurements used.
[Page No 113]
Or
(b) Explain the design and architecture for test automation with examples.
[Page No 106]
____________________________
Question Paper Code : 80592
B.E./B.Tech. DEGREE EXAMINATION, NOVEMBER/DECEMBER 2016
Sixth Semester
Computer Science and Engineering
IT6004– SOFTWARE TESTING
(Common to Information Technology)
(Regulations 2013)
Time : Three Hours Maximum : 100 Marks
Answer All Questions.
PART A - (10 x 2 = 20 Marks)
1. Mention the objectives of software testing. [Page No 9]
2. Define Defects with example. [Page No 8]
3. Sketch the control flow graph for an ATM withdrawal system. [Page No 108]
4. Give a note on the procedure to compute cyclomatic complexity. [Page No 26]
5. List out types of system testing. [Page No 48]
6. Compare and contrast Alpha Testing and Beta Testing. [Page No 46]
7. Discuss the role of the manager in the test group. [Page No 100]
8. What are the issues in testing object oriented system? [Page No 70]
9. Mention the criteria for selecting test tool. [Page No 108]
10. Distinguish between milestone and deliverable. [Page No 104 ]
PART B – (5 x 16 = 80 Marks)
11. (a) Elaborate on the principles of software testing and summarize the tester's role
in the software development organization. [Page No 12] (16)
Or
(b) Explain in detail the processing and monitoring of defects with the defect
repository. [Page No 17] (16)
12. (a) Demonstrate the various black box test cases using equivalence class
partitioning and boundary value analysis to test a module for a payroll system.
[Page No 30] (16)
Or
(b) (i) Explain the various white box techniques with suitable test cases.
[Page No 36] (8)
(ii) Discuss in detail code coverage testing. [Page No 38] (8)
13. (a) Explain the different integration strategies for procedures and functions with
suitable diagrams. [Page No 58] (16)
Or
(b) How would you identify the hardware and software for configuration
testing? Explain the testing techniques applied for website testing. (16)
[Page No 63]
14. (a) i. What are the skills needed by a test specialist? [Page No 88] (8)
ii. Explain the organizational structure for testing teams in single product
companies. [Page No 89] (8)
Or
(b) i. Explain the components of a test plan in detail. [Page No 80] (8)
ii. Compare and contrast the role of debugging goals and policies in testing. (8)
15. (a) Explain the design and architecture for automation and outline the
challenges. [Page No 106] (16)
Or
(b) What are metrics and measurements? Illustrate the types of product metrics.
[Page No 113] (16)