
Fundamentals of Software Testing
-Ruchi K. Sharma



Outlines
1. Why is testing necessary?

2. What is testing?

3. Testing principles

4. Fundamental test process

5. The psychology of testing



FUNDAMENTAL TEST PROCESS
 We can divide the activities within the fundamental test process into the
following basic phases:

Test Planning and Control;

Test Analysis and Design;

Test Implementation and Execution;

Evaluating Exit Criteria and Reporting;

Test Closure Activities.

Test phases may overlap


FUNDAMENTAL TEST PROCESS

 Testing throughout the software development process

 Testing is more than test execution

 Each phase of the testing process takes place concurrently with the phases of the software development process



FUNDAMENTAL TEST PROCESS

 1. Test planning - major tasks:

Determine the scope and risks and identify the objectives of testing
Determine the test approach (techniques, test items, identifying and interfacing with the teams involved in testing)
Implement the test policy and/or the test strategy
Determine the required test resources (e.g. people, test environment, PCs)
Schedule test analysis and design tasks, test implementation, execution and evaluation
Determine the exit criteria



FUNDAMENTAL TEST PROCESS

 Master Test Plan

 A document describing the scope, approach, resources and schedule of intended test activities.
 It identifies, among other things, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and the entry and exit criteria to be used, the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]
 Test Strategy
 A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects).
 Test Approach
 The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out, the starting points for the test process, the test design techniques to be applied, the exit criteria and the test types to be performed.
 Exit Criteria
 The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. [After Gilb and Graham]
FUNDAMENTAL TEST PROCESS
 2. Test control - major tasks
Measure and analyse the results of reviews and testing
Monitor and document progress, test coverage and exit criteria
Provide information on testing
Initiate corrective actions
Make decisions, or enable others to make decisions: to continue testing, to stop testing, to release the software or to retain it for further work

Test control is an ongoing activity influencing test planning. The master test plan may be modified according to the information acquired from test controlling
The status of the test process is determined by comparing the progress achieved against the test plan. Necessary activities will be started accordingly
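
A minimal sketch (not from the slides) of how the test-control status comparison against the plan might look when done programmatically; all counts, thresholds and field names below are hypothetical illustrations.

```python
# Hypothetical illustration: compare progress achieved against the test plan.
# All figures and thresholds below are invented for the example.

planned = {"test_cases": 120, "must_pass_rate": 0.95}
progress = {"executed": 80, "passed": 74, "open_blocking_defects": 2}

execution_ratio = progress["executed"] / planned["test_cases"]
pass_rate = progress["passed"] / progress["executed"]

print(f"Executed {execution_ratio:.0%} of planned tests, pass rate {pass_rate:.0%}")

# Decision input for test control: continue testing, or initiate corrective action.
if pass_rate < planned["must_pass_rate"] or progress["open_blocking_defects"] > 0:
    print("Corrective action needed before testing can be considered on track.")
else:
    print("On track against the master test plan.")
```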
FUNDAMENTAL TEST PROCESS
 3. Test Analysis and Design - major tasks /1
 We take the general testing objectives identified during planning and build test designs and test procedures (scripts)
 In this phase we only need the specifications, not the software itself, so the developers can continue programming in parallel

 Review the test basis (such as the product risk analysis, requirements, architecture, design specifications, and interfaces), examining the specifications for the software we are testing

 Design tests/test cases before the code exists, as we can use the test basis documents to understand what the system should do once built
 Create and prioritize logical/high-level test cases (test cases without specific values for test data) - see the sketch below
 Positive tests check the specified functionality; negative tests check the handling of error situations
 Choose representative tests that relate to particular aspects of the software which carry risks or which are of particular interest
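
A minimal sketch of the difference between a logical (high-level) test case and the concrete test cases derived from it, using a hypothetical `withdraw` function; the function, names and values are illustrative and not from the slides.

```python
# Hypothetical system under test: a simple withdrawal rule.
def withdraw(balance, amount):
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

# Logical / high-level test cases: test conditions only, no concrete data yet.
#   "Withdrawing a valid amount reduces the balance"      (positive test)
#   "Withdrawing more than the balance is rejected"       (negative test)

# Concrete test cases: the same conditions with specific test data chosen later.
def test_withdraw_valid_amount():          # positive test
    assert withdraw(100, 30) == 70

def test_withdraw_more_than_balance():     # negative test
    try:
        withdraw(100, 200)
        assert False, "expected the withdrawal to be rejected"
    except ValueError:
        pass

test_withdraw_valid_amount()
test_withdraw_more_than_balance()
```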



FUNDAMENTAL TEST PROCESS
 3. Test Analysis and Design - major tasks /2
Identify test conditions and required test data based on analysis of the test items, their specifications, and what we know about their behaviour and structure
Evaluate the availability of test data and/or the feasibility of generating test data
This gives us a high-level list of what we are interested in testing

Evaluate the testability of the requirements and the system. The requirements may be written in a way that allows a tester to design tests
Example:
If the requirements just say 'the software needs to respond quickly enough', that is not testable, because 'quickly enough' may mean different things to different people. A more testable requirement would be 'the software needs to respond within 5 seconds with 20 people logged on'
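
To make the testability example concrete, here is a minimal sketch of how the quantified requirement ('respond within 5 seconds with 20 people logged on') could be checked automatically; the `handle_request` function and the simulated load are hypothetical stand-ins, not part of the slides.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the system under test.
def handle_request(user_id):
    time.sleep(0.01)          # simulated processing time
    return f"ok:{user_id}"

def test_response_time_with_20_users():
    # Testable requirement: respond within 5 seconds with 20 people logged on.
    with ThreadPoolExecutor(max_workers=20) as pool:
        start = time.perf_counter()
        results = list(pool.map(handle_request, range(20)))
        elapsed = time.perf_counter() - start
    assert all(r.startswith("ok") for r in results)
    assert elapsed < 5.0, f"responded in {elapsed:.2f}s, requirement is 5s"

test_response_time_with_20_users()
```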
FUNDAMENTAL TEST PROCESS
 3. Test Analysis and Design - major tasks /3
Design the test environment set-up and identify any required infrastructure and tools
This includes testing tools and support tools such as spreadsheets, word processors, project planning tools, and non-IT tools and equipment - everything we need to carry out our work
Consider the (exclusive) availability of the test environment, time windows, etc.
Define the operation of the test environment, including user administration
Define processes, procedures and responsibilities
Create bi-directional traceability between the test basis and the test cases
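
A minimal sketch of bi-directional traceability between test basis items (here, requirement IDs) and test cases; the IDs and the mapping are invented for illustration.

```python
# Hypothetical traceability data: requirement -> test cases (forward direction).
requirement_to_tests = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],                      # not yet covered by any test case
}

# Derive the reverse direction (test case -> requirements) from the same data.
test_to_requirements = {}
for req, tests in requirement_to_tests.items():
    for tc in tests:
        test_to_requirements.setdefault(tc, []).append(req)

# Forward trace: which requirements still lack test cases?
uncovered = [req for req, tests in requirement_to_tests.items() if not tests]
print("Requirements without test cases:", uncovered)

# Backward trace: which requirements does a given test case verify?
print("TC-001 covers:", test_to_requirements.get("TC-001", []))
```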
Explanation of Terms: FUNDAMENTAL TEST PROCESS
 Test Data
 Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test
 Input Data
 A variable (whether stored within the system or outside) that is read by a component
 Test Coverage
 The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite. Used mostly in white-box testing to determine code coverage
 The number of executed activities in relation to the total number of activities
 Test Oracle
 A source used to determine the expected results to compare with the actual results of the software under test. An oracle may be the existing system (for a benchmark), other software, a user manual, or an individual's specialized knowledge, but should not be the code
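
A minimal sketch of using a test oracle: here a hypothetical, trusted reference implementation supplies the expected results against which the software under test is compared. Both functions are illustrative assumptions, not part of the slides.

```python
from decimal import Decimal, ROUND_HALF_UP

# Software under test (hypothetical): fast rounding of prices to whole cents.
def round_price(value):
    return int(value * 100 + 0.5) / 100

# Test oracle (hypothetical): a trusted reference, e.g. the existing system or
# other well-understood software -- never the code under test itself.
def oracle_round_price(value):
    return float(Decimal(str(value)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))

for value in [1.005, 2.675, 10.0, 0.004]:
    expected = oracle_round_price(value)   # expected result supplied by the oracle
    actual = round_price(value)            # actual result from the software under test
    status = "PASS" if actual == expected else "FAIL"
    print(f"value={value}: expected={expected} actual={actual} -> {status}")
```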
FUNDAMENTAL TEST PROCESS

Testware
Artifacts produced during the test process required to plan, design,
and execute tests, such as documentation, scripts, inputs, expected
results, set-up and clear-up procedures, files, databases,
environment, and any additional software or utilities used in testing.
[After Fewster and Graham]



FUNDAMENTAL TEST PROCESS

 4. Test Implementation & Execution /1

Finalizing, implementing and prioritizing test cases, including the identification of test data
Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts
Creating test suites from the test procedures for efficient test execution (see the sketch below)
Verifying that the test environment (test bed) has been set up correctly
Verifying and updating bi-directional traceability between the test basis and test cases
Executing tests (test procedures) either manually or by using test execution tools, according to the planned sequence
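
A minimal sketch of grouping automated test procedures into a test suite for efficient execution, using Python's standard unittest module; the test procedures themselves are hypothetical placeholders.

```python
import unittest

# Hypothetical test procedures written as automated test scripts.
class LoginTests(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(len("secret") >= 6)      # placeholder check

    def test_rejects_empty_password(self):
        self.assertFalse(bool(""))               # placeholder check

class ReportTests(unittest.TestCase):
    def test_report_totals(self):
        self.assertEqual(sum([10, 20, 30]), 60)  # placeholder check

# Create a test suite from the procedures, in the planned execution sequence.
def build_suite():
    suite = unittest.TestSuite()
    suite.addTest(LoginTests("test_valid_login"))
    suite.addTest(LoginTests("test_rejects_empty_password"))
    suite.addTest(ReportTests("test_report_totals"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```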



FUNDAMENTAL TEST PROCESS

 4. Test Implementation & Execution /2

Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
Comparing actual results with expected results
Reporting discrepancies as incidents (PRs*) and analyzing them in order to establish their cause
Example: a defect in the code, in the specified test data, in the test document, or a mistake in the way the test was executed
Repeating test activities to confirm a fix (see the sketch below):
Retest (after defect correction)
Regression test: execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that the defect fix did not uncover other defects

*PR = Problem Report, CR = Change Request
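
A minimal sketch distinguishing a re-test (confirming that the reported defect is really fixed) from regression tests (checking that unchanged behaviour still works); the `format_name` function, the defect it once had, and the incident ID PR-101 are all hypothetical.

```python
# Hypothetical function that previously had a defect:
# it used to crash on names without a surname (reported as hypothetical PR-101, now fixed).
def format_name(full_name):
    parts = full_name.strip().split()
    if len(parts) == 1:
        return parts[0].title()                 # the fix for PR-101
    return f"{parts[-1].title()}, {parts[0].title()}"

# Re-test (confirmation test): repeat the exact test that exposed PR-101.
def test_retest_pr_101_single_name():
    assert format_name("madonna") == "Madonna"

# Regression tests: make sure the fix did not break unchanged behaviour.
def test_regression_first_and_last_name():
    assert format_name("ada lovelace") == "Lovelace, Ada"

def test_regression_extra_whitespace():
    assert format_name("  grace hopper ") == "Hopper, Grace"

test_retest_pr_101_single_name()
test_regression_first_and_last_name()
test_regression_extra_whitespace()
```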



FUNDAMENTAL TEST PROCESS
Test suite/test sequence
A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one (illustrated in the sketch after these definitions).
Test procedure specification (test scenario)
A document specifying a sequence of actions for the execution of a test. Also
known as test script or manual test script. [After IEEE 829]
Test execution
The process of running a test on the component or system under test,
producing actual result(s)
Test log
A chronological record of relevant details about the execution of tests. [IEEE
829]
 Regression testing:
Testing of a previously tested program following modification to ensure that
defects have not been introduced or uncovered in unchanged areas of the
software, as a result of the changes made. It is performed when the software
or its environment is changed
Re-testing (confirmation testing)
Repeating a test after a defect has been fixed in order to confirm that the original defect has been successfully removed (see the process of testing a PR)
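
A minimal sketch of a test sequence in which the postcondition of one test case serves as the precondition of the next; the shopping-cart component is a hypothetical example, and real test sequences would normally manage shared state more carefully.

```python
# Hypothetical component under test: a very small shopping cart.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def checkout(self):
        if not self.items:
            raise RuntimeError("cannot check out an empty cart")
        return len(self.items)

cart = Cart()                       # shared state for the whole test sequence

# Test 1 -- postcondition: the cart contains one item.
cart.add("book")
assert cart.items == ["book"]

# Test 2 -- precondition: the cart already contains one item (from test 1).
cart.add("pen")
assert cart.items == ["book", "pen"]

# Test 3 -- precondition: the cart is non-empty, so checkout must succeed.
assert cart.checkout() == 2
```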
FUNDAMENTAL TEST PROCESS

 5. Evaluating Exit Criteria

 Evaluating exit criteria is the activity where test execution is assessed against the defined objectives.
 This should be done for each test level, as for each level we need to know whether we have done enough testing.
 Based on our risk assessment, we will have set the criteria against which we measure 'enough'.

 Evaluating exit criteria has the following major tasks:

 Check test logs against the exit criteria specified in test planning (see the sketch below)
 Assess if more tests are needed or if the exit criteria specified should be changed
 Write a test summary report for stakeholders
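
A minimal sketch of checking figures from a test log against exit criteria specified during test planning; the criteria, counts and thresholds are invented purely for illustration.

```python
# Hypothetical exit criteria from test planning.
exit_criteria = {
    "min_requirement_coverage": 1.00,   # all requirements exercised
    "min_pass_rate": 0.98,
    "max_open_critical_defects": 0,
}

# Hypothetical figures extracted from the test log / test management tool.
test_log = {
    "requirements_total": 40,
    "requirements_tested": 40,
    "tests_executed": 250,
    "tests_passed": 247,
    "open_critical_defects": 1,
}

coverage = test_log["requirements_tested"] / test_log["requirements_total"]
pass_rate = test_log["tests_passed"] / test_log["tests_executed"]

met = (coverage >= exit_criteria["min_requirement_coverage"]
       and pass_rate >= exit_criteria["min_pass_rate"]
       and test_log["open_critical_defects"] <= exit_criteria["max_open_critical_defects"])

print(f"coverage={coverage:.0%}, pass rate={pass_rate:.1%}, "
      f"open critical defects={test_log['open_critical_defects']}")
print("Exit criteria met" if met else "More testing (or a review of the criteria) is needed")
```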



FUNDAMENTAL TEST PROCESS

 6. Test Closure Activities – major tasks

 Collect data from completed test activities to consolidate experience, testware, facts and numbers
 Test closure activities occur at project milestones, such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed
 Checking which planned deliverables have been delivered
 Closing incident reports or raising change records for any that remain open
 Documenting the acceptance of the system
 Finalizing and archiving testware, the test environment and the test infrastructure for later reuse, and handing over the testware to the maintenance organization
 Analyzing "lessons learned" to determine changes needed for future releases and projects
 Using the information gathered to improve test maturity


FUNDAMENTAL TEST PROCESS - Summary

 The testing process can be divided into different phases:

 Test planning covers the activities defining the test approach for all test phases, as well as planning resources (time, personnel, machines)
 Test design (specification) covers designing the test cases and their expected results
 Test execution covers defining test data, performing test execution and comparing results
 Test evaluation and reporting covers exit criteria evaluation and recording of test results in written form
 Test conclusion (closure) covers closing incident reports and capturing lessons learned
 Test control consists of controlling activities covering all the above phases of the testing process



Outlines
1. Why is testing necessary?

2. What is testing?

3. Testing principles

4. Fundamental test process

5. The psychology of testing



THE PSYCHOLOGY OF TESTING
• Roles and Responsibilities
• Developer Role
• implements requirements
• develops structures
• designs and programs the software
• creating a product is his success
• Tester Role
• plans testing activities
• designs test cases
• is concerned only with finding defects
• finding an error made by a developer is his success

Perception: Developers are constructive! Testers are destructive!
Wrong! Testing is a constructive activity as well: it aims at eliminating defects from a product!



THE PSYCHOLOGY OF TESTING

 Personal attributes of a good tester /1

 Curious, perceptive, attentive to detail
 to comprehend the practical scenarios of the customer
 to be able to analyze the structure of the test object
 to discover the details where failures might show

 Skepticism and a critical eye
 test objects contain defects - you just have to find them
 do not believe everything you are told by the developers
 do not be put off by the fact that serious defects may often be found which will have an impact on the course of the project



THE PSYCHOLOGY OF TESTING
 Personal attributes of a good tester /2
 Good communication skills
 to bring bad news to the developers
 to overcome frustration
 both technical issues and issues of the practical use of the system must be understood and communicated
 positive communication can help to avoid or defuse difficult situations
 praise from the beginning helps to quickly establish a working relationship with the developer

 Experience
 personal factors influence where errors occur
 experience helps in identifying where errors might accumulate



THE PSYCHOLOGY OF TESTING
 Differences: to design – to develop – to test
 Testing requires a different mindset from designing or developing new computer systems
 common goal: to provide good software
 designer's mission: help the customer to supply the right requirements
 developer's mission: convert the requirements into functions
 tester's mission: examine the correct implementation of the customer's requirements

 In principle, one person can take on all three roles
 the differences in goals and role models must be taken into account
 this is difficult but possible
 other solutions (independent testers) are often easier and produce better results



THE PSYCHOLOGY OF TESTING

 Independent testing
 The separation of testing responsibilities supports the independent evaluation of test results

 Degree of independence: the types of test organization described on the following slides are ordered by increasing independence - the higher the level, the greater the independence


THE PSYCHOLOGY OF TESTING
 Types of test organization /1

 Developer test
 The developer will never examine his own "creation" without bias (emotional attachment)
 However, the developer knows the test object better than anybody else
 having other persons test incurs extra costs for orienting them on the test object
 human beings tend to overlook their own faults
 the developer runs the risk of not recognizing even self-evident defects
 errors made because of misinterpretation of the requirements will remain undetected
 setting up test teams in which developers test each other's products helps to avoid, or at least lessen, this shortcoming



THE PSYCHOLOGY OF TESTING
 Types of test organization /2

 Team of developers
 developers speak the same language
 costs for orientation in the test object are kept moderate, especially when the teams exchange test objects
 danger of conflicts arising between development teams
 a developer who looks for and finds a defect will not be the other developer's best friend
 mingling development and test activities means frequent switching between ways of thinking and makes it difficult to control the project budget



THE PSYCHOLOGY OF TESTING
 Types of test organization /3

 Test teams
 Creating test teams covering different project areas enhances the quality of testing
 It is important that test teams of different areas in the project work independently



THE PSYCHOLOGY OF TESTING
 Types of test organization /4

 Outsourcing tests
 The separation of testing activities from development activities offers the best independence between test object and tester
 Outsourced test activities are performed by personnel having relatively little knowledge of the test object and the project background
 the learning curve brings high costs; therefore the unbiased external experts should be involved in the early stages of the project
 External experts have a high level of testing know-how:
 an appropriate test design is ensured; methods and tools find optimal use

 Designing test cases automatically
 computer-aided generation of test cases, e.g. based on the formal specification documents, is also independent



THE PSYCHOLOGY OF TESTING

 Difficulties /1
 Unable to understand each other
 developers should have basic knowledge of testing
 testers should have basic knowledge of software development

 Especially in stress situations, discovering errors that someone has made often leads to conflict
 the way defects are documented and described will decide how the situation develops
 persons should not be criticized; the defects must be stated factually
 the defect description should help the developer find the error
 common objectives must always be the main issue



THE PSYCHOLOGY OF TESTING
 Difficulties /2
 Communication between testers and developers is missing or insufficient; this can make it impossible to work together
 testers are seen as "only messengers of bad news"
 improvement: try to see yourself in the other person's role. Did my message come through? Did the answer reach me?

 A solid test requires an appropriate distance to the test object
 an independent and unbiased position is acquired through distance from the development
 but: too large a distance between the test object and the development team leads to more effort and time for testing



THE PSYCHOLOGY OF TESTING - Summary

 People make mistakes; every implementation has defects
 Human nature makes it difficult to face one's own defects (error blindness)

 Developer and tester: two different worlds meeting each other
 developing is constructive - something is created that was not there before
 testing seems destructive at first glance - defects will be found
 together, development and testing are constructive in their shared objective: to ensure software with the fewest defects possible

 Independent testing enhances the quality of testing
 instead of developer teams, use tester teams or teams with external personnel for testing

