
CHAPTER TWO

TESTING FUNDAMENTALS
SOFTWARE TESTING LIFE CYCLE (STLC)

Software Testing Life Cycle (STLC): Since testing has been identified as a process within the SDLC, there is a need for a well-defined series of steps to ensure effective testing.

The testing process, divided into a precise sequence of steps, is termed the software testing life cycle (STLC).

This methodology involves testers at the early stages of development.


The STLC phases and their major outputs are:

 Test Planning: test strategy, size of test cases, duration, cost, risks, responsibilities

 Test Design: test cases and test procedures

 Test Execution: bug reports and metrics

 Post-execution/Test Review
TEST PLANNING

 The goal of test planning is to consider the important issues of the testing
strategy, which are: resources, schedules, responsibilities, risks and priorities.

 The output of test planning is the test planning document, which specifies the
test case formats, the test cases at different phases of the SDLC, the different
types of testing, etc.

 Test Plan Template (IEEE 829-1998 Format)
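 As a reference sketch of this format (from the IEEE 829-1998 standard), a test plan typically contains the following sections: test plan identifier, introduction, test items, features to be tested, features not to be tested, approach, item pass/fail criteria, suspension criteria and resumption requirements, test deliverables, testing tasks, environmental needs, responsibilities, staffing and training needs, schedule, risks and contingencies, and approvals.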


 The activities here are listed below:

 Define the test strategy.

 Estimate the number of test cases, their duration, and cost.

 Plan the resources like manpower, tools etc.

 Identify the risks.

 Define the test completion criteria.

 Identify the methodologies and techniques for various test cases.

 Identify reporting procedures, bug classification, databases [for testing], bug severity levels and
project metrics.
TEST DESIGN
 The activities here are:

 Determining the test objectives and their prioritization

 Creating Test Cases and Test Data: Objectives of the test cases are identified and the type of
testing (positive or negative) is also decided for the input specifications.

 Selecting the testing environment and supporting tools: Details like hardware
configurations, testers, interfaces, operating systems etc. are specified in this phase.

 Creating the procedure specification: This is a description of how the test case will
be run. This sequence of steps is followed by the tester at the time of executing the test
cases. A hypothetical example is given below.
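 For illustration only, a hypothetical procedure specification for a "valid login" test case might read: (1) launch the application; (2) enter a registered username and password; (3) click the Login button; (4) verify that the home page is displayed.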
TEST EXECUTION

 In this phase, all test cases are executed.

 Test results are documented as reports, logs and summary reports.
TEST REVIEW/POST-EXECUTION
 The purpose of this phase is to analyze bug-related issues and obtain feedback.

 As soon as the developer gets the bug report, he or she performs the following activities:

 Understanding the bug

 Reproducing the bug: This is done to confirm the presence and location of the bug so that the
corresponding failures can be avoided.

 Analyze the nature and cause of a bug.

 After this, the results from manual and automated testing can be collected and the following
activities can be performed: reliability analysis (whether the reliability goals are being met), coverage analysis
and overall defect analysis.
WHAT ARE THE ENTRY AND EXIT CRITERIA FOR TESTING?

 Entry criteria

 Complete or partially testable code is available.

 Requirements are defined and approved.

 Availability of sufficient and desired test data.

 Test cases are developed and ready.

 Test environment has been set up and all other necessary resources, such as tools and
devices, are available.
 Exit criteria

 The commonly considered exit criteria for terminating or concluding the process of
testing are:

 Deadlines have been met or the budget is depleted.

 Execution of all test cases.

 Desired and sufficient coverage of the requirements and functionalities under test.

 All the identified defects are corrected and closed.

 No high priority or severity or critical bug has been left out.


SOFTWARE TESTING PRINCIPLES

 Principle 1: A necessary part of a test case is a definition of the expected output
or result.

 A test case must consist of two components:

 A description of the input data to the program.

 A precise description of the correct output of the program for that set of input data.
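 As a minimal sketch in Python (the add function and the unittest-based test below are illustrative assumptions, not part of the original material):

    import unittest

    def add(a, b):
        """Hypothetical function under test."""
        return a + b

    class TestAdd(unittest.TestCase):
        def test_add_two_positive_integers(self):
            result = add(2, 3)           # description of the input data
            self.assertEqual(result, 5)  # precise description of the correct output

    if __name__ == "__main__":
        unittest.main()

 Running the module executes the test case and reports whether the actual output matches the expected output.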
 Principle 2: A programmer should avoid attempting to test his or her own
program.

 Most programmers cannot effectively test their own programs because they cannot bring
themselves to shift mental gears to attempt to expose errors.

 In addition to these psychological issues, there is a second significant problem: the
program may contain errors due to the programmer’s misunderstanding of the problem
statement or specification.

 If this is the case, it is likely that the programmer will carry the same misunderstanding
into tests of his or her own program.
 This does not mean that it is impossible for a programmer to test his or her own program.
Rather, it implies that testing is more effective and successful if someone else does it.

 Developers can be valuable members of the testing team when the program specification
and the program code itself are being evaluated.

 Note that this argument does not apply to debugging (correcting known errors);
debugging is more efficiently performed by the original programmer.
 Principle 3: A programming organization should not test its own programs.

 It is more economical for testing to be performed by an objective, independent party.


 Principle 4: Any testing process should include a thorough inspection of the
results of each test.

 We’ve seen numerous experiments that show many subjects failed to detect certain
errors, even when symptoms of those errors were clearly observable on the output
listings.

 Put another way, errors that are found in later tests are often missed in the results of
earlier tests.
 Principle 5: Test cases must be written for input conditions that are invalid and
unexpected, as well as for those that are valid and expected.

 There is a natural tendency when testing a program to concentrate on the valid and
expected input conditions, to the neglect of the invalid and unexpected conditions.

 Therefore, test cases representing unexpected and invalid input conditions seem to have a
higher error detection yield than do test cases for valid input conditions.
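 As a hedged illustration (the safe_divide function and its error handling are assumptions made up for this example), a negative test exercises an invalid input alongside the valid one:

    import unittest

    def safe_divide(a, b):
        """Hypothetical function under test."""
        if b == 0:
            raise ValueError("division by zero is not allowed")
        return a / b

    class TestSafeDivide(unittest.TestCase):
        def test_valid_and_expected_input(self):
            self.assertEqual(safe_divide(10, 2), 5.0)

        def test_invalid_and_unexpected_input(self):
            # The invalid input (zero divisor) must be rejected, not silently accepted.
            with self.assertRaises(ValueError):
                safe_divide(10, 0)

    if __name__ == "__main__":
        unittest.main()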
 Principle 6: Examining a program to see if it does not do what it is supposed to
do is only half the battle; the other half is seeing whether the program does what
it is not supposed to do.

 This is a corollary to the previous principle.

 Programs must be examined for unwanted side effects.

 For instance, a payroll program that produces the correct paychecks is still an erroneous
program if it also produces extra checks for nonexistent employees, or if it overwrites the first
record of the personnel file.
 Principle 7: Avoid throwaway test cases unless the program is truly a throwaway
program.

 Saving test cases and running them again after changes to other components of the
program is known as regression testing.

 Therefore, we need to avoid throwaway test cases so that the tests can be rerun after
errors are corrected.
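 A minimal sketch of regression testing in Python (it assumes the saved test cases are kept as unittest modules under a tests/ directory; the directory name and layout are assumptions for illustration):

    import unittest

    def run_regression_suite():
        # Rediscover and rerun every saved test case after a change to the program.
        suite = unittest.defaultTestLoader.discover("tests")
        runner = unittest.TextTestRunner(verbosity=2)
        return runner.run(suite)

    if __name__ == "__main__":
        result = run_regression_suite()
        # A non-zero exit status signals that a previously passing test now fails.
        raise SystemExit(0 if result.wasSuccessful() else 1)

 Keeping the suite and rerunning it after every correction is what turns the saved test cases into a regression test.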
 Principle 8: Do not plan a testing effort under the tacit
assumption that no errors will be found.

 Testing is the process of finding errors, not of showing that a program works correctly.

 Therefore, even after extensive testing and error correction, it is safe to assume that
errors still exist; they simply have not yet been found.
 Principle 9: The probability of the existence of more errors in a section of a program is
proportional to the number of errors already found in that section.
 Principle 10: Testing is an extremely creative and
intellectually challenging task.

 We already have seen that it is impossible to test a program sufficiently to guarantee
the absence of all errors. The methodologies discussed later in this course help you
develop a reasonable set of test cases for a program, but these methodologies still
require a significant amount of creativity.
TESTABILITY

 MCCALL’S QUALITY FACTORS AND CRITERIA

 A quality factor represents a behavioral characteristic of a system. Some examples of
high-level quality factors are: correctness, reliability, efficiency, testability, portability, and
reusability.
TESTABILITY

 It is important to be able to verify every requirement, both explicitly stated and simply expected.

 Testability means the ability to verify requirements.

 At every stage of software development, it is necessary to consider the testability aspect of a product.

 Specifically, for each requirement we try to answer the question:

 What procedure should one use to test the requirement, and how easily can one verify it?

 To make a product testable, designers may have to instrument a design with functionalities not
available to the customer.
QUALITY CRITERION

 A quality criterion is an attribute of a quality factor that is related to software
development.
 The quality criteria associated with the quality factor testability are instrumentation,
simplicity, self-descriptiveness and modularity:

 Simplicity: Ease with which the software can be understood.

 Instrumentation: Degree to which the software provides for measurement of its use or
identification of errors.

 Modularity: Provision of highly independent modules.

 Self-descriptiveness: Provision of in-line documentation that explains the implementation
of components.
 If an effort is made to improve one quality factor, another quality
factor may be degraded.

 Some quality factors positively impact others.

 For example, an effort to enhance the testability of a system will improve its
maintainability.
QUALITY METRICS

 The high-level quality factors cannot be measured directly.

 For example, we cannot directly measure the testability of a software system.

 Neither can testability be expressed in “yes” or “no” terms.

 Instead, the degree of testability can be assessed by associating with testability a
few quality metrics, namely, simplicity, instrumentation, self-descriptiveness, and
modularity.
 A quality metric is a measure that captures some aspect of a quality criterion. One
or more quality metrics should be associated with each criterion.

 The metrics can be derived as follows:

 Formulate a set of relevant questions concerning the quality criteria and seek a “yes” or “no”
answer for each question.

 Divide the number of “yes” answers by the number of questions to obtain a value in the range of 0
to 1.

 The resulting number represents the intended quality metric.
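 A minimal sketch of this calculation in Python (the questions and the answers shown are hypothetical):

    def quality_metric(answers):
        """answers: one boolean per question, True meaning a 'yes' answer."""
        if not answers:
            raise ValueError("at least one question is required")
        return sum(answers) / len(answers)

    # Hypothetical yes/no answers to four questions about the simplicity criterion.
    simplicity_answers = [True, True, False, True]
    print(quality_metric(simplicity_answers))  # 0.75, a value in the range 0 to 1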
