Testing
Software testing is the process of executing a program or application with the intent of finding software bugs. It can also be stated as the process of validating and verifying that a software program, application, or product:
– Meets the business and technical requirements that guided its design and development
– Works as expected
– Can be implemented with the same characteristics.
What is a Failure in software testing?
If, under certain environments and situations, defects in the application or product get executed, then the system produces wrong results, causing a failure. Failures may also arise because of human error in interacting with the software, perhaps a wrong input value being entered or an output being misinterpreted. Finally, failures may be caused by someone deliberately trying to cause a failure in the system.
Difference between Error, Defect and Failure in software testing:
Error: A mistake made by a programmer is known as an ‘error’. This can happen for the following reasons:
– Confusion in understanding the functionality of the software
– Miscalculation of values
– Misinterpretation of a value, etc.
Defect: A bug introduced by the programmer inside the code is known as a ‘defect’. This can happen because of programming mistakes.
Failure: If, under certain circumstances, these defects get executed during testing, the result is a software failure.
There are several levels of independence in software testing, listed here from the lowest level of independence to the highest:
i. Tests by the person who wrote the item.
ii. Tests by another person within the same team, such as another programmer.
iii. Tests by a person from a different group, such as an independent test team.
iv. Tests by a person from a different organization or company, such as outsourced testing or certification by an external body.
Clear and courteous communication and feedback on defects between tester and developer:
We all make mistakes, and we sometimes get annoyed, upset, or depressed when someone points them out. So when, as testers, we run a test that is good from our viewpoint because it found defects and failures in the software, we need to be very careful about how we report those defects and failures to the programmers. We may be pleased because we found a good bug, but how will the requirements analyst, the designer, the developer, the project manager, and the customer react?
– The people who build the application may react defensively and take the reported defect as personal criticism.
– The project manager may be annoyed with everyone for holding up the project.
– The customer may lose confidence in the product because he can see defects.
If our goal is to demonstrate that a program has no errors, we tend to select test data that have a low probability of causing the program to fail. On the other hand, if our goal is to demonstrate that a program has errors, we select test data that have a high probability of finding errors.
A well-constructed and executed software test is successful when it finds errors that can be fixed. The same test is also successful when it eventually establishes that there are no more errors to be found. An unsuccessful test case is one that causes a program to produce the correct result without finding any errors. People perform poorly when they set out on a task that they know to be infeasible or impossible. Defining program testing as the process of uncovering errors in a program makes it a feasible task, thus overcoming this psychological problem.
A problem with common definitions such as “Testing is the process of establishing confidence that a program does what it is supposed to do” is that programs which do what they are supposed to do can still contain errors. That is, an error is clearly present if a program does not do what it is supposed to do, but errors are also present if a program does what it is not supposed to do. So in program testing we establish confidence that a program both does what it is supposed to do and does not do what it is not supposed to do.
Software Testing Guidelines
If the expected result of a test case has not been predefined, chances are that a plausible, but
erroneous, result will be interpreted as a correct result because there is a subconscious desire to see
the correct result. One way of combating this is to encourage a detailed examination of all output by
precisely spelling out, in advance, the expected output of the program.
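As a minimal sketch of this practice (the function, its discount rule, and the values are all hypothetical), the expected output is recorded as part of the test case before the test is run:

```python
# Hypothetical example: the expected result is written down *before* the test
# is run, so a plausible-but-wrong output cannot be rationalized as correct
# afterwards.

def compute_discount(order_total):
    """Toy function under test: 10% off (integer currency units) on orders of 100 or more."""
    if order_total >= 100:
        return order_total - order_total // 10
    return order_total

# The expected output is fixed in advance, as part of the test case itself.
test_case = {"input": 150, "expected": 135}

actual = compute_discount(test_case["input"])
assert actual == test_case["expected"], f"expected {test_case['expected']}, got {actual}"
print("pass")
```

The point is not the arithmetic but the discipline: the `expected` field exists before `actual` is ever computed.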
It is extremely difficult, after a programmer has been constructive while designing and coding a program, to suddenly change his or her perspective overnight and attempt to form a completely destructive frame of mind toward the program. In addition, the program may contain errors due to the programmer's misunderstanding of the problem statement or specification. The programmer will have the same misunderstanding when attempting to test his or her own program. This does not mean that it is impossible for a programmer to test his or her own program; rather, it implies that testing is more effective and successful if performed by another party.
Thoroughly inspect the results of each test. This is probably the most obvious principle, but again, it is something that is often overlooked. A significant percentage of errors that are eventually found were actually made visible by earlier test cases, but slipped by because of the failure to carefully inspect the results of those earlier tests.
Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions.
There is a natural tendency, when testing a program, to concentrate on the valid and expected input conditions and to neglect the invalid and unexpected ones. Hence many errors are suddenly discovered in production programs when the program is used in some new or unexpected way. Test cases representing unexpected and invalid input conditions seem to have a higher error-detection yield than do test cases for valid input conditions.
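A sketch of what such test cases might look like (the `parse_age` function and its validation rules are assumptions for illustration):

```python
# Sketch: tests for invalid and unexpected inputs alongside the valid ones.
# The function and its error behaviour are assumptions for illustration.

def parse_age(text):
    """Parse an age field; reject non-numeric or out-of-range values."""
    if not text.strip().isdigit():
        raise ValueError("age must be a non-negative integer")
    age = int(text)
    if age > 130:
        raise ValueError("age out of plausible range")
    return age

# Valid, expected input.
assert parse_age("42") == 42

# Invalid and unexpected inputs: each must be rejected, not silently accepted.
for bad in ["", "abc", "-1", "999"]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # rejection is the correct behaviour
    else:
        raise AssertionError(f"invalid input {bad!r} was accepted")
```

Note that the invalid cases outnumber the valid one, matching the observation about their higher error-detection yield.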
Examining a program to see whether it does what it is supposed to do is only half the battle. The other half is seeing whether the program does what it is not supposed to do. This is simply a corollary to the previous principle. It also implies that programs must be examined for unwanted side effects.
Avoid throw-away test cases unless the program is truly a throw-away program.
The major problem is that test cases represent a valuable investment that, in this environment, disappears after the testing has been completed. Whenever the program has to be tested again (e.g., after correcting an error or making an improvement), the test cases have to be reinvented. More often than not, since this reinvention requires a considerable amount of work, people tend to avoid it. Therefore, the retest of the program is rarely as rigorous as the original test, meaning that if the modification causes a previously functional part of the program to fail, the error often goes undetected.
Do not plan a testing effort under the tacit assumption that no errors will be found.
This is a mistake often made by project managers and is a sign of the use of an incorrect definition of testing, that is, the assumption that testing is the process of showing that the program functions correctly.
Errors seem to come in clusters, and in the typical program, some sections seem to be much more
error prone than other sections. This phenomenon gives us insight or feedback in the testing process. If
a particular section of a program seems to be much more error prone than other sections, then in terms
of yield on our testing investment, additional testing efforts are best focused against this error-prone
section.
It is probably true that the creativity required in testing a large program exceeds the creativity required in designing that program, since it is impossible to test a program such that the absence of all errors can be guaranteed.
Testing Principles
There are 7 testing principles.
1) Testing shows presence of defects: Testing can show that defects are present, but cannot prove that there are no defects. Even after testing the application or product thoroughly we cannot say that the product is 100% defect free. Testing always reduces the number of undiscovered defects remaining in the software, but even if no defects are found, this is not a proof of correctness.
2) Exhaustive testing is impossible: Testing everything, including all combinations of inputs and preconditions, is not feasible except in trivial cases. For example, a screen with 15 input fields, each having 5 possible values, would need 30,517,578,125 (5^15) tests. It is very unlikely that the project timescales would allow for this number of tests. So assessing and managing risk is one of the most important activities, and a key reason for testing, in any project.
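The combinatorial explosion behind this figure can be checked directly:

```python
# Why exhaustive testing is impossible: count the input combinations for a
# screen of 15 independent fields, each with 5 possible values (the classic
# textbook example behind the figure above).

fields = 15
values_per_field = 5
combinations = values_per_field ** fields

print(combinations)  # 30517578125 distinct test cases

# At one test per second, running all of them would take roughly 967 years.
seconds_per_year = 60 * 60 * 24 * 365
print(combinations / seconds_per_year)
```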
3) Early testing: In the software development life cycle, testing activities should start as early as possible and should be focused on defined objectives.
4) Defect clustering: A small number of modules contain most of the defects discovered during pre-release testing, or show the most operational failures.
5) Pesticide paradox: If the same kinds of tests are repeated again and again, eventually the same set of test cases will no longer find any new bugs. To overcome this “pesticide paradox”, it is important to review the test cases regularly, and to write new and different tests that exercise different parts of the software or system, in order to find more defects.
6) Testing is context dependent: Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
7) Absence-of-errors fallacy: If the system built is unusable and does not fulfil the user’s needs and expectations, then finding and fixing defects does not help.
Economics of Testing
There is a definite economic impact of software testing. One economic impact comes from the cost of defects; this is a very real and very tangible cost. Another comes from the way we perform testing. It is possible to have very good motivations and testing goals while testing in a very inefficient way. To combat the challenges associated with testing economics, we should establish strategies such as black-box and white-box testing.
What is Test analysis / Test Basis? or How to identify the test conditions?
Test analysis is the process of looking at something that can be used to derive test information. The basis for the tests is called the test basis. The test basis is the information we need in order to start the test analysis and create our test cases. Basically, it is the documentation on which test cases are based, such as requirements, design specifications, product risk analysis, architecture, and interfaces. We can use the test basis documents to understand what the system should do once built. The test basis includes whatever the tests are based on; sometimes tests can be based on an experienced user’s knowledge of the system, which may not be documented.
From a testing perspective, we look at the test basis in order to see what could be tested. These are the test conditions. A test condition is simply something that we could test; we could call them test possibilities. While identifying test conditions we want to identify as many as we can, and then select which ones to take forward and combine into test cases. Testing everything is an impractical goal, known as exhaustive testing; we cannot test everything, so we have to select a subset of all possible tests. In practice the subset we select may be very small, and yet it has to have a high probability of finding most of the defects in the system. Hence we need some intelligent thought process to guide our selection, called test techniques. The test conditions that are chosen will depend on the test strategy or detailed test approach. Once we have identified a list of test conditions, it is important to prioritize them, so that the most important ones are identified. Test conditions can be identified for test data as well as for test inputs and test outcomes, for example different types of record, or different sizes of records or fields in a record. Test conditions are documented in the IEEE 829 document called a Test Design Specification.
Test conditions should be able to be linked back to their sources in the test basis; this is known as traceability. Traceability can be horizontal, through all the test documentation for a given test level (e.g. system testing, from test conditions through test cases to test scripts), or vertical, through the layers of development documentation (e.g. from requirements to components).
– The requirements for a given function or feature have changed. Some of the fields now have different ranges that can be entered. Which tests were looking at those boundaries? They now need to be changed. How many tests will actually be affected by this change in the requirements? These questions can be answered easily if the requirements can easily be traced to the tests.
– A set of tests that has run OK in the past has now started creating serious problems. What functionality do these tests actually exercise? Traceability between the tests and the requirements being tested enables the functions or features affected to be identified more easily.
– Before delivering a new release, we want to know whether or not we have tested all of the specified requirements in the requirements specification.
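A minimal sketch of such a traceability matrix (all requirement and test-case IDs are hypothetical); with it, the questions above become simple look-ups:

```python
# Minimal sketch of horizontal traceability: a mapping from requirements to
# the test cases that cover them. IDs are hypothetical.

traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],           # a requirement with no tests yet
}

# Which tests are affected if REQ-001 changes?
affected = traceability["REQ-001"]
print(affected)  # ['TC-01', 'TC-02']

# Before a release: have all specified requirements been covered by tests?
uncovered = [req for req, tests in traceability.items() if not tests]
print(uncovered)  # ['REQ-003']
```

In practice such a matrix is usually maintained by a test management tool rather than by hand, but the underlying structure is the same.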
Test design is the act of creating and writing test suites for testing software. Test analysis and identifying test conditions give us a generic idea for testing which covers quite a large range of possibilities. But when we come to make a test case we need to be very specific: we need exact and detailed specific inputs. Just having some values to input to the system is not a test; if you don’t know what the system is supposed to do with the inputs, you will not be able to tell whether your test has passed or failed. One of the most important aspects of a test is that it checks that the system does what it is supposed to do.
In order to know what the system should do, we need a source of information about the correct behavior of the system; this is called an ‘oracle’ or a test oracle. Once a given input value has been chosen, the tester needs to determine what the expected result of entering that input would be, and document it as part of the test case. Expected results include information displayed on a screen in response to an input. If we don’t decide on the expected results before we run a test, then there is a chance that we will notice only something wildly wrong, and plausible-but-incorrect results will slip past.
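One way to sketch the oracle idea: an independent reference computation supplies the expected result before the test is run (both functions here are illustrative assumptions):

```python
# Sketch of a test oracle: an independent source of expected behaviour that
# the tester consults before running the test. Here the "oracle" is a simple
# reference implementation; in practice it may be a specification or a person.

def sort_under_test(items):
    # The implementation being tested (assumed for illustration).
    return sorted(items)

def oracle(items):
    # Independent, trusted (if slow) way of computing the expected result:
    # repeatedly extract the minimum.
    remaining, result = list(items), []
    while remaining:
        smallest = min(remaining)
        remaining.remove(smallest)
        result.append(smallest)
    return result

inputs = [3, 1, 2]
expected = oracle(inputs)          # decided before running the test
actual = sort_under_test(inputs)
assert actual == expected
print("pass")
```

The oracle must be independent of the implementation under test; otherwise both can share the same mistake.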
What is Test implementation? or How to specify test procedures or scripts?
The document that describes the steps to be taken in running a set of tests, and specifies the executable order of the tests, is called a test procedure in IEEE 829; it is also known as a test script. When the Test Procedure Specification is prepared and then implemented, this is called test implementation. ‘Test script’ is also used to describe the instructions to a test execution tool. Tests that are intended to be run manually, rather than by a test execution tool, can be called manual test scripts. The test procedures, or test scripts, are then formed into a test execution schedule that specifies which procedures are to be run first, a kind of superscript.
Each testing technique falls into one of a number of different categories. There are two main categories:
1. Static technique
2. Dynamic technique
Dynamic techniques are subdivided into three more categories: specification-based (black-box, also
known as behavioral techniques), structure-based (white-box or structural techniques) and experience-
based. Specification-based techniques include both functional and nonfunctional techniques (i.e.
quality characteristics).
What are the uses of static test techniques?
– Since static testing can start early in the life cycle, early feedback on quality issues can be established.
– As defects are detected at an early stage, the rework cost is most often relatively low.
– Development productivity is likely to increase because of the reduced rework effort.
– Types of defects that are easier to find during static testing are: deviations from standards, missing requirements, design defects, non-maintainable code, and inconsistent interface specifications.
– Static tests contribute to an increased awareness of quality issues.
Informal reviews are applied many times during the early stages of the life cycle of a document. A two-person team can conduct an informal review. In later stages these reviews often involve more people and a meeting. The goal is to help the author and to improve the quality of the document. The most important thing to keep in mind about informal reviews is that they are not documented.
A formal review process typically consists of the following main phases:
1. Planning
2. Kick-off
3. Preparation
4. Review meeting
5. Rework
6. Follow-up
1. Planning: The first phase of the formal review is the planning phase. The review process begins with a request for review by the author to the moderator (or inspection leader). The moderator takes care of the scheduling (date, time, place) and the invitations for the review. For formal reviews the moderator performs the entry check and also defines the formal exit criteria. The entry check is done to ensure that the reviewers’ time is not wasted on a document that is not ready for review. If, after the entry check, the document is found to have very few defects, it is ready to go for review. The entry criteria, then, check whether the document is ready to enter the formal review process. Hence the entry criteria for any document to go for review are:
Once the document clears the entry check, the moderator and author decide which part of the document is to be reviewed. Since the human mind can understand only a limited number of pages at one time, the maximum size in a review is between 10 and 20 pages. Checking the documents also improves the moderator’s ability to lead the meeting, because it ensures a better understanding.
2. Kick-off: The kick-off meeting is an optional step in a review procedure. The goal of this step is to give everyone in the meeting a short introduction to the objectives of the review and the documents. The relationships between the document under review and the other documents are also explained, especially if the number of related documents is high.
3. Preparation: In this step the reviewers review the document individually, using the related documents, procedures, rules, and checklists provided. While reviewing individually, each participant identifies defects, questions, and comments according to their understanding of the document and their role. All issues are then recorded using a logging form. The success factor for a thorough preparation is the number of pages checked per hour, called the checking rate. The checking rate is usually in the range of 5 to 10 pages per hour.
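The checking rate translates directly into preparation effort; a back-of-the-envelope calculation (the 20-page figure is the upper bound on review size mentioned above):

```python
# Back-of-the-envelope preparation effort derived from the checking rate.
pages = 20                   # assumed review chunk (upper bound from above)
rate_low, rate_high = 5, 10  # pages checked per hour

print(pages / rate_high)  # 2.0 hours of individual preparation, fast end
print(pages / rate_low)   # 4.0 hours of individual preparation, slow end
```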
4. Review meeting: The review meeting typically consists of three phases: logging, discussion, and decision.
Logging phase: In this phase the issues and defects identified during the preparation step are logged page by page. The logging is done by the author or by a scribe. A scribe is a separate person who does the logging, which is especially useful for formal review types such as an inspection. Every defect and its severity should be logged in one of the three severity classes given below:
– Critical: The defect will cause downstream damage.
– Major: The defect could cause downstream damage.
– Minor: The defect is highly unlikely to cause downstream damage.
During the logging phase the moderator focuses on logging as many defects as possible within the time frame, and tries to keep a good logging rate (number of defects logged per minute). In a formal review meeting, a good logging rate is between one and two defects logged per minute.
Discussion phase: If any issue needs discussion, the item is logged and then handled in the discussion phase. As chairman of the discussion meeting, the moderator takes care of the people issues, prevents the discussion from getting too personal, and calls for a break to cool down heated discussions. The outcome of the discussions is documented for future reference.
Decision phase: At the end of the meeting the participants make a decision on the document under review, sometimes based on formal exit criteria. A typical exit criterion is the average number of critical and/or major defects found per page (for example, no more than three critical/major defects per page). If the number of defects found per page exceeds a certain level, the document must be reviewed again after it has been reworked.
5. Rework: If the number of defects found per page exceeds a certain level, the document has to be reworked. Not every defect that is found leads to rework; it is the author’s responsibility to judge whether a defect has to be fixed. If nothing can be done about an issue, it should at least be indicated that the author has considered it.
6. Follow-up: In this step the moderator checks to make sure that the author has taken action on all known defects. If it is decided that all participants will check the updated document, the moderator takes care of the distribution and collects the feedback. It is the responsibility of the moderator to ensure that the information is correct and stored for future analysis.
The roles within a formal review are:
1. The moderator: The moderator (or review leader) leads the review process: planning the review, performing the entry check, and defining the exit criteria.
2. The author: The author is the writer of the document under review, and is responsible for reworking the agreed defects.
3. The scribe: The scribe is a separate person who does the logging of the defects found during the review.
4. The reviewers: The reviewers check the document for defects according to their individual role and perspective.
The main review types that come under static testing are mentioned below:
1. Walkthrough:
– To present the document, both within and outside the software discipline, in order to gather information regarding the topic under documentation
– To explain, or do the knowledge transfer for, and evaluate the contents of the document
– To achieve a common understanding and to gather feedback
2. Technical review:
– To examine and discuss the validity of proposed solutions
– To ensure that, at an early stage, technical concepts are used correctly
– To assess the value of technical concepts and alternatives in the product
– To establish consistency in the use and representation of technical concepts
– To inform participants about the technical content of the document
3. Inspection:
– It helps the author to improve the quality of the document under inspection
– It removes defects efficiently and as early as possible
– It improves product quality
– It creates a common understanding by exchanging information
– It helps participants learn from the defects found and prevents the occurrence of similar defects
The main specification-based (black-box) techniques are:
– Equivalence partitioning
– Boundary value analysis
– Decision tables
– State transition testing
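As an illustrative sketch, equivalence partitioning and boundary value analysis for a field assumed to accept integers from 1 to 100 might look like:

```python
# Sketch of equivalence partitioning and boundary value analysis for a field
# that accepts integers 1..100 (the range is an assumption for illustration).

def accepts(value):
    """System under test: valid iff 1 <= value <= 100."""
    return 1 <= value <= 100

# Equivalence partitions: one representative from each class is enough.
assert accepts(50) is True      # valid partition
assert accepts(-10) is False    # invalid partition (below the range)
assert accepts(500) is False    # invalid partition (above the range)

# Boundary values: errors cluster at the edges of the partitions.
for value, expected in [(0, False), (1, True), (100, True), (101, False)]:
    assert accepts(value) is expected
print("pass")
```

Three partition representatives plus four boundary values give seven tests, instead of one test per possible integer.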
In white-box testing the tester concentrates on how the software does what it does. For example, a structural technique may be concerned with exercising the loops in the software: different test cases may be derived to exercise a loop once, twice, and many times. This may be done regardless of the functionality of the software.
Structure-based techniques can also be used at all levels of testing. Developers use structure-based
techniques in component testing and component integration testing, especially where there is
good tool support for code coverage.
Structure-based techniques are also used in system and acceptance testing, but the structures are different. For example, the coverage of menu options or major business transactions could be the structural element in system or acceptance testing.
What is Retesting?
– Retesting is done by replicating the same scenario, with the same data, in the new build.
– In retesting, the test cases that failed earlier are included.
– Retesting ensures that the issue has been fixed and is working as expected.
– It is planned testing, with proper steps of verification.
– When a bug is raised and is rejected by the developer as not re-creatable in their environment, the testers also retest the bug to check whether it is genuine or not.
– In some cases the entire module is required to be retested to ensure the quality of the module.
– The test cases of retesting cannot be automated.
Example of Retesting
Let’s assume there is an application which maintains the details of all the students in a school. This application has four buttons: Add, Save, Delete, and Refresh. All the other buttons’ functionality is working as expected, but clicking the ‘Save’ button does not save the details of the student. This is a bug, which is caught by the tester, who raises it. The issue is assigned to the developer, who fixes it. After the fix, it is again assigned to the tester. This time the tester tests only the ‘Save’ button functionality. This is called retesting.
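The scenario above might be sketched in code as follows (the class and test names are hypothetical); only the previously failing test case is re-executed against the fixed build:

```python
# Retesting sketch: only the test that failed before the fix is re-run.
# All names here are hypothetical stand-ins for the student application.

class StudentApp:
    """Toy stand-in for the application after the 'Save' defect was fixed."""
    def __init__(self):
        self.records = {}

    def save(self, name, details):
        self.records[name] = details  # the defect was here; now fixed
        return True

def test_save_button():
    app = StudentApp()
    assert app.save("Asha", {"class": "5B"}) is True
    assert app.records["Asha"] == {"class": "5B"}

# Retest: execute exactly the test case that failed before the fix.
test_save_button()
print("retest passed")
```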
Advantages of Re-testing
Disadvantages of Re-testing
What is Regression Testing?
When any modification or change is done to the application, even a small change to the code, it can bring unexpected issues. Along with the new changes, it becomes very important to test whether the existing functionality is intact. This can be achieved by doing regression testing. The purpose of regression testing is to find bugs which may have been introduced accidentally because of the new changes or modifications.
During confirmation testing the defect was fixed and that part of the application started working as intended. But there is a possibility that the fix may have introduced or uncovered a different defect elsewhere in the software. The way to detect these ‘unexpected side-effects’ of fixes is to do regression testing. This also ensures that bugs found earlier are no longer re-creatable. Regression testing is usually done with automation tools, because in order to verify a fix the same tests are carried out again and again, which would be very tedious and time-consuming to do manually.
During regression testing, the test cases are prioritized depending on the changes made to the feature or module in the application: the entire feature or module where the changes were made is given priority for testing. This testing becomes very important when there are continuous modifications or enhancements to the application or product; these changes or enhancements should NOT introduce new issues in the existing, tested code.
This helps in maintaining the quality of the product along with the new changes in the application.
Example:
Let’s assume there is an application which maintains the details of all the students in a school. This application has four buttons: Add, Save, Delete, and Refresh. All the buttons’ functionalities are working as expected. Recently a new button, ‘Update’, was added to the application. The ‘Update’ button functionality is tested and confirmed to be working as expected. But at the same time it becomes very important to know that the introduction of this new button does not impact the other existing buttons’ functionality. So along with the ‘Update’ button, all the other buttons’ functionality is tested in order to find any new issues in the existing code. This process is known as regression testing.
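The same scenario can be sketched in code (names are hypothetical): after the new ‘Update’ feature is added, the whole existing suite is re-run, not just the new test:

```python
# Regression sketch: after adding 'Update', the full button suite is re-run.
# All names are hypothetical stand-ins for the student application.

class StudentApp:
    def __init__(self):
        self.records = {}

    def add(self, name, details):
        self.records[name] = details

    def update(self, name, details):  # newly added feature
        self.records[name].update(details)

    def delete(self, name):
        del self.records[name]

def test_add():
    app = StudentApp(); app.add("Asha", {"class": "5B"})
    assert app.records["Asha"]["class"] == "5B"

def test_update():  # new test, written for the new button
    app = StudentApp(); app.add("Asha", {"class": "5B"})
    app.update("Asha", {"class": "6A"})
    assert app.records["Asha"]["class"] == "6A"

def test_delete():
    app = StudentApp(); app.add("Asha", {}); app.delete("Asha")
    assert "Asha" not in app.records

# Regression run: the full suite, old tests included, not only test_update.
for test in (test_add, test_update, test_delete):
    test()
print("regression suite passed")
```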
1) Corrective Regression Testing: Corrective regression testing can be used when there is no change
in the specifications and test cases can be reused.
2) Progressive Regression Testing: Progressive regression testing is used when the modifications are
done in the specifications and new test cases are designed.
3) Retest-All Strategy: The retest-all strategy is very tedious and time-consuming, because here we rerun all existing tests, which results in the execution of many unnecessary test cases. When only a small modification or change has been made to the application, this strategy is not economical.
4) Selective Strategy: In the selective strategy we use a subset of the existing test cases to cut down the retesting effort and cost. If any changes are made to program entities (e.g. functions, variables, etc.), then the test units covering them must be rerun. The difficult part here is to find out the dependencies between a test case and the program entities it covers.
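A minimal sketch of the selective strategy (the coverage map and IDs are hypothetical): program entities are mapped to the test cases that cover them, and only tests touching changed entities are selected:

```python
# Selective regression sketch: map program entities to covering test cases,
# then re-run only the tests that touch the changed entities.
# The coverage map and all IDs are hypothetical.

coverage = {
    "save_record":   ["TC-01", "TC-02"],
    "delete_record": ["TC-03"],
    "render_report": ["TC-04"],
}

def select_tests(changed_entities):
    selected = set()
    for entity in changed_entities:
        selected.update(coverage.get(entity, []))
    return sorted(selected)

# Only save_record changed in this build, so two of four tests are re-run.
print(select_tests(["save_record"]))  # ['TC-01', 'TC-02']
```

Building an accurate coverage map is exactly the "difficult part" mentioned above; in practice it comes from code-coverage tooling rather than being written by hand.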
Advantages of Regression testing:
– It helps us make sure that changes, like bug fixes or enhancements to a module or application, have not impacted the existing tested code.
– It ensures that bugs found earlier are no longer re-creatable.
– Regression testing can be done using automation tools.
– It helps in improving the quality of the product.
Disadvantages of Regression testing:
– If regression testing is done without automated tools, it can be very tedious and time-consuming, because the same set of test cases is executed again and again.
– A regression test is required even when a very small change is made to the code, because even a small modification can bring unexpected issues to the existing functionality.
What are Software Testing Levels?
Testing levels serve to identify missing areas, and to prevent overlap and repetition, between the development life cycle phases. Software development life cycle models define phases such as requirements gathering and analysis, design, coding or implementation, testing, and deployment. Each phase goes through testing, and hence there are various levels of testing:
Unit Testing: Unit testing is done by the developers to make sure that their code works and meets the user specifications. They test the pieces of code that they have written: classes, functions, interfaces, and procedures.
Component testing: This is also called module testing. The basic difference between unit testing and component testing is that in unit testing the developers test their own pieces of code, whereas in component testing the whole component is tested. For example, in a student record application there are two modules: one which saves the records of the students, and another which uploads the results of the students. Both modules are developed separately, and when each is tested on its own we call this component or module testing.
Integration testing: Integration testing is done when two modules are integrated, in order to test the behavior and functionality of both modules after integration. Below are a few types of integration testing:
Component integration testing: In the example above, when both modules or components are integrated, the testing done is called component integration testing. This testing is done to ensure that the code does not break after integrating the two modules.
System integration testing: System integration testing (SIT) is testing where the testers verify that, in the same environment, all the related systems maintain data integrity and can operate in coordination with other systems.
System testing: In system testing the testers test the complete, integrated application, including its compatibility with the system environment.
Acceptance testing: Acceptance testing is done to ensure that the requirements of the specification are met.
Alpha testing: Alpha testing is done at the developer’s site, at the end of the development process.
Beta testing: Beta testing is done at the customer’s site, just before the launch of the product.
Unit tests are written and executed by software developers to make sure that the code meets its design and requirements and behaves as expected. The goal of unit testing is to isolate each part of the program and test that the individual parts work correctly. This means that for any function or procedure, when a given set of inputs is supplied, it should return the proper values, and it should handle failures gracefully when invalid input is given. A unit test thus provides a written contract that the piece of code must assure, which has several benefits.
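A minimal sketch of such a contract using Python's built-in unittest module (the conversion function is hypothetical):

```python
import unittest

def fahrenheit_to_celsius(f):
    """Unit under test (a hypothetical example function)."""
    return (f - 32) * 5 / 9

class TestConversion(unittest.TestCase):
    # These test cases spell out the contract the code must assure.
    def test_freezing_point(self):
        self.assertEqual(fahrenheit_to_celsius(32), 0)

    def test_boiling_point(self):
        self.assertEqual(fahrenheit_to_celsius(212), 100)

    def test_rejects_non_numeric_input(self):
        # Invalid input should fail loudly, not return garbage.
        with self.assertRaises(TypeError):
            fahrenheit_to_celsius("hot")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestConversion)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Note the invalid-input case sits alongside the valid ones, per the guideline earlier in these notes.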
Unit testing is done before integration testing.
1. Issues are found at an early stage. Since unit testing is carried out by developers, who test their
individual code before integration, issues can be found very early and resolved then and there
without impacting other pieces of code.
2. Unit testing helps in maintaining and changing the code. This is possible by making the code less
interdependent so that unit testing can be executed. Hence the chance that a change impacts any
other code gets reduced.
3. Since bugs are found early, unit testing also helps in reducing the cost of bug fixes. Just imagine
the cost of a bug found during the later stages of development, such as during system testing or
during acceptance testing.
4. Unit testing helps in simplifying the debugging process. If a test fails, then only the latest
changes made in the code need to be debugged.
Component testing is also known as module and program testing. It finds the defects in the
module and verifies the functioning of software.
Component testing is done by the tester.
Component testing may be done in isolation from the rest of the system, depending on the
development life cycle model chosen for that particular application. In such cases the missing
software is replaced by stubs and drivers, which simulate the interface between the software
components in a simple manner.
Let’s take an example to understand it in a better way. Suppose there is an application consisting
of three modules, say module A, module B and module C. The developer has developed
module B and now wants to test it. But in order to test module B completely, a few of its
functionalities are dependent on module A and a few on module C. However, module A and module
C have not been developed yet. In that case, to test module B completely, we can replace
module A and module C by stubs and drivers as required.
Stub: A stub is called from the software component to be tested. As shown in the diagram below,
‘Stub’ is called by ‘component A’.
Driver: A driver calls the component to be tested. As shown in the diagram below, ‘component
B’ is called by the ‘Driver’.
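A minimal sketch of the A/B/C example, assuming hypothetical order-status functions (none of these names come from the text): module B is the component under test, a stub stands in for the unfinished module C that B calls, and a driver stands in for the unfinished module A that would call B.

```python
# Stub: called BY the component under test, returning a canned answer
# in place of the unfinished module C.
def module_c_stub(order_id):
    return {"order_id": order_id, "status": "SHIPPED"}

# Module B: the component under test; it depends on a function that
# module C was supposed to provide.
def module_b_process(order_id, fetch_status=module_c_stub):
    record = fetch_status(order_id)
    return f"Order {record['order_id']} is {record['status']}"

# Driver: CALLS the component under test, standing in for the
# unfinished module A.
def driver():
    result = module_b_process(42)
    assert result == "Order 42 is SHIPPED"
    return result

print(driver())
```

The stub and driver are deliberately trivial: they only simulate the interfaces, so module B can be tested before A and C exist.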
As discussed in the previous section on unit testing, unit testing is done by the developers, who test
individual functionality or procedures. After unit testing is executed, component testing
comes into the picture. Component testing is done by the testers.
Component testing plays a very important role in finding bugs. Before we start with integration
testing it is always preferable to do component testing, in order to ensure that each component of the
application is working effectively.
Also, after integrating two different components together we do the integration testing. As
displayed in the image below, when two different modules ‘Module A’ and ‘Module B’ are
integrated, the integration testing is done.
Integration testing is done by a specific integration tester or test team.
Integration testing follows two approaches, known as the ‘Top Down’ approach and the ‘Bottom Up’
approach, as shown in the image below:
In Big Bang integration testing all components or modules are integrated simultaneously, after
which everything is tested as a whole. In this approach individual modules are not integrated
until all the modules are ready.
In Big Bang integration testing all the modules are integrated without performing any incremental
integration testing, and then the whole is executed to know whether all the integrated modules are
working fine or not.
Because everything is integrated at one time, if any failure occurs it becomes very difficult
for the programmers to find the root cause of that failure.
If any bug arises, the developers have to detach the integrated modules in order to find
the actual cause of the bug.
Big Bang testing has the advantage that everything is finished before integration testing starts.
If any bug is found, it is very difficult to detach all the modules in order to find out its root
cause.
There is a high probability that critical bugs will occur in the production environment.
Suppose a system consists of four modules, as displayed in the diagram above. In big bang integration
all four modules ‘Module A, Module B, Module C and Module D’ are integrated simultaneously
and then the testing is performed. Hence in this approach no individual integration testing is performed,
because of which the chances of critical failures increase.
Incremental integration testing, by contrast, has the advantage that all programs are integrated one
by one and a test is carried out after each step.
A disadvantage is that it can be time-consuming, since stubs and drivers have to be developed
and used in the test.
Within incremental integration testing a range of possibilities exist, partly depending on the
system architecture:
o Top down: Testing takes place from top to bottom, following the control flow or
architectural structure (e.g. starting from the GUI or main menu). Components or systems are
substituted by stubs.
o Bottom up: Testing takes place from the bottom of the control flow upwards. Components
or systems are substituted by drivers.
o Functional incremental: Integration and testing take place on the basis of the functions
and functionalities, as documented in the functional specification.
a) Top-down integration testing: Testing takes place from top to bottom, following the control flow
or architectural structure (e.g. starting from the GUI or main menu). Components or systems are
substituted by stubs. Below is the diagram of ‘Top down Approach’:
Advantages of Top-Down approach:
The tested product is very consistent because the integration testing is performed in an
environment that closely resembles the real one.
Stubs can be written in less time because, compared to drivers, stubs are simpler to
author.
b) Bottom-up integration testing: Testing takes place from the bottom of the control flow upwards.
Components or systems are substituted by drivers. Below is the image of ‘Bottom up approach’:
In this approach development and testing can be done together so that the product or application
will be efficient and as per the customer specifications.
Test drivers must be created for modules at all levels except the top control module.
c) Functional incremental: Integration and testing take place on the basis of the functions and
functionalities, as documented in the functional specification.
It tests the interactions between software components and is done after component testing.
The software components themselves may be specified at different times by different specification
groups, yet the integration of all of the pieces must work together.
In system testing the behavior of the whole system/product is tested as defined by the scope of the
development project or product.
It may include tests based on risks and/or requirement specifications, business processes, use
cases, or other high-level descriptions of system behavior, interactions with the operating
system, and system resources.
System testing is most often the final test to verify that the system to be delivered meets the
specification and its purpose.
System testing is carried out by specialist testers or independent testers.
System testing should investigate both the functional and the non-functional requirements of the
system.
System integration testing (SIT) tests the interactions between different systems and may be
done after system testing.
It verifies the proper execution of software components and proper interfacing between
components within the solution.
The objective of SIT is to validate that all software module dependencies are functionally
correct and that data integrity is maintained between separate modules for the entire
solution.
As testing for dependencies between different components is a primary function of SIT,
this area is often most subject to regression testing.
After system testing has corrected all or most defects, the system is delivered to the user
or customer for Acceptance Testing or User Acceptance Testing (UAT).
Acceptance testing is basically done by the user or customer although other stakeholders may
be involved as well.
The goal of acceptance testing is to establish confidence in the system.
Acceptance testing is most often focused on a validation type testing.
Acceptance testing may occur at more than just a single level, for example:
o A Commercial Off the shelf (COTS) software product may be acceptance tested when
it is installed or integrated.
o Acceptance testing of the usability of a component may be done during component
testing.
o Acceptance testing of a new functional enhancement may come before system
testing.
The types of acceptance testing are:
o The User Acceptance test: focuses mainly on the functionality thereby validating the
fitness-for-use of the system by the business user. The user acceptance test is performed
by the users and application managers.
o The Operational Acceptance test: also known as the Production acceptance test, it validates
whether the system meets the requirements for operation. In most organizations the
operational acceptance test is performed by system administrators before the system
is released. The operational acceptance test may include testing of backup/restore, disaster
recovery, maintenance tasks and periodic checks of security vulnerabilities.
o Contract Acceptance testing: It is performed against the contract’s acceptance criteria
for producing custom developed software. Acceptance should be formally defined when
the contract is agreed.
o Compliance acceptance testing: also known as regulation acceptance testing, it is
performed against the regulations which must be adhered to, such as governmental, legal
or safety regulations.
Alpha testing is one of the most common software testing strategies used in software
development. It is especially used by product development organizations.
This test takes place at the developer’s site. Developers observe the users and note problems.
Alpha testing is testing of an application when development is about to complete. Minor design
changes can still be made as a result of alpha testing.
Alpha testing is typically performed by a group that is independent of the design team, but still
within the company, e.g. in-house software test engineers, or software QA engineers.
Alpha testing is final testing before the software is released to the general public. It has two
phases:
o In the first phase of alpha testing, the software is tested by in-house developers. They
use either debugger software, or hardware-assisted debuggers. The goal is to catch bugs
quickly.
o In the second phase of alpha testing, the software is handed over to the software QA
staff, for additional testing in an environment that is similar to the intended use.
A beta test is the second phase of software testing in which a sampling of the intended audience
tries the product out. (Beta is the second letter of the Greek alphabet.) Originally, the term alpha
testing meant the first phase of testing in a software development process. The first phase
includes unit testing, component testing, and system testing. Beta testing can be considered
“pre-release testing”.
The goal of beta testing is to place your application in the hands of real users outside of your
own engineering team, to discover any flaws or issues from the user’s perspective that you
would not want in your final, released version of the application. Example: Microsoft
and many other organizations release beta versions of their products to be tested by users.
Closed beta versions are released to a select group of individuals for a user test and are
invitation only, while open betas are open to a larger group or the general public and anyone
interested. The testers report any bugs that they find, and sometimes suggest additional
features they think should be available in the final version.
You have the opportunity to get your application into the hands of users prior to releasing it to
the general public.
Users can install, test your application, and send feedback to you during this beta testing period.
Your beta testers can discover issues with your application that you may not have noticed, such
as confusing application flow, and even crashes.
Using the feedback you get from these users, you can fix problems before the application is released
to the general public.
The more issues you fix that solve real user problems, the higher the quality of your application
when you release it to the general public.
Having a higher-quality application when you release to the general public will increase
customer satisfaction.
These users, who are early adopters of your application, will generate excitement about your
application.
TEST COVERAGE
What is test coverage in software testing? Its advantages and disadvantages
Test coverage measures the amount of testing performed by a set of tests. Wherever we can count things
and can tell whether or not each of those things has been tested by some test, we can measure coverage;
this is known as test coverage. The basic coverage measure is

Coverage = (number of coverage items exercised / total number of coverage items) × 100%

where the ‘coverage item’ is whatever we have been able to count and see whether a test has exercised
or used.
• Equivalence partitioning: percentage of equivalence partitions exercised (we could measure valid and invalid
partition coverage separately if this makes sense);
• Boundary Value Analysis: percentage of boundaries exercised (we could also separate valid and invalid
boundaries if we wished);
• Decision tables: percentage of business rules or decision table columns tested;
• State transition testing: there are a number of possible coverage measures:
— Percentage of transitions exercised (Chow’s 0-switch coverage), percentage of pairs of transitions
exercised (Chow’s 1-switch coverage) – and longer series of transitions, such as transition triples,
quadruples, etc.
— Percentage of invalid transitions exercised (from the state table).
Code sample (reconstructed from the Z values used in the tests below):
1 READ X
2 READ Y
3 Z = X + 2*Y
4 IF Z > 50 THEN
5 PRINT “Large Z”
6 ENDIF
TEST SET 1
Test 1_1: X= 2, Y = 3
Test 1_2: X =0, Y = 25
Test 1_3: X =47, Y = 1
In Test 1_1, the value of Z will be 8, so we will cover the statements on lines 1 to 4 and line 6.
In Test 1_2, the value of Z will be 50, so we will cover exactly the same statements as Test 1_1.
In Test 1_3, the value of Z will be 49, so again we will cover the same statements.
Since we have covered five out of six statements, we have 83% statement coverage (with three tests). What test
would we need in order to cover statement 5, the one statement that we haven’t exercised yet? How about this
one:
Test 1_4: X = 20, Y = 25
This time the value of Z is 70, so we will print ‘Large Z’ and we will have exercised all six of the statements, so
now statement coverage = 100%. Notice that we measured coverage first, and then designed a test to cover the
statement that we had not yet covered.
Note that Test 1_4 on its own is more effective than the first three tests together, since it achieves 100%
statement coverage by itself. Taking Test 1_4 on its own is also more efficient than the set of four tests, since it
uses only one test instead of four. Being more effective and more efficient is the mark of a good test technique.
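The pseudocode example can be sketched in Python to make the coverage argument concrete. Consistent with the Z values quoted above (8, 50, 49, 70), Z = X + 2*Y is assumed, with ‘Large Z’ printed when Z exceeds 50:

```python
def classify(x, y):
    z = x + 2 * y             # statement computing Z
    if z > 50:                # the decision
        return z, "Large Z"   # statement 5: only reached when z > 50
    return z, ""              # fall-through (ENDIF)

# TEST SET 1 never makes z exceed 50, so statement 5 stays uncovered:
assert classify(2, 3) == (8, "")
assert classify(0, 25) == (50, "")    # 50 is not greater than 50
assert classify(47, 1) == (49, "")

# A single extra test with z = 70 exercises statement 5,
# bringing statement coverage to 100%:
assert classify(20, 25) == (70, "Large Z")
```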
Branch coverage – Has each branch of each control structure (such as in if and case statements) been
executed? For example, given an if statement, have both the true and false branches been executed?
Another way of saying this is: has every edge in the program been executed?
Branch coverage is also known as decision coverage or all-edges coverage.
It covers both the true and false conditions, unlike statement coverage.
A branch is the outcome of a decision, so branch coverage simply measures which decision outcomes
have been tested.
A decision is an IF statement, a loop control statement (e.g. DO-WHILE or REPEAT-UNTIL), or a
CASE statement, where there are two or more outcomes from the statement. With an IF statement, the
exit can either be TRUE or FALSE, depending on the value of the logical condition that comes after IF.
1 READ A
2 READ B
3 C = A – 2*B
4 IF C < 0 THEN
5 PRINT “C negative”
6 ENDIF
Let’s suppose that we already have the following test, which gives us 100% statement coverage for the code
sample above:
Test 2_1: A = 20, B = 15
The value of C is -10, so the condition ‘C < 0’ is True, so we will print ‘C negative’ and we have executed the
True outcome from that decision statement. But we have not executed the False outcome of the decision
statement. What other test would we need to exercise the False outcome and to achieve 100% decision coverage?
Before we answer that question, let’s have a look at another way to represent this code. Sometimes the decision
structure is easier to see in a control flow diagram (see Figure 4.4).
The dotted line shows where Test 2_1 has gone and clearly shows that we haven’t yet had a test that takes the
False exit from the IF statement.
Let’s modify our existing test set by adding another test:
TEST SET 2
Test 2_1: A = 20, B = 15
Test 2_2: A = 10, B = 2
This now covers both of the decision outcomes, True (with Test 2_1) and False (with Test 2_2). If we were to
draw the path taken by Test 2_2, it would be a straight line from the read statement down the False exit and
through the ENDIF. We could also have chosen other numbers to achieve either the True or False outcomes.
Condition coverage (or predicate coverage) – Has each Boolean sub-expression evaluated both to true
and false?
This is closely related to decision coverage but has better sensitivity to the control flow.
However, full condition coverage does not guarantee full decision coverage.
Condition coverage measures the conditions independently of each other.
For example, consider the following C function (reconstructed to match the discussion below):
int foo (int x, int y)
{
    int z = 0;
    if ((x > 0) && (y > 0)) {
        z = x;
    }
    return z;
}
Assume this function is a part of some bigger program and this program was run with some test suite.
Statement coverage for this function will be satisfied if it is called e.g. as foo(1,1), as in this case
every line in the function is executed, including z = x;.
Tests calling foo(1,1) and foo(0,1) will satisfy branch coverage because, in the first case, both if
conditions are met and z = x; is executed, while in the second case, the first condition (x>0) is not satisfied,
which prevents executing z = x;.
Condition coverage can be satisfied with tests that call foo(1,0) and foo(0,1). These are necessary
because in the first case (x>0) evaluates to true, while in the second it evaluates to false. At the same time,
the first case makes (y>0) false, while the second makes it true.
Condition coverage does not necessarily imply branch coverage. For example, consider the following fragment
of code:
if a and b then
Condition coverage can be satisfied by two tests:
a=true, b=false
a=false, b=true
However, this set of tests does not satisfy branch coverage since neither case will meet the if condition.
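This can be sketched directly (the function name is illustrative): the two tests give each condition both truth values, yet the then-branch of `a and b` is never entered, so branch coverage is not achieved:

```python
def decision(a, b):
    if a and b:
        return "then-branch"
    return "else-branch"

# Condition coverage: a and b each take both truth values...
tests = [(True, False), (False, True)]
# ...but neither test makes the whole condition true:
assert all(decision(a, b) == "else-branch" for a, b in tests)
```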
Condition/decision coverage requires that both decision and condition coverage be satisfied. However,
for safety-critical applications it is often required that modified condition/decision coverage (MC/DC) be
satisfied. This criterion extends the condition/decision criteria with the requirement that each condition should
affect the decision outcome independently. For example, consider the following code:
if (a or b) and c then
Condition/decision coverage can be satisfied by two tests:
a=true, b=true, c=true
a=false, b=false, c=false
However, this test set will not satisfy modified condition/decision coverage, since in the first test the
value of 'b' and in the second test the value of 'c' would not influence the output. So, the following test set is
needed to satisfy MC/DC:
a=false, b=true, c=false
a=false, b=true, c=true
a=false, b=false, c=true
a=true, b=false, c=true
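A quick way to verify the MC/DC set above is to encode the decision and check that, between suitable pairs of tests, flipping a single condition flips the outcome (a Python sketch):

```python
def decision(a, b, c):
    return (a or b) and c

# The four MC/DC tests from the text, in order:
assert decision(False, True, False) is False
assert decision(False, True, True) is True    # only c changed: outcome flips
assert decision(False, False, True) is False  # only b changed: outcome flips
assert decision(True, False, True) is True    # only a changed: outcome flips
```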
Multiple condition coverage requires that all combinations of conditions inside each decision are tested. For
example, the code fragment from the previous section, with three conditions, will require eight tests: every
combination of truth values for a, b and c.
The internal factors that influence the decisions about which technique to use are:
Models used in developing the system – Since testing techniques are based on the models used to develop
that system, those models will to some extent govern which testing techniques can be used. For example, if the
specification contains a state transition diagram, state transition testing would be a good technique to use.
Testers' knowledge and experience – How much testers know about the system and about testing
techniques will clearly influence their choice of testing techniques. This knowledge will in itself be
influenced by their experience of testing and of the system under test.
Similar types of defects – Knowledge of similar kinds of defects is very helpful in choosing testing
techniques (since each technique is good at finding a particular type of defect). This knowledge
could be gained through experience of testing a previous version of the system and previous levels of
testing on the current version.
Test objective – If the test objective is simply to gain confidence that the software will cope with typical
operational tasks, then use cases would be a sensible approach. If the objective is very thorough testing,
then more rigorous and detailed techniques (including structure-based techniques) should be chosen.
Documentation – Whether or not documentation (e.g. a requirements specification) exists and whether
or not it is up to date will affect the choice of testing techniques. The content and style of the
documentation will also influence the choice of techniques (for example, if decision tables or state graphs
have been used then the associated test techniques should be used).
Life cycle model used – A sequential life cycle model will lend itself to the use of more formal
techniques, whereas an iterative life cycle model may be better suited to an exploratory testing
approach.
The external factors that influence the decisions about which technique to use are:
Risk assessment – The greater the risk (e.g. safety-critical systems), the greater the need for more
thorough and more formal testing. Commercial risk may be influenced by quality issues (so more thorough
testing would be appropriate) or by time-to-market issues (so exploratory testing would be a more
appropriate choice).
Type of system used – The type of system (e.g. embedded, graphical, financial, etc.) will influence the
choice of techniques. For example, a financial application involving many calculations would benefit
from boundary value analysis.
Regulatory requirements – Some industries have regulatory standards or guidelines that govern the
testing techniques used. For example, the aircraft industry requires the use of equivalence partitioning,
boundary value analysis and state transition testing for high integrity systems together with statement,
decision or modified condition decision coverage depending on the level of software integrity required.
Time and budget of the project – Ultimately how much time there is available will always affect the
choice of testing techniques. When more time is available we can afford to select more techniques and
when time is severely limited we will be limited to those that we know have a good chance of helping us
find just the most important defects.
Mutation Testing
Mutation Testing is a type of software testing where we mutate (change) certain statements in the
source code and check whether the test cases are able to find the errors. It is a type of White Box Testing which
is mainly used for Unit Testing. The changes in the mutant program are kept extremely small, so they do not
affect the overall objective of the program.
The goal of Mutation Testing is to assess the quality of the test cases, which should be robust enough to fail
mutant code. This method is also called a fault-based testing strategy, as it involves deliberately creating faults
in the program.
Step 1: Faults are introduced into the source code of the program by creating many versions called mutants.
Each mutant should contain a single fault, and the goal is to cause the mutant version to fail which demonstrates
the effectiveness of the test cases.
Step 2: Test cases are applied to the original program and also to the mutant program. A test case should be
adequate, and it is tweaked to detect faults in a program.
Step 3: Compare the outputs of the original program and the mutant program.
Step 4: If the original program and mutant program generate different outputs, then the mutant is killed
by the test case. Hence the test case is good enough to detect the change between the original and the mutant
program.
Step 5: If the original program and mutant program generate the same output, the mutant is kept alive. In such
cases, more effective test cases need to be created that kill all mutants.
A mutation is nothing but a single syntactic change that is made to the program statement. Each mutant
program should differ from the original program by one mutation.
Original program:
If (x > y)
Else
Mutant program:
If (x < y)
Else
Mutation testing can be fundamentally categorized into three types: statement mutation, decision mutation, and
value mutation.
1. Statement Mutation – a developer cuts and pastes a part of the code; the outcome may be the removal of
some lines
2. Value Mutation- values of primary parameters are modified
3. Decision Mutation- control statements are to be changed
Mutation Score:
The mutation score is defined as the percentage of killed mutants out of the total number of mutants:
Mutation Score = (Killed Mutants / Total Mutants) × 100
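As a sketch (the original/mutant pair and the test values are illustrative, echoing the If (x>y) → If (x<y) mutation shown earlier):

```python
# Original program and one mutant differing by a single syntactic
# change (> replaced by <):
def original(x, y):
    return x if x > y else y

def mutant(x, y):
    return x if x < y else y

def mutation_score(killed, total):
    return 100.0 * killed / total

# A test case kills the mutant when the two programs' outputs differ:
test_cases = [(2, 3), (5, 1)]
killed = sum(1 for x, y in test_cases if original(x, y) != mutant(x, y))
assert killed == 2                      # both test cases kill this mutant
assert mutation_score(killed, 2) == 100.0
```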
There are several techniques that could be used to generate mutant programs.
Operand replacement operators: replace an operand with another operand (x with y or y with x) or with a
constant value.
Expression modification operators: replace an operator or insert new operators in a program statement.
Statement modification operators: programmatic statements are modified to create mutant programs.
Mutation testing is extremely costly and time-consuming, since many mutant programs need to be
generated.
Since it is time-consuming, it is fair to say that this testing cannot be done without an automation tool.
Each mutant is run against the same test cases as the original program, so a large number of mutant
programs may need to be tested against the original test suite.
As this method involves source code changes, it is not at all applicable for Black Box Testing.
Static techniques
Dynamic techniques
The dynamic, specification-based (black-box) techniques include:
Equivalence partitioning
Boundary value analysis
Decision tables
State transition testing(Finite state testing)
Cause Effect Graphing
Syntax Testing
In white-box testing the tester concentrates on how the software does it. For example, a
structural technique may be concerned with exercising loops in the software.
Different test cases may be derived to exercise the loop once, twice, and many times. This may be
done regardless of the functionality of the software.
Structure-based techniques can also be used at all levels of testing. Developers use structure-based
techniques in component testing and component integration testing, especially where there is
good tool support for code coverage.
Structure-based techniques are also used in system and acceptance testing, but the structures are
different. For example, the coverage of menu options or major business transactions could be the
structural element in system or acceptance testing.
Dividing the test input data into ranges of values and selecting one input value from each range is called
equivalence partitioning. This is a black box test design technique used to improve the effectiveness of test
cases; it can be applied at all levels of testing, from unit and integration to system testing and so forth. In this
method, the input domain data is divided into different equivalence data classes. This method is typically used
to reduce the total number of test cases to a finite set of testable test cases, while still covering maximum
requirements.
In short, it is the process of taking all possible test cases and placing them into classes. One test value is
picked from each class while testing. E.g.: if you are testing an input box accepting numbers from 1 to 1000,
there is no use in writing a thousand test cases for all 1000 valid input numbers, plus other test cases for
invalid data.
Using the equivalence partitioning method, the above test cases can be divided into three sets of input data
called classes. Each test case is a representative of its respective class. So in the above example, we can divide
our test cases into three equivalence classes of valid and invalid inputs.
Test cases for input box accepting numbers between 1 and 1000 using Equivalence Partitioning:
1) One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a valid test case. If
you select other values between 1 and 1000 the result is going to be the same, so one test case for valid input
data should be sufficient.
2) Input data class with all values below the lower limit. I.e. any value below 1, as an invalid input data test
case.
3) Input data with any value greater than 1000 to represent third invalid input class.
So, using equivalence partitioning, you have categorized all possible test cases into three classes. Test
cases with other values from any class should give you the same result. We have selected one representative
from every input class to design our test cases. Test case values are selected in such a way that the largest
number of attributes of the equivalence class can be exercised.
Equivalence partitioning uses the fewest test cases to cover maximum requirements.
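For the 1-to-1000 input box, the three classes and their representatives can be sketched as:

```python
# Validation rule for the input box (1 to 1000 inclusive):
def accept(n):
    return 1 <= n <= 1000

# One representative per equivalence class is enough:
assert accept(500) is True     # valid class: 1..1000
assert accept(0) is False      # invalid class: values below 1
assert accept(1001) is False   # invalid class: values above 1000
```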
Example 2
That means results for values in partitions 0-5, 6-10, 11-14 should be equivalent
So these extreme ends like Start- End, Lower- Upper, Maximum-Minimum, Just Inside-Just
Outside values are called boundary values and the testing is called "boundary testing".
The basic idea in boundary value testing is to select input variable values at their:
1. Minimum
2. Just above the minimum
3. A nominal value
4. Just below the maximum
5. Maximum
The ‘boundary value analysis’ testing technique is used to identify errors at boundaries rather than
those that exist in the center of the input domain. Here we have both valid boundaries (in the valid partitions)
and invalid boundaries (in the invalid partitions). Boundary value analysis complements equivalence
partitioning in designing test cases: test cases are selected at the edges of the equivalence classes.
Test cases for an input box accepting numbers between 1 and 1000 using boundary value analysis:
1) Test cases with test data exactly on the input boundaries of the input domain, i.e. values 1 and 1000 in our
case.
2) Test data with values just below the extreme edges of the input domain, i.e. values 0 and 999.
3) Test data with values just above the extreme edges of the input domain, i.e. values 2 and 1001.
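The boundary values listed above can be checked directly against the same validation rule:

```python
# Validation rule for the input box (1 to 1000 inclusive):
def accept(n):
    return 1 <= n <= 1000

# Values on and just inside the boundaries are accepted:
for n in (1, 2, 999, 1000):
    assert accept(n)
# Values just outside the boundaries are rejected:
for n in (0, 1001):
    assert not accept(n)
```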
Boundary value analysis is often considered part of stress and negative testing.
1. This testing is used to reduce a very large number of test cases to manageable chunks.
2. It gives very clear guidelines on determining test cases without compromising the effectiveness of
testing.
3. It is appropriate for calculation-intensive applications with a large number of variables/inputs.
A cause-effect graph is a black box testing technique that graphically illustrates the relationship between
a given outcome and all the factors that influence that outcome. It is also known as an Ishikawa diagram,
after Kaoru Ishikawa, or a fishbone diagram because of the way it looks. A “cause”
stands for a separate input condition that brings about an internal change in the system. An “effect”
represents an output condition, a system transformation, or a state resulting from a combination of
causes.
Cause Effect - Flow Diagram
To identify the possible root causes, the reasons for a specific effect, problem, or outcome.
To relate the interactions of the system among the factors affecting a particular process or
effect.
To analyze the existing problems so that corrective action can be taken at the earliest
opportunity.
Benefits :
It helps us to determine the root causes of a problem or quality characteristic using a structured approach.
It uses an orderly, easy-to-read format to diagram cause-and-effect relationships.
It indicates possible causes of variation in a process.
It identifies areas where data should be collected for further study.
It encourages team participation and utilizes the team's knowledge of the process.
It increases knowledge of the process by helping everyone to learn more about the factors at
work and how they relate.
Fig: Notations
Let’s draw a cause and effect graph based on a situation
Situation:
The "Print message" software reads two characters and, depending on their values, certain messages must be
printed.
Solution:
LET’S START!!
Key – Always go from effect to cause (left to right). That means: to get effect "E", determine which causes must be true.
The circle in the middle is just an intermediate point to make the graph less messy.
There is a third condition: C1 and C2 are mutually exclusive. So the final graph for effect E1 to be true is
shown below:
Let’s move to Effect E2:
E2 states to print message "X". Message X will be printed when the first character is neither A nor B,
which means Effect E2 will hold true when both C1 and C2 are false. So the graph for Effect E2 is shown as
(in blue line)
This completes the Cause and Effect graph for the above situation.
Now let’s move to draw the Decision table based on the above graph.
First, write down the Causes and Effects in a single column shown below
The key is the same: go from bottom to top, which means traverse from effect to cause.
Start with Effect E1. For E1 to be true, the condition is (C1 OR C2) AND C3.
Here we represent True as 1 and False as 0.
For E2 to be true, both C1 and C2 have to be false, shown as follows.
So it's done. Let's complete the table by adding 0 in the blank columns and including the test case identifiers.
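The decision-table derivation above can be enumerated with a short sketch. The relations E1 = (C1 OR C2) AND C3 and E2 = NOT C1 AND NOT C2 come from the discussion above; note that the concrete meaning of each cause (e.g. C1: "first character is A") is an assumption for illustration.

```python
from itertools import product

# Compute both effects from the three causes, per the graph above.
def effects(c1, c2, c3):
    e1 = (c1 or c2) and c3      # E1 = (C1 OR C2) AND C3
    e2 = (not c1) and (not c2)  # E2 = first character is neither A nor B
    return e1, e2

# Enumerate the decision table, skipping the impossible row where
# C1 and C2 are both true (they are mutually exclusive).
print("C1 C2 C3 | E1 E2")
for c1, c2, c3 in product([True, False], repeat=3):
    if c1 and c2:
        continue
    e1, e2 = effects(c1, c2, c3)
    print(int(c1), int(c2), int(c3), "|", int(e1), int(e2))
```

Each printed row is a candidate test case; rows with E1 = 1 or E2 = 1 match the columns built by hand above.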
Writing Test cases from the decision table
A sample test case for test case 1 (TC1) and Test Case 2 (TC2).
Error guessing is a technique of guessing the errors which can prevail in the code. It is
basically an experience-based technique that makes use of a tester's skill, intuition and experience in
testing similar applications to identify defects that may not be easy to capture by the more formal
techniques. Some people seem to be naturally good at testing, and others are good testers because they
have a lot of experience, either as a tester or working with a particular system, and so are able to find out
its weaknesses. This is why an error guessing approach, used after more formal techniques have been
applied to some extent, can be very effective. It also saves a lot of time, because the assumptions and
guesses made by experienced testers find defects which would otherwise be hard to find.
The success of error guessing is very much dependent on the skill of the tester, as good testers know
where the defects are most likely to be. Error guessing is usually done after more formal techniques are completed. If
the analyst guesses that the login page is error prone, then the testers write more detailed test cases
concentrating on the login page. Testers can think of a variety of combinations of data to test the login page.
To design test cases based on the error guessing technique, the analyst can use past experience to
identify the conditions. This technique can be used at any level of testing and for testing common
mistakes like:
Divide by zero
Entering blank spaces in the text fields
Pressing submit button without entering values.
Uploading files exceeding maximum limits.
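A minimal sketch of error-guessing test data for a login form follows; the validate_login function and its rules are hypothetical, invented only to show the kind of inputs (blanks, empty submits) an experienced tester would guess at.

```python
# Hypothetical validator for a login form (invented for illustration).
def validate_login(username, password):
    # Blank or whitespace-only fields are a classic guessed error case.
    if not username.strip() or not password.strip():
        return "error: field required"
    return "ok"

# Typical "guessed" inputs: empty submit, whitespace only, one blank field.
guessed_inputs = [("", ""), ("   ", "secret"), ("alice", "")]
for user, pwd in guessed_inputs:
    print(repr(user), repr(pwd), "->", validate_login(user, pwd))
```

All three guessed inputs should be rejected; a normal pair such as ("alice", "pw") passes.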
The error guessing technique requires a skilled and experienced tester. The following factors can be used to
guess the errors:
Lessons learnt from past releases
Historical learning
Previous defects
Review checklist
Application UI
Previous test results
Risk reports of the application
Variety of data used for testing.
Though error guessing is one of the key techniques of testing, it does not provide full coverage of
the application. It also cannot guarantee that the software has reached the expected quality benchmark,
so this technique should be combined with other techniques to yield better results.
The main drawback of error guessing is that it depends on the experience of the tester who is deploying it.
TEST PLAN FUNDAMENTALS
TEST PLAN DEFINITION
A Software Test Plan is a document describing the testing scope and activities. It is the basis for formally testing
any software/product in a project.
test plan: A document describing the scope, approach, resources and schedule of intended test activities.
It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task,
degree of tester independence, the test environment, the test design techniques and entry and exit criteria
to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record
of the test planning process.
A Test Plan helps us determine the effort needed to validate the quality of the application under test.
It helps people outside the test team, such as developers, business managers, and customers, understand the
details of testing.
A Test Plan guides our thinking. It is like a rule book, which needs to be followed.
Important aspects like test estimation, test scope, and test strategy are documented in the Test Plan, so it can be
reviewed by the Management Team and re-used for other projects.
Master Test Plan: A single high-level test plan for a project/product that unifies all other test plans.
Testing Level Specific Test Plans: Plans for each level of testing.
o Unit Test Plan
o Integration Test Plan
o System Test Plan
o Acceptance Test Plan
Testing Type Specific Test Plans: Plans for major types of testing like Performance Test Plan and Security Test Plan.
Make the plan concise. Avoid redundancy and superfluousness. Be specific. For example, when you
specify an operating system as a property of a test environment, mention the OS Edition/Version as well,
not just the OS Name.
Make use of lists and tables wherever possible. Avoid lengthy paragraphs.
Have the test plan reviewed a number of times prior to baselining it or sending it for approval. The
quality of your test plan speaks volumes about the quality of the testing you or your team is going to perform.
Update the plan as and when necessary. An outdated and unused document stinks and is worse than
not having the document in the first place.
TEST PLAN TEMPLATE
The format and content of a software test plan vary depending on the processes, standards, and test management
tools being implemented. Test plan contains the following:
You already know that making a Test Plan is the most important task of Test Management Process. Follow the
seven steps below to create a test plan as per IEEE 829
Step 1) Analyze the product
You should research clients and the end users to know their needs and expectations from the application
Test Strategy is a critical step in making a Test Plan. A Test Strategy document is a high-level document,
which is usually developed by the Test Manager. This document defines:
Back to your project, you need to develop Test Strategy for testing that banking website. You should follow
steps below
Step 2.1) Define Scope of Testing
Before the start of any test activity, scope of the testing should be known. You must think hard about it.
The components of the system to be tested (hardware, software, middleware, etc.) are defined as "in
scope"
The components of the system that will not be tested also need to be clearly defined as being "out of
scope."
A Testing Type is a standard test procedure that gives an expected test outcome.
Each testing type is formulated to identify a specific type of product bug. But all testing types are aimed at
achieving one common goal: "Early detection of all the defects before releasing the product to the customer".
Risk is a future uncertain event with a probability of occurrence and a potential for loss. When the risk
actually happens, it becomes an 'issue'.
In Test Logistics, the Test Manager should answer the following questions:
Who will test?
When will the test occur?
To select the right member for a specified task, you have to consider whether his skill is qualified for the task or not, and also
estimate the project budget. Selecting the wrong member for the task may cause the project to fail or be delayed.
You will start to test when you have all the required items shown in the following figure.
Test Objective is the overall goal and achievement of the test execution. The objective of the testing is to find as
many software defects as possible and to ensure that the software under test is bug free before release.
1. List all the software features (functionality, performance, GUI…) which may need to be tested.
2. Define the target or the goal of the test based on the above features.
Test Criteria is a standard or rule on which a test procedure or test judgment can be based. There are two types of
test criteria, as follows:
Suspension Criteria
Specify the critical suspension criteria for a test. If the suspension criteria are met during testing, the active test
cycle will be suspended until the criteria are resolved.
Example: if your team members report that 40% of test cases have failed, you should suspend testing until
the development team fixes all the failed cases.
Exit Criteria
It specifies the criteria that denote a successful completion of a test phase. The exit criteria are the targeted
results of the test and are necessary before proceeding to the next phase of development. Example: 95% of all
critical test cases must pass.
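The suspension and exit criteria above can be expressed as simple checks; the 40% and 95% thresholds are the ones used in the examples, and the function names are my own.

```python
# Suspension criterion: stop testing if too many cases have failed.
def should_suspend(total, failed, threshold=0.40):
    return failed / total >= threshold

# Exit criterion: the phase completes when enough critical cases pass.
def exit_met(critical_total, critical_passed, threshold=0.95):
    return critical_passed / critical_total >= threshold

print(should_suspend(100, 40))   # 40% of 100 cases failed -> suspend
print(exit_met(200, 190))        # 95% of critical cases passed -> exit
```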
A resource plan is a detailed summary of all types of resources required to complete project tasks. Resources could
be human, equipment and materials needed to complete a project. Resource planning is an important factor of
test planning because it helps in determining the number of resources (employees, equipment…) to be used
for the project. Therefore, the Test Manager can make the correct schedule & estimation for the project.
For tasks which require low skill, I recommend you choose outsourced
members to save project cost.
Developer in Test: implement the test cases, test program, test suite, etc.
For testing a web application, you should plan the resources as in the following tables:
A testing environment is a setup of software and hardware on which the testing team is going to execute test
cases. The test environment consists of real business and user environment, as well as physical environments,
such as server, front end running environment.
In the Test Estimation phase, suppose you break the whole project into small tasks and add the estimation
for each task as below.
Making a schedule is a common term in project management. By creating a solid schedule in Test Planning,
the Test Manager can use it as a tool for monitoring the project progress and controlling cost overruns.
To create the project schedule, the Test Manager needs several types of input as below:
Employee and project deadline: the working days, the project deadline, and resource availability are the
factors which affect the schedule.
Project estimation: based on the estimation, the Test Manager knows how long it takes to complete the
project, so he can make the appropriate project schedule.
Project risk: understanding the risks helps the Test Manager add enough extra time to the project schedule
to deal with them.
Test Deliverables is a list of all the documents, tools and other components that have to be developed and maintained
in support of the testing effort.
There are different test deliverables at every phase of the software development lifecycle.
Test Scripts
Simulators.
Test Data
Test Traceability Matrix
Error logs and execution logs.
Test Results/reports
Defect Report
Installation/ Test procedures guidelines
Test estimation is a management activity which approximates how long a Task would take to complete.
Estimating effort for the test is one of the major and important tasks in Test Management.
Why Test Estimation?
Two questions you can expect from your clients when discussing potential test engagements are
What to Estimate?
Resources: Resources are required to carry out any project tasks. They can be people, equipment, facilities,
funding, or anything else capable of definition required for the completion of a project activity.
Times : Time is the most valuable resource in a project. Every project has a deadline to delivery.
Human Skills: Human skills mean the knowledge and the experience of the team members. They affect
your estimation. For example, a team whose members have low testing skills will take more
time to finish the project than one which has high testing skills.
Cost: Cost is the project budget. Generally speaking, it means how much money it takes to finish the
project.
How to estimate?
Estimation Steps:
A task is a piece of work that has been given to someone. To break out tasks, you can use the Work Breakdown
Structure technique. In this technique, a complex project is divided into modules. The modules are divided into
sub-modules. Each sub-module is further divided into functionality. This means dividing the whole project task
into the smallest tasks.
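The breakdown described above can be sketched as a nested structure that is flattened into the smallest tasks; the module and task names below are invented for illustration.

```python
# A WBS as a nested dict: project -> modules -> sub-modules -> tasks.
wbs = {
    "Create test specification": {
        "Analyze requirements": {},
        "Design test cases": {
            "Write functional cases": {},
            "Write GUI cases": {},
        },
    },
}

def leaf_tasks(node, path=()):
    """Yield the smallest (leaf) tasks with their full path in the WBS."""
    for name, children in node.items():
        if children:
            yield from leaf_tasks(children, path + (name,))
        else:
            yield " > ".join(path + (name,))

for task in leaf_tasks(wbs):
    print(task)
```

The leaves are the "smallest tasks" the text refers to; each one then gets its own estimate and assignee.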
After that, you can break out each task into subtasks. The purpose of this activity is to create tasks
as detailed as possible.
Review test execution results
In this step, each task is assigned to the appropriate member of the project team. You can assign tasks as
follows:
Task Members
There are 2 techniques which you can apply to estimate the effort for tasks
Method 1) Function Point Method
In this method, the Test Manager estimates Size, Duration, and Cost for the tasks
In Step 1, you already broke the whole project task into small tasks using the WBS method. Now you
estimate the size of those tasks. Let's practice with a particular task, "Create the test specification". The size of
this task depends on the functional size of the system under test. The functional size reflects the amount of
functionality that is relevant to the user. The more functionality, the more complex the system is.
Prior to starting the actual estimation of task effort, function points are divided into three groups,
Complex, Medium and Simple, as follows:
Based on the complexity of the software functions, the Test Manager has to give enough weightage to each
function point. For example:
Group     Weightage
Complex   5
Medium    3
Simple    1
6. New Customer – Manager: A manager can add a new customer. (Weightage: 3)
Edit Customer – Manager: A manager can edit details like address, email, telephone of a customer.
10. Delete Customer – Manager: A customer can be deleted only if he/she has no active current or saving
accounts. A manager can delete a customer. (Weightage: 1)
After classifying the complexity of the function points, you have to estimate the duration to test them. Duration
means how much time is needed to finish the task.
Total Effort: the effort to completely test all the functions of the website.
Total Function Points: the total function points across the modules of the website.
Estimate defined per Function Point: the average effort to complete one function point. This value depends
on the productivity of the member who will take charge of this task.
Suppose your project team has an estimate defined per function point of 5 hours/point. You can estimate the
total effort to test all the features of the Guru99 Bank website as follows:
Group     Weightage   No. of Function Points   Total
Complex   5           3                        15
Medium    3           5                        15
Simple    1           4                        4
Total function points (weighted): 34
So the total effort to complete the task "Create the test specification" of the Bank website is around 170 man-hours.
This step helps you to answer the last question of customer “How much does it cost?”
Suppose, on average your team salary is $5 per hour. The time required for “Create Test Specs” task is 170
hours. Accordingly, the cost for the task is 5*170= $850. Now you can calculate budget for other activities in
WBS and arrive at overall budget for the project.
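The whole function-point calculation above can be reproduced in a short sketch, using the weightages, counts, 5 hours/point, and $5/hour figures from the example.

```python
# Function-point effort and cost estimate, reproducing the example above.
weightage = {"complex": 5, "medium": 3, "simple": 1}
counts = {"complex": 3, "medium": 5, "simple": 4}   # function points per group
hours_per_point = 5   # estimate defined per function point
rate_per_hour = 5     # dollars per man-hour

total_points = sum(weightage[g] * counts[g] for g in weightage)
effort_hours = total_points * hours_per_point
cost = effort_hours * rate_per_hour
print(total_points, effort_hours, cost)  # 34 points, 170 hours, $850
```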
Three-point estimation is one of the techniques that can be used to estimate a task. The simplicity of three-point
estimation makes it a very useful tool for a Project Manager who wants to estimate.
In three-point estimation, three values are produced initially for every task based on prior experience or best-
guesses as follows
When estimating a task, the Test Manager needs to provide three values, as specified above. The three values
identified estimate what happens in an optimal state, what is the most likely, and what we think the
worst-case scenario would be.
For the task "Create the test specification" in the above example:
The best case to complete this task is 120 man-hours (around 15 days). In this case, you have a talented
team, they can finish the task in smallest time.
The most likely case to complete this task is 170 man-hours (around 21 days). This is a normal case,
you have enough resource and ability to complete the task
The worst case to complete this task is 200 man-hours (around 25 days). You need to perform much
more work because your team members are not experienced.
The effort to complete the task can be calculated using the double-triangular (PERT) distribution formula as follows:

E = (Best + 4 × Most Likely + Worst) / 6

In the above formula, parameter E is known as the Weighted Average. It is the estimation of the task "Create the
test specification".
In the above estimation, you determine only a possible value, not a certain one, so we must know the probability that
the estimation is correct. You can use the other formula:

SD = (Worst − Best) / 6

In the above formula, SD means Standard Deviation; this value gives you information about
the probability that the estimation is correct.
Now you can conclude the estimation for the task “Create the test specification”
To complete the task "Create the test specification" of the Guru99 Bank website, you need 166.67 ± 13.33 man-hours
(153.33 to 180 man-hours).
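The three-point calculation above can be written out directly; the formulas E = (Best + 4 × Most Likely + Worst) / 6 and SD = (Worst − Best) / 6 reproduce the 166.67 ± 13.33 man-hour result.

```python
# Three-point (PERT / double-triangular) estimate for a task.
def three_point(best, likely, worst):
    e = (best + 4 * likely + worst) / 6   # weighted average
    sd = (worst - best) / 6               # standard deviation
    return e, sd

# Values for "Create the test specification" from the example above.
e, sd = three_point(120, 170, 200)
print(f"{e:.2f} +/- {sd:.2f} man-hours")  # 166.67 +/- 13.33 man-hours
```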
Once you create an aggregate estimate for all the tasks mentioned in the WBS, you need to forward it to
the management board, who will review and approve it. The management board will review and discuss your
estimation plan.
What is Test Monitoring and Test Control?
Test Monitoring and Test Control are basically management activities. Test monitoring is a process of
evaluating and providing feedback on the "currently in progress" testing phase, and test control is an activity of
guiding and taking corrective action based on some metrics or information to improve the efficiency and quality.
Why do we monitor?
What do we monitor?
Monitoring will allow you to make comparisons between your original plan and your progress so far. You will
be able to implement changes, where necessary, to complete the project successfully. We should monitor the key
parameters below:
Cost
You have to estimate and track basic cost information for your project. Having
accurate project estimates and a robust project budget is necessary to deliver the project within the
decided budget.
Schedules
Resources
Resources are all things required to carry out the project tasks. They can be people or equipment required to
complete the project activity. Lack of resources can affect the project progress. Monitoring resources will help
you to early detect any resource crunch and find a solution to deal with it.
Quality
Quality monitoring involves monitoring the results of specific work products (like the test case suite or test execution
log) to evaluate whether they meet the defined quality standards. In case results do not meet quality standards,
you need to identify potential resolutions.
How to monitor?
Are you on schedule? If not, how far behind are you, and how can you catch up?
Are you over budget?
Are you still working toward the same project goal?
Are you running low on resources?
Are there warning signs of impending problems?
Is there pressure from management to complete the project sooner?
To monitor project progress effectively, you should follow the following steps
You cannot monitor progress unless you have a plan to monitor progress with DEFINED metrics. In the Monitoring
Plan, you must plan carefully about
Now decide when or how often you are going to collect the data for monitoring in the monitoring plan –Weekly
or monthly? Or just at the start and end of the project?
In the monitoring plan, you should define the methods to evaluate the project’s progress via collected metrics.
Some methods you can refer are
Compare the progress in plan with the actual progress that the team has made
Define the criteria which are used to evaluate the project's progress. For example, if the effort to complete
a task exceeds the planned effort by more than 30%, the project is flagged as delayed.
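The 30%-over-plan criterion above can be sketched as a simple check; the task names and hour figures are invented for illustration.

```python
# Flag a task as delayed when actual effort exceeds plan by > 30%.
def is_delayed(planned_hours, actual_hours, threshold=0.30):
    return actual_hours > planned_hours * (1 + threshold)

# Hypothetical tracking data: task -> (planned hours, actual hours).
tasks = {"Write test cases": (40, 55), "Set up environment": (10, 11)}
for name, (planned, actual) in tasks.items():
    status = "DELAYED" if is_delayed(planned, actual) else "on track"
    print(name, status)
```

Here 55 hours against a 40-hour plan (37.5% over) is flagged, while 11 against 10 (10% over) is not.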
With time, your team members will be making progress on their project tasks. You must track their activity as per
the schedule and ask them to frequently update progress information such as time spent, task status, etc. By
checking these records, you can immediately see the impact on the project plan. One of the best methods to track
member progress is holding regular meetings.
In the meeting, all members report their current status and issues if any. If a team member or members have
fallen behind or have run into obstacles, formulate a plan for identifying and solving the problem.
In this step, you compare the progress you defined in the plan with the actual progress that the team has made. By
analyzing the records, you can see how much time has been spent on individual tasks and on the project
overall, detect early any issue which may happen to the project, and find a
solution to solve that issue.
Step 3.2) Adjustment
Make the necessary adjustments to keep your project on track. Reassign tasks, modify schedules, or reassess your
goals. This will help you keep moving toward the finish line.
You need to prepare progress report of the project. Using the report is a good option to share the overall project
progress with team members or the Management Board. It is also a useful way to show your boss whether the
project is on track.
Documentation: What will happen if you do not write down any discussion or decision in a document?
You may forget them and lose many things. You should write down discussions and decisions in the
appropriate place, and establish a formal documentation procedure for meetings. Such documentation
helps you to resolve issues of miscommunication or misunderstanding among the project team.
Proactivity: Issues occur in all projects. The important thing is that you have to adopt a proactive
approach to solve issues and problems that arise during project execution. Such issues could be budget,
scope, time, quality, and human resources.
What is GUI ?
There are two types of interfaces for a computer application. Command Line Interface is where you type
text and computer responds to that command. GUI stands for Graphical User Interface where you interact with
the computer using images rather than text. Following are the GUI elements which can be used for interaction
between the user and application:
GUI testing is the process of testing the system's Graphical User Interface of the Application Under Test.
GUI testing involves checking the screens with the controls like menus, buttons, icons, and all types of bars -
toolbar, menu bar, dialog boxes and windows, etc.
The GUI is what the user sees. For example, if you visit keralauniversity.ac.in, the home page you see is the
GUI (graphical user interface) of the site. A user does not see the source code; only the interface is visible to the user.
The focus is especially on the design structure and on whether the images are working properly. In the above example,
for GUI testing we first check that the images are completely visible in different browsers,
that the links are available, and that the buttons work when clicked. Also, if the user resizes the screen,
neither images nor content should shrink, crop, or overlap.
A normal user first observes the design and look of the application/software and how easy it is for him
to understand the UI. If a user is not comfortable with the interface or finds the application complex to understand, he
will never use that application again. That's why the GUI is a matter of concern, and proper testing
should be carried out in order to make sure that the GUI is free of bugs.
Check all the GUI elements for size, position, width, length and acceptance of characters or numbers.
For instance, you must be able to provide inputs to the input fields.
Check you can execute the intended functionality of the application using the GUI
Check Error Messages are displayed correctly
Check for clear boundary/separation of different sections on screen
Check Font used in application is readable
Check the alignment of the text is proper
Check the Color of the font and warning messages is aesthetically pleasing
Check that the images have good clarity
Check that the images are properly aligned
Check the positioning of GUI elements for different screen resolution.
Under this approach, graphical screens are checked manually by testers in conformance with the
requirements stated in the business requirements document.
GUI testing can be done using automation tools. This is done in 2 parts. During Record , test steps are
captured by the automation tool. During playback, the recorded test steps are executed on the Application Under
Test.
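The record-and-playback idea can be illustrated with a toy sketch. Real GUI automation tools capture browser or desktop events; this in-memory version, with invented action names, only shows the two-phase structure.

```python
# Toy record-and-playback: during "record" each user action is captured
# as (action, target, value); "playback" re-applies the captured steps
# to a stand-in for the Application Under Test.
recorded_steps = []

def record(action, target, value=None):
    recorded_steps.append((action, target, value))

def playback(steps, app_state):
    for action, target, value in steps:
        if action == "type":
            app_state[target] = value          # typing fills a field
        elif action == "click" and target == "submit":
            app_state["submitted"] = True      # clicking submits the form
    return app_state

# Record phase: the "tester" performs the actions once.
record("type", "username", "alice")
record("click", "submit")
# Playback phase: the same steps are replayed automatically.
print(playback(recorded_steps, {}))
```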
Model Based Testing
A model is a graphical description of a system's behavior. It helps us to understand and predict the system
behavior. Models help in the generation of efficient test cases using the system requirements. The following needs to
be considered for model-based testing:
Some of the modeling techniques from which test cases can be derived:
Charts - Depict the state of a system and check the state after some input.
Decision Tables - Tables used to determine results for each input applied.
Model-based testing is an evolving technique for generating test cases from the requirements. Its main
advantage, compared to the above two methods, is that it can determine undesirable states that your GUI can
attain.
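A sketch of deriving test cases from a state-transition model follows; the login states and inputs are invented for illustration.

```python
# A small state-transition model: (current state, input) -> next state.
transitions = {
    ("logged_out", "login"): "logged_in",
    ("logged_in", "logout"): "logged_out",
    ("logged_in", "view_profile"): "profile",
    ("profile", "back"): "logged_in",
}

def generate_test_cases(model):
    """Derive one test case per transition: start state, input, expected state."""
    return [
        {"start": s, "input": i, "expect": nxt}
        for (s, i), nxt in model.items()
    ]

for tc in generate_test_cases(transitions):
    print(tc)
```

Each generated case asserts that applying an input in a given state reaches the expected state; richer tools also derive whole paths and flag unreachable or undesirable states.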
GUI Testing Test Cases
Here we will use some sample test cases for the following screen.
Following below are the example of the Test cases, which consists of UI and Usability test scenarios.
TC 01- Verify that the text box with the label "Source Folder" is aligned properly.
TC 02 - Verify that the text box with the label "Package" is aligned properly.
TC 03 – Verify that label with the name "Browse" is a button which is located at the end of TextBox with the
name "Source Folder."
TC 04 – Verify that label with the name "Browse" is a button which is located at the end of Text Box with the
name "Package."
TC 05 – Verify that the text box with the label "Name" is aligned properly.
TC 06 – Verify that the label "Modifiers" consists of 4 radio buttons with the name public, default, private,
protected.
TC 07 – Verify that the label "Modifiers" consists of 4 radio buttons which are aligned properly in a row.
TC 08 – Verify that the label "Superclass" under the label "Modifiers" consists of a dropdown which must be
properly aligned.
TC 09 – Verify that the label "Superclass" consists of a button with the label "Browse" on it which must be
properly aligned.
TC 10 – Verify that on clicking any radio button, the default mouse pointer changes to a hand mouse
pointer.
TC 11 – Verify that user must not be able to type in the dropdown of "Superclass."
TC 12 – Verify that a proper error is generated if something has been chosen mistakenly.
TC 13 - Verify that the error must be generated in the RED color wherever it is necessary.
TC 15 – Verify that a single radio button is selected by default every time.
TC 16 – Verify that the TAB key works properly, moving focus from one field to the next.
TC 17 – Verify that all the pages must contain the proper title.
TC 19 – Verify that after updating any field a proper confirmation message must be displayed.
TC 20 - Verify that only one radio button may be selected at a time, while more than one checkbox may be selected.
What is Automation Testing?
Manual Testing is performed by a human sitting in front of a computer carefully executing the test steps.
Automation Testing means using an automation tool to execute your test case suite. The automation software
can also enter test data into the System Under Test , compare expected and actual results and generate detailed
test reports.
Test automation demands considerable investments of money and resources. Successive development
cycles will require execution of the same test suite repeatedly. Using a test automation tool it's possible to record this
test suite and re-play it as required. Once the test suite is automated, no human intervention is required. The goal
of automation is to reduce the number of test cases to be run manually, not to eliminate manual testing altogether.
Manual testing of all workflows, all fields, and all negative scenarios is time and cost consuming.
It is difficult to test multilingual sites manually.
Automation does not require Human intervention. You can run automated test unattended (overnight)
Automation increases speed of test execution
Automation helps increase Test Coverage
Manual Testing can become boring and hence error prone.
The following categories of test cases are not suitable for automation:
Test cases that are newly designed and not executed manually at least once
Test cases for which the requirements are changing frequently
Test cases which are executed on ad-hoc basis.
Test Tool selection largely depends on the technology the Application Under Test is built on.
Scope of automation is the area of your Application Under Test which will be automated. Following points help
determine scope:
During this phase you create Automation strategy & plan, which contains following details-
Test Execution
Automation scripts are executed during this phase. The scripts need input test data before they are set to run.
Once executed, they provide detailed test reports. Execution can be performed using the automation tool directly
or through the test management tool, which will invoke the automation tool.
Example: Quality Center is a test management tool which in turn invokes QTP for execution of
automation scripts. Scripts can be executed on a single machine or a group of machines. The execution can be
done during the night, to save time.
Maintenance
As new functionalities are added to the System Under Test with successive cycles, Automation Scripts need to
be added, reviewed and maintained for each release cycle. Maintenance becomes necessary to improve
effectiveness of Automation Scripts.
Framework in Automation
The scope of automation needs to be determined in detail before the start of the project. This sets expectations
from automation right.
Select the right automation tool: a tool must not be selected based on its popularity but on its fit to the
automation requirements.
Choose appropriate framework
Scripting Standards- Standards have to be followed while writing the scripts for Automation .Some of
them are-
o Create uniform scripts, comments and indentation of the code
o Adequate Exception handling - How error is handled on system failure or unexpected behavior
of the application.
o User defined messages should be coded or standardized for Error Logging for testers to understand.
Measure metrics- The success of automation cannot be determined by comparing the manual effort with the
automation effort alone, but also by capturing the following metrics.
o Percent of defects found
o Time required for automation testing for each and every release cycle
o Minimal Time taken for release
o Customer satisfaction Index
o Productivity improvement
The above guidelines if observed can greatly help in making your automation successful.
Selecting the right tool can be a tricky task. The following criteria will help you select the best tool for your
requirements:
Environment Support
Ease of use
Testing of Database
Object identification
Image Testing
Error Recovery Testing
Object Mapping
Scripting Language Used
Support for various types of test - including functional, test management, mobile, etc...
Support for multiple testing frameworks
Easy to debug the automation software scripts
Ability to recognize objects in any environment
Extensive test reports and results
Minimize training cost of selected tools
There are many Functional and Regression Testing tools available in the market. Here are 5 of the best tools
certified by our experts:
1. Selenium
It is a software testing tool used for regression testing. It is an open-source testing tool that provides playback
and recording facilities for regression testing. The Selenium IDE supports only the Mozilla Firefox web browser.
It provides the provision to export recorded scripts to other languages like Java, Ruby, RSpec, Python,
C#, etc.
It can be used with frameworks like JUnit and TestNG
It can execute multiple tests at a time
Autocomplete for Selenium commands that are common
Walkthrough tests
Identifies the element using id, name, XPath, etc.
Store tests as Ruby Script, HTML, and any other format
It provides an option to assert the title for every page
It supports selenium user-extensions.js file
It allows inserting comments in the middle of the script for better understanding and debugging
2. QTP
It is widely used for functional and regression testing; it addresses every major software application and
environment. To simplify test creation and maintenance, it uses the concept of keyword-driven testing. It allows
the tester to build test cases directly from the application.
It is easier for a non-technical person to adapt to and create working test cases
It fixes defects faster by thoroughly documenting and replicating defects for the developer
Collapses test creation and test documentation into a single site
Parameterization is easier than in WinRunner
QTP supports .NET development environment
It has better object identification mechanism
It can enhance existing QTP scripts without the "Application Under Test" being available, by using the
ActiveScreen
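Keyword-driven testing, mentioned above, separates test design from implementation: test cases are written as tables of keywords, and a small driver maps each keyword to an action. A minimal, tool-agnostic sketch in Python (the keywords, the `state` dictionary, and the login flow are all illustrative assumptions, not any tool's real API):

```python
# Keyword-driven testing: test cases are data (keyword tables), not code.
# A driver maps each keyword to an implementation function.
actions = {}

def keyword(name):
    """Register a function as the implementation of a keyword."""
    def register(fn):
        actions[name] = fn
        return fn
    return register

@keyword("open_app")
def open_app(state, _arg):
    state["screen"] = "login"

@keyword("enter_text")
def enter_text(state, arg):
    field, value = arg.split("=")
    state[field] = value

@keyword("click")
def click(state, arg):
    # Toy application logic: submit succeeds once a user name was entered.
    if arg == "submit" and state.get("user"):
        state["screen"] = "home"

def run(test_table):
    """Execute a keyword table and return the final application state."""
    state = {}
    for kw, arg in test_table:
        actions[kw](state, arg)
    return state

# A test case expressed purely as a keyword table:
login_test = [("open_app", None), ("enter_text", "user=alice"), ("click", "submit")]
final = run(login_test)
print(final["screen"])  # home
```

The point of the design is that a tester can add or change test cases by editing the table alone, without touching the driver code.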
3. Rational Functional Tester
It supports a wide range of protocols and applications like Java, HTML, .NET, Windows, SAP, Visual Basic, etc.
It can record and replay the actions on demand
It integrates well with source control management tools such as Rational ClearCase and Rational Team
Concert
It allows developers to create keyword-associated scripts so that they can be reused
The Eclipse Java Developer Toolkit editor facilitates the team to code test scripts in Java with Eclipse
It supports custom controls through a proxy SDK (Java/.NET)
It supports version control to enable parallel development of test scripts and concurrent usage by a
geographically distributed team
4. WATIR
It is open-source testing software for regression testing. It enables you to write tests that are easy to read and
maintain. Watir supports only Internet Explorer on Windows, while Watir WebDriver supports Chrome, Firefox,
IE, Opera, etc.
5. SilkTest
Silk Test is designed for doing functional and regression testing. For e-business applications, Silk Test is the
leading functional testing product. It is a product of Segue Software, acquired by Borland in 2006. Its scripting
language is object-oriented, just like C++, and uses the concepts of objects, classes, and inheritance.
Tool Selection and Implementation
How to Select Best Automation Testing Tool / The importance of the software testing tool selection
Success in test automation depends on identifying the right tool for automation. Selecting the "correct" testing
tool for your project is one of the best ways to achieve the project target. The following example will show you the
benefit of testing tool selection.
In the Bank project, to save testing effort, the project team decided to use an automated testing tool for test
execution. After many meetings, your team selected a suitable tool for the project.
One month later, you got the report from the project team about this tool
The results are great. The new automated tool doubled the testing productivity, meaning we saved 50% of the cost
of test execution.
This is an example of the benefit of using a testing tool in the project. Selecting the right testing tool helps you
improve project productivity and save project cost.
There are many types of test tools which the Test Manager can consider when selecting the test tools.
Open-Source Tools
Open-source tools are programs whose source code is openly published for use and/or modification from
their original design, free of charge.
Commercial Tools
Commercial tools are software produced for sale or to serve commercial purposes. Commercial tools
have more support and more features from a vendor than open-source tools.
Custom Tools
In some testing projects, the testing environment and the testing process have special characteristics, and no
open-source or commercial tool can meet the requirements. In that case, the Test Manager has to consider the
development of a custom tool. Example: You want to find a testing tool for the Bank project, and you want this
tool to meet some specific requirements of the project.
Before selecting the test tool, you must analyze the test cases and decide which test cases should be automated and
which should not. This is the Automation Feasibility Analysis activity. Automation Feasibility Analysis is a very
significant contributor to testing. In this analysis, you need to check whether the application under test is
qualified for automated testing.
It has to be made sure that the tool is very easy to use and that its training and user-adaptation time is short. Every
organization, and in some cases every project, follows its own model of testing. The testing tool should be
configurable enough to support these model variances.
It is very important for testers to be able to trace back all their work in a centralized test management system. Bi-
directional traceability between test artifacts and the associated requirements and defects increases the efficiency
of measuring the quality of a project. It also allows organizations to track the coverage of both requirements and
test cases; failing this may lead to missing information, loss of productivity and a fall in quality.
Most software projects fail due to the lack of proper visualization of analytical data related to a project's progress.
In the absence of a centralized tool, the entire process of reporting is dependent on manual interactions, making it
error-prone. So, the tool to be procured should provide real-time reports and dashboards, keeping stakeholders
updated with the latest progress and assessing quality at every step.
A testing tool must have the support for managing Test Automation scripts from a single repository. However, that is not
enough for a Test automation project. Test automation needs to be an integral part of the entire execution process. Features
like the central execution of test automation scripts, automatic capturing of test results and making them visible from a
central platform are necessary. Viewing Test Automation results needs to be a part of the end-to-end traceability chain.
Testing is no longer an isolated phase or a security gate to final delivery, but an integral part of the entire lifecycle. To
achieve this and ensure quality right from the beginning, testing should get involved at every stage of the lifecycle.
Therefore, a testing tool should have the capability to integrate with tools from other phases of the lifecycle, so that a
centralized status update on the project’s progress and quality can be achieved.
To select the most suitable testing tool for the project, the Test Manager should follow the tool-selection process
below.
In order to select a testing tool, we have to precisely identify the test tool requirements. All the requirements must
be documented and reviewed by the project teams and the management board.
You want to find the testing tool for the Bank project. What do you expect from the tool?
B) The tool can generate the test result in the desired format
C) The tester can select which test cases to execute with given set of test data
D) The tool can execute the test case automatically
E) The tool can judge and perform test output validation and mark test cases pass or fail
After baselining the requirements for the tool, the Test Manager should:
Analyze the commercial and open-source tools that are available in the market, based on the project requirements.
Create a shortlist of tools which best meet your criteria.
One factor you should consider is the vendor. You should consider the vendor's reputation, after-sale support,
tool update frequency, etc. while making your decision.
Evaluate the quality of the tool through trial usage and by launching a pilot. Many vendors make trial versions
of their software available for download.
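One common way to compare shortlisted tools is a weighted scoring matrix over the selection criteria listed earlier. The sketch below is illustrative only: the tool names, weights and pilot scores are invented, and a real evaluation would use the project's own criteria and numbers.

```python
# Weighted scoring matrix for shortlisted tools (illustrative numbers).
# Weights reflect how important each criterion is to this project.
criteria_weights = {"environment support": 3, "ease of use": 2,
                    "object identification": 3, "vendor support": 2}

# Pilot-evaluation scores per tool, 1 (poor) to 5 (excellent).
scores = {
    "Tool A": {"environment support": 4, "ease of use": 5,
               "object identification": 3, "vendor support": 4},
    "Tool B": {"environment support": 5, "ease of use": 3,
               "object identification": 5, "vendor support": 3},
}

def weighted_score(tool):
    """Sum of (weight x score) over all criteria for one tool."""
    return sum(criteria_weights[c] * scores[tool][c] for c in criteria_weights)

ranked = sorted(scores, key=weighted_score, reverse=True)
print({t: weighted_score(t) for t in ranked})  # {'Tool B': 42, 'Tool A': 39}
```

The ranking makes the trade-offs visible: here the harder-to-use tool wins because the heavily weighted technical criteria dominate, which is exactly the cost/benefit balance the Test Manager must then weigh against price.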
To ensure the test tool is beneficial for the business, the Test Manager has to balance the following factors:
Example: After spending considerable time investigating testing tools, the project team found what looked like the
perfect testing tool for the Bank website project. However, after discussing with the software vendor, you found
that the cost of the tool is too high compared to the value and benefit it can bring to the team.
In such a case, the balance between the cost and benefit of the tool may affect the final decision.
Have a strong awareness of the tool: you must understand what the strong points and the weak points of the
tool are.
Balance cost and benefit.
Testing is a continuous activity during software development. In object-oriented systems, testing encompasses
three levels, namely, unit testing, subsystem testing, and system testing.
Unit Testing:
In unit testing, the individual classes are tested. It is checked whether the class attributes are implemented as
per design and whether the methods and the interfaces are error-free.
Unit testing is the responsibility of the application engineer who implements the structure.
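Class-level unit testing can be sketched with Python's built-in `unittest`: one test checks that attributes are implemented as per design, another that a method's interface is error-free. The `BankAccount` class and its design rules here are purely illustrative.

```python
import unittest

class BankAccount:
    """Class under test: attributes and methods per a hypothetical design."""
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):
        # Interface contract: positive amounts only, returns the new balance.
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount
        return self.balance

class TestBankAccount(unittest.TestCase):
    def test_attributes_match_design(self):
        acct = BankAccount("alice", 100)
        self.assertEqual(acct.owner, "alice")
        self.assertEqual(acct.balance, 100)

    def test_deposit_interface(self):
        acct = BankAccount("bob")
        self.assertEqual(acct.deposit(50), 50)
        with self.assertRaises(ValueError):
            acct.deposit(-1)

# Run the class's test suite programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestBankAccount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

In practice the application engineer who implements the class would keep such tests next to the class and run them on every change.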
Subsystem Testing:
This involves testing a particular module or a subsystem and is the responsibility of the subsystem lead.
It involves testing the associations within the subsystem as well as the interaction of the subsystem with
the outside.
Subsystem tests can be used as regression tests for each newly released version of the subsystem.
System Testing:
System testing involves testing the system as a whole and is the responsibility of the quality-assurance
team. The team often uses system tests as regression tests when assembling new releases.
Traditional programming consists of procedures operating on data, while the object-oriented paradigm
focuses on objects that are instances of classes. In the object-oriented (OO) paradigm, software engineers identify
and specify the objects and services provided by each object. In addition, interaction of any two objects and
constraints on each identified object are also determined. The main advantages of OO paradigm include
increased reusability, reliability, interoperability, and extendibility.
An OO program should be tested at different levels to uncover all the errors. At the algorithmic level, each module
(or method) of every class in the program should be tested in isolation. For this, white-box testing can be applied
easily. As classes form the main unit of an object-oriented program, testing of classes is the main concern while
testing an OO program. At the class level, every class should be tested as an individual entity. At this level, the
programmers who are involved in the development of the class conduct the testing. Test cases can be drawn from
requirements specifications, models, and the language used. In addition, structural testing methods such as
boundary value analysis are extensively used. After performing testing at the class level, cluster-level testing
should be performed. As classes are collaborated (or integrated) to form a small subsystem (also known as a
cluster), testing each cluster individually is necessary. At this level, the focus is on testing the components that
execute concurrently as well as on the interclass interactions. Hence, testing at this level may be viewed as
integration testing where the units to be integrated are classes. Once all the clusters in the system are tested,
system-level testing begins. At this level, interaction among clusters is tested.
The methods used to design test cases in OO testing are based on the conventional methods. However, these test
cases should encompass special features so that they can be used in the object-oriented environment. The points
that should be noted while developing test cases in an object-oriented environment are listed below.
1. Each test case should explicitly specify which class it is intended to test.
2. The purpose of each test case should be mentioned.
3. External conditions that should exist while conducting a test should be clearly stated with each test case.
4. All the states of the object that is to be tested should be specified.
5. Instructions to understand and conduct the test case should be provided with each test case.
The methods used for performing object-oriented testing are discussed in this section.
State-based testing
It is used to verify whether the methods (procedures executed by an object) of a class are interacting
properly with each other. This testing seeks to exercise the transitions among the states of objects based upon the
identified inputs.
For this testing, a finite-state machine (FSM) or state-transition diagram representing the possible states of the
object and how state transitions occur is built. In addition, state-based testing generates test cases which check
whether a method is able to change the state of the object as expected. If any method of the class does not change
the object state as expected, the method is said to contain errors.
To perform state-based testing, a number of steps are followed, which are listed below.
1. Derive a new class from an existing class with some additional features, which are used to examine and
set the state of the object.
2. Next, the test driver is written. This test driver contains a main program to create an object, send
messages to set the state of the object, send messages to invoke the methods of the class being tested, and
send messages to check the final state of the object.
3. Finally, stubs are written. These stubs call the untested methods.
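The three steps above can be sketched in Python. `Stack` stands in for the class under test, `TestableStack` is the derived class that exposes and sets internal state, and `reporting_stub` plays the role of a stub; all names are illustrative, not from any real framework.

```python
# Class under test: a simple stack whose states (EMPTY / NON_EMPTY) we exercise.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

# Step 1: derive a class that adds features to examine and set the state.
class TestableStack(Stack):
    def get_state(self):
        return "EMPTY" if not self._items else "NON_EMPTY"

    def set_state(self, items):
        self._items = list(items)

# Step 3: a stub standing in for a collaborator that calls untested methods.
def reporting_stub(stack):
    return f"state={stack.get_state()}"

# Step 2: the test driver creates an object, sets its state, invokes the
# methods under test, and checks the resulting state.
def test_driver():
    s = TestableStack()
    assert s.get_state() == "EMPTY"      # initial state
    s.push(1)                            # transition EMPTY -> NON_EMPTY
    assert s.get_state() == "NON_EMPTY"
    s.set_state([1, 2])                  # force a known state
    s.pop()
    s.pop()                              # transition back to EMPTY
    assert s.get_state() == "EMPTY"
    return reporting_stub(s)

print(test_driver())  # state=EMPTY
```

If `pop` failed to remove an item, the final `get_state` check would fail, flagging the method as containing an error, exactly the kind of fault state-based testing targets.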
Fault-based Testing
Fault-based testing is used to determine or uncover a set of plausible faults. In other words, the focus of the tester
in this testing is to detect the presence of possible faults. Fault-based testing starts by examining the analysis and
design models of the OO software, as these models may provide an idea of problems in the implementation of the
software. With knowledge of the system under test and experience in the application domain, the tester designs test
cases where each test case targets to uncover some particular faults.
The effectiveness of this testing depends highly on the tester's experience in the application domain and with the
system under test. If the tester fails to perceive the real faults in the system as plausible, testing may leave many
faults undetected. However, examining analysis and design models may enable the tester to detect a large number
of errors with less effort. As testing only proves the existence and not the absence of errors, this testing approach
is considered to be an effective method and hence is often used when the security or safety of a system is to be
tested.
Integration testing applied to OO software targets to uncover the possible faults in both operation calls and
various types of messages (like a message sent to invoke an object). These faults may be unexpected outputs,
incorrect messages or operations, and incorrect invocations. The faults can be recognized by determining the
behavior of all operations performed to invoke the methods of a class.
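The idea of targeting a plausible fault can be illustrated by deliberately introducing one (a mutation of an operator) and checking whether a test case distinguishes the faulty version from the original. This is a minimal sketch, not a full mutation-testing tool; the functions and inputs are invented for illustration.

```python
# Original implementation and a mutant with one deliberately introduced fault.
def max_of(a, b):
    return a if a >= b else b

def max_of_mutant(a, b):
    return a if a <= b else b   # plausible fault: >= mistyped as <=

def kills_mutant(test_inputs):
    """A test suite 'kills' the mutant if original and mutant disagree
    on at least one of its inputs."""
    return any(max_of(a, b) != max_of_mutant(a, b) for a, b in test_inputs)

# A weak test case misses the fault (both versions return 3 on equal inputs):
print(kills_mutant([(3, 3)]))          # False: fault stays undetected
# A test case designed with this fault in mind detects it:
print(kills_mutant([(3, 3), (5, 2)]))  # True
```

This mirrors the point above: the value of the test suite depends on the tester perceiving which faults are plausible and writing inputs that expose exactly those.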
Scenario-based Testing
Scenario-based testing is used to detect errors that are caused by incorrect specifications and improper
interactions among various segments of the software. Incorrect interactions often lead to incorrect outputs that can
cause malfunctioning of some segments of the software. The use of scenarios in testing is a common way of
describing how a user might accomplish a task or achieve a goal within a specific context or environment. Note
that these scenarios are more context- and user-specific instead of being product-specific. Generally, the structure
of a scenario includes the following points.
Scenario-based testing combines all the classes that support a use case (scenarios are a subset of use cases) and
executes a test case to test them. Execution of all the test cases ensures that all methods in all the classes are
executed at least once during testing. However, testing all the objects (present in the classes combined together)
collectively is difficult. Thus, rather than testing all objects collectively, they are tested using either a top-down or
bottom-up integration approach.
This testing is considered to be the most effective method, as scenarios can be organized in such a manner that
the most likely scenarios are tested first, with unusual or exceptional scenarios considered later in the testing
process. This satisfies a fundamental principle of testing: most testing effort should be devoted to those paths
of the system that are most used.
Traditional testing methods are not directly applicable to OO programs, as they involve OO concepts including
encapsulation, inheritance, and polymorphism. These concepts lead to issues which are yet to be resolved. Some
of these issues are listed below.
1. Encapsulation of attributes and methods in a class may create obstacles while testing. As methods are
invoked through the object of the corresponding class, testing cannot be accomplished without the object. In
addition, the state of the object at the time of invocation of a method affects its behavior. Hence, testing depends
not only on the object but also on the state of the object, which is very difficult to acquire.
2. Inheritance and polymorphism also introduce problems that are not found in traditional software. Test
cases designed for a base class are not always applicable to a derived class (especially when the derived class is
used in a different context). Thus, most testing methods require some kind of adaptation in order to function
properly in an OO environment.
State transition technique is a dynamic testing technique, which is used when the system is defined in terms of a
finite number of states and the transitions between the states are governed by the rules of the system.
Or in other words, this technique is used when features of a system are represented as states which transform
into another state. The transformations are determined by the rules of the software. The pictorial representation
can be shown as:
So here we see that an entity transitions from State 1 to State 2 because of some input condition, which leads to
an event and results in an action and finally gives the output.
You visit an ATM and withdraw $1000. You get your cash. Now you run out of balance and make exactly the
same request to withdraw $1000. This time the ATM refuses to give you the money because of insufficient
balance. So here the transition which caused the change in state is the earlier withdrawal.
In the practical scenario, testers are normally given the state transition diagrams and are required to interpret
them. These diagrams are given either by the Business Analysts or a stakeholder, and we use them to determine
our test cases.
Specifications – The software responds to input requests to change display mode for a time display device.
The different states, events and actions are as follows:
States: Display Time (S1), Display Date (S2), Change Time (S3), Change Date (S4).
Events/Inputs: Change Mode (CM), Reset (R), Time Set (TS), Date Set (DS).
Actions/Outputs: Display Time (T), Display Date (D), Alter Time (AT), Alter Date (AD).
If the display mode is set to T or D, then a Reset (R) shall cause the display mode to be set to the "alter time
(AT)" or "alter date (AD)" mode respectively.
The start states to consider are Display Time (S1), Change Time (S3), Display Date (S2) and Change Date (S4).
Step 1:
Write down all the start states. For this, take one state at a time and see how many arrows come out of it.
For State S1, there are two arrows coming out of it. One arrow goes to state S3 and the other goes to
state S2.
For State S2, there are two arrows: one goes to state S1 and the other goes to S4.
For State S3, only one arrow comes out of it, going to state S1.
For State S4, only one arrow comes out of it, going to state S2.
Since states S1 and S2 each have two arrows coming out, we write each of them twice.
Step 2:
Step 3:
For each start state and its corresponding finish state, write down the input and output conditions.
For state S1 to go to state S2, the input is Change Mode (CM) and the output is Display Date (D), as shown below:
In a similar way, write down the input conditions and outputs for all the states as follows:
Step 4:
Now add a test case ID for each test, as shown below:
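Putting the steps together, the transition table for the display device can be written down and executed directly. The transitions below are inferred from the description above (CM toggles between S1 and S2, R moves S1/S2 into the alter modes, TS and DS return to the display states), so treat them as one illustrative reading of the specification rather than a definitive model.

```python
# State-transition table for the time-display device:
# (start_state, input) -> (end_state, output)
transitions = {
    ("S1", "CM"): ("S2", "D"),   # Display Time -> Display Date
    ("S1", "R"):  ("S3", "AT"),  # Display Time -> Change Time (Alter Time)
    ("S2", "CM"): ("S1", "T"),   # Display Date -> Display Time
    ("S2", "R"):  ("S4", "AD"),  # Display Date -> Change Date (Alter Date)
    ("S3", "TS"): ("S1", "T"),   # Change Time -> Display Time (Time Set)
    ("S4", "DS"): ("S2", "D"),   # Change Date -> Display Date (Date Set)
}

def derive_test_cases():
    """Step 4: one test case, with an ID, per valid transition."""
    return [
        {"id": f"TC{i+1}", "start": s, "input": inp, "end": e, "output": out}
        for i, ((s, inp), (e, out)) in enumerate(sorted(transitions.items()))
    ]

def execute(state, inp):
    """Apply one input; an invalid input leaves the state unchanged."""
    return transitions.get((state, inp), (state, None))

cases = derive_test_cases()
print(len(cases))               # 6 test cases, one per arrow in Step 1
print(execute("S1", "CM"))      # ('S2', 'D')
```

S1 and S2 each contribute two test cases and S3 and S4 one each, matching the arrow counts written down in Step 1.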
A test schedule includes the testing steps or tasks, the target start and end dates, and
responsibilities. It should also describe how the test will be reviewed, tracked, and approved.
Test Case Generation Process and Technique
In software testing, there are four processes: (1) design test cases (also known as the test case
generation process), (2) prepare test data, (3) run the program with the test data and (4) compare the results to
the test cases. The test case generation process (the process of designing test cases) is the first, most
fundamental and most critical process in software testing; it is also known as the "test development" process.
There are many types of test case generation techniques, such as random approaches, goal-oriented
techniques, specification-based techniques, sketch diagram based techniques and source code based techniques.
Specification-based techniques are methods to generate a set of test cases from specification
documents such as a formal requirements specification or an Object Constraint Language (OCL)
specification. The specification precisely describes what the system is to do without describing how to
do it. Thus, the software test engineer has important information about the software's functionality
without having to extract it from unnecessary details. The advantages of this technique include that the
specification document can be used to derive expected results for test data, and that tests may be
developed concurrently with design and implementation.
The process of generating tests from the specifications will often help the test engineer discover
problems with the specifications themselves. If this step is done early, the problems can be eliminated
early, saving time and resources. Furthermore, the specification-based technique offers a simpler,
structured and more formal approach to the development of functional tests than non-specification
based testing techniques do. The strong relationship between specification and tests helps find faults
and can simplify regression testing.
Disadvantages
The drawbacks of the specification-based technique with formal methods are: (1) the difficulty of
conducting formal analysis versus the perceived or actual payoff in the project budget (testing is a
substantial part of the software budget, and formal methods offer an opportunity to significantly reduce
testing costs, which can make them more attractive from the budget perspective), and (2) greater manual
effort in generating test cases, compared with techniques involving automatic generation processes.
The OCL is part of the UML standard. It is a language allowing the specification of formal constraints
in the context of a UML model. Constraints are primarily used to express invariants of classes and pre-
conditions and post-conditions of operations. These invariants become elements of test cases. One line
of work aims to generate test cases focusing on possible errors during the design phase of software
development. Examples of such errors might be a missing or misunderstood requirement, a wrongly
implemented requirement, or a simple coding error. In order to represent these errors, faults are
introduced into the formal specifications by deliberately changing a design, resulting in wrong behavior
possibly causing a failure. This approach focuses dedicatedly on the problem of generating test cases
from a formal specification.
Sketch Diagram-based Test Case Generation Techniques
Sketch diagram-based (model-based) techniques are methods to generate test cases from model
diagrams like UML Use Case diagrams, UML Sequence diagrams and UML State diagrams.
A major advantage of model-based techniques is that they can be easily automated, saving time and
resources. Other advantages are shifting the testing activities to an earlier part of the software
development process and generating test cases that are independent of any particular implementation of
the design.
In a software development project, use cases define system software requirements. A use case is used to
fully describe a sequence of actions performed by a system to provide an observable result of value to a
person or another system using the product under development. Use cases tell the customer what to
expect, the developer what to code, the technical writer what to document and the tester what to test.
There is a three-step process to generate test cases from a fully detailed use case: (1) for each use case,
generate a full set of use-case scenarios, (2) for each scenario, identify at least one test case and the
conditions that will make it execute and (3) for each test case, identify the data values with which to
test.
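The three-step process can be sketched as a small pipeline: scenarios are enumerated from a use case, each scenario yields at least one test case with the condition that triggers it, and concrete data values are attached last. The "Withdraw Cash" use case, its flows and the data values are all invented for illustration.

```python
# Step 1: a use case expands into a full set of scenarios (basic + alternate flows).
use_case = {
    "name": "Withdraw Cash",
    "basic_flow": ["insert card", "enter PIN", "enter amount", "dispense cash"],
    "alternate_flows": {
        "wrong PIN": ["insert card", "enter PIN", "show error"],
        "insufficient funds": ["insert card", "enter PIN", "enter amount", "refuse"],
    },
}

def generate_scenarios(uc):
    yield ("basic", uc["basic_flow"])
    for name, flow in uc["alternate_flows"].items():
        yield (name, flow)

# Step 3: concrete data values chosen to make each scenario execute.
test_data = {"basic": {"pin": "1234", "amount": 100},
             "wrong PIN": {"pin": "0000", "amount": 100},
             "insufficient funds": {"pin": "1234", "amount": 10000}}

# Step 2: at least one test case per scenario, with its data attached.
def generate_test_cases(uc):
    return [{"id": f"TC{i+1}", "scenario": name, "steps": flow,
             "data": test_data[name]}
            for i, (name, flow) in enumerate(generate_scenarios(uc))]

cases = generate_test_cases(use_case)
print([c["id"] for c in cases])  # ['TC1', 'TC2', 'TC3']
```

Keeping scenarios and data separate means new data-driven variations (e.g. boundary amounts) can be added in Step 3 without re-deriving the scenarios.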
The practical problems in software testing are as follows: (1) lack of planning / time and cost pressure, (2)
lack of test documentation, (3) lack of tool support, (4) formal or specific testing languages required, (5)
lack of measures, measurements and data to quantify testing and evaluate test quality and (6) insufficient
test quality. An approach to resolve the above problems is to derive test cases from scenarios/UML use
cases and state diagrams. In this approach, the generation of test cases is done in three stages: (1)
preliminary test case and test preparation during scenario creation, (2) test case generation from
statecharts and dependency charts and (3) test set refinement by application-dependent strategies
(experience-based testing).
Web-based applications are of growing complexity, and it is a serious business to test them correctly.
There are four steps to generate test cases: (1) prioritize use cases based on the requirement traceability
matrix, (2) generate tentatively sufficient use cases and test scenarios, (3) for each scenario, identify at
least one test case and its conditions and (4) for each test case, identify test data values. A test case
contains a set of test inputs, execution conditions and expected results developed for a particular
objective.
There are two processes in the test case generation technique, which break down briefly as follows:
Define
This is the first process; it allows software test engineers to gather, analyze and define all prerequisite
and required information, such as requirements, constraints and the type of testing. There are four
sub-processes, described briefly as follows:
Table 1 contains five columns: sub-process, purpose, description, input and output. The sub-process is a
sequential process to analyze requirements before generating test cases. The purpose is a goal that each
process aims to achieve. The description gives a short summary of what the process is and means to
software test engineers. The input is a required pre-requisite for each process, while the output is an
outcome of each process.
Design
This is the second process; it aims to design, prepare and generate all elements in a set of tests, such as
test data, test sequence and the dependencies of each test case. This process contains the following
sub-processes:
Table 2 also contains five columns: sub-process, purpose, description, input and output. The sub-process
is a sequential process to prepare and generate all test elements, such as the test scenario, test sequence
and test data. The purpose is a goal that each process aims to achieve. The description gives a short
summary of what the process is and means to software test engineers. The input is a required pre-requisite
for generating test cases, while the output is a testing artifact.
The above process can help software test engineers to design, prepare and generate all elements in a set
of test cases.
What is White Box Testing?
White Box Testing is the testing of a software solution's internal coding and infrastructure. It focuses
primarily on strengthening security, the flow of inputs and outputs through the application, and
improving design and usability. White box testing is also known as Clear Box testing, Open Box
testing, Structural testing, Transparent Box testing, Code-Based testing, and Glass Box testing.
White box testing involves the testing of the software code for the following:
Testing can be done at the system, integration and unit levels of software development. One of the basic
goals of white-box testing is to verify the working flow of an application. It involves testing a series
of predefined inputs against expected or desired outputs, so that when a specific input does not produce
the expected output, you have encountered a bug.
The first thing a tester will often do is learn and understand the source code of the application. Since
white box testing involves the testing of the inner workings of an application, the tester must be very
knowledgeable in the programming languages used in the applications they are testing. Also, the testing
person must be highly aware of secure coding practices. Security is often one of the primary objectives
of testing software. The tester should be able to find security issues and prevent attacks from hackers
and naive users who might inject malicious code into the application either knowingly or unknowingly.
The second basic step to white box testing involves testing the application's source code for proper
flow and structure. One way is by writing more code to test the application's source code. The tester
will develop little tests for each process or series of processes in the application. This method requires
that the tester must have intimate knowledge of the code and is often done by the developer. Other
methods include Manual Testing, trial and error testing and the use of testing tools as we will explain
further on in this article.
A major white-box testing technique is code coverage analysis. Code coverage analysis eliminates
gaps in a test case suite by identifying areas of a program that are not exercised by a set of test cases.
Once gaps are identified, you create test cases to verify the untested parts of the code, thereby increasing
the quality of the software product.
Statement Coverage - This technique requires every possible statement in the code to be tested at least
once during the testing process.
Branch Coverage - This technique checks every possible path (if-else and other conditional loops) of a
software application.
Apart from above, there are numerous coverage types such as Condition Coverage, Multiple Condition
Coverage, Path Coverage, Function Coverage etc. Each technique has its own merits and attempts to
test (cover) all parts of software code.
Using statement and branch coverage you generally attain 80-90% code coverage, which is often considered sufficient.
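The difference between statement and branch coverage can be seen on a tiny function: an `if` with no `else`. One test input executes every statement, yet exercises only one outcome of the decision. The tracker below is hand-rolled scaffolding for illustration (a real tool such as coverage.py automates this); the recording `else` exists only to log the untaken outcome.

```python
# Statement coverage vs branch coverage on a toy function.
branches_hit = set()

def abs_value(x):
    """Code under test: an `if` with no `else` branch."""
    if x < 0:
        branches_hit.add("x<0 is True")
        x = -x
    else:
        # Scaffolding only: records that the condition was evaluated False.
        branches_hit.add("x<0 is False")
    return x

# Test case 1: x = -3 executes *every statement* of the original function
# (the condition, the assignment, the return) -> 100% statement coverage.
assert abs_value(-3) == 3
print(branches_hit)  # only the True outcome: branch coverage is 50%

# Test case 2 is needed to exercise the other decision outcome.
assert abs_value(3) == 3
print(len(branches_hit) == 2)  # True: both branches now covered
```

This is why branch coverage subsumes statement coverage: the first test alone would satisfy a statement-coverage goal while leaving the `x >= 0` path untested.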
Advantages of White Box Testing
The producer's view of quality, in simpler terms, means the developer's perception of the final product.
The consumer's view of quality means the user's perception of the final product.
When we carry out the V&V tasks, we have to concentrate on both of these views of quality.
What is Verification?
Verification is the process of evaluating the work products of the development phases (documents, design, code) to check whether the product is being built according to the specified requirements.
What is Validation?
Validation is the process of evaluating the final product to check whether the software meets the
business needs. In simple words, the test execution which we do in our day-to-day life is actually the
validation activity, which includes smoke testing, functional testing, regression testing, system testing,
etc.
Verification:
- Includes checking documents, design, code and program
- Does not involve executing the code
- Uses methods like reviews, walkthroughs, inspections and desk-checking
- Checks whether the software conforms to the specification

Validation:
- Is a dynamic mechanism of testing and validating the actual product
- Always involves executing the code
- Uses methods like Black Box Testing, White Box Testing and non-functional testing
- Checks whether the software meets the requirements and expectations of the customer
For example, verification would check the design document and catch a spelling mistake in the label of the Submit button; otherwise the development team would build the button with the misspelled label, and the specification would have to be corrected. Owing to validation testing, the development team will make the Submit button clickable.
What is Gray Box Testing?
Gray Box Testing is a technique to test the software product or application with partial knowledge of
the internal workings of an application.
In this process, context-specific errors related to web systems are commonly identified. It increases
testing coverage by concentrating on all the layers of any complex system.
Gray Box Testing is a software testing method, which is a combination of both White Box Testing and
Black Box Testing method.
Why Gray Box Testing
It provides the combined benefits of both black box testing and white box testing
It combines the input of developers as well as testers and improves overall product quality
It reduces the overhead of the long process of testing functional and non-functional types
It gives the developer enough free time to fix defects
Testing is done from the user’s point of view rather than the designer’s point of view
To perform gray box testing, it is not necessary that the tester has access to the source code. Tests
are designed based on knowledge of algorithms, architectures, internal states, or other high-level
descriptions of the program’s behavior. The tester creates test cases based on this knowledge of internal
code and algorithms, and then tests the application at the black box level, without needing the internal
code anywhere.
Regression Testing: To check whether a change in the previous version has regressed other
aspects of the program in the new version. It is done with testing strategies like retest all,
retest risky use cases, and retest within the firewall.
Pattern Testing: This testing is performed on the historical data of the previous system defects.
Unlike black box testing, gray box testing digs within the code and determines why the failure
happened
Usually, the gray box methodology uses automated software testing tools to conduct the testing. Stubs and
module drivers are created to relieve the tester of having to generate the code manually.
The test cases for grey box testing may include, GUI related, Security related, Database related,
Browser related, Operational system related, etc.
A failure in a component under test may take either of two forms: the failure aborts the ongoing
operation, or the test executes in full but the content of the result is incorrect.
Domain testing is different from domain specific knowledge you need to test a software system.
In domain testing, we divide a domain into sub-domains (equivalence classes) and then test using
values from each subdomain. For example, if a website (domain) has been given for testing, we will be
dividing the website into small portions (subdomain) for the ease of testing.
Domain might involve testing of any one input variable or combination of input variables.
Practitioners often study the simplest cases of domain testing under two other names, "boundary
testing" and "equivalence class analysis."
Boundary testing - Boundary value analysis (BVA) is based on testing at the boundaries between
partitions. We will be testing both the valid and invalid input values in the partition/classes.
Equivalence Class testing - The idea behind this technique is to divide (i.e. to partition) a set of test
conditions into groups or sets that can be considered the same (i.e. the system should handle them
equivalently), hence 'equivalence partitioning.'
Any domain which we test has some input functionality and an output functionality. There will be some
input variables to be entered, and the appropriate output has to be verified.
C = a+b, where a and b are input variables and C is the output variable.
In the above example, no classification or combination of the variables is required.
Consider a games exhibition for children: 6 competitions are laid out, and tickets have to be issued
according to the age and gender inputs. Ticketing is one of the modules to be tested for the whole
functionality of the games exhibition.
According to the scenario, we got six scenarios based on the age and the competitions:
Here the inputs are Age and Gender, and based on them the ticket for a competition is issued. This is
where partitioning of inputs, or simply grouping of values, comes into the picture. For the above
example, we partition the age values into the below classes:
1. Boundaries are representatives of the equivalence classes we sample them from. They're more
likely to expose an error than other class members, so they're better representatives.
2. The best representative of an equivalence class is a value from within the range.
Boundary values:
1. Values should be equal to or less than 10. Hence, age 10 should be included in this class.
2. Values should be greater than 5. Hence, age 5 should not be included in this class.
3. Values should be equal to or less than 10. Hence, age 11 should not be included in this class.
4. Values should be greater than 5. Hence, age 6 should be included in this class.
Equivalence partitioning is used when one has to test only one condition from each partition. Here,
we assume that if one condition in a partition works, then all the conditions in it will work. In the same
way, if one condition in a partition does not work, then we assume that none of the other conditions
will work. For example: as the values from 6 to 10 are valid ones, one of the values among 6, 7, 8, 9 and 10 has to be picked.
The selected age "8" is thus a valid input age for the age group (Age > 5 and <= 10). This sort of
partitioning is referred to as equivalence partitioning.
Class                      Test input ages
Boy - Age >5 and <=10      5, 6, 8, 10, 11
Girl - Age >5 and <=10     5, 8, 10, 11
Boy - Age >10 and <=15     10, 11, 13, 15, 16
Girl - Age >10 and <=15    10, 11, 13, 15, 16
Age <=5                    3, 4, 5
Age >15                    15, 16, 25
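The partitions above can be sketched as a hypothetical classifier for the ticketing module, exercised with boundary and representative values from each class (the function name and group labels are assumptions for illustration, not from the original example):

```python
def ticket_class(age, gender):
    """Hypothetical ticketing classifier: returns the competition group
    for a given age and gender, mirroring the partitions in the table."""
    if age <= 5:
        return "under-5 group"
    if 5 < age <= 10:
        return f"{gender} 6-10 group"
    if 10 < age <= 15:
        return f"{gender} 11-15 group"
    return "over-15 group"

# Boundary and representative values drawn from each partition:
assert ticket_class(5, "boy") == "under-5 group"     # upper boundary of Age <= 5
assert ticket_class(6, "boy") == "boy 6-10 group"    # lower boundary of 6-10
assert ticket_class(8, "girl") == "girl 6-10 group"  # representative mid value
assert ticket_class(10, "girl") == "girl 6-10 group" # upper boundary of 6-10
assert ticket_class(11, "boy") == "boy 11-15 group"  # lower boundary of 11-15
assert ticket_class(16, "girl") == "over-15 group"   # just past 11-15
```

One test per boundary and one representative per class is enough under the equivalence-partitioning assumption stated above.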
Passing the functionality does not depend only on the results of the above scenarios; the inputs given and
the expected outputs together give us the results, and this requires domain knowledge.
Hence, if all of the above test cases pass, the domain of issuing tickets for the competitions
passes. If not, the domain fails.
Domain Testing Structure
Usually, testers follow the below steps in domain testing. These may be customized or skipped
according to our testing needs.
INTRODUCTION:
o "Logic" is one of the most often used words in programmers' vocabularies but one of their least
used techniques.
o The functional requirements of many programs can be specified by decision tables, which pro-
vide a useful basis for program and test design.
o Consistency and completeness can be analyzed by using boolean algebra, which can also be
used as a basis for test design. Boolean algebra is trivialized by using Karnaugh-Veitch charts.
o Boolean algebra is to logic as arithmetic is to mathematics. Without it, the tester or programmer
is cut off from many test and design techniques and tools that incorporate those techniques.
o The trouble with specifications is that they're hard to express.
o Boolean algebra (also known as the sentential calculus) is the most basic of all logic systems.
o Higher-order logic systems are needed and used for formal specifications.
o Much of logical analysis can be and is embedded in tools. But these tools incorporate methods
to simplify, transform, and check specifications, and the methods are to a large extent based on
boolean algebra.
o Decision tables are extensively used in business data processing; Decision-table preprocessors
as extensions to COBOL are in common use; boolean algebra is embedded in the implementa-
tion of these processors.
o Although programmed tools are nice to have, most of the benefits of boolean algebra can be
reaped by wholly manual means if you have the right conceptual tool: the Karnaugh-Veitch dia-
gram is that conceptual tool.
DECISION TABLES:
A decision table consists of four areas called the condition stub, the condition entry, the action stub, and the action entry.
Each column of the table is a rule that specifies the conditions under which the actions named in the action
stub will take place.
The condition stub is a list of names of conditions.
Example
A rule specifies whether a condition should or should not be met for the rule to be satisfied. "YES" means
that the condition must be met, "NO" means that the condition must not be met, and "I" means that the condition
plays no part in the rule, or it is immaterial to that rule.
The action stub names the actions the routine will take or initiate if the rule is satisfied. If the action entry is
"YES", the action will take place; if "NO", the action will not take place.
The table in Figure 6.1 can be translated as follows:
Action 1 will take place if conditions 1 and 2 are met and if conditions 3 and 4 are not met (rule 1) or if
conditions 1, 3, and 4 are met (rule 2).
"Condition" is another word for predicate.
Decision-table uses "condition" and "satisfied" or "met". Let us use "predicate" and TRUE / FALSE.
Now the above translations become:
1. Action 1 will be taken if predicates 1 and 2 are true and if predicates 3 and 4 are false (rule 1), or if pred-
icates 1, 3, and 4 are true (rule 2).
2. Action 2 will be taken if the predicates are all false, (rule 3).
3. Action 3 will take place if predicate 1 is false and predicate 4 is true (rule 4).
In addition to the stated rules, we also need a Default Rule that specifies the default action to be taken
when all other rules fail. The default rules for Table in Figure 6.1 is shown in Figure 6.3
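The four rules and the default rule described above can be sketched as a small table-driven evaluator (a hypothetical implementation, not taken from the text), with None standing for an immaterial "I" entry:

```python
# Rules of Figure 6.1: each rule maps the four predicates to an action.
# None marks an immaterial ("I") entry that plays no part in the rule.
RULES = [
    # (P1,    P2,    P3,    P4),    action
    ((True,  True,  False, False), "Action 1"),  # rule 1
    ((True,  None,  True,  True),  "Action 1"),  # rule 2
    ((False, False, False, False), "Action 2"),  # rule 3
    ((False, None,  None,  True),  "Action 3"),  # rule 4
]

def decide(p1, p2, p3, p4, default="Default action"):
    values = (p1, p2, p3, p4)
    for entries, action in RULES:
        # A rule is satisfied when every non-immaterial entry matches.
        if all(e is None or e == v for e, v in zip(entries, values)):
            return action
    return default  # the Default Rule: taken when all other rules fail

assert decide(True, True, False, False) == "Action 1"    # rule 1
assert decide(True, False, True, True) == "Action 1"     # rule 2
assert decide(False, False, False, False) == "Action 2"  # rule 3
assert decide(False, True, False, True) == "Action 3"    # rule 4
assert decide(True, False, False, False) == "Default action"
```

Writing the table this way also makes its completeness easy to check: any predicate combination that matches no rule falls through to the default.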
4. If the decision appears on a path, put in a YES or NO as appropriate. If the decision does not appear on
the path, put in an I. Rule 1 does not contain decision C; therefore its entries are: YES, YES, I, YES.
The corresponding decision table is shown in Table 6.1
             RULE 1  RULE 2  RULE 3  RULE 4  RULE 5  RULE 6
CONDITION A   YES     YES     YES     NO      NO      NO
CONDITION B   YES     NO      YES     I       I       I
CONDITION C   I       I       I       YES     NO      NO
CONDITION D   YES     I       NO      I       YES     NO
KV CHARTS:
INTRODUCTION:
SINGLE VARIABLE:
Figure 6.6 shows all the boolean functions of a single variable and their equivalent representation as a
KV chart.
The charts show all possible truth values that the variable A can have.
119
A "1" means the variable’s value is "1" or TRUE. A "0" means that the variable's value is 0 or FALSE.
The entry in the box (0 or 1) specifies whether the function that the chart represents is true or false for
that value of the variable.
We usually do not explicitly put in 0 entries but specify only the conditions under which the function is
true.
TWO VARIABLES:
Figure 6.7 shows eight of the sixteen possible functions of two variables.
Each box corresponds to the combination of values of the variables for the row and column of that box.
A pair may be adjacent either horizontally or vertically but not diagonally.
Any variable that changes in either the horizontal or vertical direction does not appear in the expression.
In the fifth chart, the B variable changes from 0 to 1 going down the column, and because the A vari-
able's value for the column is 1, the chart is equivalent to a simple A.
Figure 6.8 shows the remaining eight functions of two variables.
120
Figure 6.8 : More Functions of Two Variables.
The first chart has two 1's in it, but because they are not adjacent, each must be taken separately.
They are written using a plus sign.
It is clear now why there are sixteen functions of two variables.
Each box in the KV chart corresponds to a combination of the variables' values.
That combination might or might not be in the function (i.e., the box corresponding to that combination
might have a 1 or 0 entry).
Since n variables lead to 2^n combinations of 0 and 1 for the variables, and each such combination (box)
can either be filled or not filled, there are 2^(2^n) ways of doing this.
Consequently, for one variable there are 2^(2^1) = 4 functions; there are 16 functions of 2 variables, 256 functions of 3
variables, 65,536 functions of 4 variables, and so on.
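This count can be checked by brute-force enumeration: each of the 2^n boxes in the chart holds a 0 or a 1, so enumerating all distinct truth tables reproduces the 2^(2^n) figure (a quick sketch, not part of the original text):

```python
from itertools import product

def count_boolean_functions(n):
    """Enumerate all distinct truth tables over n boolean variables.
    Each of the 2**n input combinations (boxes in the KV chart) may be
    mapped to 0 or 1, giving 2**(2**n) functions in total."""
    boxes = 2 ** n                            # input combinations
    truth_tables = set(product((0, 1), repeat=boxes))
    return len(truth_tables)

assert count_boolean_functions(1) == 4       # 2**(2**1)
assert count_boolean_functions(2) == 16      # 2**(2**2)
assert count_boolean_functions(3) == 256     # 2**(2**3)
assert count_boolean_functions(4) == 65536   # 2**(2**4)
```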
Given two charts over the same variables, arranged the same way, their product is the term-by-term product,
their sum is the term-by-term sum, and the negation of a chart is obtained by reversing all the 0 and 1
entries in the chart.
THREE VARIABLES:
Figure 6.8 : KV Charts for Functions of Three Variables.