SSQ Lec 8
LECTURE # 8
SOFTWARE TESTING-I
Email: ali.javed@uettaxila.edu.pk
Contact No: +92-51-9047747
Office hours:
Monday, 09:00 - 11:00, Office # 7 S.E.D
“Testing is the process of executing a program or system with the intent of finding errors.” — Myers, 1979
The developer understands the system but will test it “gently,” driven by delivery deadlines.
An independent tester must learn about the system but will attempt to break it, driven by quality.
Test engineers can never be sure that they completely understand a software product.
Test Planning
Define a software test plan by specifying a test schedule for the test process and its activities, as well as assignments, test requirements, test items, and the test strategy.
Test Set up
Testing lab space and tools (environment set-up); test suite set-up.
Problem Reporting
Report program errors using a systematic procedure.
Test Automation
Define software test tools
Adopt and use software test tools
Write software test scripts and facilities
Test Cases are written based on Business and Functional requirements, use cases and
Technical design documents.
There can be 1:1 or 1:N or N:1 or N:N relationship between requirements and Test
cases
Construction of Test Cases also helps in:
Finding issues or gaps in the requirements
Finding issues or gaps in the technical design itself
The test case construction activity makes the tester think through different possible positive
and negative scenarios.
It is the test cases against which the tester verifies that the application is working as expected.
Number of test cases to be created depends on the size, complexity and type of testing being
performed
Test Case ID
Test Condition(s) and Expected Result(s) being exercised in the Test Case.
Pre-conditions: the initial setup required for executing the test script. This could be environment, data, or configuration setup to be done before running the test case.
Post-conditions: post-execution activities. For example: delete the application user ‘WebAdmin’ after test execution is completed.
Priority (High, Medium, or Low) of the test case. Priority helps the tester decide which test case(s) have to be run earlier than others.
Complexity of the test case. It helps to identify and filter test cases by complexity, which in turn helps in assigning test cases to testers before test execution.
Estimated execution time: the approximate time required for executing the test case. This entry is needed from a project-management perspective to track productivity and to ensure the test execution deadlines can still be met.
Expected results. Each test step has a corresponding expected-result field that specifies the expected response.
Actual result. Each test step has a corresponding actual-result field, where the tester enters details of the response seen after executing the test step.
Test step result. Typically this field contains values such as Not Applicable, No Run, Passed, Failed, or In Progress.
Revision history. When and by whom the test case was written or modified.
Associated defects. This field helps to identify the existing defect(s) associated with the test case.
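The fields above can be collected into a simple record. A minimal sketch in Python — the class names, field names, and example values are illustrative, not taken from any specific test-management tool:

```python
from dataclasses import dataclass, field

@dataclass
class TestStep:
    action: str
    expected_result: str
    actual_result: str = ""
    status: str = "No Run"   # Not Applicable, No Run, Passed, Failed, In Progress

@dataclass
class TestCase:
    test_case_id: str
    condition: str            # test condition(s) being exercised
    setup: str                # pre-conditions
    cleanup: str              # post-conditions
    priority: str             # High, Medium, Low
    complexity: str
    estimated_minutes: int    # estimated execution time
    steps: list = field(default_factory=list)
    associated_defects: list = field(default_factory=list)

tc = TestCase(
    test_case_id="TC-001",
    condition="Login succeeds with valid credentials",
    setup="Create application user 'WebAdmin'",
    cleanup="Delete the application user 'WebAdmin'",
    priority="High",
    complexity="Low",
    estimated_minutes=5,
)
tc.steps.append(TestStep(action="Submit valid username/password",
                         expected_result="Dashboard page is shown"))
print(tc.test_case_id, tc.steps[0].status)  # TC-001 No Run
```

A structure like this makes it easy to filter test cases by priority or complexity when assigning them to testers, as described above.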
The advantage of high-level test cases is that each time a tester executes them, he is not bound by strict test steps. This gives the tester a chance to explore more edge cases and find gaps in test coverage. Giving a good tester this sort of freedom will increase the chances of finding new bugs.
When only high-level test cases are present in a test plan, the biggest disadvantage is that you can never be sure whether all the scenarios that needed to be covered have actually been covered, as the test case describes the functionality in a broad sense only. Another disadvantage is that it is very difficult for an inexperienced tester to work with these types of test cases.
The Software Quality Assurance group can include the following Professionals
Testing Manager
Test Analyst
Tester
Manual Testing
This type covers testing the software manually, i.e., without using any automated tool or
script.
In this type the tester takes on the role of an end user and tests the software to identify any
unexpected behavior or bug.
Testers use a test plan, test cases, or test scenarios to test the software and to ensure the completeness
of testing.
Condition: Manual tests can be used in situations where the steps cannot be automated, for
example to determine a component's behavior when network connectivity is lost; this test could
be performed by manually unplugging the network cable.
In practice, 60 to 70% of testing is performed manually: the tester creates the test cases,
executes them, and writes the bug report manually.
Except for performance testing and stress testing, everything can be done manually.
Automation testing, also known as test automation, is when the tester writes scripts and uses
separate software to test the software under test.
This process automates a manual process. Automation testing is used to re-run, quickly and
repeatedly, the test scenarios that were performed manually.
Code-driven testing
Code driven test automation is a key feature of agile software development, where it is known
as test-driven development (TDD). Unit tests are written to define the functionality before the
code is written. However, these unit tests evolve and are extended as coding progresses, issues
are discovered, and the code is subjected to refactoring.
Add a test- In test-driven development, each new feature begins with writing a test. To write a
test, the developer must clearly understand the feature's specification and requirements. The
developer can accomplish this through use cases and user stories and can write the test in
whatever testing framework is appropriate to the software environment. It could be a modified
version of an existing test.
Run all tests and see if the new one fails- This validates that the test harness is working
correctly, that the new test does not mistakenly pass without requiring any new code, and that
the required feature does not already exist
Write some code- The next step is to write some code that causes the test to pass. The new code
written at this stage is not perfect and may, for example, pass the test in an inelegant way. That
is acceptable because it will be improved in Step 5.
Run tests- If all test cases now pass, the programmer can be confident that the new code meets
the test requirements, and does not break or degrade any existing features. If they do not, the
new code must be adjusted until they do.
Refactor code- Here you improve the non-functional attributes of your code without modifying
its functional behavior.
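The red-green-refactor cycle above can be sketched with Python's built-in unittest module; the `is_even` function is a made-up feature used purely for illustration:

```python
import unittest

# Step 3: the simplest code that makes the tests pass.  Step 5 (refactor)
# could later clean this up without changing its observable behavior.
def is_even(n: int) -> bool:
    return n % 2 == 0

# Step 1: in real TDD these tests are written first, before is_even exists,
# and (step 2) run once to confirm they fail for the right reason.
class IsEvenTest(unittest.TestCase):
    def test_even_number(self):
        self.assertTrue(is_even(4))

    def test_odd_number(self):
        self.assertFalse(is_even(7))

# Step 4: run all tests and confirm they pass.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(IsEvenTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Running the suite programmatically, as here, is one way to embed step 4 in a larger script; `python -m unittest` from the command line is the more common route.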
Many test automation tools provide record and playback features that allow users to
interactively record user actions and replay them back any number of times, comparing actual
results to those expected.
The advantage of this approach is that it requires little or no software development. This
approach can be applied to any application that has a graphical user interface.
API driven testing- API testing is performed for a system that has a collection of
APIs that must be tested. Common tests performed on APIs are:
Return Value based on input condition - The return value from the API's are checked based on
the input condition.
Verify the behavior when the API does not return anything.
Verify if the API triggers some other event or calls another API. The Events output should be
tracked and verified.
Verify if the API is updating any data structure.
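The four kinds of API checks above can be sketched against a hypothetical `Counter` API — the class and its methods are invented for illustration, not part of any real library:

```python
class Counter:
    """A toy API under test: updates internal state and notifies listeners."""
    def __init__(self):
        self.value = 0           # internal data structure the API updates
        self.listeners = []
    def on_change(self, callback):
        self.listeners.append(callback)
    def increment(self, by=1):
        if by < 0:
            raise ValueError("by must be non-negative")
        self.value += by
        for cb in self.listeners:
            cb(self.value)       # the API triggers another call/event
        return self.value

c = Counter()
events = []
c.on_change(events.append)

# 1. Return value based on input condition.
assert c.increment(5) == 5

# 2. Verify behavior when the API does not return normally (error input).
try:
    c.increment(-1)
except ValueError:
    pass  # expected: invalid input is rejected without changing state

# 3. Verify the API triggers another event/call, and track its output.
assert events == [5]

# 4. Verify the API updates its data structure.
assert c.value == 5
print("all API checks passed")
```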
Both manual and automated approaches apply at the following test levels:
Integration Testing
System testing
Acceptance testing
Black Box
White Box
Gray Box
In science and engineering, a black box is a device, system or object which can be
viewed solely in terms of its input, output and transfer characteristics without any
knowledge of its internal workings, that is, its implementation is "opaque" (black).
Black box testing is also known as functional testing: a software testing technique whereby the
internal workings of the item being tested are not known to the tester.
For example, in a black box test on a software design the tester only knows the
inputs and what the expected outcomes should be and not how the program arrives
at those outputs.
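A black-box check therefore treats the unit purely as input → expected output. Assuming some sort routine whose internals are unknown to the tester, a sketch:

```python
# Black-box test: we know only the contract "returns the inputs in
# ascending order", not the algorithm used inside.
def run_black_box_tests(sort_fn):
    cases = [
        ([3, 1, 2], [1, 2, 3]),   # typical input
        ([],        []),          # boundary: empty input
        ([5],       [5]),         # boundary: single element
        ([2, 2, 1], [1, 2, 2]),   # duplicates
    ]
    for given, expected in cases:
        assert sort_fn(given) == expected, (given, expected)
    return True

# The implementation under test could be anything that honors the
# contract; Python's built-in sorted() serves as a stand-in here.
print(run_black_box_tests(sorted))  # True
```

The same test table works unchanged no matter how the sort is implemented, which is exactly the point of black-box testing.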
Gray box testing is a software testing technique that uses a combination of black box testing
and white box testing. Gray box testing is not black box testing, because the tester does know
some of the internal workings of the software under test.
In gray box testing, the tester applies a limited number of test cases to the internal workings of
the software under test. In the remaining part of the gray box testing, one takes a black box
approach in applying inputs to the software under test and observing the outputs.
This is particularly important when conducting integration testing between two modules of code
written by two different developers, where only the interfaces are exposed for test.
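A gray-box integration check might drive two modules only through their exposed interfaces while using limited internal knowledge (here, that one module caches lookups in a dict) to sharpen the assertions. All names below are illustrative:

```python
# Module A: exposes lookup(); internally keeps a cache, a detail the
# gray-box tester happens to know.
class UserStore:
    def __init__(self):
        self._cache = {}
        self._db = {"alice": 1, "bob": 2}
    def lookup(self, name):
        if name not in self._cache:
            self._cache[name] = self._db.get(name)
        return self._cache[name]

# Module B: written by another developer; only its interface is exposed.
def greet(store, name):
    uid = store.lookup(name)
    return f"hello #{uid}" if uid is not None else "unknown user"

store = UserStore()

# Black-box part: drive the integrated behavior via inputs and outputs.
assert greet(store, "alice") == "hello #1"
assert greet(store, "eve") == "unknown user"

# Gray-box part: use knowledge of the cache to verify the interaction,
# including that a miss is cached too.
assert "alice" in store._cache
assert store._cache["eve"] is None
print("integration checks passed")
```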
Inspections
Walkthroughs
Desk Checking
Peer Ratings
In software development, static testing, also called dry run testing, is a form
of software testing where the authors manually read their own
documents/code to find any errors.
The term “static” in this context means “not while running” or “not while
executing”
For the first step, clean up the cosmetic appearance of the document: check spelling, check
grammar, check punctuation, and check formatting.
The benefit of doing the first step is that when the document is cosmetically clean, the readers can
concentrate on the content.
The liability of skipping the first step is that if the document is not cosmetically clean, the readers
will surely stop reading the document for meaning and start proofreading.
For the second step, use whatever techniques seem appropriate to focus expert review on
document contents.
Some popular and effective techniques used for content review are discussed in the next section
Desk checking
Peer Ratings
Gilb Inspection
N-Fold Inspection
Meetingless Inspection
1. The programmer narrates, statement by statement, the logic of the program. During the discourse, other participants should
raise questions, and these should be pursued to determine whether errors exist.
2. The program is analyzed with respect to a checklist of historically common programming errors.
The moderator is responsible for ensuring that the discussions proceed along productive lines and that the
participants focus their attention on finding errors, not correcting them.
The optimal amount of time for the inspection session appears to be from 90 to 120 minutes. Since the
session is a mentally taxing experience, longer sessions tend to be less productive.
Most inspections proceed at a rate of approximately 150 program statements per hour.
For that reason, large programs should be examined in multiple inspections, each inspection dealing with one
or several modules or subroutines.
Desk checking is one of the oldest practices of the human error-detection process. A desk
check can be viewed as a one-person inspection or walkthrough.
Desk checking is the least formal and least time-consuming static testing technique.
Of all the techniques, desk checking is the only one whereby the author tests his or her
own document.
A second, and more important, reason is that it runs counter to a testing principle (“that
people are generally ineffective in testing their own programs”). For this reason, you could
deduce that desk checking is best performed by a person other than the author of the
program.
The code walkthrough, like the inspection, is a set of procedures and error-detection
techniques for group code reading.
It shares much in common with the inspection process, but the procedures are slightly
different, and a different error-detection technique is employed.
Like the inspection, the walkthrough is an uninterrupted meeting of one to two hours in
duration.
One of these people plays a role similar to that of the moderator in the inspection process,
another person plays the role of a secretary (a person who records all errors found), and a
third person plays the role of a tester.
The participants are given the materials several days in advance to allow them to bone up on
the program.
However, the procedure in the meeting is different. Rather than simply reading the program or
using error checklists, the participants “play computer.”
The person designated as the tester comes to the meeting armed with a small set of paper test
cases—representative sets of inputs (and expected outputs) for the program or module.
Peer rating is a technique of evaluating unidentified programs in terms of their overall quality,
maintainability, extensibility, usability, and clarity. The purpose of the technique is to provide
programmer self-evaluation.
Each participant is asked to select two of his or her own programs to be reviewed. One
program should be representative of what the participant considers to be his or her finest work;
the other should be a program that the programmer considers to be poorer in quality.
Once the programs have been collected, they are randomly distributed to the participants.
Each participant is given four programs to review. Two of the programs are the “finest” programs
and two are the “poorer” programs, but the reviewer is not told which is which.
Each participant spends 30 minutes with each program and then completes an evaluation form
after reviewing the program. After reviewing all four programs, each participant rates the relative
quality of the four programs. The evaluation form asks the reviewer to answer, on a scale from 1 to
7 (1 meaning definitely “yes,” 7 meaning definitely “no”),
The reviewer also is asked for general comments and suggested improvements.
After the review, the participants are given the anonymous evaluation forms for their
two contributed programs. The participants also are given a statistical summary showing
the overall and detailed ranking of their original programs across the entire set of
programs, as well as an analysis of how their ratings of other programs compared with
those ratings of other reviewers of the same program.
Check list
Pareto chart
Histogram
Run Charts
Control charts
Scatter diagram
Cause-and-effect diagram