
SOFTWARE SYSTEM QUALITY

LECTURE # 8

SOFTWARE TESTING-I

30th November, 2017 Dr. Ali Javed


Contact Information
2

 Instructor: Dr. Ali Javed


Assistant Professor
Department of Software Engineering
U.E.T Taxila

 Email: ali.javed@uettaxila.edu.pk
 Contact No: +92-51-9047747
 Office hours:
 Monday, 09:00 - 11:00, Office # 7 S.E.D



Course Information
3

 Course Name: Software System Quality

 Course Code: SE-5001



4 Topics to Cover
 Software Testing
 Software Testing Lifecycle
 Software Testing Activities
 Role of Tester
 Testing Limits
 Test Cases
 Testing Methods
 SQA Team
 Testing Stages
 Testing Types
 Static Testing



Software Testing
5

 “Testing is the process of executing a program or system with the intent of finding errors.” (Myers, 1979)

 Software testing is the process of analyzing a software item to detect the differences between
existing and required conditions (that is, bugs) and to evaluate the features of the software item
(IEEE, 1986; IEEE, 1990).



Testing Perspective
6

Developer: understands the system, but will test “gently,” and is driven by “delivery.”

Independent Tester: must learn about the system, but will attempt to break it, and is driven by
quality.



Software Testing Limits
7

 Due to the testing time limit, it is impossible to achieve total confidence.

 We can never be sure the specifications are 100% correct.

 We can never be certain that a testing system (or tool) is correct.

 Test engineers can never be sure that they completely understand a software product.

 We never have enough resources to perform software testing.

 We can never be certain that we achieve 100% adequate software testing.



Testing Lifecycle [8]
8



Testing Activities
9

Test Planning
Define a software test plan by specifying a test schedule for the test process and its
activities, as well as assignments, test requirements and items, and the test strategy.

Test Design and Specification
Conduct test design based on well-defined test generation methods. Specify test
cases to achieve a targeted test coverage.

Test Set-up
Set up the testing lab space and tools (environment set-up) and the test suite (test suite set-up).

Test Operation and Execution
Run test cases manually or automatically.

Test Result Analysis and Reporting
Report software testing results and conduct test result analysis.



Testing Activities
10

Problem Reporting
Report program errors through a systematic process.

Test Management and Measurement


Manage software testing activities, control testing schedule, measure testing
complexity and cost

Test Automation
Define software test tools
Adopt and use software test tools
Write software test scripts and facilities

Test Configuration Management


Manage and maintain different versions of software test suites, test environment and
tools, and documents for various product versions.



11 Test Case



Test case
12

 A test case in software engineering is a set of conditions or variables under which a tester will
determine whether an application or software system meets specifications.

 A test case contains:


 A sequence of Steps describing actions to be performed,
 Test data to be used
 An expected response for each action performed.

 Test Cases are written based on Business and Functional requirements, use cases and
Technical design documents.
 There can be a 1:1, 1:N, N:1, or N:N relationship between requirements and test
cases.
 Construction of Test Cases also helps in finding issues or gaps in:
 the requirements
 the technical design itself.



Test case
13

 The test case construction activity makes the tester think through different possible positive
and negative scenarios.

 It is against the test cases that the tester verifies that the application is working as expected.

 The number of test cases to be created depends on the size and complexity of the application and the type of testing being
performed.

 Documented Test Cases are referred to as Test Scripts.

 Test Suite: A collection of related test cases/ test scripts

 A test case may include many subsets.



Writing better test cases
 Requirement ID(s) being covered in the Test Case.

 Test Case ID

 Test Condition(s) and Expected Result(s) being exercised in the Test Case.

 Initial setup required for executing the test script. This could be environment or data or configuration setup
to be done before running the test case.

 Post-execution activities, e.g., delete the application user ‘WebAdmin’ after test execution is
completed.

 Priority (High, Medium and Low) of the Test Case. Priority will help the tester to decide which test case(s)
have to be run earlier than others.

 Complexity of the Test Case. It will help to identify and filter Test Cases based on complexity. This would
help in assigning test cases to testers, before test execution.

 Approximate time required for executing the test case. This entry is required from a project management
perspective to track productivity and to ensure the test execution deadlines can still be met.



Writing better test cases
 Test Steps. This contains instruction on what actions to perform and what test data to use.

 Expected results. Each Test Step will have a corresponding Expected result field that would specify the expected response.

 Actual result. Each Test Step will have a corresponding Actual result field, where the tester records the response
observed after executing the test step.

 Test Step result. Typically this field contains values such as Not Applicable, No Run, Passed, Failed, or In Progress.

 Test Case Version number

 Test case creation timestamp.

 Revision history. Who wrote or modified the test case, and when.

 Test Case status (Draft, completed, reviewed, Not Valid etc.)

 Test Case execution timestamps.

 Associated Defects. This field helps identify the existing defect(s) associated with the test case.

 Project Name/Application Name
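For illustration, the attributes listed above can be thought of as fields of a single test case record. The sketch below is a rough Java model; the field and type names are assumptions chosen only to mirror the list, not the schema of any particular test management tool.

    // Hypothetical data model mirroring the test case attributes listed above.
    import java.util.List;

    public record TestCaseRecord(
            String testCaseId,
            List<String> requirementIds,      // requirement ID(s) covered
            String priority,                  // High, Medium, Low
            String complexity,
            int estimatedMinutes,             // approximate execution time
            List<TestStep> steps,
            String status,                    // Draft, Completed, Reviewed, Not Valid
            List<String> associatedDefects) {

        // Each step pairs an action and test data with expected and actual results.
        public record TestStep(String action, String testData,
                               String expectedResult, String actualResult,
                               String stepResult) { }   // Passed, Failed, No Run, ...
    }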


High Level vs Low Level Test Case [13]
16

High Level Test Case

 High Level Test cases define the functionality of a system/module in a broader sense, without
getting into the depth of the functionality.

 For a web based application that requires user to login, a high-level test case could be
“User should be able to Login with correct credentials”.

Low Level Test Case

 Low level test cases define the functionality/scenario in substantial detail. These test cases are
more refined and are generally written with details such as ‘Expected Result’, ‘Test Data’, etc.

 For a web based application that requires user to login, a low-level test case could describe the
URL of the Login page, the UI of the Login page (username and password text boxes, login
button, forgot password link), etc. along with the proper test data.



High Level Test Cases [13]
17

Advantages

 The advantage of high-level test cases is that each time a tester executes the test cases, he is not
bound by strict test steps. This gives the tester a chance to explore more edge cases and find gaps
in test coverage.

 Giving a good tester this sort of freedom will increase the chances of finding new bugs.

Disadvantages

 When only high-level test cases are present in a test plan, the biggest disadvantage is that you can
never be sure whether all the scenarios that needed to be covered have actually been covered,
since the test case describes the functionality in a broad sense only.

 Another disadvantage is that it is very difficult for an inexperienced tester to work with these
types of test cases.



How detailed should a test case be?



Example of detail
 Example of a Test Case that is written at high level.
 Step No: 1
Step Description: Login to test application with valid user id/password
Expected Result: Home page is displayed
 Step No: 2
Step Description: Click “Logout” link on home page
Expected Result: Login page is displayed
 Example of a test case that is written in detail.
 Step No: 1
Step Description: Open URL https://www.example.com/login.asp
Expected Result: Login page is displayed and contain the below fields
a) “User Name” text field
b) “Password” text field
c) “Submit” button
 Step No: 2
Step Description: Once Login page is displayed. Enter valid userid/password.
a) Enter “user1” in “User Name” text field
b) Enter “abc123″ in “password” text field
Expected Result: a) Verify “User Name” is populated with text “user1”
b) Verify text entered in “password” field is masked and is not readable.
 Step No: 3
Step Description: Click “Submit” button
Expected Result: Verify application “Home” page is displayed
a) Verify “Home” page displays “Welcome user1” message on top of left navigation Menu.
b) Verify Left Navigation menu contains links “Directory”, “Submission”, “Latest Links”, “Approve Links”, “Logout”.
 Step No: 4
Step Description: Click “Logout” link on the left menu.
Expected Result: Verify user is successfully logged out of the application and application login page is displayed
https://www.example.com/login.asp



Test case examples
 Test case for ATM
 TC 1 :- successful card insertion.
 TC 2 :- unsuccessful operation due to wrong angle card insertion.
 TC 3:- unsuccessful operation due to invalid account card.
 TC 4:- successful entry of pin number.
 TC 5:- unsuccessful operation due to wrong pin number entered 3 times.
 TC 6:- successful selection of language.
 TC 7:- successful selection of account type.
 TC 8:- unsuccessful operation due to wrong account type selected with respect to the inserted card.
 TC 9:- successful selection of withdrawal option.
 TC 10:- successful selection of amount.
 TC 11:- unsuccessful operation due to wrong denominations.
 TC 12:- successful withdrawal operation.
 TC 13:- unsuccessful withdrawal operation due to amount greater than possible balance.
 TC 14:- unsuccessful due to lack of amount in ATM.
 TC 15:- unsuccessful operation due to amount greater than the day limit.
 TC 16:- unsuccessful operation due to server down.
 TC 17:- unsuccessful operation due to clicking cancel after inserting the card.
 TC 18:- unsuccessful operation due to clicking cancel after inserting the card and entering the PIN.
 TC 19:- unsuccessful operation due to clicking cancel after language selection, account type selection, withdrawal
selection, and amount entry.



Test case examples
 Test cases for a web page
 Testing without entering any username and password
 Test it only with Username
 Test it only with password.
 User name with wrong password
 Password with wrong user name
 Right username and right password
 Cancel, after entering username and password.
 Enter long username and password that exceeds the set limit of characters.
 Try copy/paste in the password text box.
 After successful sign-out, try “Back” option from your browser. Check whether it
gets you to the “signed-in” page.
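Several of the checks above can also be written as data-driven automated tests. A minimal JUnit 5 sketch follows; the AuthService class and its login() method are hypothetical stand-ins for the application's real sign-in logic, and the sample credentials reuse user1/abc123 from the earlier detailed example.

    // Illustrative JUnit 5 parameterized test for the username/password checks above.
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;

    class AuthService {
        // Hypothetical stub standing in for the real application's sign-in check.
        static boolean login(String user, String password) {
            return "user1".equals(user) && "abc123".equals(password);
        }
    }

    class LoginValidationTest {
        @ParameterizedTest
        @CsvSource({
                "'', '', false",            // no username and no password
                "user1, '', false",         // username only
                "'', abc123, false",        // password only
                "user1, wrongpwd, false",   // right username, wrong password
                "wronguser, abc123, false", // wrong username, right password
                "user1, abc123, true"       // right username and right password
        })
        void loginBehavesAsExpected(String user, String password, boolean expected) {
            assertEquals(expected, AuthService.login(user, password));
        }
    }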



22 SQA Team [7]



SQA Team
23

 The Software Quality Assurance group can include the following Professionals

 Testing Manager

 Test Team Lead

 Test Analyst

 Tester

 Independent Test Observer



24 Testing Methods
 Manual Testing

 Automated Testing



Manual testing[10,11]
25

 This type includes the testing of the Software manually i.e. without using any automated tool or
any script.

 In this type the tester takes over the role of an end user and tests the software to identify any
unexpected behavior or bugs.

 Testers use test plans, test cases, or test scenarios to test the software to ensure the completeness
of testing.

 Condition: Manual tests can be used in situations where the steps cannot be automated, for
example to determine a component's behavior when network connectivity is lost; this test could
be performed by manually unplugging the network cable.

 In practice, 60 to 70% of testing is performed manually: the tester creates the test cases, executes
them, and writes the bug reports manually.

 Except for performance testing and stress testing, everything can be done manually.



Manual testing[10,11]
26

Advantages

 Since a person thinks, the tester will find ways to best explore the product aside from the pre-set
ways presented to him/her. In short, a person can do exploratory or monkey testing.

Disadvantages

 More time-consuming activity.

 Efficiency depends on the tester (variability of results depending on who is performing the tests
can also be a problem).



Automated testing[10]
27

 Automation testing, also known as Test Automation, is when the tester writes scripts and uses
separate software to test the software under test.

 This process involves automation of a manual process. Automation Testing is used to re-run the
test scenarios that were performed manually, quickly and repeatedly.

 Automation is not a Replacement of Manual Testing

 Done properly, automated software testing can help


 to minimize the variability of results,
 speed up the testing process,
 increase test coverage, and
 ultimately provide greater confidence in the quality of the software being tested.



Automated testing[10]
28

 Code-driven testing

 A growing trend in software development is the use of testing frameworks such as the xUnit
frameworks (for example, JUnit and NUnit) that allow the execution of unit tests to determine
whether various sections of the code are acting as expected under various circumstances.

 Code driven test automation is a key feature of agile software development, where it is known
as test-driven development (TDD). Unit tests are written to define the functionality before the
code is written. However, these unit tests evolve and are extended as coding progresses, issues
are discovered, and the code is subjected to refactoring.
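A minimal sketch of the xUnit style described above, using JUnit 5 syntax; the Discount class is an invented example unit, not a real library.

    // Each test method checks one expectation about a small unit of code.
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    class Discount {
        // Example unit under test: 10% discount for orders of 100 or more.
        static double apply(double amount) {
            if (amount < 0) throw new IllegalArgumentException("negative amount");
            return amount >= 100 ? amount * 0.9 : amount;
        }
    }

    class DiscountTest {
        @Test
        void smallOrdersAreNotDiscounted() {
            assertEquals(50.0, Discount.apply(50.0));
        }

        @Test
        void largeOrdersGetTenPercentOff() {
            assertEquals(90.0, Discount.apply(100.0));
        }

        @Test
        void negativeAmountsAreRejected() {
            assertThrows(IllegalArgumentException.class, () -> Discount.apply(-1.0));
        }
    }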



Automated testing[10]
29

 Test-driven development (TDD)

 Add a test- In test-driven development, each new feature begins with writing a test. To write a
test, the developer must clearly understand the feature's specification and requirements. The
developer can accomplish this through use cases and user stories and can write the test in
whatever testing framework is appropriate to the software environment. It could be a modified
version of an existing test.
 Run all tests and see if the new one fails- This validates that the test harness is working
correctly, that the new test does not mistakenly pass without requiring any new code, and that
the required feature does not already exist
 Write some code- The next step is to write some code that causes the test to pass. The new code
written at this stage is not perfect and may, for example, pass the test in an inelegant way. That
is acceptable because it will be improved in Step 5.
 Run tests- If all test cases now pass, the programmer can be confident that the new code meets
the test requirements, and does not break or degrade any existing features. If they do not, the
new code must be adjusted until they do.
 Refactor code- Here you improve the non-functional attributes of your code without modifying
its functional behavior.
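A minimal sketch of the cycle above, assuming JUnit 5; the PasswordRules feature is invented for illustration. The test is written first, just enough code is added to make it pass, and the code is then refactored while the tests keep running.

    // Add a test / run it and see it fail: the test is written before the code.
    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class PasswordRulesTest {
        @Test
        void passwordsShorterThanEightCharactersAreRejected() {
            assertFalse(PasswordRules.isValid("abc123"));
        }

        @Test
        void passwordsWithEightOrMoreCharactersAreAccepted() {
            assertTrue(PasswordRules.isValid("abc12345"));
        }
    }

    // Write some code / run tests: just enough code to make the tests pass.
    class PasswordRules {
        static boolean isValid(String password) {
            return password != null && password.length() >= 8;
        }
    }

    // Refactor code: rename, extract constants, remove duplication, while
    // re-running the tests to confirm the behaviour is unchanged.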



Automated testing[10]
30

 Graphical User Interface (GUI) testing

 Many test automation tools provide record and playback features that allow users to
interactively record user actions and replay them back any number of times, comparing actual
results to those expected.

 The advantage of this approach is that it requires little or no software development. This
approach can be applied to any application that has a graphical user interface.
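A recorded GUI script typically boils down to something like the hand-written Selenium WebDriver sketch below. The URL reuses the example.com login scenario from the earlier detailed test case, and the element locators (username, password, submit) are assumptions about that page.

    // Rough sketch of a GUI login test using Selenium WebDriver (Java).
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class LoginGuiTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                // Replay the recorded user actions.
                driver.get("https://www.example.com/login.asp");
                driver.findElement(By.name("username")).sendKeys("user1");   // assumed locator
                driver.findElement(By.name("password")).sendKeys("abc123");  // assumed locator
                driver.findElement(By.name("submit")).click();               // assumed locator

                // Compare the actual result with the expected result.
                boolean homePageShown = driver.getPageSource().contains("Welcome user1");
                System.out.println(homePageShown ? "PASS: home page displayed"
                                                 : "FAIL: home page not displayed");
            } finally {
                driver.quit();
            }
        }
    }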

 API driven testing- API testing is performed for a system that has a collection of APIs that must
be tested. Common tests performed on APIs are:
 Return value based on input condition - the return value from the API is checked based on
the input condition.
 Verify whether the API does not return anything.
 Verify whether the API triggers some other event or calls another API. The event's output should be
tracked and verified.
 Verify whether the API updates any data structure.
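A rough sketch of the first and last checks above (return value based on input condition, and updating a data structure), written against a hypothetical in-process InventoryApi; a real API test would call a published library or HTTP endpoint instead.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.util.HashMap;
    import java.util.Map;

    import org.junit.jupiter.api.Test;

    // Hypothetical API under test.
    class InventoryApi {
        private final Map<String, Integer> stock = new HashMap<>();

        // Returns the new quantity, or -1 for an invalid input condition.
        int addStock(String item, int quantity) {
            if (item == null || quantity <= 0) return -1;
            return stock.merge(item, quantity, Integer::sum);
        }

        Map<String, Integer> snapshot() { return Map.copyOf(stock); }
    }

    class InventoryApiTest {
        @Test
        void returnValueDependsOnInputCondition() {
            InventoryApi api = new InventoryApi();
            assertEquals(5, api.addStock("bolt", 5));   // valid input
            assertEquals(-1, api.addStock("bolt", 0));  // invalid input condition
        }

        @Test
        void callUpdatesTheUnderlyingDataStructure() {
            InventoryApi api = new InventoryApi();
            api.addStock("bolt", 5);
            assertTrue(api.snapshot().containsKey("bolt"));
        }
    }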



Automated testing[10]
31

Advantages

 Efficient (no variation in results).

Disadvantages

 No human insight (during automated testing, the machine only executes the pre-set steps. It has
no capacity to think outside of the pre-set steps and do exploratory or monkey testing).



Manual vs Automated Testing
32

MANUAL                               | AUTOMATED
Time consuming and tedious           | Fast
Huge investment in human resources   | Less investment in human resources
Less reliable                        | More reliable
Non programmable                     | Programmable
Not reusable                         | Reusable
High risk of missing out something   | No risk of missing out something



Testing Stages
33

 Unit testing/Component Testing/Module testing[2,3,6]

 Integration Testing

 System testing

 Acceptance testing



34 Testing Types

 Black Box

 White Box

 Gray Box



Black Box Testing
35

 In science and engineering, a black box is a device, system or object which can be
viewed solely in terms of its input, output and transfer characteristics without any
knowledge of its internal workings, that is, its implementation is "opaque" (black).

 Also known as functional testing. A software testing technique whereby the internal
workings of the item being tested are not known by the tester.

 For example, in a black box test on a software design the tester only knows the
inputs and what the expected outcomes should be and not how the program arrives
at those outputs.
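As a small invented illustration: a black-box test of a shippingFee function is written purely from its stated specification (“orders of 50 or more ship free, otherwise the fee is 5”), without looking at how the method is coded.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class Shop {
        // The implementation is irrelevant to the black-box tester; it is included
        // here only so that the sketch compiles.
        static double shippingFee(double orderTotal) {
            return orderTotal >= 50.0 ? 0.0 : 5.0;
        }
    }

    // Tests derived from the specification alone: inputs and expected outputs.
    class ShippingFeeBlackBoxTest {
        @Test
        void ordersOfFiftyOrMoreShipFree() {
            assertEquals(0.0, Shop.shippingFee(50.0));
        }

        @Test
        void smallerOrdersPayAFlatFee() {
            assertEquals(5.0, Shop.shippingFee(49.99));
        }
    }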



Black-Box Testing
36

Advantages

 Tester can be non-technical.

 This testing is most likely to find those bugs that the user would find.

 Testing helps to identify contradictions in the functional specifications.

 Test cases can be designed as soon as the functional specifications are complete.

Disadvantages

 Chances of repeating tests that have already been done by the programmer.

 It is difficult to identify all possible inputs in limited testing time.



White-Box Testing
37

 White-box testing (also known as clear box testing, glass box testing, transparent box testing,
and structural testing) is a method of testing software that tests internal structures or workings
of an application, as opposed to its functionality (i.e. black-box testing). [4]

 The meanings of “clear box” and “glass box” appropriately indicate that you have full visibility
of the internal workings of the software product, specifically, the logic and the structure of the
code. [5]

 In white-box testing an internal perspective of the system, as well as programming skills, are
required and used to design test cases. The tester chooses inputs to exercise paths through the
code and determine the appropriate outputs. [4]
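A minimal sketch of that idea, assuming JUnit 5; the classify method is invented. The tester reads the code, sees the two paths through the if statement, and picks one input to exercise each path.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class AgeClassifier {
        // Two paths: the "minor" branch and the "adult" branch.
        static String classify(int age) {
            if (age < 18) {
                return "minor";
            }
            return "adult";
        }
    }

    // White-box tests: one input per path, chosen by inspecting the code above.
    class AgeClassifierWhiteBoxTest {
        @Test
        void exercisesTheMinorBranch() {
            assertEquals("minor", AgeClassifier.classify(17));
        }

        @Test
        void exercisesTheAdultBranch() {
            assertEquals("adult", AgeClassifier.classify(18));
        }
    }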



White-Box Testing
38

Advantages

 As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out
which type of input/data can help in testing the application effectively.

 The other advantage of white-box testing is that it helps in optimizing the code.

Disadvantages

 As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to
carry out this type of testing.



Gray Box Testing
39

 Gray box testing is a software testing technique that uses a combination of black box testing
and white box testing. Gray box testing is not black box testing, because the tester does know
some of the internal workings of the software under test.

 In gray box testing, the tester applies a limited number of test cases to the internal workings of
the software under test. In the remaining part of the gray box testing, one takes a black box
approach in applying inputs to the software under test and observing the outputs.

 This is particularly important when conducting integration testing between two modules of code
written by two different developers, where only the interfaces are exposed for test.



40 Static Testing
 Introduction of static testing

 Approach of static testing

 Static Testing Methods

 Inspections
 Walkthroughs
 Desk Checking
 Peer Ratings

 Quality Control Tools



Introduction
41

 In software development, static testing, also called dry run testing, is a form
of software testing where the authors manually read their own
documents/code to find any errors.

 It is generally not detailed testing, but primarily syntax checking of the code/document and/or
manually reviewing the code or document to find logic errors as well.

 The term “static” in this context means “not while running” or “not while
executing”

 Objective of static analysis


 To reveal defects or parts that are defect-prone in a document



Static Testing Approach
42

 Consider using a two-step approach to static testing.

 For the first step, clean up the cosmetic appearance of the document: check spelling, check
grammar, check punctuation, and check formatting.

 The benefit of doing the first step is that when the document is cosmetically clean, the readers can
concentrate on the content.

 The liability of skipping the first step is that if the document is not cosmetically clean, the readers
will surely stop reading the document for meaning and start proofreading.

 For the second step, use whatever techniques seem appropriate to focus expert review on
document contents.

 Some popular and effective techniques used for content review are discussed in the next section



43 Static Testing Techniques
 Inspections
 Walk-throughs

 Desk checking
 Peer Ratings



44 Inspection
 Fagan Inspection

 Gilb Inspection

 Two Person Inspection

 N-Fold Inspection

 Meetingless Inspection

 Over the Shoulder Reviews

 E-mail pass-around reviews


Inspection Session
45

 During the session, two activities occur:

1. The programmer narrates, statement by statement, the logic of the program. During the discourse, other participants should
raise questions, and they should be pursued to determine whether errors exist.

2. The program is analyzed with respect to a checklist of historically common programming errors.

 The moderator is responsible for ensuring that the discussions proceed along productive lines and that the
participants focus their attention on finding errors, not correcting them.

 The optimal amount of time for the inspection session appears to be from 90 to 120 minutes. Since the
session is a mentally taxing experience, longer sessions tend to be less productive.

 Most inspections proceed at a rate of approximately 150 program statements per hour.

 For that reason, large programs should be examined in multiple inspections, each inspection dealing with one
or several modules or subroutines.



46 Desk Checking



Desk Checking [9]
47

 Desk checking is one of the older human error-detection practices. A desk
check can be viewed as a one-person inspection or walkthrough.

 Desk checking is the least formal and least time-consuming static testing technique.

 Of all the techniques, desk checking is the only one whereby the author tests his or her
own document.

 For most people, desk checking is relatively unproductive.

 One reason is that it is a completely undisciplined process.

 A second, and more important, reason is that it runs counter to a testing principle (“that
people are generally ineffective in testing their own programs”). For this reason, you could
deduce that desk checking is best performed by a person other than the author of the
program.



48 Code Walkthroughs

 Introduction

 Procedure



Code Walkthrough- Introduction [9]
49

 The code walkthrough, like the inspection, is a set of procedures and error-detection
techniques for group code reading.

 It shares much in common with the inspection process, but the procedures are slightly
different, and a different error-detection technique is employed.

 Like the inspection, the walkthrough is an uninterrupted meeting of one to two hours in
duration.

 The walkthrough team consists of three to five people.

 One of these people plays a role similar to that of the moderator in the inspection process,
another person plays the role of a secretary (a person who records all errors found), and a
third person plays the role of a tester.

 Suggestions for the other participants include a highly experienced programmer, a
programming-language expert, a new programmer (to give a fresh, unbiased outlook), and the
person who will eventually maintain the program.



Code Walkthrough-Procedure [9]
50

 The initial procedure is identical to that of the inspection process:

 The participants are given the materials several days in advance to allow them to bone up on
the program.

 However, the procedure in the meeting is different. Rather than simply reading the program or
using error checklists, the participants “play computer.”

 The person designated as the tester comes to the meeting armed with a small set of paper test
cases—representative sets of inputs (and expected outputs) for the program or module.



Code Walkthrough [9]
51

 During the meeting, each test case is mentally executed. That is, the test data are walked
through the logic of the program. The state of the program (i.e., the values of the variables) is
monitored on paper or whiteboard.

 Of course, the test cases must be simple in nature and few in number, because people execute
programs at a rate that is many orders of magnitude slower than a machine.

 The walkthrough should have a follow-up process similar to that described for the inspection
process.
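As an invented illustration of the “play computer” step: a paper test case for a small summing loop, with the variable values tracked by hand alongside the code.

    // The group mentally executes sumPositive() with the paper test case
    // input {3, -1, 4} and tracks the variables on paper or a whiteboard.
    class WalkthroughExample {
        static int sumPositive(int[] values) {
            int sum = 0;
            for (int v : values) {          // trace for input {3, -1, 4}:
                if (v > 0) {                //   v = 3  -> sum = 3
                    sum += v;               //   v = -1 -> skipped, sum stays 3
                }                           //   v = 4  -> sum = 7
            }
            return sum;                     // expected output: 7
        }
    }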



52 Peer Ratings



Peer Ratings [9]
53

 Peer rating is a technique of evaluating unidentified programs in terms of their overall quality,
maintainability, extensibility, usability, and clarity. The purpose of the technique is to provide
programmer self-evaluation.

 A programmer is selected to serve as an administrator of the process. The administrator, in turn,


selects approximately 6 to 20 participants (6 is the minimum to preserve secrecy). The participants
are expected to have similar backgrounds (you shouldn’t group Java application programmers
with assembly language system programmers, for example).

 Each participant is asked to select two of his or her own programs to be reviewed. One
program should be representative of what the participant considers to be his or her finest work;
the other should be a program that the programmer considers to be poorer in quality.

 Once the programs have been collected, they are randomly distributed to the participants.



Peer Ratings [9]
54

 Each participant is given four programs to review. Two of the programs are the “finest” programs
and two are the “poorer” programs, but the reviewer is not told which is which.

 Each participant spends 30 minutes with each program and then completes an evaluation form
after reviewing the program. After reviewing all four programs, each participant rates the relative
quality of the four programs. The evaluation form asks the reviewer to answer, on a scale from 1 to
7 (1 meaning definitely “yes,” 7 meaning definitely “no”), a set of questions about the program’s quality.

 The reviewer also is asked for general comments and suggested improvements.



Peer Ratings [9]
55

 After the review, the participants are given the anonymous evaluation forms for their
two contributed programs. The participants also are given a statistical summary showing
the overall and detailed ranking of their original programs across the entire set of
programs, as well as an analysis of how their ratings of other programs compared with
those ratings of other reviewers of the same program.

 The purpose of the process is to allow programmers to self-assess their programming


skills.



56 Seven Quality Control Tools
Seven tools for quality control in an organization are:

 Check list
 Pareto chart
 Histogram
 Run Charts
 Control charts
 Scatter diagram
 Cause-and-effect diagram



References
57

[1] Software Engineering by Roger Pressman


[2]http://jamesmccaffrey.wordpress.com/2008/08/29/the-difference-between-unit-testing-and-module-testing/
[3] http://www.faqs.org/faqs/software-eng/testing-faq/section-14.html
[4] http://en.wikipedia.org/wiki/White-box_testing
[5] http://agile.csc.ncsu.edu/SEMaterials/WhiteBox.pdf
[6] http://blogs.ebusinessware.com/2009/06/26/unit-testing-vs-module-testing/
[7] John Watkins, “Testing IT”, Cambridge University Press, 2001
[8] http://chamaras.blogspot.com/2008/08/what-is-software-testing-life-cycle.html
[9] Glenford Myers, “The Art of Software Testing”, 2nd Edition
[10] http://www.tutorialspoint.com/software_testing/testing_types.htm
[11] www.onestoptesting.com
[12] http://www.softwaretestingmentor.com/automation/manual-vs-automation.php
[13] http://www.optimusqa.com/blog/high-low-level-test-cases/



For any query, feel free to ask.
58

