
Software Engineering

Course Instructor: Sonia Kaleem


Test Automation
Testing can be defined as:

1. Executing a program with the intention of finding an error
2. The process of executing a software system to determine whether it
matches its specification and executes in its intended environment
3. A process with a high probability of finding as yet undiscovered errors
Why
Do
Testing
Why Do Testing

 As software becomes more pervasive and is used more
often to perform critical tasks, it is required to
be of higher quality.
 The more errors testing uncovers, the more successful it is,
and the higher the quality of the resulting software.
 It is helpful for future rework.
 It ensures that the software developed is in accordance with
customer requirements.
 It retains customers by adding quality, i.e. reliability, to the
software.
London Ambulance Service: Computer Aided Despatch
October 26th – 28th, 1992

 The objective behind the London Ambulance Service
Computer Aided Despatch (LASCAD) system was to automate
the human-intensive processes of a manual dispatch
system.
London Ambulance Service: Computer Aided Despatch
October 26th – 28th, 1992

 It consisted of
 call taking (receive calls, record incident details,
pinpoint the location),
 resource identification (examine the details, compare them with
the details recorded, decide which ambulances to mobilise), and
 resource mobilisation (the dispatcher passes instructions on
to the ambulance operator).
 Problem
Multiple ambulances were sent to the same location.
The closest ambulance was not chosen for dispatch.
London Ambulance Service: Computer Aided Despatch
October 26th – 28th, 1992

 The consequences ranged from the deaths of 20 to 30 people,
as a result of ambulances arriving too late, to severe cost and
reputation losses.
 The main reasons behind this tragic incident were
investigated.
 Some of the main reasons concluded were
 lack of quality assurance, especially code changes not being
documented, and incomplete implementation.
 Furthermore, lack of testing was also one of the main
reasons.
London Ambulance Service: Computer Aided Despatch
October 26th – 28th, 1992

 No integration testing was done for the LASCAD
system.
 Safety cases were not prepared for the system,
nor were any ISO standards used.
 All these causes led to a tragedy whose lessons are
worth learning from.
Yorktown Ship Failure for three hours
September 1997

 The ship was dead in the water for approximately three
hours because a program did not check for valid input.
 The ship had to be towed into the naval base at Norfolk.
Yorktown Ship Failure for three hours
September 1997

 A crew member of the USS Yorktown mistakenly entered a zero for a
data value.
 This resulted in a division by zero.
 The resulting database overflow caused the ship's propulsion system
to fail.
 The error cascaded from the application software to the
database server, causing it to overflow, and continued to
propagate until it eventually shut down the ship's
propulsion system (see the input-validation sketch below).
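The published accounts do not include the Yorktown code itself, so the following is only a minimal Python sketch, with hypothetical names and values, of the kind of input check whose absence let a stray zero cascade into a division by zero:

def fuel_rate(distance_nm: float, hours: float) -> float:
    """Compute speed in nautical miles per hour, rejecting bad input
    instead of letting a division by zero propagate downstream."""
    if hours <= 0:
        # Fail fast at the boundary with a clear message.
        raise ValueError("hours must be a positive number")
    return distance_nm / hours

# A zero entered by mistake is caught immediately:
try:
    fuel_rate(120.0, 0)
except ValueError as exc:
    print("Rejected input:", exc)

Validating at the point of entry keeps the bad value from reaching any downstream component, which is exactly what the Yorktown software failed to do.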
Summer Passport Delay
 In the UK, a computerised passport-issuing system was to be
developed in time for the summer.
 The software for the job was delivered late.
 The time planned for testing was therefore sacrificed, and the
system was deployed along with its errors (no testing).
 This resulted in disaster, and they suffered a heavy setback.
Tips For Beneficial Testing
 Testing should be performed against the specifications.
 Test documentation has to be maintained.
 Test planning has to be done.
 Testing has to be introduced as early as possible, even
before design and coding start.
 Finding more errors earlier reduces cost.
 Always test positively, that the software does what it should,
and at the same time negatively, that the software does not do
what it should not.
Tips For Beneficial Testing
 Differentiate between the tester and the developer.
 Complement testing with:
Testing techniques
Automated testing tools
Testing strategies
 Have the right attitude
It should be treated as a challenge rather than a headache.
Testing and Debugging
 A bug refers to a fault, so debugging is the process of removing
bugs when software does not behave as expected.
 Testing, in contrast, is concerned with the identification of
errors.
 Debugging supports testing, but it never replaces
testing.
 Debugging is done by the same person to find bugs in
his/her own work, whereas after debugging is finished,
testing is started to uncover all errors which are not yet
discovered.
Verification and Validation
 Verification is a process performed by the developers to
ensure that the software is correctly developed.
Are we building the product right?
 Validation is a process performed by the users (acceptance
testing) to ensure that the software is developed to their
satisfaction.
Are we building the right product?
TYPES
OF
TESTS
Types of Testing
Functional Testing
The software’s user is normally concerned only with
functionality and features
Functional testing takes the user’s point of view.
The program or system is treated as a black box
Outputs are verified against the corresponding inputs to ensure the
desired functionality of the software.
Types of Testing
Usability Testing
Usability is an attempt to quantify user-friendliness.
User-friendliness can be measured in terms of:
The physical and/or intellectual skill required to learn the system
The time required to become moderately efficient in the use of
the system
Types of Testing
Stress Testing
Stress testing is conducted to evaluate a system or
component at or beyond the limits of specified requirements.
Stress testing executes a system in a manner that demands
resources in abnormal quantity, frequency, or volume
Essentially, the tester attempts to break the
system.
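As a rough illustration of the idea, the sketch below (hypothetical Python, not taken from any real system) pushes an abnormal volume of items at a deliberately small bounded queue to find the point at which it stops accepting work:

import queue

def stress_bounded_queue(volume):
    """Offer far more items than the queue's specified capacity and
    report how many were accepted before it refused more."""
    q = queue.Queue(maxsize=1000)   # specified capacity
    accepted = 0
    for item in range(volume):      # abnormal volume of requests
        try:
            q.put_nowait(item)
            accepted += 1
        except queue.Full:
            break                   # the breaking point
    return accepted

print(stress_bounded_queue(1_000_000))   # prints 1000: capacity reached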
Types of Testing
Regression Testing
These tests help ensure that changes or additions to a system
have not degraded existing functionality or system operation.
It involves selective retesting to detect faults introduced
during modification of a system or system component, to
verify that a modified system or component still meets its
specified requirements.
Regression testing involves conducting all or
some of the previous tests to ensure that new errors
have not been introduced.
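A minimal Python sketch with hypothetical functions: once a defect (here, discounts over 100% producing negative prices) has been fixed, a test is kept in the suite so later changes cannot quietly reintroduce it, alongside a test that guards the existing behaviour:

def apply_discount(price, percent):
    """Return the discounted price. A past defect allowed percentages
    above 100, which produced negative prices; the clamp is the fix."""
    percent = min(max(percent, 0.0), 100.0)
    return price * (1.0 - percent / 100.0)

def test_discount_never_negative():
    # Regression test: the old over-100% defect must stay fixed.
    assert apply_discount(50.0, 150.0) == 0.0

def test_normal_discount_unchanged():
    # Existing functionality must not have degraded.
    assert apply_discount(200.0, 25.0) == 150.0

Rerunning such a suite after every modification is what the selective retesting described above amounts to in practice.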
Types of Testing

Real-Time Testing
Evaluates the system's real-time response, i.e. its timing behaviour.
Covers runtime changes made in response to changes in
customer requirements.
LEVELS
of
TESTING
Testing Levels
Unit Testing
A unit is the smallest testable piece of software that can
be compiled or assembled
Unit testing is the testing done to show that the unit does
satisfy its functional specification
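A minimal sketch of a unit test using Python's built-in unittest module; the word_count function is a hypothetical unit being checked against its functional specification:

import unittest

def word_count(text):
    """Unit under test: count whitespace-separated words."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_typical_sentence(self):
        self.assertEqual(word_count("testing finds errors"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()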
Testing Levels
Component Testing
A component is an integrated aggregate of one or more
units.
A component can be anything from a unit to an entire
system.
Component testing is the testing done to show that the
component does satisfy its functional specification.
Testing Levels

Integration Testing
Integration is a process by which components are
aggregated to create larger components.
Integration testing is testing done to show that, even though
the components were individually satisfactory, their
combination does not introduce errors due to integration.
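A minimal Python sketch with hypothetical units: each function could pass its own unit tests, and the integration test checks that they still behave correctly when aggregated:

def parse_csv_row(row):
    """Unit 1: split a comma-separated row into trimmed fields."""
    return [field.strip() for field in row.split(",")]

def format_record(fields):
    """Unit 2: join fields into a pipe-delimited record."""
    return " | ".join(fields)

def test_parser_and_formatter_together():
    # Integration test: the two units must work correctly when combined.
    record = format_record(parse_csv_row(" ada , lovelace , 1815 "))
    assert record == "ada | lovelace | 1815"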
Testing Levels
System Testing
A system is a big component.
System testing is aimed at revealing bugs that cannot be
attributed to components as such
System testing concerns issues and behaviors that can
only be exposed by testing the entire integrated system or
a major part of it.
Test
Strategies
Test Strategies
I/O First
I/O First testing is the use of every valid input condition as
a test case.
Additional input conditions are also used as test cases to
demonstrate every valid output.
 Other possible input conditions may need to be tested to
find other errors.
Test Strategies
Alpha Test
 Alpha testing is performed by a team of highly skilled testers, who
are usually internal employees of the organization.
 The software is tested at the developer's site, sometimes by a
customer.
Beta Test
 Beta testing is performed by clients or end users in a real-world
environment; they are not employees of the organization.
 The customer records all the problems (real or imagined) that are
encountered during beta testing and reports them to the developer
at regular intervals. Unlike alpha testing, the beta test environment
is not controlled by the developer.
Test Strategies
Black Box Testing
 Black box testing methods focus on the functional
requirements of the software.
 Black box testing enables the software engineer to derive
sets of input conditions that will fully exercise all functional
requirements for a program (see the sketch below).
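A minimal black-box sketch in Python, assuming a hypothetical specification that ages 18 to 65 are eligible: the test cases are derived only from that specification (valid and invalid input conditions), never from the implementation:

def is_eligible(age):
    """Treated as a black box: only its specification matters to the tests."""
    return 18 <= age <= 65

# Test cases derived purely from the spec "accept ages 18 to 65":
cases = [
    (17, False),  # just below the valid range
    (18, True),   # lower boundary
    (40, True),   # typical valid value
    (65, True),   # upper boundary
    (66, False),  # just above the valid range
]

for age, expected in cases:
    assert is_eligible(age) == expected, f"failed for age {age}"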
Test Strategies
White Box Testing
 Using white box testing methods, the software engineer can derive
test cases that guarantee that all independent paths within a module
have been exercised at least once,
 exercise all logical conditions on their true and false sides,
 execute all loops at their boundaries and within their operational
bounds, and
 exercise internal data structures to ensure their validity
(see the sketch below).
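A minimal white-box sketch in Python with a hypothetical function: knowing the internal structure, the tests exercise the condition on its true and false sides and run the loop zero, one, and many times:

def sum_positive(values):
    """Sum only the positive numbers in a list."""
    total = 0
    for v in values:        # loop: run zero, one, and many times below
        if v > 0:           # condition: exercised on true and false sides
            total += v
    return total

def test_white_box_paths():
    assert sum_positive([]) == 0             # loop body never executes
    assert sum_positive([5]) == 5            # one pass, condition true
    assert sum_positive([-3]) == 0           # one pass, condition false
    assert sum_positive([2, -1, 4]) == 6     # many passes, both branches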
Test Strategies
Gray Box Testing
 A combination of black box and white box testing.
 The tester has knowledge of both the specifications (black box)
and the internal structure (white box) of the system.
 If certain functionality is reused, then by understanding
its internal structure many test cases can be eliminated.
Others
 Test Coverage of Code

 Test Case Designing

 Test Documentation

 Test Management
