IT2032 Software Testing Unit-3


System Testing
What is System Testing?
• The testing conducted on the complete,
integrated products and solutions to
evaluate system compliance with
specified requirements on functional
and non – functional aspects is called
system testing.
Types of System Testing
• Functional Testing :- Tests real-life customer
usage of the product and solutions.
• Non – Functional Testing :- Brings in different
testing types ( also called quality factors )
to test the system.
Types of Non – Functional Testing
Performance Testing
Scalability Testing
Reliability Testing
Stress Testing
Interoperability Testing
Localization Testing
Functional Testing
• The tests should focus on the following goals.
• All types or classes of legal inputs must be
accepted by the software.
• All classes of illegal inputs must be rejected
(however, the system should remain available).
• All possible classes of system output must be
exercised and examined.
• All effective system states and state
transitions must be exercised and examined.
• All functions must be exercised.
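The legal/illegal-input goals above can be expressed as automated checks. This is a minimal sketch; the `parse_age` function and its input classes are illustrative stand-ins, not part of the course material:

```python
def parse_age(value):
    """Accept legal inputs (integer strings in 0-150); reject illegal ones.

    Illegal inputs raise ValueError but never crash the process, so the
    system "remains available" after each rejection.
    """
    try:
        age = int(value)
    except (TypeError, ValueError):
        raise ValueError(f"illegal input: {value!r}")
    if not 0 <= age <= 150:
        raise ValueError(f"out of range: {age}")
    return age

# Every class of legal input is accepted...
for legal in ["0", "42", "150"]:
    assert isinstance(parse_age(legal), int)

# ...and every class of illegal input is rejected without bringing the
# system down (the loop keeps running after each rejection).
for illegal in ["-1", "151", "abc", None]:
    try:
        parse_age(illegal)
        assert False, "illegal input was accepted"
    except ValueError:
        pass  # rejected as expected; system still available
```

Each boundary of the legal range (0 and 150) is a distinct input class and gets its own test case.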
Performance Testing
• To evaluate the time taken or response
time of the system to perform its
required functions, in comparison with
different versions of the same product or a
different competitive product, is called
“Performance Testing”.
Performance Testing
• two major types of requirements:
• Functional requirements. Users describe what functions the
software should perform. We test for compliance of these
requirements at the system level with the functional-based
system tests.
• Quality requirements. These are nonfunctional in nature but
describe quality levels expected for the software. One
example of a quality requirement is performance level. The
users may have objectives for the software system in terms of
memory use, response time, throughput, and delays.
Factors governing performance testing
• Throughput
• Response time
• Latency
• Tuning
• Benchmarking
• Capacity planning
Methodology for Performance testing
• Collecting requirements
• Writing test cases
• Automating performance test cases
• Executing performance test cases
• Analyzing performance test results
• Performance tuning
• Performance benchmarking
• Recommending right configuration for the
customer ( Capacity planning )
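The "executing" and "analyzing" steps above can be sketched as a small measurement harness. This is an illustrative sketch: the toy operation stands in for the system under test, and the response-time target stands in for a requirement collected in step 1:

```python
import time

def measure(operation, requests):
    """Run `operation` once per request; return the average response
    time (seconds) and the throughput (requests per second)."""
    times = []
    start = time.perf_counter()
    for req in requests:
        t0 = time.perf_counter()
        operation(req)                      # the system under test
        times.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    avg_response = sum(times) / len(times)  # response time factor
    throughput = len(requests) / elapsed    # throughput factor
    return avg_response, throughput

# A toy operation standing in for the real system.
avg, tps = measure(lambda r: sum(range(1000)), range(50))

# Analyze results against the requirement collected earlier
# (here, an assumed target of under 0.1 s average response time).
assert avg < 0.1
assert tps > 0
```

Real performance runs would automate this across load levels and configurations, which is where the tuning and capacity-planning steps use the numbers.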
Challenges
• Availability of skills is a major problem
facing performance testing.
• It requires a large number and amount of resources
such as hardware, software, effort, time, and tools.
• Results need to reflect the real-life environment
and expectations.
• Interfacing with different teams that include
a set of customers is another challenge.
Configuration Testing
• Typical software systems interact with hardware
devices such as disc drives, tape drives, and
printers. Many software systems also interact with
multiple CPUs, some of which are redundant.
Software that controls real-time processes, or
embedded software, also interfaces with devices,
but these are very specialized hardware items such
as missile launchers and nuclear power device
sensors.
Configuration Testing
• According to Beizer configuration testing has the
following objectives:
• Show that all the configuration changing
commands and menus work properly.
• Show that all interchangeable devices are really
interchangeable, and that they each enter the proper
states for the specified conditions.
• Show that the system's performance level is
maintained when devices are interchanged, or when
they fail.
• Several types of operations should be performed
during configuration test.
• Some sample operations for testers are:
• (i) rotate and permute the positions of devices to
ensure physical/logical device permutations work
for each device (e.g., if there are two printers A
and B, exchange their positions);
(ii) induce malfunctions in each device, to see if
the system properly handles the malfunction;
(iii) induce multiple device malfunctions to see
how the system reacts. These operations will help
to reveal problems (defects) relating to
hardware/software interactions when hardware
exchanges, and reconfigurations occur. Testers
observe the consequences of these operations and
determine whether the system can recover
gracefully particularly in the case of a
malfunction.
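The three sample operations can be sketched with a toy device dispatcher. The `route_job` function and the printer setup are hypothetical, used only to show the shape of the checks:

```python
import itertools

def route_job(job, printers):
    """Hypothetical dispatcher: send the job to the first healthy printer."""
    for name, healthy in printers:
        if healthy:
            return name
    raise RuntimeError("no device available")

devices = [("printer_A", True), ("printer_B", True)]

# (i) rotate/permute device positions: every ordering must still work.
for order in itertools.permutations(devices):
    assert route_job("doc", list(order)) in ("printer_A", "printer_B")

# (ii) induce a malfunction in each device in turn: the system must
# route around the failed device.
for i in range(len(devices)):
    degraded = [(n, h and j != i) for j, (n, h) in enumerate(devices)]
    assert route_job("doc", degraded) != devices[i][0]

# (iii) induce multiple device malfunctions: the system should fail
# gracefully, not crash.
all_broken = [(n, False) for n, _ in devices]
try:
    route_job("doc", all_broken)
    assert False
except RuntimeError:
    pass  # handled, not crashed
```

With real hardware these operations are manual or driven through a test rig, but the pass/fail criteria have the same structure.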
Security Testing
• Designing and testing software systems to ensure
that they are safe and secure is a big issue facing
software developers and test specialists.
• Recently, safety and security issues have taken on
additional importance due to the proliferation of
commercial applications for use on the Internet.
• Computer software and data can be compromised
by:
(i) criminals intent on doing damage, stealing
data and information, causing denial of service,
invading privacy;
(ii) errors on the part of honest
developers/maintainers who modify, destroy, or
compromise data because of misinformation,
misunderstandings, and/or lack of knowledge
• Both criminal behavior and errors that do
damage can be perpetrated by those inside and
outside of an organization.
• Attacks can be random or systematic.
• Damage can be done through various means
such as:
(i) viruses;
(ii) Trojan horses;
(iii) trap doors;
(iv) illicit channels.
• The effects of security breaches could
be extensive and can cause:
(i) loss of information;
(ii) corruption of information;
(iii) misinformation;
(iv) privacy violations;
(v) denial of service.
• Password checking and examples of other areas
to focus on during security testing are described
below.
– Password Checking
– Legal and Illegal Entry with Passwords
– Password Expiration
– Encryption
– Browsing
– Trap Doors
– Viruses
Recovery Testing
• Recovery testing subjects a system to losses of
resources in order to determine if it can recover
properly from these losses.
• This type of testing is especially important for
transaction systems,
• for example, on-line banking software.
• Beizer advises that testers focus on the following
areas during recovery testing :
• Restart. The current system state and transaction
states are discarded. The most recent checkpoint
record is retrieved and the system initialized to
the states in the checkpoint record.
• Switchover. The ability of the system to switch to
a new processor must be tested. Switchover is the
result of a command or a detection of a faulty
processor by a monitor.
• In both situations, all transactions and processes
must be carefully examined to detect:
(i) loss of transactions;
(ii) merging of transactions;
(iii) incorrect transactions;
(iv) an unnecessary duplication of a transaction.
A good way to expose such problems is to perform
recovery testing under a stressful load. Transaction
inaccuracies and system crashes are likely to occur
with the result that defects and design flaws will be
revealed.
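The restart behavior above can be sketched with a toy checkpointed transaction store. The class is illustrative; a real system writes checkpoint records to durable storage:

```python
import copy

class Checkpointed:
    """Toy transaction system with checkpoint/restart semantics."""

    def __init__(self):
        self.committed = []    # durable, checkpointed state
        self.in_flight = []    # current transaction state
        self._checkpoint = []  # most recent checkpoint record

    def begin(self, txn):
        self.in_flight.append(txn)

    def commit(self):
        self.committed.extend(self.in_flight)
        self.in_flight = []
        self._checkpoint = copy.copy(self.committed)  # write checkpoint

    def restart(self):
        """Discard current state; reload the most recent checkpoint."""
        self.in_flight = []
        self.committed = copy.copy(self._checkpoint)

sys_ = Checkpointed()
sys_.begin("t1"); sys_.commit()  # t1 is checkpointed
sys_.begin("t2")                 # t2 is in flight when the "crash" hits
sys_.restart()

assert sys_.committed == ["t1"]  # no loss of committed transactions
assert sys_.in_flight == []      # no partial transaction survives restart
```

A recovery test would run many interleaved transactions under load, crash at random points, and apply the same two assertions after every restart.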
Regression Testing
• Regression testing is not a level of testing, but it
is the retesting of software that occurs when
changes are made to ensure that the new version
of the software has retained the capabilities of
the old version and that no new defects have
been introduced due to the changes.
• Test cases, test procedures, and other test-
related items from previous releases should be
available so that these tests can be run with the
new versions of the software.
Types of regression testing
• Regular regression testing
– It is done between test cycles to ensure that the
defect fixes that have been made, and the
functionality that was working in the earlier test
cycles, continue to work.
• Final regression testing
– It is done to validate the final build before
release.
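Re-running saved test cases against a new version can be sketched in a few lines. The function under test and its saved cases are illustrative:

```python
# Saved test cases from the previous release, kept as (input, expected) pairs.
saved_cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

def add_v2(a, b):
    """New version of the function under test."""
    return a + b

def run_regression(func, cases):
    """Re-run every old test case against the new version; collect failures."""
    failures = []
    for args, expected in cases:
        if func(*args) != expected:
            failures.append((args, expected))
    return failures

# The new version must retain all capabilities of the old one:
# an empty failure list means no regression.
assert run_regression(add_v2, saved_cases) == []
```

This is why the test cases, procedures, and other test-related items from previous releases must be preserved: they are the inputs to this loop.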
Scalability Testing
• Testing that requires an enormous amount
of resources to find out the maximum
capability of the system parameters is
called “Scalability Testing”.
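Finding the maximum capability can be sketched as a load ramp: keep increasing the load until the system fails, and report the last load it handled. The toy system and its capacity are illustrative:

```python
def find_max_capacity(system, start=1, limit=1_000_000):
    """Double the load until the system fails; the last passing load
    approximates the system's maximum capability."""
    load, last_ok = start, 0
    while load <= limit:
        if not system(load):  # system(load) -> True if it coped
            break
        last_ok = load
        load *= 2
    return last_ok

# Toy system that copes with at most 500 concurrent units of load.
toy = lambda load: load <= 500
assert find_max_capacity(toy) == 256  # largest tested load <= 500
```

A real scalability run would refine the answer with a narrower search between the last pass and the first failure, and would vary each system parameter (connections, records, users) separately.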
Reliability Testing
• To evaluate the ability of the system, or an
independent component of the system, to
perform its required functions repeatedly
for a specified period of time is called
“Reliability Testing”.
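A reliability run repeats the same operation many times and records the success rate. This is a minimal sketch; the operation and the 100% target are illustrative assumptions:

```python
def reliability_run(operation, repetitions):
    """Execute `operation` repeatedly; return the fraction of successes."""
    successes = 0
    for _ in range(repetitions):
        try:
            operation()
            successes += 1
        except Exception:
            pass  # a failure counts against the success rate
    return successes / repetitions

# A stand-in operation that never fails.
rate = reliability_run(lambda: sum(range(100)), repetitions=1000)
assert rate == 1.0  # meets an assumed target of no failures over the run
```

Real reliability testing runs for a specified wall-clock period rather than a fixed count, and compares the observed failure rate against the stated reliability requirement.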
This product reliability is
achieved by focusing on the
following activities
• Defined engineering processes
• Review of work products at each stage
• Change management procedure
• Review of testing coverage
• Ongoing monitoring of the product
Stress Testing
• Evaluating a system beyond the limits
of the specified requirements or system
resources, to ensure the system does not
break down unexpectedly, is called
“Stress Testing”.
Guidelines can be used to select
the tests for stress testing
• Repetitive tests
• Concurrency
• Magnitude
• Random variation
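The "repetitive tests" and "concurrency" guidelines can be combined in one sketch: many threads hammer the same operation far beyond normal usage, and the test checks that no updates are lost. The counter and thread counts are illustrative:

```python
import threading

counter = {"value": 0}
lock = threading.Lock()

def transaction():
    """The operation under stress; the lock models correct shared-state
    handling in the system under test."""
    with lock:
        counter["value"] += 1

def stress(workers=8, repetitions=1000):
    """Repetitive tests under concurrency: `workers` threads each run
    the operation `repetitions` times."""
    threads = [
        threading.Thread(
            target=lambda: [transaction() for _ in range(repetitions)]
        )
        for _ in range(workers)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

stress()
# The system must not break down or lose updates under the load.
assert counter["value"] == 8 * 1000
```

Magnitude and random variation would be layered on top: larger payloads per transaction, and randomized timing or input sizes between repetitions.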
Interoperability Testing
• This testing is done to ensure that two or
more products can exchange information,
use that information, and work together closely.
• Integration is a method and interoperability
is the end result.
Some guidelines that help in
improving interoperability
• Consistency of information flow across
systems.
• Changes to data representation as per the system
requirements.
• Correlated interchange of message and
receiving appropriate response.
• Communication and messaging between the systems.
• Meeting quality factors.
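The first and third guidelines, consistent information flow and correlated message exchange with an appropriate response, can be sketched with two toy systems exchanging a JSON message. Both functions and the message format are illustrative:

```python
import json

def system_a_send(record):
    """System A serializes a record to a JSON message."""
    return json.dumps({"id": record["id"], "amount": record["amount"]})

def system_b_receive(message):
    """System B parses the message and returns its response."""
    data = json.loads(message)
    return {"id": data["id"], "amount": data["amount"], "status": "processed"}

# Consistency of information flow: what A sends, B must interpret
# identically, and B must return an appropriate response.
record = {"id": 7, "amount": 99.5}
reply = system_b_receive(system_a_send(record))
assert reply["id"] == record["id"]
assert reply["amount"] == record["amount"]
assert reply["status"] == "processed"
```

In a real interoperability test the two sides are separate products, often from different vendors, so the shared message format (the JSON schema here) is the contract being tested.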
Localization Testing
• Testing conducted to verify that the localized
product works in different languages is called
“Localization Testing”.
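Two basic localization checks, catalog completeness and working lookups per language, can be sketched over a toy message catalog. The catalogs and locales are illustrative:

```python
# Hypothetical message catalogs for two locales.
catalogs = {
    "en": {"greeting": "Hello", "farewell": "Goodbye"},
    "fr": {"greeting": "Bonjour", "farewell": "Au revoir"},
}

def translate(key, locale):
    return catalogs[locale][key]

# Check 1: every key in the base locale exists in each localized
# catalog (no untranslated strings shipped).
for locale, catalog in catalogs.items():
    assert set(catalog) == set(catalogs["en"]), f"missing keys in {locale}"

# Check 2: lookups succeed and return localized text in every
# supported language.
assert translate("greeting", "fr") == "Bonjour"
assert translate("farewell", "en") == "Goodbye"
```

Full localization testing also covers layout (text expansion), character encoding, and locale-sensitive formats such as dates and currency.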
Acceptance Testing
• It is a phase after system testing that is normally
done by the customers or representative of the
customers.
• The customer defines a set of test cases that will
be executed to qualify and accept the product.
Acceptance Criteria
• Product Acceptance
• Procedure Acceptance
• Service level agreements
Select Test Cases for acceptance Testing
• End – to – End functionality verification
• Domain Tests
• User scenario tests
• Basic sanity tests
• New functionality
• A few non – functional tests
• Tests pertaining to legal obligations and
service level agreements
Executing Acceptance Tests
• Acceptance testing is done by the customer
or by the representative of the customer to
check whether the product is ready for use in
the real – life environment.
• The acceptance test team is typically made up of
people with 90% business knowledge of the product
and 10% representatives of the technical
testing team.
Alpha & Beta Testing
• Alpha Testing :-
– Alpha tests bring potential users to the
developer's site to use the software. Developers
note any problems.
• Beta Testing :-
– Beta tests send the software out to potential users
who use it under real-world conditions and report
defects to the developing organization.
Thank You