
Session (computer science)

From Wikipedia, the free encyclopedia



In computer science, in particular networking, a session is a semi-permanent interactive information interchange, also known as a dialogue, a conversation or a meeting, between two or more communicating devices, or between a computer and user (see Login session). A session is set up or established at a certain point in time, and then torn down at some later point. An established communication session may involve more than one message in each direction. A session is typically, but not always, stateful, meaning that at least one of the communicating parties needs to save information about the session history in order to be able to communicate, as opposed to stateless communication, where the communication consists of independent requests with responses.

An established session is the basic requirement to perform a connection-oriented communication. A session is also the basic step for transmission in connectionless communication modes; however, a unidirectional transmission by itself does not define a session.[1]

Communication sessions may be implemented as part of protocols and services at the application layer, at the session layer or at the transport layer in the OSI model.

 Application layer examples:
o HTTP sessions, which allow associating information with individual visitors
o A telnet remote login session
 Session layer example:
o A Session Initiation Protocol (SIP) based Internet phone call
 Transport layer example:
o A TCP session, which is synonymous with a TCP virtual circuit, a TCP connection, or an established TCP socket.

In the case of transport protocols that do not implement a formal session layer (e.g., UDP) or
where sessions at the application layer are generally very short-lived (e.g., HTTP), sessions are
maintained by a higher level program using a method defined in the data being exchanged. For
example, an HTTP exchange between a browser and a remote host may include an HTTP cookie
which identifies state, such as a unique session ID, information about the user's preferences or
authorization level.
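
As a rough illustration of the cookie-based approach just described, the sketch below shows a server minting a session ID, handing it to the browser in a Set-Cookie header, and resolving the Cookie header back to server-side state on later requests. It uses only the Python standard library; the cookie name SESSIONID and the in-memory SESSION_STORE are illustrative, not from the article.

```python
import uuid
from http.cookies import SimpleCookie

SESSION_STORE = {}  # illustrative server-side state, keyed by session ID

def issue_session():
    """Create a new session and return the Set-Cookie header value."""
    session_id = str(uuid.uuid4())
    SESSION_STORE[session_id] = {"preferences": {}, "authorized": False}
    cookie = SimpleCookie()
    cookie["SESSIONID"] = session_id
    cookie["SESSIONID"]["httponly"] = True
    return cookie["SESSIONID"].OutputString()  # e.g. 'SESSIONID=...; HttpOnly'

def lookup_session(cookie_header):
    """Resolve a Cookie request header back to the stored session state."""
    cookie = SimpleCookie(cookie_header)
    morsel = cookie.get("SESSIONID")
    return SESSION_STORE.get(morsel.value) if morsel else None
```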

HTTP/1.0 was thought to allow only a single request and response during one Web/HTTP session. Protocol version HTTP/1.1 improved this by completing the Common Gateway Interface (CGI), making it easier to maintain the Web session and supporting HTTP cookies and file uploads.

Most client-server sessions are maintained by the transport layer - a single connection for a single session. However, each transaction phase of a Web/HTTP session creates a separate connection. Maintaining session continuity between phases requires a session ID. The session ID is embedded within the <A HREF> or <FORM> links of dynamic web pages so that it is passed back to the CGI. The CGI script then uses the session ID to ensure session continuity between transaction phases. One advantage of one connection per phase is that it works well over low-bandwidth (modem) connections.
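
A minimal sketch of the link-embedding technique described above, assuming a hypothetical CGI script and a made-up sid query parameter; real applications would choose their own parameter names and URLs.

```python
from html import escape

def page_with_session_links(session_id):
    """Emit links and a form that carry the session ID back on the next request."""
    sid = escape(session_id, quote=True)
    return f"""
    <a href="/cgi-bin/catalog.cgi?sid={sid}">Continue shopping</a>
    <form action="/cgi-bin/checkout.cgi" method="post">
      <input type="hidden" name="sid" value="{sid}">
      <input type="submit" value="Check out">
    </form>
    """
```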

Contents
 1 Software implementation
 2 Server side web sessions
 3 Client side web sessions
 4 HTTP session token
 5 Session management
o 5.1 Desktop session management
o 5.2 Browser session management
o 5.3 Web server session management
o 5.4 Session Management over SMS
 6 See also
 7 References
 8 External links

Software implementation
TCP sessions are typically implemented in software using child processes and/or multithreading, where a new process or thread is created when the computer establishes or joins a session. HTTP sessions are typically not implemented using one thread per session, but by means of a database with information about the state of each session. The advantage of using multiple processes or threads is reduced software complexity, since each thread is an instance with its own history and encapsulated variables. The disadvantage is the large overhead in terms of system resources, and the fact that the session may be interrupted if the system is restarted.
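
A small sketch of the "one process or thread per TCP session" pattern mentioned above, using Python's standard socketserver module. The echo behaviour and the port number are placeholders.

```python
import socketserver

class SessionHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Each connection (session) runs in its own thread with its own state.
        history = []
        for line in self.rfile:
            history.append(line)      # per-session history, as described above
            self.wfile.write(line)    # placeholder behaviour: echo back

class ThreadedServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    daemon_threads = True

if __name__ == "__main__":
    with ThreadedServer(("127.0.0.1", 9000), SessionHandler) as server:
        server.serve_forever()
```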

When a client may connect to any server in a cluster of servers, a special problem is encountered
in maintaining consistency when the servers must maintain session state. The client must either
be directed to the same server for the duration of the session, or the servers must transmit server-
side session information via a shared file system or database. Otherwise, the client may
reconnect to a different server than the one it started the session with, which will cause problems
when the new server does not have access to the stored state of the old one.

Server side web sessions


Server-side sessions are handy and efficient, but can become difficult to handle in conjunction
with load-balancing/high-availability systems and are not usable at all in some embedded
systems with no storage. The load-balancing problem can be solved by using shared storage or
by applying forced peering between each client and a single server in the cluster, although this
can compromise system efficiency and load distribution.

A method of using server-side sessions in systems without mass storage is to reserve a portion of RAM for storage of session data. This method is applicable to servers with a limited number of clients (e.g. a router or access point with infrequent or disallowed access to more than one client at a time).
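
The sketch below illustrates, under assumed capacity and timeout values, how such a RAM-only session table might look: a small fixed number of slots, with idle sessions expired so their memory can be reused.

```python
import time

MAX_SESSIONS = 4         # assumed small, fixed budget of RAM-backed sessions
SESSION_TIMEOUT = 300    # assumed seconds of inactivity before a slot is reclaimed
_sessions = {}           # session_id -> (last_seen, data)

def touch(session_id, data=None):
    """Refresh or create a session, reclaiming expired slots first."""
    now = time.monotonic()
    for sid, (last_seen, _) in list(_sessions.items()):
        if now - last_seen > SESSION_TIMEOUT:
            del _sessions[sid]
    if session_id not in _sessions and len(_sessions) >= MAX_SESSIONS:
        raise RuntimeError("too many concurrent sessions")
    _, existing = _sessions.get(session_id, (now, {}))
    _sessions[session_id] = (now, data if data is not None else existing)
```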

Client side web sessions


Client-side sessions use cookies and cryptographic techniques to maintain state without storing
as much data on the server. When presenting a dynamic web page, the server sends the current
state data to the client (web browser) in the form of a cookie. The client saves the cookie in
memory or on disk. With each successive request, the client sends the cookie back to the server,
and the server uses the data to "remember" the state of the application for that specific client and
generate an appropriate response.

This mechanism may work well in some contexts; however, data stored on the client is
vulnerable to tampering by the user or by software that has access to the client computer. To use
client-side sessions where confidentiality and integrity are required, the following must be
guaranteed:

1. Confidentiality: Nothing apart from the server should be able to interpret session data.
2. Data integrity: Nothing apart from the server should manipulate session data
(accidentally or maliciously).
3. Authenticity: Nothing apart from the server should be able to initiate valid sessions.

To accomplish this, the server needs to encrypt the session data before sending it to the client,
and modification of such information by any other party should be prevented via cryptographic
means.
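
As a sketch of the integrity requirement only (confidentiality would additionally require encrypting the payload), the server below signs the session data with a secret key before placing it in a cookie and rejects any cookie whose signature does not verify. The key, encoding, and field layout are all illustrative.

```python
import base64, hashlib, hmac, json

SECRET_KEY = b"server-side secret, never sent to the client"  # illustrative

def encode_session(data):
    """Serialize and sign session data for storage in a cookie."""
    payload = base64.urlsafe_b64encode(json.dumps(data).encode()).decode()
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def decode_session(cookie_value):
    """Return the session data, or None if the cookie was tampered with."""
    payload, _, sig = cookie_value.rpartition(".")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(payload.encode()))
```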

Transmitting state back and forth with every request is only practical when the size of the cookie
is small. In essence, client-side sessions trade server disk space for the extra bandwidth that each
web request will require. Moreover, web browsers limit the number and size of cookies that may
be stored by a web site. To improve efficiency and allow for more session data, the server may
compress the data before creating the cookie, decompressing it later when the cookie is returned
by the client.

HTTP session token


A session token is a unique identifier that is generated and sent from a server to a client to
identify the current interaction session. The client usually stores and sends the token as an HTTP
cookie and/or sends it as a parameter in GET or POST queries. The reason to use session tokens
is that the client only has to handle the identifier—all session data is stored on the server (usually
in a database, to which the client does not have direct access) linked to that identifier. Examples
of the names that some programming languages use when naming their HTTP cookie include
JSESSIONID (JSP), PHPSESSID (PHP), CGISESSID (CGI), and ASPSESSIONID (ASP).
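
A minimal sketch of minting such a token and keeping the actual session data server-side, keyed by the token. The SESSIONS dictionary stands in for the database mentioned above; the token length is an arbitrary choice.

```python
import secrets

SESSIONS = {}  # token -> server-side session data (stands in for a database)

def start_session(user_id):
    token = secrets.token_urlsafe(32)  # unguessable identifier sent to the client
    SESSIONS[token] = {"user_id": user_id}
    return token
```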

Session management
In human–computer interaction, session management is the process of keeping track of a user's
activity across sessions of interaction with the computer system.

Typical session management tasks in a desktop environment include keeping track of which
applications are open and which documents each application has opened, so that the same state
can be restored when the user logs out and logs in later. For a website, session management
might involve requiring the user to re-login if the session has expired (i.e., a certain time limit
has passed without user activity). It is also used to store information on the server-side between
HTTP requests.

Desktop session management

A desktop session manager is a program that can save and restore desktop sessions. A desktop
session is all the windows currently running and their current content. Session management on
Linux-based systems is provided by X session manager. On Microsoft Windows systems, session
management is provided by the Session Manager Subsystem (smss.exe); user session
functionality can be extended by third-party applications like twinsplay.

Browser session management

Session management is particularly useful in a web browser where a user can save all open pages
and settings and restore them at a later date. To help recover from a system or application crash,
pages and settings can also be restored on next run. Google Chrome, Mozilla Firefox, Internet
Explorer, OmniWeb and Opera are examples of web browsers that support session management.
Session state in browsers is often maintained through the use of cookies.

Web server session management

Hypertext Transfer Protocol (HTTP) is stateless: a client computer running a web browser must
establish a new Transmission Control Protocol (TCP) network connection to the web server with
each new HTTP GET or POST request. The web server, therefore, cannot rely on an established
TCP network connection for longer than a single HTTP GET or POST operation. Session
management is the technique used by the web developer to make the stateless HTTP protocol
support session state. For example, once a user has been authenticated to the web server, the
user's next HTTP request (GET or POST) should not cause the web server to ask for the user's
account and password again. For a discussion of the methods used to accomplish this, see HTTP cookie and Session ID.

In situations where multiple web servers must share knowledge of session state (as is typical in a
cluster environment) session information must be shared between the cluster nodes that are
running web server software. Methods for sharing session state between nodes in a cluster
include: multicasting session information to member nodes (see JGroups for one example of this
technique), sharing session information with a partner node using distributed shared memory or
memory virtualization, sharing session information between nodes using network sockets,
storing session information on a shared file system such as a distributed file system or a global
file system, or storing the session information outside the cluster in a database.

If session information is considered transient, volatile data that is not required for non-
repudiation of transactions and does not contain data that is subject to compliance auditing (in
the U.S. for example, see the Health Insurance Portability and Accountability Act and the
Sarbanes-Oxley Act for examples of two laws that necessitate compliance auditing) then any
method of storing session information can be used. However, if session information is subject to
audit compliance, consideration should be given to the method used for session storage,
replication, and clustering.

In a service-oriented architecture, Simple Object Access Protocol (SOAP) messages constructed with Extensible Markup Language (XML) can be used by consumer applications to cause web servers to create sessions.

Session Management over SMS

Just as HTTP is a stateless protocol, so is SMS. As SMS became interoperable across rival
networks in 1999,[2] and text messaging started its ascent towards becoming a ubiquitous global
form of communication,[3] various enterprises became interested in using the SMS channel for
commercial purposes. Initial services did not require session management since they were only
one-way communications (for example, in 2000, the first mobile news service was delivered via
SMS in Finland). Today, these applications are referred to as application-to-peer (A2P)
messaging as distinct from peer-to-peer (P2P) messaging. The development of interactive
enterprise applications required session management, but because SMS is a stateless protocol as
defined by the GSM standards,[4] early implementations were controlled client-side by having the
end-users enter commands and service identifiers manually.
The Difference Between Use Cases and Test Cases

April 12, 2007 | Business Analysis, Requirements, Requirements Models, Software development, Testing, Use Cases

People who are new to software, requirements, or testing often ask “What’s the difference
between a use case and a test case?” This article answers that question, by building on earlier
articles about use cases and use case scenarios. At the soundbite level, each use case has one or
more scenarios, and each use case scenario would lead to the creation of one or more test cases.

Definition of a Use Case


Use cases tell the story of how someone interacts with a software system to achieve a goal. A
good use case will describe the interactions that lead to either achieving or abandoning the goal.
The use case will describe multiple paths that the user can follow within the use case.

Definition of a Use Case Scenario

A use case is made up of one or more use case scenarios. Each path that can be followed within
the use case is a use case scenario. Any given example of following a use case also follows a
single scenario. Multiple executions of the use case can use the same or different scenarios.

Definition of a Test Case


There are many different kinds of testing, and they can be described in different ways. They can
be done by different people to achieve different objectives. They can be manual or automated.
Testing is a very large field, and this article is not trying to define all of the ways that people can
test software.

When someone asks the question “What’s the difference between a use case and a test case?”
they are generally referring to system tests, “end to end” tests, or blackbox tests. They are
probably not thinking about unit tests or integration tests.

Check out this explanation of the difference between unit tests and system tests. Or this article
for an introduction to Continuous Integration – an approach to test automation and software
development, or this article on the essential practices of continuous integration.
System Test Cases

Many system tests are designed to simulate how a user interacts with the system, to make sure
that the system responds appropriately. If you’ve defined your requirements by using goal driven
use cases, you can use the use cases as a framework for defining these test cases.

These system tests should be created to test a single situation. When using the approach of use
cases and use case scenarios to describe requirements, a system test should test a single use case
scenario.

In an example use case we recently wrote, one alternate flow “3A1” involves the user entering
shipping and billing information that is unique to the order they are placing. You would define at
least one use case scenario that involves these steps. Assume that the scenario you have defined
follows alternate flow “3A1” but otherwise follows the normal flow.

You could create two system tests that are designed to validate that the requirements of the use
case are met. The first system test would involve a user that has an account but elects to use
different information for this order. This is a realistic scenario – perhaps someone is ordering a
gift to be delivered to their cousin who lives at a different address.

You could also define a system test that involves a user without a pre-existing account.
Both test cases follow the use case, and both test cases follow the use case scenario. But the test
cases test different things (from a business / requirements perspective). They may be testing
exactly the same code, but from a system test perspective, you neither know nor care because a
system test is a blackbox test.
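
A hypothetical sketch of those two system tests, written pytest-style. The place_order helper is a crude stand-in for the real black-box driver (UI or API automation) a project would actually use; only the shape of the tests is the point here.

```python
def place_order(has_account, shipping_address, billing_address):
    # Stand-in for the system under test; a real system test would drive the
    # actual application through its user interface or a public API.
    return {"confirmed": bool(shipping_address and billing_address)}

def test_account_holder_ships_gift_to_cousin():
    # Existing account, but one-off shipping and billing details (flow 3A1).
    order = place_order(True, "cousin's address", "buyer's own card")
    assert order["confirmed"]

def test_guest_without_account_places_order():
    # No pre-existing account; the same use case scenario still applies.
    order = place_order(False, "guest's address", "guest's card")
    assert order["confirmed"]
```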

Summary

A use case represents the interactions (or observed behaviors) associated with achieving or
abandoning a goal. A use case scenario represents one of the possible paths through a use case. A
test case represents one set of inputs that exercises a single use case scenario.
Performance vs. load vs. stress testing

Here's a good interview question for a tester: how do you define performance/load/stress testing?
Many times people use these terms interchangeably, but they have in fact quite different
meanings. This post is a quick review of these concepts, based on my own experience, but also
using definitions from testing literature -- in particular: "Testing computer software" by Kaner et
al, "Software testing techniques" by Loveland et al, and "Testing applications on the Web" by
Nguyen et al.

Update July 7th, 2005

From the referrer logs I see that this post comes up fairly often in Google searches. I'm updating
it with a link to a later post I wrote called 'More on performance vs. load testing'.

Performance testing

The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a
baseline for future regression testing. To conduct performance testing is to engage in a carefully
controlled process of measurement and analysis. Ideally, the software under test is already stable
enough so that this process can proceed smoothly.

A clearly defined set of expectations is essential for meaningful performance testing. If you don't
know where you want to go in terms of the performance of the system, then it matters little
which direction you take (remember Alice and the Cheshire Cat?). For example, for a Web
application, you need to know at least two things:

 expected load in terms of concurrent users or HTTP connections


 acceptable response time

Once you know where you want to be, you can start on your way there by constantly increasing
the load on the system while looking for bottlenecks. To take again the example of a Web
application, these bottlenecks can exist at multiple levels, and to pinpoint them you can use a
variety of tools:

 at the application level, developers can use profilers to spot inefficiencies in their code
(for example poor search algorithms)
 at the database level, developers and DBAs can use database-specific profilers and query
optimizers
 at the operating system level, system engineers can use utilities such as top, vmstat,
iostat (on Unix-type systems) and PerfMon (on Windows) to monitor hardware resources
such as CPU, memory, swap, disk I/O; specialized kernel monitoring software can also
be used
 at the network level, network engineers can use packet sniffers such as tcpdump,
network protocol analyzers such as ethereal, and various utilities such as netstat, MRTG,
ntop, mii-tool
From a testing point of view, the activities described above all take a white-box approach, where
the system is inspected and monitored "from the inside out" and from a variety of angles.
Measurements are taken and analyzed, and as a result, tuning is done.

However, testers also take a black-box approach in running the load tests against the system
under test. For a Web application, testers will use tools that simulate concurrent users/HTTP
connections and measure response times. Some lightweight open source tools I've used in the
past for this purpose are ab, siege, httperf. A more heavyweight tool I haven't used yet is
OpenSTA. I also haven't used The Grinder yet, but it is high on my TODO list.
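
For illustration, a crude black-box load driver in the spirit of ab/siege/httperf can be put together in a few lines of Python: fire a number of concurrent requests at a URL and report response times. The URL, concurrency, and request counts below are placeholders, and a real load test would normally use one of the dedicated tools named above.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def timed_get(url):
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.monotonic() - start

def run_load(url="http://localhost:8080/", concurrency=20, requests=200):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = sorted(pool.map(timed_get, [url] * requests))
    print(f"median {times[len(times) // 2]:.3f}s  worst {times[-1]:.3f}s")
```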

When the results of the load test indicate that performance of the system does not meet its expected goals, it is time for tuning, starting with the application and the database. You want to make sure your code runs as efficiently as possible and your database is optimized on a given OS/hardware configuration. In this context, TDD practitioners will find a framework such as Mike Clark's jUnitPerf very useful; it enhances existing unit test code with load-test and timed-test functionality. Once a particular function or method has been profiled and tuned, developers can then wrap its unit tests in jUnitPerf and ensure that it meets performance requirements of load and timing. Mike Clark calls this "continuous performance testing". I should also mention that I've done an initial port of jUnitPerf to Python -- I called it pyUnitPerf.
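
The sketch below is not the jUnitPerf/pyUnitPerf API; it only illustrates the underlying idea of wrapping an existing unit test with a time budget so that a performance regression fails the build. The function under test and the 50 ms budget are made up.

```python
import time
import unittest

def search_catalog(term):          # made-up function under test
    return [term]

class TimedSearchTest(unittest.TestCase):
    def test_search_within_budget(self):
        start = time.monotonic()
        result = search_catalog("widget")
        elapsed = time.monotonic() - start
        self.assertEqual(result, ["widget"])   # functional assertion
        self.assertLess(elapsed, 0.05)         # performance assertion (50 ms budget)
```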

If, after tuning the application and the database, the system still doesn't meet its expected goals in terms of performance, a wide array of tuning procedures is available at all the levels discussed before. Here are some examples of things you can do to enhance the performance of a Web application outside of the application code per se:

 Use Web cache mechanisms, such as the one provided by Squid


 Publish highly-requested Web pages statically, so that they don't hit the database
 Scale the Web server farm horizontally via load balancing
 Scale the database servers horizontally and split them into read/write servers and read-
only servers, then load balance the read-only servers
 Scale the Web and database servers vertically, by adding more hardware resources (CPU,
RAM, disks)
 Increase the available network bandwidth

Performance tuning can sometimes be more art than science, due to the sheer complexity of the
systems involved in a modern Web application. Care must be taken to modify one variable at a
time and redo the measurements, otherwise multiple changes can have subtle interactions that are
hard to qualify and repeat.

In a standard test environment such as a test lab, it will not always be possible to replicate the
production server configuration. In such cases, a staging environment is used which is a subset of
the production environment. The expected performance of the system needs to be scaled down
accordingly.

The cycle "run load test->measure performance->tune system" is repeated until the system under
test achieves the expected levels of performance. At this point, testers have a baseline for how
the system behaves under normal conditions. This baseline can then be used in regression tests to
gauge how well a new version of the software performs.

Another common goal of performance testing is to establish benchmark numbers for the system
under test. There are many industry-standard benchmarks such as the ones published by TPC,
and many hardware/software vendors will fine-tune their systems in such ways as to obtain a
high ranking in the TPC top-tens. It is common knowledge that one needs to be wary of any
performance claims that do not include a detailed specification of all the hardware and software
configurations that were used in that particular test.

Load testing

We have already seen load testing as part of the process of performance testing and tuning. In
that context, it meant constantly increasing the load on the system via automated tools. For a
Web application, the load is defined in terms of concurrent users or HTTP connections.

In the testing literature, the term "load testing" is usually defined as the process of exercising the
system under test by feeding it the largest tasks it can operate with. Load testing is sometimes
called volume testing, or longevity/endurance testing.

Examples of volume testing:

 testing a word processor by editing a very large document


 testing a printer by sending it a very large job
 testing a mail server with thousands of user mailboxes
 a specific case of volume testing is zero-volume testing, where the system is fed empty
tasks

Examples of longevity/endurance testing:

 testing a client-server application by running the client in a loop against the server over
an extended period of time

Goals of load testing:

 expose bugs that do not surface in cursory testing, such as memory management bugs,
memory leaks, buffer overflows, etc.
 ensure that the application meets the performance baseline established during
performance testing. This is done by running regression tests against the application at a
specified maximum load.

Although performance testing and load testing can seem similar, their goals are different. On one
hand, performance testing uses load testing techniques and tools for measurement and
benchmarking purposes and uses various load levels. On the other hand, load testing operates at
a predefined load level, usually the highest load that the system can accept while still functioning
properly. Note that load testing does not aim to break the system by overwhelming it, but instead
tries to keep the system constantly humming like a well-oiled machine.

In the context of load testing, I want to emphasize the extreme importance of having large
datasets available for testing. In my experience, many important bugs simply do not surface
unless you deal with very large entities such as thousands of users in repositories such as
LDAP/NIS/Active Directory, thousands of mail server mailboxes, multi-gigabyte tables in
databases, deep file/directory hierarchies on file systems, etc. Testers obviously need automated
tools to generate these large data sets, but fortunately any good scripting language worth its salt
will do the job.
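
As a small example of what such a data-generation script might look like, the sketch below writes a CSV describing 100,000 fake mailbox accounts; the field layout and counts are invented for illustration.

```python
import csv

def generate_mailboxes(path="mailboxes.csv", count=100_000):
    """Write a large, repeatable test data set of fake mailbox accounts."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "quota_mb", "domain"])
        for i in range(count):
            writer.writerow([f"user{i:06d}", 200, "example.test"])
```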

Stress testing

Stress testing tries to break the system under test by overwhelming its resources or by taking
resources away from it (in which case it is sometimes called negative testing). The main purpose
behind this madness is to make sure that the system fails and recovers gracefully -- this quality is
known as recoverability.

Where performance testing demands a controlled environment and repeatable measurements, stress testing joyfully induces chaos and unpredictability. To take again the example of a Web application, here are some ways in which stress can be applied to the system:

 double the baseline number for concurrent users/HTTP connections
 randomly shut down and restart ports on the network switches/routers that connect the servers (via SNMP commands for example)
 take the database offline, then restart it
 rebuild a RAID array while the system is running
 run processes that consume resources (CPU, memory, disk, network) on the Web and database servers (a minimal sketch follows this list)
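
Referring to the last item above, here is a deliberately crude sketch of a resource-consuming stressor: it spawns one busy-looping worker per CPU core for a fixed time, starving the application of CPU while other tests run. Duration and worker count are arbitrary.

```python
import multiprocessing
import time

def burn(seconds=60):
    """Busy-loop for a while to hog one CPU core."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=burn)
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```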

I'm sure devious testers can enhance this list with their favorite ways of breaking systems.
However, stress testing does not break the system purely for the pleasure of breaking it, but
instead it allows testers to observe how the system reacts to failure. Does it save its state or does
it crash suddenly? Does it just hang and freeze or does it fail gracefully? On restart, is it able to
recover from the last good state? Does it print out meaningful error messages to the user, or does
it merely display incomprehensible hex codes? Is the security of the system compromised
because of unexpected failures? And the list goes on.

Conclusion

I am aware that I only scratched the surface in terms of issues, tools and techniques that deserve
to be mentioned in the context of performance, load and stress testing. I personally find the topic
of performance testing and tuning particularly rich and interesting, and I intend to post more
articles on this subject in the future.
Regression testing
From Wikipedia, the free encyclopedia

Regression testing is a type of software testing that verifies that software previously developed
and tested still performs correctly after it was changed or interfaced with other software.
Changes may include software enhancements, patches, configuration changes, etc. During
regression testing, new software bugs or regressions may be uncovered. Sometimes a software
change impact analysis is performed to determine what areas could be affected by the proposed
changes. These areas may include functional and non-functional areas of the system.

The purpose of regression testing is to ensure that changes such as those mentioned above have
not introduced new faults.[1] One of the main reasons for regression testing is to determine
whether a change in one part of the software affects other parts of the software.[2]

Common methods of regression testing include rerunning previously completed tests and
checking whether program behavior has changed and whether previously fixed faults have re-
emerged. Regression testing can be performed to test a system efficiently by systematically
selecting the appropriate minimum set of tests needed to adequately cover a particular change.

Contrast with non-regression testing (usually validation-test for a new issue), which aims to
verify whether, after introducing or updating a given software application, the change has had the
intended effect.

Contents
 1 Background
 2 Uses
 3 See also
 4 References
 5 External links

Background
As software is fixed, emergence of new faults and/or re-emergence of old faults is quite
common. Sometimes re-emergence occurs because a fix gets lost through poor revision control
practices (or simple human error in revision control). Often, a fix for a problem will be "fragile"
in that it fixes the problem in the narrow case where it was first observed but not in more general
cases which may arise over the lifetime of the software. Frequently, a fix for a problem in one
area inadvertently causes a software bug in another area. Finally, it may happen that, when some
feature is redesigned, some of the same mistakes that were made in the original implementation
of the feature are made in the redesign.
Therefore, in most software development situations, it is considered good coding practice, when
a bug is located and fixed, to record a test that exposes the bug and re-run that test regularly after
subsequent changes to the program.[3] Although this may be done through manual testing
procedures using programming techniques, it is often done using automated testing tools.[4] Such
a test suite contains software tools that allow the testing environment to execute all the
regression test cases automatically; some projects even set up automated systems to
automatically re-run all regression tests at specified intervals and report any failures (which
could imply a regression or an out-of-date test).[5] Common strategies are to run such a system
after every successful compile (for small projects), every night, or once a week. Those strategies
can be automated by an external tool.
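
As a small illustration of recording a test that exposes a fixed bug, the sketch below pins a hypothetical fix (an average() function that used to crash on empty input) so that the failure cannot silently re-emerge. The function and the bug are invented for the example.

```python
import unittest

def average(values):
    # The (invented) fix: an empty list used to raise ZeroDivisionError.
    return sum(values) / len(values) if values else 0.0

class AverageRegressionTest(unittest.TestCase):
    def test_empty_input_no_longer_crashes(self):
        self.assertEqual(average([]), 0.0)
```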

Regression testing is an integral part of the extreme programming software development method.
In this method, design documents are replaced by extensive, repeatable, and automated testing of
the entire software package throughout each stage of the software development process.
Regression testing is done after functional testing has concluded, to verify that the other
functionalities are working.

In the corporate world, regression testing has traditionally been performed by a software quality
assurance team after the development team has completed work. However, defects found at this
stage are the most costly to fix. This problem is being addressed by the rise of unit testing.
Although developers have always written test cases as part of the development cycle, these test
cases have generally been either functional tests or unit tests that verify only intended outcomes.
Developer testing compels a developer to focus on unit testing and to include both positive and
negative test cases.[6]

Uses
Regression testing can be used not only for testing the correctness of a program, but often also
for tracking the quality of its output.[7] For instance, in the design of a compiler, regression
testing could track the code size, and the time it takes to compile and execute the test suite cases.

Also as a consequence of the introduction of new bugs, program maintenance requires far more
system testing per statement written than any other programming. Theoretically, after each fix
one must run the entire batch of test cases previously run against the system, to ensure that it has
not been damaged in an obscure way. In practice, such regression testing must indeed
approximate this theoretical idea, and it is very costly.

— Fred Brooks, The Mythical Man Month, p. 122

Regression tests can be broadly categorized as functional tests or unit tests. Functional tests
exercise the complete program with various inputs. Unit tests exercise individual functions,
subroutines, or object methods. Both functional testing tools and unit testing tools tend to be
third-party products that are not part of the compiler suite, and both tend to be automated. A
functional test may be a scripted series of program inputs, possibly even involving an automated
mechanism for controlling mouse movements and clicks. A unit test may be a set of separate
functions within the code itself, or a driver layer that links to the code without altering the code
being tested.
Smoke Vs Sanity Testing - Introduction and
Differences

Smoke and Sanity testing are among the most misunderstood topics in Software Testing. There is an enormous amount of literature on the subject, but much of it is confusing. The following article makes an attempt to address the confusion.

The key differences between Smoke and Sanity Testing are summarized later in this article. Before looking at the differences, let's first understand -

What is a Software Build?

If you are developing a simple computer program which consists of only one source code file, you merely need to compile and link this one file to produce an executable file. This process is very simple.
Usually this is not the case. A typical software project consists of hundreds or even thousands of source code files. Creating an executable program from these source files is a complicated and time-consuming task. You need to use "build" software to create an executable program, and the process is called a "Software Build".

What is Smoke Testing?

Smoke Testing is performed after a software build to ascertain that the critical functionalities of the program are working fine. It is executed "before" any detailed functional or regression tests are executed on the software build. The purpose is to reject a badly broken application, so that the QA team does not waste time installing and testing the software application.

In Smoke Testing, the test cases chosen cover the most important functionality or components of the system. The objective is not to perform exhaustive testing, but to verify that the critical functionalities of the system are working fine.
For example, a typical smoke test would be - verify that the application launches successfully, check that the GUI is responsive ... etc.
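
A hedged sketch of such a smoke test: after a build is deployed, simply check that the application comes up and its main page answers at all. The URL, timeout, and status check are placeholders for whatever the real application exposes.

```python
import urllib.request

def smoke_test(base_url="http://localhost:8080/"):
    """Reject the build outright if the application does not even respond."""
    with urllib.request.urlopen(base_url, timeout=5) as resp:
        assert resp.status == 200, "application did not come up cleanly"

if __name__ == "__main__":
    smoke_test()
    print("smoke test passed: build accepted for further testing")
```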

What is Sanity Testing?

After receiving a software build with minor changes in code or functionality, Sanity Testing is performed to ascertain that the bugs have been fixed and no further issues have been introduced by these changes. The goal is to determine that the proposed functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time and costs involved in more rigorous testing.

The objective is "not" to verify the new functionality thoroughly, but to determine that the developer has applied some rationality (sanity) while producing the software. For instance, if your scientific calculator gives the result 2 + 2 = 5, then there is no point in testing advanced functionality like sin 30 + cos 50.

Smoke Testing Vs Sanity Testing - Key Differences

 Smoke Testing is performed to ascertain that the critical functionalities of the program are working fine; Sanity Testing is done to check that the new functionality works or the bugs have been fixed.
 The objective of smoke testing is to verify the "stability" of the system in order to proceed with more rigorous testing; the objective of sanity testing is to verify the "rationality" of the system in order to proceed with more rigorous testing.
 Smoke testing is performed by the developers or testers; sanity testing is usually performed by testers.
 Smoke testing is usually documented or scripted; sanity testing is usually not documented and is unscripted.
 Smoke testing is a subset of regression testing; sanity testing is a subset of acceptance testing.
 Smoke testing exercises the entire system from end to end; sanity testing exercises only the particular component of the entire system.
 Smoke testing is like a general health check-up; Sanity Testing is like a specialized health check-up.

Points to note.
 Both sanity tests and smoke tests are ways to avoid wasting time and effort by quickly
determining whether an application is too flawed to merit any rigorous testing.
 Sanity Testing is also called tester acceptance testing.
 Smoke testing performed on a particular build is also known as a build verification test.
 One of the best industry practices is to conduct a daily build and smoke test in software projects.
 Both smoke and sanity tests can be executed manually or using an automation
tool. When automated tools are used, the tests are often initiated by the same process that
generates the build itself.
 As per the needs of testing, you may have to execute both Sanity and Smoke Tests on the software build. In such cases you will first execute Smoke Tests and then go ahead with Sanity Testing. In industry, test cases for Sanity Testing are commonly combined with those for smoke tests, to speed up test execution. Hence it is common that the terms are confused and used interchangeably.
What is Static Testing?

 Static testing is the testing of software work products manually, or with a set of tools, but without executing them.
 It starts early in the life cycle and so it is done during the verification process.
 It does not need a computer, as the testing of the program is done without executing the program. For example: reviewing, walkthrough, inspection, etc.
