
SOFTWARE ENGINEERING

B.TECH II YEAR - IV SEM (2024-25)


DEPARTMENT OF
ARTIFICIAL INTELLIGENCE AND DATA
SCIENCE
Unit I: Introduction to Software Engineering Software Development Life Cycles, SDLC Models:
Waterfall, V-Model, Prototype Model, Incremental, Evolutionary, RAD, Spiral. Project Planning,
Metrics for Project Size Estimation, Project Estimation Techniques, Requirements Gathering and
Analysis,
Software Requirements Specification (SRS). Software Product and Process Characteristics,
Software Process Models, Evolutionary Process Models and Agile processes. Software Process
customization and improvement, CMM, Product and Process Metrics, Functional and Non-
functional requirements, Requirement Sources and Elicitation Techniques, Analysis Modeling for
Function-oriented and Object-oriented software development, Use case Modeling, System and
Software Requirement Specifications, Requirement Validation, Traceability
Unit II: Software Design, Analysis and Testing The Software Design Process, Design Concepts
and Principles, Software Modeling and UML, Architectural Design, Architectural Views and
Styles, User Interface Design, Function oriented Design, SA/SD Component Based Design, Design
Metrics.
Software Static and Dynamic analysis, Code inspections, Software Testing, Fundamentals,
Software Test Process, Testing Levels, Test Criteria, Test Case Design, Test Oracles, Test
Techniques, Testing Frameworks, Test Plan, Test Metrics, Testing Tools.
Unit-III: Software Maintenance & Software Project Measurement Need and Types of
Maintenance, Software Configuration Management (SCM), Software Change Management,
Version Control, Change control and Reporting, Program Comprehension Techniques, Re-
engineering, Reverse Engineering, Tool Support. Project Management Concepts, Feasibility
Analysis, Project and Process Planning, Resources
Allocations, Software efforts, Schedule, and Cost estimations, Project Scheduling and Tracking,
Risk Assessment and Mitigation, Software Quality Assurance (SQA). Project Plan, Project
Metrics.
Unit-IV: Fundamentals of Agile Methodology Introduction to Agile software development
methodology, Life Cycle of Agile Development, Agile v/s Traditional software
development (Waterfall model), Agile Manifesto: Principles, Benefits and Challenges of Agile,
Agile Values, Agile Model, Phases of Agile Model.
Unit-V: Software Development using Agile Methodology Gathering requirement using Agile way,
User Stories: The currency of agile development, Characteristics of good user stories, Generating
User Stories, Agile estimation and planning, Implementation of agile, Applying an Agile Mindset
to a Project, Roles in agile development, Agile Frameworks: Scrum, Kanban, Crystal, XP, ASD,
DSDM. Practical and Lab work: Lab work should include a running case study problem for which
different deliverables set at the end of each phase of a software development life cycle are to be
developed. This will include modeling the requirements, analysis, detailed design, implementation,
testing, deployment, and maintenance. Subsequently the design models will be coded and tested.
For modeling, Open Source tools like StarUML and Licensed Tools like Rational Rose can be used.
For coding and testing, IDEs like Eclipse, NetBeans, and Visual Studio can be used.

Content
Unit II:
o Software Design, Analysis and Testing
o The Software Design Process,
o Design Concepts and Principles,
o Software Modeling and UML,
o Architectural Design,
o Architectural Views and Styles,
o User Interface Design,
o Function oriented Design,
o SA/SD Component Based Design,
o Design Metrics.
o Software Static and Dynamic analysis,
o Code inspections, Software Testing,
o Fundamentals, Software Test Process,
o Testing Levels,
o Test Criteria,
o Test Case Design,
o Test Oracles,
o Test Techniques,
o Testing Frameworks,
o Test Plan,
o Test Metrics,
o Testing Tools.
Software Design
Software design is a mechanism to transform user requirements into a suitable form, which helps the
programmer in software coding and implementation. It deals with representing the client's requirements,
as described in the SRS (Software Requirement Specification) document, in a form that is easily
implementable using a programming language.

The software design phase is the first phase in the SDLC (Software Development Life Cycle) that moves the
concentration from the problem domain to the solution domain. In software design, we consider the
system to be a set of components or modules with clearly defined behaviors and boundaries.

Objectives of Software Design


Following are the purposes of Software design:
1. Correctness: The design should be correct as per the requirements.
2. Completeness: The design should include all components, such as data structures, modules, and external
interfaces.
3. Efficiency: Resources should be used efficiently by the program.
4. Flexibility: The design should be able to accommodate changing needs.
5. Consistency: There should not be any inconsistency in the design.
6. Maintainability: The design should be simple enough that it can be easily maintained by other designers.

Software Design Principles


Software design principles are concerned with providing means to handle the complexity of the design
process effectively. Effectively managing the complexity will not only reduce the effort needed for
design but can also reduce the scope of introducing errors during design.

Following are the principles of Software Design


Problem Partitioning
For a small problem, we can handle the entire problem at once, but for a significant problem we apply
divide and conquer: the problem is divided into smaller pieces so that each piece can be tackled
separately.

For software design, the goal is to divide the problem into manageable pieces.

Benefits of Problem Partitioning


1. Software is easy to understand
2. Software becomes simple
3. Software is easy to test
4. Software is easy to modify
5. Software is easy to maintain
6. Software is easy to expand
These pieces cannot be entirely independent of each other, as together they form the system. They have
to cooperate and communicate to solve the problem, and this communication adds complexity.
Note: As the number of partitions increases, the cost of partitioning and the communication complexity also increase.

Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level without
bothering about the internal details of its implementation. Abstraction can be applied to existing elements
as well as to the component being designed.
Here, there are two common abstraction mechanisms
1. Functional Abstraction
2. Data Abstraction
Functional Abstraction
i. A module is specified by the function it performs.
ii. The details of the algorithm to accomplish the functions are not visible to the user of the function.
Functional abstraction forms the basis for Function oriented design approaches.
Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis
for Object Oriented design approaches.
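To make the two mechanisms concrete, here is a minimal Python sketch (the names area and Stack are illustrative, not from the text): callers of area depend only on what the function computes, and users of Stack manipulate the data only through its operations.

```python
import math

# Functional abstraction: callers know WHAT area() computes,
# not HOW; the formula is hidden behind the function name.
def area(radius: float) -> float:
    return math.pi * radius ** 2

# Data abstraction: the list that stores the items is an internal
# detail; users interact only through push() and pop().
class Stack:
    def __init__(self):
        self._items = []        # internal representation, hidden

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

print(area(2.0))   # 12.566...
s = Stack()
s.push(10)
print(s.pop())     # 10
```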

Modularity
Modularity refers to the division of software into separate modules, which are separately named and
addressed and are integrated later to obtain the complete functional software. It is the only
property that allows a program to be intellectually manageable. Single large programs are difficult to
understand and read due to the large number of reference variables, control paths, global variables, etc.
The desirable properties of a modular system are:
o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.

Advantages and Disadvantages of Modularity


This section discusses the various advantages and disadvantages of modularity.

Advantages of Modularity
There are several advantages of Modularity
o It allows large programs to be written by several or different people
o It encourages the creation of commonly used routines to be placed in the library and used by other
programs.
o It simplifies the overlay procedure of loading a large program into main storage.
o It provides more checkpoints to measure progress.
o It provides a framework for complete testing and makes the system easier to test.
o It produces well-designed and more readable programs.

Disadvantages of Modularity

There are several disadvantages of Modularity


o Execution time may be, but is not necessarily, longer
o Storage size may be, but is not necessarily, increased
o Compilation and loading time may be longer
o Inter-module communication problems may be increased
o More linkage required, run-time may be longer, more source lines must be written, and more
documentation has to be done

Modular Design
Modular design reduces design complexity and results in easier and faster implementation by
allowing parallel development of the various parts of a system. Two key aspects of modular
design are discussed in detail in this section:
1. Functional Independence: Functional independence is achieved by developing functions that
perform only one kind of task and do not excessively interact with other modules. Independence is
important because it makes implementation easier and faster. Independent modules are
easier to maintain and test, reduce error propagation, and can be reused in other programs as well. Thus,
functional independence is a good design feature which ensures software quality.

It is measured using two criteria:


o Cohesion: It measures the relative function strength of a module.
o Coupling: It measures the relative interdependence among modules.

2. Information hiding: The principle of information hiding suggests that modules should be
characterized by the design decisions that they hide from all others. In other words, modules should be
specified and designed so that data contained within a module is inaccessible to other modules that have
no need for such information.
The use of information hiding as a design criterion for modular systems provides the most significant
benefits when modifications are required during testing and, later, during software maintenance. This is
because, as most data and procedures are hidden from other parts of the software, inadvertent errors
introduced during modification are less likely to propagate to other locations within the software.
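As a minimal sketch of information hiding (a hypothetical account module, not from the text), the balance store below is private to the module; callers can only use the two exported functions, so the internal representation can change without breaking them.

```python
# account.py -- hypothetical module illustrating information hiding.
# _balances is private by convention (leading underscore); other
# modules go through deposit()/balance_of(), so the storage could be
# changed (say, to a database) without affecting any caller.
_balances = {}

def deposit(account_id: str, amount: int) -> None:
    if amount <= 0:
        raise ValueError("amount must be positive")
    _balances[account_id] = _balances.get(account_id, 0) + amount

def balance_of(account_id: str) -> int:
    return _balances.get(account_id, 0)
```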

Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are easy to
develop and, later, easy to change. Structured design methods help developers to deal with the size and
complexity of programs. Analysts generate instructions for the developers about how code should be
composed and how pieces of code should fit together to form a program.
To design a system, there are two possible approaches:
1. Top-down Approach
2. Bottom-up Approach
1. Top-down Approach: This approach starts with the identification of the main components and then
decomposes them into their more detailed sub-components.

2. Bottom-up Approach: A bottom-up approach begins with the lowest-level details and moves up
the hierarchy. This approach is suitable when a system is built from an existing system.
Coupling and Cohesion
Module Coupling
In software engineering, coupling is the degree of interdependence between software modules. Two modules that are
tightly coupled are strongly dependent on each other, whereas two modules that are loosely coupled are largely
independent of each other. Uncoupled modules have no interdependence between them at all.
The various types of coupling are described below:

A good design is one that has low coupling. Coupling is measured by the number of relations between modules: coupling
increases as the number of calls between modules or the amount of shared data grows. Thus, it can
be said that a design with high coupling will have more errors.
Types of Module Coupling

1. No Direct Coupling: There is no direct coupling between modules M1 and M2. In this case, the modules are
subordinates of different modules, so there is no direct coupling between them.

2. Data Coupling: When data of one module is passed to another module, this is called data coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data items such as structures,
objects, etc. When a module passes a non-global data structure or an entire structure to another module, they are said to be stamp
coupled, e.g., passing a structure variable in C or an object in C++ to a module.
4. Control Coupling: Control coupling exists between two modules if data from one module is used to direct the order of
instruction execution in the other.
5. External Coupling: External Coupling arises when two modules share an externally imposed data format, communication
protocols, or device interface. This is related to communication to external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through some global data items.

7. Content Coupling: Content Coupling exists among two modules if they share code, e.g., a branch from one module into
another module.
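The following short Python sketch (illustrative names only) contrasts three of these coupling types; of the three, data coupling is the most desirable.

```python
# Data coupling (desirable): modules communicate via simple parameters.
def compute_tax(amount: float, rate: float) -> float:
    return amount * rate

# Control coupling: a flag from the caller directs the callee's logic.
def print_report(rows, summary_only: bool):
    if summary_only:
        print(len(rows), "records")
    else:
        for row in rows:
            print(row)

# Common coupling: both functions depend on a shared global data item.
DISCOUNT = 0.1

def price_after_discount(price: float) -> float:
    return price * (1 - DISCOUNT)   # any module may change DISCOUNT
```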

Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module belong together. Thus, cohesion
measures the strength of the relationships between pieces of functionality within a given module. In highly cohesive
systems, for example, functionality is strongly related.
Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or "low cohesion."

Types of Modules Cohesion

1. Functional Cohesion: Functional cohesion is said to exist if the different elements of a module cooperate to
achieve a single function.
2. Sequential Cohesion: A module is said to possess sequential cohesion if its elements form the
components of a sequence, where the output from one component of the sequence is input to the next.
3. Communicational Cohesion: A module is said to have communicational cohesion if all tasks of the module refer
to or update the same data structure, e.g., the set of functions defined on an array or a stack.
4. Procedural Cohesion: A module is said to possess procedural cohesion if its elements are all parts of
a procedure in which a particular sequence of steps has to be carried out to achieve a goal, e.g., the algorithm for
decoding a message.
5. Temporal Cohesion: When a module includes functions that are associated only by the fact that they must all be
executed at the same time, the module is said to exhibit temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the module perform similar
operations, e.g., error handling, data input, and data output.
7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs a set of tasks that are
associated with each other very loosely, if at all.
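A small Python sketch (hypothetical functions) contrasting the best and worst cases on this scale: the first function exhibits functional cohesion, while the second bundles unrelated tasks and is only coincidentally cohesive.

```python
# Functional cohesion: every statement contributes to one function.
def root_mean_square(values):
    return (sum(v * v for v in values) / len(values)) ** 0.5

# Coincidental cohesion: unrelated tasks grouped together only
# because nobody found a better home for them.
def misc(task, arg):
    if task == "reverse":
        return arg[::-1]
    if task == "is_leap_year":
        return arg % 4 == 0 and (arg % 100 != 0 or arg % 400 == 0)
```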

Differentiate between Coupling and Cohesion

o Coupling is also called Inter-Module Binding; cohesion is also called Intra-Module Binding.
o Coupling shows the relationships between modules; cohesion shows the relationships within a module.
o Coupling indicates the relative independence between modules; cohesion indicates a module's relative functional strength.
o While designing, you should aim for low coupling, i.e., dependency among modules should be minimal; and for high cohesion, i.e., a cohesive component/module focuses on a single function (single-mindedness) with little interaction with other modules of the system.
o In coupling, modules are linked to other modules; in cohesion, the module focuses on a single thing.

Software Design Process


The design phase of software development deals with transforming the customer requirements as described in the SRS
documents into a form implementable using a programming language. The software design process can be divided into the
following three levels or phases of design:
1. Interface Design
2. Architectural Design
3. Detailed Design
Elements of a System
1. Architecture: This is the conceptual model that defines the structure, behavior, and views of a system. We can
use flowcharts to represent and illustrate the architecture.
2. Modules: These are components that handle one specific task in a system. A combination of the modules makes
up the system.
3. Components: This provides a particular function or group of related functions. They are made up of modules.
4. Interfaces: This is the shared boundary across which the components of a system exchange information and
relate.
5. Data: This is the management of the information and data flow.


Interface Design
Interface design is the specification of the interaction between a system and its environment. This phase proceeds at a high
level of abstraction with respect to the inner workings of the system, i.e., during interface design, the internals of the system
are completely ignored, and the system is treated as a black box. Attention is focused on the dialogue between the target
system and the users, devices, and other systems with which it interacts. The design problem statement produced during the
problem analysis step should identify the people, other systems, and devices, which are collectively called agents.
Interface design should include the following details:
1. Precise description of events in the environment, or messages from agents to which the system must respond.
2. Precise description of the events or messages that the system must produce.
3. Specification of the data, and the formats of the data coming into and going out of the system.
4. Specification of the ordering and timing relationships between incoming events or messages, and outgoing
events or outputs.
Architectural Design
Architectural design is the specification of the major components of a system, their responsibilities, properties, interfaces, and
the relationships and interactions between them. In architectural design, the overall structure of the system is chosen, but the
internal details of major components are ignored. Issues in architectural design include:
1. Gross decomposition of the systems into major components.
2. Allocation of functional responsibilities to components.
3. Component Interfaces.
4. Component scaling and performance properties, resource consumption properties, reliability properties, and so
forth.
5. Communication and interaction between components.
The architectural design adds important details ignored during the interface design. Design of the internals of the major
components is ignored until the last phase of the design.
Detailed Design
Detailed design is the specification of the internal elements of all major system components, their properties, relationships,
processing, and often their algorithms and the data structures. The detailed design may include:
1. Decomposition of major system components into program units.
2. Allocation of functional responsibilities to units.
3. User interfaces.
4. Unit states and state changes.
5. Data and control interaction between units.
6. Data packaging and implementation, including issues of scope and visibility of program elements.
7. Algorithms and data structures.

What is System modelling?


System modelling is the process of developing abstract models of a system, with each model presenting a different view or
perspective of that system. The system is represented using some kind of graphical notation (mostly based on UML). System
modelling helps the analyst to understand the functionality of the system, and models are used to communicate with
customers. It also provides the developer and the customer with the means to assess quality once the software is built.
Different models present different perspectives
You may develop different models to represent the system from different perspectives. For example:
 An external perspective, where you model the environment of the system.
 An interaction perspective where you model the interactions between a system and its environment or between
the components of a system.
 A structural perspective, where you model the organization of a system or the structure of the data that is
processed by the system.
 A behavioural perspective, where you model the dynamic behaviour of the system and how it responds to
events
What is UML?
UML stands for Unified Modeling Language.
The Unified Modeling Language (UML) is a general-purpose, developmental modeling language in the field of software
engineering that is intended to provide a standard way to visualize the design of a system.
UML diagrams represent two different views of a system model.
1. Static (or structural) view: emphasizes the static structure of the system using objects, attributes, operations and
relationships. It includes class diagrams and composite structure diagrams.
2. Dynamic (or behavioural) view: emphasizes the dynamic behaviour of the system by showing collaborations among
objects and changes to the internal states of objects. This view includes sequence diagrams, activity diagrams and use case
diagrams.

Unified Modeling Language (UML) is a general-purpose modeling language. The main aim of UML is to define a standard
way to visualize the way a system has been designed. It is quite similar to blueprints used in other fields of engineering.
UML is not a programming language, it is rather a visual language.
 We use UML diagrams to portray the behavior and structure of a system.
 UML helps software engineers, businessmen, and system architects with modeling, design, and analysis.
 The Object Management Group (OMG) adopted Unified Modelling Language as a standard in 1997. It’s been
managed by OMG ever since.
 The International Organization for Standardization (ISO) published UML as an approved standard in 2005.
UML has been revised over the years and is reviewed periodically.
Essential UML diagrams for system modelling
Use case diagrams, which show the interactions between a system and its environment.
Sequence diagrams, which show interactions between actors and the system and between system components.
Activity diagrams, which show the activities involved in a process or in the data processing.
Class diagrams, which show the object classes in the system and the associations between these classes.
State diagrams, which show how the system reacts to internal and external events.

Architectural Design


Architectural design is the designing and planning of structures where functionality and aesthetics are the two
key elements of the process. The design must be suitable for the experience of the user as well as meet the needs
of the client and/or the project requirements.
Introduction: Software needs architectural design to represent its overall design. IEEE defines
architectural design as “the process of defining a collection of hardware and software components and their interfaces to
establish the framework for the development of a computer system.” The software that is built for computer-based
systems can exhibit one of many architectural styles.
Each style describes a system category that consists of:

 A set of components (e.g., a database, computational modules) that will perform a function required by the
system.
 A set of connectors that help in coordination, communication, and cooperation between the components.
 Constraints that define how components can be integrated to form the system.
 Semantic models that help the designer to understand the overall properties of the system.
The use of architectural styles is to establish a structure for all the components of the system.
You can design software architectures at two levels of abstraction, often called
architecture in the small and architecture in the large:

1. Architecture in the small is concerned with the architecture of individual programs, i.e., the way that
an individual program is decomposed into components.
2. Architecture in the large is concerned with the architecture of complex enterprise systems that include other
systems, programs, and program components. These enterprise systems are distributed over different
computers, which may be owned and managed by different companies.
[Figure: The architecture of a packing robot control system, comprising a vision system, an object identification system, an arm controller, a gripper controller, a packaging selection system, a packing system, and a conveyor controller.]

Taxonomy of Architectural styles:

Architectural style
Architectural style is defined as a set of characteristics and features that make a building or other structure notable
or historically identifiable.
An architectural style is a set of principles. You can think of it as a coarse-grained pattern that provides an abstract
framework for a family of systems. An architectural style improves partitioning and promotes design reuse by
providing solutions to frequently recurring problems.
According to Architectural Styles CS 377 Introduction to Software Engineering:
 Named collection. An architectural style is a named collection of architectural design decisions that are
applicable in a given development context, constrain architectural design decisions that are specific to a
particular system within that context, and elicit beneficial qualities in each resulting system.
 Recurring organizational patterns and idioms. Established, shared understanding of common design
forms. Mark of mature engineering field.
 Abstraction. Abstraction of recurring composition and interaction characteristics in a set of architectures.

Benefits of Architectural Styles in Software Engineering


Architectural styles provide several benefits. The most important of these benefits is that they provide a common
language.
Another benefit is that they provide a way to have a conversation that is technology-agnostic.
This allows you to facilitate a higher level of conversation that is inclusive of patterns and principles, without
getting into the specifics. For example, by using architecture styles, you can talk about client-server versus N-Tier.
 Design Reuse. Well-understood solutions applied to new problems.
 Code reuse. Shared implementations of invariant aspects of a style.
 Understandability of system organization. A phrase such as “client-server” conveys a lot of
information.
 Interoperability. Supported by style standardization.
 Style-specific analysis. Enabled by the constrained design space.
 Visualizations. Style-specific descriptions matching engineer’s mental models.

1] Data centered architectures:

 A data store resides at the center of this architecture and is accessed frequently by the other components,
which update, add, delete, or otherwise modify the data within the store.
 In a typical data-centered style, client software accesses a central repository. Variations of this approach
turn the repository into a blackboard, which sends notifications to client software when data of
interest to a client changes.
 This data-centered architecture promotes integrability: existing components can be changed and new
client components can be added to the architecture without concern for other clients.
 Data can be passed among clients using the blackboard mechanism.
Advantages of Data centered architecture
 The repository of data is independent of the clients
 Clients work independently of each other
 It may be simple to add additional clients
 Modification can be very easy
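A minimal sketch of the blackboard variation described above (class and method names are assumptions for illustration): clients register interest in a data item, and the central repository notifies them when it changes.

```python
# Blackboard-style repository: clients subscribe to keys and are
# notified when the data they care about changes.
class Repository:
    def __init__(self):
        self._data = {}
        self._subscribers = {}   # key -> list of callbacks

    def subscribe(self, key, callback):
        self._subscribers.setdefault(key, []).append(callback)

    def put(self, key, value):
        self._data[key] = value
        for cb in self._subscribers.get(key, []):
            cb(value)            # blackboard-style notification

repo = Repository()
repo.subscribe("temperature", lambda v: print("client saw:", v))
repo.put("temperature", 21.5)    # -> client saw: 21.5
```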



2] Data flow architectures:

 This kind of architecture is used when input data is transformed into output data through a series of
computational manipulative components.
 This style is also known as pipe-and-filter architecture, since it has a set of components called filters
connected by pipes.
 Pipes are used to transmit data from one component to the next.
 Each filter works independently and is designed to take data input of a certain form and produce data output
of a specified form for the next filter. The filters don’t require any knowledge of the workings of neighboring
filters.
 If the data flow degenerates into a single line of transforms, then it is termed batch sequential. This structure
accepts a batch of data and then applies a series of sequential components to transform it.
Advantages of Data Flow architecture
 It encourages maintenance, reuse, and modification.
 With this design, concurrent execution is supported.
Disadvantages of Data Flow architecture
 It frequently degenerates into a batch sequential system
 Data flow architecture is not suitable for applications that require a high degree of user engagement
 It is not easy to coordinate two different but related streams
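A minimal pipe-and-filter sketch in Python (illustrative filters): each generator is a filter that consumes a stream and yields a transformed stream, with no knowledge of its neighbours; chaining the generators plays the role of the pipes.

```python
# Each filter consumes an input stream and yields an output stream.
def read_numbers(lines):
    for line in lines:
        yield float(line)

def square(numbers):
    for n in numbers:
        yield n * n

def running_total(numbers):
    total = 0.0
    for n in numbers:
        total += n
        yield total

# Chaining the filters forms the pipeline.
pipeline = running_total(square(read_numbers(["1", "2", "3"])))
print(list(pipeline))   # [1.0, 5.0, 14.0]
```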



3] Call and Return architectures: It is used to create a program that is easy to scale and modify. Many sub-styles exist
within this category. Two of them are explained below.

 Remote procedure call architecture: The components of a main program or subprogram architecture are
distributed across multiple computers on a network and communicate through remote procedure calls.
 Main program or Subprogram architectures: The main program structure decomposes into a number of
subprograms or functions organized into a control hierarchy. The main program invokes a number of
subprograms, which can in turn invoke other components.

4] Object Oriented architecture: The components of a system encapsulate data and the operations that must be applied
to manipulate the data. The coordination and communication between the components are established via the message
passing.
Characteristics of Object Oriented architecture
 Objects protect the system’s integrity.
 An object is unaware of the internal representation of other objects.
Advantage of Object Oriented architecture
 It enables the designer to decompose a problem into a collection of autonomous objects.
 An object hides its implementation details from other objects, allowing changes to be made without having
an impact on other objects.
5] Layered architecture:

 A number of different layers are defined, with each layer performing a well-defined set of operations. Moving
inward, each layer performs operations that are progressively closer to the machine instruction set.
 At the outer layer, components handle user interface operations; at the inner layers, components
perform operating system interfacing (communication and coordination with the OS).
 Intermediate layers provide utility services and application software functions.
 One common example of this architectural style is OSI-ISO (Open Systems Interconnection-International
Organisation for Standardisation) communication system.
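A minimal three-layer sketch (hypothetical UI/service/data layers, not from the text): each layer calls only the layer directly below it, so an inner layer can be replaced without touching the outer ones.

```python
class DataLayer:                     # innermost: storage access
    def fetch(self, user_id):
        return {"id": user_id, "name": "Asha"}   # stand-in for a DB read

class ServiceLayer:                  # middle: business rules
    def __init__(self, data):
        self._data = data

    def greeting(self, user_id):
        return "Hello, " + self._data.fetch(user_id)["name"] + "!"

class UILayer:                       # outermost: user interface
    def __init__(self, service):
        self._service = service

    def show(self, user_id):
        print(self._service.greeting(user_id))

UILayer(ServiceLayer(DataLayer())).show(42)   # Hello, Asha!
```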


Architectural views
Software architecture views are representations of the overall architecture of a software system that are
meaningful to one or more stakeholders. They enable the architecture to be communicated to and understood by
the stakeholders, so they can verify that the system will address their concerns.
Different types of software architecture views:
 Logical view: This view shows the system's functionality and how it is organized into components. It is
typically represented using diagrams such as class diagrams, sequence diagrams, and activity diagrams.
 Physical view: This view shows the system's physical components, such as hardware and software, and
how they are interconnected. It is typically represented using diagrams such as deployment diagrams and
network diagrams.
 Process view: This view shows the system's dynamic behavior, such as how its components interact with
each other and with the environment. It is typically represented using diagrams such as state diagrams and
sequence diagrams.
 Development view: This view shows how the system will be developed and maintained. It typically
includes information such as the development process, the tools and technologies used, and the roles and
responsibilities of the different stakeholders.
 Operational view: This view shows how the system will be operated and monitored. It typically includes
information such as the operating environment, the performance requirements, and the security
requirements.
4+1 View Model:
The 4+1 View Model is a popular approach to software architecture views. It defines four views:
1. Logical view: This view is the same as the logical view described above.
2. Process view: This view is also the same as the process view described above.
3. Physical view: This view is also the same as the physical view described above.
4. Development view: This view is also the same as the development view described above.
The fifth view is a set of scenarios that illustrate how the system will be used. The scenarios are used to validate
the architecture and to ensure that it meets the needs of the stakeholders.
The 4+1 architectural view model is a popular way to document software architectures because it provides a
comprehensive view of the system from different perspectives.
Here are some examples of how software architecture views can be used:
 Developers can use the logical view to understand the system's functionality and how it is organized into
components. This information can help them to write code that is consistent with the architecture and to
identify potential problems.
 System engineers can use the physical view to plan for the system's deployment and to ensure that it
meets the performance and reliability requirements.
 Testers can use the process view to design test cases that ensure that the system's components interact
correctly with each other and with external systems.
 Business analysts can use the scenarios view to understand how the system will be used in real-world
situations and to identify any potential gaps between the system's functionality and the needs of the
business.
Benefits of using software architecture views:
 Improved communication: Software architecture views can help to improve communication between
different stakeholders, such as developers, system engineers, and project managers.
 Reduced risk: Software architecture views can help to identify and mitigate risks early in the
development process.
 Increased quality: Software architecture views can help to ensure that the system is designed in a way
that meets the needs of the stakeholders and that it is reliable, maintainable, and scalable.
Other common software architecture views include:
 Context view: The context view shows the system's relationship to its external environment, such as other
systems, users, and devices.
 Information view: The information view shows the system's data model and how the data is used and
exchanged between the system's components.
 Concurrency view: The concurrency view shows how the system's components interact with each other
concurrently.
 Security view: The security view shows how the system is protected from unauthorized access and other
security threats.
User Interface Design
User interface (UI) design is the process designers use to build interfaces in software or computerized devices,
focusing on looks or style. Designers aim to create interfaces which users find easy to use and pleasurable. UI
design refers to graphical user interfaces and other forms—e.g., voice-controlled interfaces
UI stands for user interface. It is the bridge between humans and computers. Anything you interact with as a user
is part of the user interface. For example, screens, sounds, overall style, and responsiveness are all elements that
need to be considered in UI
The user interface is the front-end application view with which the user interacts in order to use the software. The software
becomes more popular if its user interface is:
1. Attractive
2. Simple to use
3. Responsive in a short time
4. Clear to understand
5. Consistent on all interface screens
Types of User Interface
1. Command Line Interface: The Command Line Interface provides a command prompt, where the user
types the command and feeds it to the system. The user needs to remember the syntax of the command
and its use.
2. Graphical User Interface: A graphical user interface provides a simple interactive way to interact
with the system. A GUI can be a combination of both hardware and software. Using a GUI, the user
interacts with the software.

How to Design User Interfaces for Users


User interfaces are the access points where users interact with designs. They come in three formats:
1. Graphical user interfaces (GUIs): Users interact with visual representations on digital control panels. A
computer’s desktop is a GUI.
2. Voice-controlled interfaces (VUIs): Users interact with these through their voices. Most smart assistants,
e.g., Siri on the iPhone and Alexa on Amazon devices, are VUIs.
3. Gesture-based interfaces: Users engage with 3D design spaces through bodily motions—e.g., in virtual
reality (VR) games.
The analysis and design process of a user interface is iterative and can be represented by a spiral model. It
consists of four framework activities.
1. User, Task, Environmental Analysis, and Modeling
Initially, the focus is on the profile of the users who will interact with the system: their understanding, skill,
knowledge, type of user, etc. Based on these profiles, users are grouped into categories, and requirements are
gathered from each category. Based on the requirements, the developer understands how to design the
interface. Once all the requirements are gathered, a detailed analysis is conducted. In the analysis part, the
tasks that the user performs to establish the goals of the system are identified, described, and elaborated. The
analysis of the user environment focuses on the physical work environment. Among the questions to be asked are:
1. Where will the interface be located physically?
2. Will the user be sitting, standing, or performing other tasks unrelated to the interface?
3. Does the interface hardware accommodate space, light, or noise constraints?
4. Are there special human factors considerations driven by environmental factors?
2. Interface Design
The goal of this phase is to define the set of interface objects and actions i.e., control mechanisms that enable the
user to perform desired tasks. Indicate how these control mechanisms affect the system. Specify the action
sequence of tasks and subtasks, also called a user scenario. Indicate the state of the system when the user performs
a particular task. Always follow the three golden rules stated by Theo Mandel. Design issues such as response
time, command and action structure, error handling, and help facilities are considered as the design model is
refined. This phase serves as the foundation for the implementation phase.
3. Interface Construction and Implementation
The implementation activity begins with the creation of a prototype (model) that enables usage scenarios to be
evaluated. As the iterative design process continues, a user interface toolkit that allows the creation of windows,
menus, device interaction, error messages, commands, and many other elements of an interactive environment can
be used to complete the construction of the interface.
4. Interface Validation
This phase focuses on testing the interface. The interface should be able to perform tasks correctly, handle a
variety of tasks, and achieve all of the user's requirements. It should be easy to use and easy to learn, and
users should accept the interface as a useful one in their work.

User Interface Design Golden Rules


The following are the golden rules stated by Theo Mandel that must be followed during the design of the
interface. Place the user in control:
1. Define the interaction modes in such a way that the user is not forced into unnecessary or
undesired actions: The user should be able to easily enter and exit each mode with little or no effort.
2. Provide for flexible interaction: Different people use different interaction mechanisms; some might
use keyboard commands, some the mouse, some a touch screen, etc. Hence, all interaction
mechanisms should be provided.
3. Allow user interaction to be interruptible and undoable: When a user is performing a sequence of actions,
the user must be able to interrupt the sequence to do some other work without losing the work that has
been done. The user should also be able to undo operations.
4. Streamline interaction as skill level advances and allow the interaction to be customized: Advanced
or highly skilled users should be given the chance to customize the interface as they want, with
different interaction mechanisms available so that the user doesn’t get bored using the same
mechanism.
5. Hide technical internals from casual users: The user should not be aware of the internal technical
details of the system. He should interact with the interface just to do his work.
6. Design for direct interaction with objects that appear on-screen: The user should be able to
manipulate the objects present on the screen directly to perform the necessary tasks. This gives the
user a sense of direct control over the screen.
Reduce the User’s Memory Load
1. Reduce demand on short-term memory: When users are involved in complex tasks, the demand
on short-term memory is significant. The interface should therefore be designed to reduce the need to
remember previous actions, inputs, and results.
2. Establish meaningful defaults: An initial set of defaults should always be provided for the average user;
if a user needs some new features, then he should be able to add them.
3. Define shortcuts that are intuitive: Provide mnemonics, i.e., keyboard shortcuts that perform an action
on the screen, and make them intuitive so they are easy to remember.
4. The visual layout of the interface should be based on a real-world metaphor: If something represented
on the screen is a metaphor for a real-world entity, users will understand it easily.
5. Disclose information in a progressive fashion: The interface should be organized hierarchically i.e., on
the main screen the information about the task, an object or some behavior should be presented first at a
high level of abstraction. More detail should be presented after the user indicates interest with a mouse
pick.

Function oriented Design


The design process for software systems often has two levels. At the first level, the focus is on deciding which
modules are needed for the system based on SRS (Software Requirement Specification) and how the modules
should be interconnected.
Function Oriented Design is an approach to software design where the design is decomposed into a set of
interacting units where each unit has a clearly defined function.
Generic Procedure
Start with a high-level description of what the software/program does. Refine each part of the description by
specifying in greater detail the functionality of each part. These points lead to a Top-Down Structure.

Problem in Top-Down Design Method


In a strict top-down decomposition, each module is used by at most one other module, called its parent module, so common functionality cannot be shared.
Solution to the Problem
Design reusable modules, so that a single module can be used by several other modules to carry out their required functions.

Function Oriented Design Strategies


Function Oriented Design Strategies are as follows:
1. Data Flow Diagram (DFD): A data flow diagram (DFD) maps out the flow of information for any
process or system. It uses defined symbols like rectangles, circles and arrows, plus short text labels, to
show data inputs, outputs, storage points and the routes between each destination.
2. Data Dictionaries: Data dictionaries are simply repositories to store information about all data items
defined in DFDs. At the requirements stage, a data dictionary contains the data items. Data dictionaries include
the name of the item, aliases (other names for the item), description/purpose, related data items, range of
values, and data structure definition/form.
3. Structure Charts: Structure chart is the hierarchical representation of system which partitions the system
into black boxes (functionality is known to users, but inner details are unknown). Components are read
from top to bottom and left to right. When a module calls another, it views the called module as a black
box, passing required parameters and receiving results.
4. Pseudo Code: Pseudo code is a system description in short, English-like phrases describing the function. It
uses keywords and indentation. Pseudo code is used as a replacement for flow charts and decreases the
amount of documentation required.
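As an illustration of the fourth strategy, here is a short pseudo-code fragment in the keyword-and-indentation style described above (a hypothetical payroll computation, not from the text):

```
PROCESS calculate_pay
    READ employee record
    IF hours_worked > 40 THEN
        overtime_pay = (hours_worked - 40) * rate * 1.5
    ELSE
        overtime_pay = 0
    ENDIF
    gross_pay = MIN(hours_worked, 40) * rate + overtime_pay
    WRITE gross_pay TO payroll file
END PROCESS
```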

Structured Analysis and Structured Design (SA/SD)

Overview of SA/SD methodology

As the name itself implies, SA/SD methodology involves carrying out two distinct activities:
● Structured analysis (SA)
● Structured design (SD).
The roles of structured analysis (SA) and structured design (SD) are shown schematically in
the figure below.

● The structured analysis activity transforms the SRS document into a graphic model called the
DFD model. During structured analysis, functional decomposition of the system is achieved. The
purpose of structured analysis is to capture the detailed structure of the system as perceived by the
user.
● During structured design, all functions identified during structured analysis are mapped to a
module structure. This module structure is also called the high level design or the software
architecture for the given problem. This is represented using a structure chart. The purpose of
structured design is to define the structure of the solution that is suitable for implementation
● The high-level design stage is normally followed by a detailed design stage.

Structured Analysis

During structured analysis, the major processing tasks (high-level functions) of the system are analyzed,
and the data flow among these processing tasks is represented graphically.
The structured analysis technique is based on the following underlying principles:
● Top-down decomposition approach.
● Application of the divide and conquer principle: through this, each high-level function is
independently decomposed into detailed functions.
● Graphical representation of the analysis results using data flow diagrams (DFDs).

What is DFD?
● A DFD is a hierarchical graphical model of a system that shows the different processing activities
or functions that the system performs and the data interchange among those functions.
● Though extremely simple, it is a very powerful tool to tackle the complexity of industry standard
problems.
● A DFD model only represents the data flow aspects and does not show the sequence of execution of
the different functions and the conditions based on which a function may or may not be executed.
● In DFD terminology, each function is called a process or a bubble; each function can be viewed as a
processing station (or process) that consumes some input data and produces some output data.
● The DFD is an elegant modeling technique, useful not only for representing the results of structured
analysis but also for several other applications.
● Starting with a set of high-level functions that a system performs, a DFD model represents the
subfunctions performed by the functions using a hierarchy of diagrams.

Primitive symbols used for constructing DFDs

There are essentially five different types of symbols used for constructing DFDs:
● Function symbol: A function is represented using a circle. This symbol is called a process or a
bubble. Bubbles are annotated with the names of the corresponding functions
● External entity symbol: An external entity is represented by a rectangle. The external entities are essentially those
physical entities external to the software system which interact with the system by inputting data to
the system or by consuming the data produced by the system.
● Data flow symbol: A directed arc (or an arrow) is used as a data flow symbol. A data flow symbol
represents the data flow occurring between two processes or between an external entity and a process
in the direction of the data flow arrow.
● Data store symbol: A data store is represented using two parallel lines. It represents a logical file;
that is, a data store symbol can represent either a data structure or a physical file on disk. A data store
is connected to a process by means of a data flow symbol, and the direction of the data flow arrow
shows whether data is being read from or written into the data store.
● Output symbol: The output symbol is used when a hard copy is produced.

Important concepts associated with constructing DFD models

Synchronous and asynchronous operations
● If two bubbles are directly connected by a data flow arrow, then they are synchronous. This means that
they operate at the same speed.
● If two bubbles are connected through a data store, then the speeds of operation of the bubbles are
independent.

Data dictionary
● Every DFD model of a system must be accompanied by a data dictionary. A data dictionary lists all
data items that appear in a DFD model.
● A data dictionary lists the purpose of all data items and the definition of all composite data items in
terms of their component data items.
● It includes all data flows and the contents of all data stores appearing on all the DFDs in a DFD
model.
● For the smallest units of data items, the data dictionary simply lists their name and their type.
● Composite data items are expressed in terms of the component data items using certain operators.
● The dictionary plays a very important role in any software development process, especially for the
following reasons:
○ A data dictionary provides a standard terminology for all relevant data for use by the
developers working on a project.
○ The data dictionary helps the developers to determine the definition of different data structures
in terms of their component elements while implementing the design.
○ The data dictionary helps to perform impact analysis; that is, it is possible to determine the
effect of some data on various processing activities and vice versa.
● For large systems, the data dictionary can become extremely complex and voluminous.
● Computer-aided software engineering (CASE) tools come handy to overcome this problem.
● Most CASE tools usually capture the data items appearing in a DFD as the DFD is drawn, and
automatically generate the data dictionary.
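A few illustrative data dictionary entries (a hypothetical library system, not from the text) using the conventional composition operators: + for composition, { } for iteration, ( ) for optional items, and [ | ] for selection:

```
book_details  = title + author + ISBN
member_record = member_id + name + {borrowed_book}    /* iteration */
name          = first_name + (middle_name) + surname  /* optional  */
transaction   = [issue_request | return_request]      /* selection */
ISBN          : string
member_id     : integer
```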

Self Study:
Data Definition

Developing the DFD model of a system:

● The DFD model of a problem consists of many DFDs and a single data dictionary. The DFD model
of a system is constructed by using a hierarchy of DFDs.
● The top level DFD is called the level 0 DFD or the context diagram.
○ This is the most abstract (simplest) representation of the system (highest level). It is the easiest
to draw and understand.
● At each successive lower level DFDs, more and more details are gradually introduced.
● To develop the lower-level DFDs, processes are decomposed into their subprocesses and the
data flow among these subprocesses is identified.
● Level 0 and Level 1 consist of only one DFD each. Level 2 may contain up to 7 separate DFDs,
level 3 up to 49 DFDs, and so on.
● However, there is only a single data dictionary for the entire DFD model.

Context Diagram/Level 0 DFD:


● The context diagram is the most abstract (highest level) data flow representation of a system. It
represents the entire system as a single bubble.
● The bubble in the context diagram is annotated with the name of the software system being
developed.
● The context diagram establishes the context in which the system operates; that is, who the users are,
what data they input to the system, and what data they receive from the system.
● The data input to the system and the data output from the system are represented as incoming and
outgoing arrows.

Level 1 DFD:
● The level 1 DFD usually contains three to seven bubbles.
● The system is represented as performing three to seven important functions.
● To develop the level 1 DFD, examine the high-level functional requirements in the SRS document.
● If there are three to seven high level functional requirements, then each of these can be directly
represented as a bubble in the level 1 DFD.
● Next, examine the input data to these functions and the data output by these functions as
documented in the SRS document and represent them appropriately in the diagram.

Decomposition:
● Each bubble in the DFD represents a function performed by the system.
● The bubbles are decomposed into subfunctions at the successive levels of the DFD model.
Decomposition of a bubble is also known as factoring or exploding a bubble.
● Each bubble at any level of the DFD is usually decomposed into between three and seven bubbles.
Decomposition of a bubble should be carried on until a level is reached at which the function of the
bubble can be described using a simple algorithm.

Self Study: Examples of DFD models
● RMS Calculator
● Trading House Automation System

Structured Design

● The aim of structured design is to transform the results of the structured analysis into a structure
chart.
● A structure chart represents the software architecture: the various modules making up the system,
the module dependency (i.e., which module calls which other modules), and the parameters that
are passed among the different modules.
● The structure chart representation can be easily implemented using some programming language.
● The basic building blocks using which structure charts are designed are as following:
○ Rectangular boxes: A rectangular box represents a module.
○ Module invocation arrows: An arrow connecting two modules implies that during program
execution control is passed from one module to the other in the direction of the connecting
arrow.
○ Data flow arrows: These are small arrows appearing alongside the module invocation arrows.
They represent the fact that the named data passes from one module to the other in the direction
of the arrow.
○ Library modules: A library module is usually represented by a rectangle with double edges.
Libraries comprise the frequently called modules.
○ Selection: The diamond symbol represents the fact that one module out of several modules
connected to the diamond symbol is invoked, depending on the outcome of the condition
attached to the diamond symbol.
○ Repetition: A loop around the control flow arrows denotes that the respective modules are
invoked repeatedly.
● In any structure chart, there should be one and only one module at the top, called the root.
● There should be at most one control relationship between any two modules in the structure chart. This
means that if module A invokes module B, module B cannot invoke module A.

Flow Chart vs Structure chart:


● Flow chart is a convenient technique to represent the flow of control in a program.
● A structure chart differs from a flow chart in three principal ways:
1. It is usually difficult to identify the different modules of a program from its flow chart
representation.
2. Data interchange among different modules is not represented in a flow chart.
3. The sequential ordering of tasks that is inherent to a flow chart is suppressed in a structure chart.

Transformation of a DFD Model into Structure Chart:


● Systematic techniques are available to transform the DFD representation of a problem into a module
structure represented as a structure chart.
● Structured design provides two strategies to guide the transformation of a DFD into a structure chart:
○ Transform analysis
○ Transaction analysis
● Normally, one would start with the level 1 DFD, transform it into a module representation using either
transform or transaction analysis, and then proceed toward the lower-level DFDs.
● At each level of transformation, it is important to first determine whether the transform or the
transaction analysis is applicable to a particular DFD.

Whether to apply transform or transaction processing?


Given a specific DFD of a model, one would have to examine the data input to the diagram.
If all the data flow into the diagram are processed in similar ways (i.e. if all the input data flow arrows are
incident on the same bubble in the DFD) then transform analysis is applicable. Otherwise, transaction
analysis is applicable.

Transform Analysis:
● Transform analysis identifies the primary functional components (modules) and the input and output
data for these components.
● The first step in transform analysis is to divide the DFD into three types of parts:
○ Input (afferent branch)
○ Processing (central transform)
○ Output (efferent branch)
● In the next step of transform analysis, the structure chart is derived by drawing one functional
component each for the central transform, the afferent and efferent branches.
● These are drawn below a root module, which would invoke these modules.
● In the third step of transform analysis, the structure chart is refined by adding subfunctions required by each of the high-level functional components.

Transaction Analysis:
● Transaction analysis is an alternative to transform analysis and is useful while designing transaction
processing programs.
● A transaction allows the user to perform some specific type of work by using the software.
● For example, ‘issue book’, ‘return book’, ‘query book’, etc., are transactions.
● As in transform analysis, first all data entering into the DFD need to be identified.
● In a transaction-driven system, different data items may pass through different computation paths through the DFD.
● This is in contrast to a transform centered system where each data item entering the DFD goes
through the same processing steps.
● Each different way in which input data is processed is a transaction. For each identified
transaction, trace the input data to the output.
● All the traversed bubbles belong to the transaction.
● These bubbles should be mapped to the same module on the structure chart.
● In the structure chart, draw a root module and below this module draw each identified
transaction as a module.

Detailed Design:
● During detailed design the pseudo code description of the processing and the different data structures
are designed for the different modules of the structure chart.
● These are usually described in the form of module specifications (MSPEC). MSPEC is usually
written using structured English.
● The MSPEC for the non-leaf modules describe the different conditions under which the
responsibilities are delegated to the lower level modules.
● The MSPEC for the leaf-level modules should describe in algorithmic form how the primitive
processing steps are carried out.
● To develop the MSPEC of a module, it is usually necessary to refer to the DFD model and the SRS document to determine the functionality of the module.
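
For illustration, a hypothetical MSPEC for a leaf-level module of a library system, written in structured English, might read as follows (the module name and processing steps are assumptions, not taken from any particular SRS):

    MODULE: search-book (leaf-level)
    INPUT: title string entered by the user
    OUTPUT: list of matching book records, or a "no match found" indication
    PROCESSING:
        Accept the title string from the calling module.
        FOR each record in the book catalog:
            IF the record title contains the title string THEN
                add the record to the result list.
        IF the result list is empty THEN
            return "no match found"
        ELSE
            return the result list.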

Design Review:
● After a design is complete, the design is required to be reviewed.
● The review team usually consists of members with design, implementation, testing, and maintenance perspectives.
● The review team checks the design documents especially for the following aspects:
○ Traceability: Whether each bubble of the DFD can be traced to some module in the
structure chart and vice versa.
○ Correctness: Whether all the algorithms and data structures of the detailed design are correct.
○ Maintainability: Whether the design can be easily maintained in future.
○ Implementation: Whether the design can be easily and efficiently implemented.
● After the points raised by the reviewers are addressed by the designers, the design document
becomes ready for implementation.

Design Metrics
Design metrics quantify attributes of a software design so that its quality can be assessed before coding begins. Commonly used design metrics include coupling (the degree of interdependence between modules), cohesion (how strongly the elements within a module belong together), the fan-in and fan-out of modules in the structure chart, and structural complexity measures such as cyclomatic complexity. A good design is generally characterized by low coupling, high cohesion, and moderate fan-out.

Software Static and Dynamic analysis


Static code analysis examines code to identify issues within the logic and techniques. Dynamic code analysis
involves running code and examining the outcome, which also entails testing possible execution paths of the
code.
Static analysis and dynamic analysis act as a two-pronged approach to improving the development
process in terms of reliability, bug detection, efficiency, and security.
Dynamic analysis is the testing and evaluation of an application during runtime.
Static analysis is the testing and evaluation of an application by examining the code without executing the
application.

What Is Static Analysis?


Static code analysis testing includes various types with the main two being pattern-based and flow-based.
Pattern-based static analysis looks for code patterns that violate defined coding rules. In addition to ensuring
that code meets uniform expectations for regulatory compliance or internal initiatives, it helps teams prevent
defects such as resource leaks, performance and security issues, logical errors, and API misuse.
Flow-based static analysis involves finding and analyzing the various paths that can be taken through the code.
This can happen by control (the order in which lines can be executed) and by data (the sequences in which a
variable or similar entity can be created, changed, used, and destroyed). These processes can expose problems
that lead to critical defects such as:
 Memory corruptions (buffer overwrites)
 Memory access violations
 Null pointer dereferences
 Race conditions
 Deadlocks
It can also detect security issues by pointing out paths that bypass security-critical code such as code for
authentication or encryption.
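
As a minimal sketch (the function below is hypothetical), the following Python fragment contains two defects of the kind flow-based static analysis can report without running the code: a data-flow defect (a variable that is assigned but never used) and a control-flow path on which a None value is dereferenced:

    def average(values):
        total = 0
        count = 0           # data-flow defect: assigned but never used
        for v in values:
            total += v
        result = None
        if values:
            result = total / len(values)
        return result.real  # control-flow defect: result is None on the empty-list path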
Additionally, metrics analysis involves measuring and visualizing various aspects of the code. It can help detect
existing defects, but more often, it warns of potential difficulty in preventing and detecting future defects when
code is maintained. This is done by finding complexity and unwieldiness such as:
 Overly large components
 Excessive nesting of loops
 Too-lengthy series of decisions
 Convoluted intercomponent dependencies
What Is Dynamic Analysis?
Sometimes referred to as runtime error detection, dynamic analysis is where distinctions among testing types
start to blur. For embedded systems, dynamic analysis examines the internal workings and structure of an
application rather than external behavior. Therefore, code execution is performed by way of white box testing.
Dynamic analysis testing detects and reports internal failures the instant they occur. This makes it easier for the
tester to precisely correlate these failures with test actions for incident reporting.
Expanding into the external behavior of the application with emphasis on security, dynamic application security
testing (DAST) is analytical testing with the intent to examine the test item rather than exercise it. Yet the code
under test must be executed.
DAST also extends the capability of empirical testing at all levels—from unit to acceptance. It does this by
making it possible to detect internal failures that point to otherwise unobservable external failures that occur or
will occur after testing has stopped.
Pros & Cons of Static Analysis
As with all avenues toward DevSecOps perfection, there are pros and cons with static analysis testing.
PROS
 Evaluates source code without executing it.
 Analyzes the code in its entirety for vulnerabilities and bugs.
 Follows customized, defined rules.
 Enhances developer accountability.
 Capable of automation.
 Highlights bugs early and reduces the time it takes to fix them.
 Reduces project costs.
CONS
 Can return false positives and false negatives that might distract developers.
 Can take a long time to operate manually.
 Can’t locate bugs or vulnerabilities that come about in runtime environments.
 Deciding which industry coding standards to apply can be confusing.
 May be challenging to determine if deviating from a rule violation is appropriate.
While the list of cons might look intimidating, the holes of static analysis can be patched with two things.
1. Automating static analysis.
2. Using dynamic techniques.

Code inspections
The development of any software application/product goes through SDLC (Software Development Life
Cycle) where every phase is very important and needs to be followed accordingly to develop a quality software
product. Inspection is one such important element, which has a great impact on the software development process.
During the coding phase, the software development team not only develops the software application but also checks the code for errors; this activity is called code verification. Code verification checks the software code in all aspects and finds the errors that exist in it. Generally, there are two types of code verification techniques available, i.e.
1. Dynamic technique – It is performed by executing some test data and the outputs of the program are
monitored to find errors in the software code.
2. Static technique – It is performed by executing the program conceptually and without any data. Code
reading, static analysis, symbolic execution, code inspection, reviews, etc. are some of the commonly
used static techniques.
Code Inspection :
Code inspection is a type of Static testing that aims to review the software code and examine for any
errors. It helps reduce the ratio of defect multiplication and avoids later-stage error detection by simplifying all
the initial error detection processes. This code inspection comes under the review process of any application.
How does it work?
 Moderator, Reader, Recorder, and Author are the key members of an Inspection team.
 Related documents are provided to the inspection team who then plan the inspection meeting and
coordinate with inspection team members.
 If the inspection team is not aware of the project, the author provides an overview of the project and
code to inspection team members.
 Then each inspection team member performs code inspection by following some inspection checklists.
 After completion of the code inspection, conduct a meeting with all team members and analyze the
reviewed code.
Purpose of code inspection :
1. It checks for any error that is present in the software code.
2. It identifies any required process improvement.
3. It checks whether the coding standard is followed or not.
4. It involves peer examination of codes.
5. It documents the defects in the software code.
Advantages Of Code Inspection :
 Improves overall product quality.
 Discovers the bugs/defects in software code.
 Marks any process enhancement in any case.
 Finds and removes defects efficiently and quickly.
 Helps to learn from previous defects.
Disadvantages of Code Inspection:
 Requires extra time and planning.
 The process is a little slower.
What is Software Testing?
Software testing can be stated as the process of verifying and validating whether a software or
application is bug-free, meets the technical requirements as guided by its design and development, and meets
the user requirements effectively and efficiently by handling all the exceptional and boundary cases.
The process of software testing aims not only at finding faults in the existing software but also at
finding measures to improve the software in terms of efficiency, accuracy, and usability. The article focuses on
discussing Software Testing in detail.
Software Testing is a method to assess the functionality of the software program. The process checks
whether the actual software matches the expected requirements and ensures the software is bug-free. The
purpose of software testing is to identify the errors, faults, or missing requirements in contrast to actual
requirements. It mainly aims at measuring the specification, functionality, and performance of a software
program or application.
Software testing can be divided into two steps:
1. Verification: It refers to the set of tasks that ensure that the software correctly implements a specific
function. It means “Are we building the product right?”.
2. Validation: It refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements. It means “Are we building the right product?”.
Importance of Software Testing:
 Defects can be identified early: Software testing is important because if there are any bugs they can be
identified early and can be fixed before the delivery of the software.
 Improves quality of software: Software Testing uncovers the defects in the software, and fixing them
improves the quality of the software.
 Increased customer satisfaction: Software testing ensures reliability, security, and high performance
which results in saving time, costs, and customer satisfaction.
 Helps with scalability: Non-functional software testing helps to identify scalability issues and the point where an application might stop working.
 Saves time and money: After the application is launched it will be very difficult to trace and resolve
the issues, as performing this activity will incur more costs and time. Thus, it is better to conduct
software testing at regular intervals during software development.
Different Types Of Software Testing

Software Testing can be broadly classified into 3 types:


1. Functional testing: It is a type of software testing that validates the software systems against the
functional requirements. It is performed to check whether the application is working as per the
software’s functional requirements or not. Various types of functional testing are Unit testing,
Integration testing, System testing, Smoke testing, and so on.
2. Non-functional testing: It is a type of software testing that checks the application for non-functional
requirements like performance, scalability, portability, stress, etc. Various types of non-functional
testing are Performance testing, Stress testing, Usability Testing, and so on.
3. Maintenance testing: It is the process of changing, modifying, and updating the software to keep up
with the customer’s needs. It involves regression testing that verifies that recent changes to the code
have not adversely affected other previously working parts of the software.
Apart from the above classification, software testing can be further divided into 2 more ways of testing:
1. Manual testing: It includes testing software manually, i.e., without using any automation tool or script.
In this type, the tester takes over the role of an end-user and tests the software to identify any
unexpected behavior or bug. There are different stages for manual testing such as unit testing,
integration testing, system testing, and user acceptance testing. Testers use test plans, test cases, or test
scenarios to test software to ensure the completeness of testing. Manual testing also includes
exploratory testing, as testers explore the software to identify errors in it.
2. Automation testing: Also known as Test Automation, this is when the tester writes scripts and uses other software to test the product. This process involves the automation of a manual process. Automation testing is used to quickly and repeatedly re-run the test scenarios that were performed manually in manual testing.
Apart from Regression testing, Automation testing is also used to test the application from a load,
performance, and stress point of view. It increases the test coverage, improves accuracy, and saves time
and money when compared to manual testing.

Different Types of Software Testing Techniques
Software testing techniques can be majorly classified into two categories:
1. Black box Testing: Testing in which the tester doesn’t have access to the source code of the software
and is conducted at the software interface without any concern with the internal logical structure of the
software known as black-box testing.
2. White box Testing: Testing in which the tester is aware of the internal workings of the product, has
access to its source code, and is conducted by making sure that all internal operations are performed
according to the specifications is known as white box testing.
3. Grey Box Testing: Testing in which the testers should have knowledge of implementation, however,
they need not be experts.

S No. | Black Box Testing | White Box Testing
1 | Internal workings of an application are not required. | Knowledge of the internal workings is a must.
2 | Also known as closed box/data-driven testing. | Also known as clear box/structural testing.
3 | Performed by end users, testers, and developers. | Normally done by testers and developers.
4 | Can only be done by a trial and error method. | Data domains and internal boundaries can be better tested.

Different Levels of Software Testing


Software level testing can be majorly classified into 4 levels:
1. Unit testing: It is a level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed.
2. Integration testing: It is a level of the software testing process where individual units are combined
and tested as a group. The purpose of this level of testing is to expose faults in the interaction between
integrated units.
3. System testing: It is a level of the software testing process where a complete, integrated
system/software is tested. The purpose of this test is to evaluate the system’s compliance with the
specified requirements.
4. Acceptance testing: It is a level of the software testing process where a system is tested for
acceptability. The purpose of this test is to evaluate the system’s compliance with the business
requirements and assess whether it is acceptable for delivery.

Software Test Process


Software testing is the process of checking the quality, functionality, and performance of a software product
before launching. To do software testing, testers either interact with the software manually or execute test
scripts to find bugs and errors, ensuring that the software works as expected.
The software test process in software engineering is a crucial aspect of ensuring the quality and reliability of
software products. It involves systematic procedures to identify defects, errors, or bugs in the software so that
they can be fixed before the product is released to users. Here's an overview of the typical software test process:
1. Test Planning:
 Define the objectives of testing, including quality goals and criteria.
 Identify test activities, resources, timelines, and responsibilities.
 Determine test environment setup requirements.
2. Test Design:
 Develop test cases based on requirements, design specifications, and user scenarios.
 Define test data required for executing test cases.
 Create test procedures and expected outcomes for each test case.
3. Test Execution:
 Execute test cases using the defined test procedures and test data.
 Record actual outcomes and compare them against expected outcomes.
 Log defects or issues found during testing in a defect tracking system.
4. Defect Reporting and Tracking:
 Document defects with detailed information, including steps to reproduce.
 Prioritize defects based on severity and impact on the software.
 Assign defects to development teams for resolution and track their status.
5. Test Reporting:
 Generate test reports summarizing test activities, results, and metrics.
 Provide insights into the quality of the software and readiness for release.
 Share test findings and recommendations with stakeholders.
6. Test Closure:
 Evaluate test completion criteria and readiness for product release.
 Conduct test closure activities such as finalizing test documentation.
 Capture lessons learned and identify areas for process improvement.

Levels of Testing in Software Testing


Introduction: Levels of Testing
The purpose of Levels of testing is to make software testing structured/efficient and easily find all possible test
cases/test scenarios at a particular level. In the SDLC model, there are well-defined phases such as requirements gathering, analysis, design, coding, testing, and deployment, and all of these phases go through the process of levels of testing in software testing.
Several different testing levels are used to check the behavior and performance of the software. These testing levels are designed to expose gaps and missing areas between the states of the development lifecycle.
Levels of Testing in Software Testing
In general, there are mainly four levels of testing in software testing: Unit Testing, Integration Testing, System Testing, and Acceptance Testing.
 Unit Testing
 Integration Testing
 System Testing
 Acceptance Testing
Every testing level is important for software testing, but these four levels are the crucial ones in software engineering.
Unit Testing
This type of testing exercises a single component or a single unit of the software, and it is performed by the developer.
Unit testing is also the first level of functional testing. The primary goal of unit testing is to validate the
performance of unit components.
A unit is the smallest testable portion of the system or application. The main aim is to test that each component or unit is correct in fulfilling the requirements and desired functionality.
The main advantage of this testing is that errors are detected early: by doing so, the team reduces software development risks, as well as the time and money wasted in having to go back and fix fundamental defects in the program once it is nearly completed.
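
As an illustration, a minimal unit test written with pytest (a unit testing framework mentioned later in this unit) might look like the following sketch; the rms function is a hypothetical unit under test:

    import math
    import pytest

    def rms(values):
        # Unit under test: root mean square of a list of numbers.
        return math.sqrt(sum(v * v for v in values) / len(values))

    def test_rms_of_single_value():
        assert rms([3]) == 3

    def test_rms_of_two_values():
        assert rms([3, 4]) == pytest.approx(math.sqrt(12.5))

Running pytest on this file executes both tests and reports pass/fail for each, which is exactly the unit-level validation described above.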
Integration Testing
Integration testing means combining different software modules and testing them as a group to ensure that the integrated system is ready for system testing. There are many ways to test how the different components of the system function at their interfaces.
This type of testing is performed by testers, and integration testing checks the data flow from one module to the other modules.
System Testing
System testing is usually the final test performed to verify that the system meets the specifications and criteria; it evaluates both the functional and non-functional needs of the system.
System testing checks the system’s compliance with the requirements: all the components of the software are tested as a whole to ensure that the overall product meets the requirements specified. It involves load, reliability, performance, and security testing.
System testing is a very important step, as the software is almost ready for production; once deployed, it can be tested in an environment that is very close to the one the user will experience.
Acceptance Testing
Acceptance testing aims to evaluate whether the system complies with the end-user requirements and if it is
ready for deployment.
The tester will utilize different methods, such as pre-written scenarios and test cases, to test the software, and will use the results obtained from these tools to find ways in which the system can be improved. The QA or testing team can also find out how the product will perform when it is installed on the user’s system.
Acceptance testing ranges from easily found spelling mistakes and cosmetic errors to bugs that could cause a major error in the application.

Test Criteria
In software engineering, test criteria refer to the standards or guidelines used to determine whether a software
product or system has met specified requirements and is ready for release or further development stages. Test
criteria are essential for defining the conditions under which testing is conducted and for making decisions
about the quality and readiness of the software. Here are the key types of test criteria commonly used in
software engineering:
1. Functional Test Criteria:
 Requirements Coverage: Ensures that all functional requirements specified for the software
have been tested.
 Boundary Conditions: Tests the software behavior at the boundaries of input ranges to verify
correct handling of edge cases.
 Use Case Coverage: Verifies that all defined use cases of the software have been tested and
behave as expected.
 Error Handling: Validates that the software handles expected and unexpected errors
gracefully.
2. Non-Functional Test Criteria:
 Performance Criteria: Sets performance targets such as response time, throughput, and
resource utilization that the software must meet under various conditions.
 Usability Criteria: Evaluates the ease of use, user interface design, and overall user experience
of the software.
 Reliability Criteria: Defines reliability metrics such as mean time between failures (MTBF) or
mean time to failure (MTTF) that the software should achieve.
 Security Criteria: Specifies security requirements and checks for vulnerabilities such as
unauthorized access or data breaches.
3. Code Coverage Criteria:
 Statement Coverage: Measures the percentage of executable statements in the source code that
have been executed during testing.
 Branch Coverage: Ensures that all possible branches and decision points in the code have been
exercised during testing.
 Path Coverage: Verifies that every possible path through the code has been executed, ensuring
comprehensive testing.
4. Regression Test Criteria:
 Critical Path Testing: Focuses on testing the most critical functionalities or modules of the
software to ensure they work correctly after changes.
 Risk-Based Testing: Prioritizes testing efforts based on identified risks to the software,
ensuring that high-risk areas receive more attention.
 Test Case Selection: Determines which test cases need to be rerun based on the impact of
changes made to the software.
5. Acceptance Test Criteria:
 Acceptance Criteria: Defines the conditions that must be met for the software to be accepted
by stakeholders or customers.
 Exit Criteria: Specifies the conditions that must be satisfied before exiting a testing phase,
such as passing certain test cases or meeting specific quality metrics.
6. Quality Criteria:
 Defect Criteria: Sets thresholds for acceptable defect rates or severity levels that the software
must meet to be considered of high quality.
 Compliance Criteria: Ensures that the software complies with relevant industry standards,
regulations, or internal policies.

Entry and Exit Criteria in Test Planning


During test planning, the testing team defines the test approach, objectives, and scope of testing.
To start this phase, everything should be clear about the project requirements, scope, and testing strategy.
Similarly, before concluding this phase, we need the resource allocation and test plan documents:

Project requirements are important because understanding the project goals, functionalities, and constraints is
essential to formulate an effective testing strategy. The testing team should have a well-defined scope for the
testing process. It includes the software components to be tested, the testing techniques to be used, and the level
of testing to be performed.
A comprehensive test strategy outlining the overall testing approach should also be formulated in test planning.
It includes details about the testing objectives, test levels, test types, test environments, and resource allocation.
To conclude this phase, we should have a resource allocation plan. It’s a specification of how to allocate testers,
test environments, and testing tools for the upcoming activities per the test plan. That’s another exit criterion.
The test plan document should be available to all team members and stakeholders for reference and guidance
during testing.
Entry and Exit Criteria in Test Design
During the test design phase, the testing team transforms the testing strategy defined in test planning into
tangible test cases and test scripts.
This phase also has several entry and exit criteria:

The test plan, approved in the test planning phase, serves as the foundation for test design. It provides us with
essential information about testing objectives, scope, and strategies that guide the creation of test cases.
The testing team must thoroughly analyze the project requirements and ensure the test cases cover all specified
functionalities. Test data required for executing the test cases should be prepared and available in the testing
environment. This data should be relevant to the test scenarios.
The exit criteria of the test design phase include the development, review, and approval of the test cases. Each
case should undergo a thorough review process to identify and address any defects or ambiguities before
approval. The test design phase is complete when we finalize the test design document containing test cases,
test scenarios, and test data and get it ready for execution.
Meeting the entry criteria ensures that test cases and data are well-prepared and aligned with the testing
strategy before execution begins. The exit criteria in test design act as checkpoints to verify that the test cases
and associated documents have been created and reviewed according to the defined standards.
Entry and Exit Criteria in Test Execution
Test execution is the phase where the software’s actual testing occurs. During this phase, the testing team
executes the prepared test cases, records test results, and logs defects.
Test cases must be fully developed and approved in the test design phase before test execution begins. The test
data required for executing the test cases should be available and relevant to the test scenarios. Let’s look at the
visual representation of various entry and exit criteria for the test execution phase:

The testing environment must be set up correctly, and all prerequisites, including the availability of the
application under test, should be met before test execution starts. We should record test results, including
pass/fail status and metrics such as test execution time and defect density.
The test execution phase concludes when all identified test cases have been executed, and the test coverage is
adequate. All defects encountered during test execution should be logged in the defect tracking system with
appropriate descriptions and reproducing steps. Test results and metrics recorded during test execution should
be reviewed to identify trends or patterns requiring further investigation or action.
Meeting the entry criteria ensures the testing environment is set up correctly and test cases and data are ready
for execution. The exit criteria in test execution confirm that the test execution phase has been completed
according to the defined testing scope and objectives.
Entry and Exit Criteria in Test Closure
The test closure phase marks the conclusion of the software testing effort for a specific project. During this
phase, the testing team completes the testing activities, summarizes the testing outcomes, and gathers
valuable insights for future improvements.
Let’s look at the prerequisites for entering the phase and the conditions for successfully completing it:

We should thoroughly review and analyze the test results and metrics we record during testing to identify
trends, patterns, and overall testing outcomes.
The test closure phase concludes when relevant stakeholders finalize, review, and approve the test closure
report. Valuable insights and lessons learned during testing should be documented to guide future testing
efforts.
Meeting the entry criteria means we have completed the testing activities successfully. The exit criteria in test
closure indicate we have all the necessary exit documents, including the test closure report and lessons.
Importance of Setting Clear Entry and Exit Criteria
The entry criteria act as a quality gate that ensures the testing phase begins only when we meet all the
prerequisites. This helps avoid premature or incomplete testing and ensures that the test environment, data, and
resources are in place for efficient testing.
With proper entry criteria, we can focus on actual test execution rather than dealing with unprepared or unstable
environments. This enables us to provide accurate and reliable test results and bug reports.
Exit criteria also help identify potential risks and defects we need to address before moving on to the next
phase. They ensure the software product meets the required quality standards before delivery.
Best Practices for Defining Entry and Exit Criteria
Defining entry and exit criteria requires careful consideration and collaboration among the testing team,
stakeholders, and project managers.
The best practices are:
 Defining specific and measurable criteria
 Involving stakeholders and test team collaboration
 Periodic review and adaptation for improvement
Entry and exit criteria should be clear, specific, and measurable. We should avoid vague or subjective
criteria that may lead to misunderstandings or differing interpretations.
We should also incorporate input from stakeholders, developers, and other team members when defining the
criteria. Collaborative discussions can help us identify critical factors and ensure that all perspectives are
considered in the criteria definition.
We also need to review and update the entry and exit criteria regularly. As the project progresses or changes, we
might need to revise specific criteria to align with new requirements or project goals. Continuous
improvement ensures that the criteria remain relevant and effective throughout the testing process.

Test Case Design


A test case design is a specific set of steps that will exercise a particular functionality within the software. Each
test case should have a clear expected result so that you can determine whether or not the software is
functioning as intended.
Let us walk through test case design with a detailed example. Suppose we are developing an e-commerce website with a feature that allows users to add items to their shopping cart. We'll create test cases to validate this functionality.
Test Case Scenario:
Feature: Adding Items to Shopping Cart
Test Case 1: Valid Item Addition
 Test Case ID: TC001
 Test Case Description: Verify that a user can successfully add an item to the shopping cart.
 Preconditions:
 User is logged in to the e-commerce website.
 The item is available and displayed on the product details page.
 Test Steps:
1. Navigate to the product details page of the desired item.
2. Click on the "Add to Cart" button.
3. View the shopping cart page.
 Expected Results:
 The item should be added to the shopping cart.
 The shopping cart page should display the added item with correct details (name, price,
quantity).
Test Case 2: Empty Cart Check
 Test Case ID: TC002
 Test Case Description: Verify that the shopping cart is empty initially.
 Preconditions:
 User is logged in to the e-commerce website.
 Test Steps:
1. Navigate to the shopping cart page.
 Expected Results:
 The shopping cart should display a message indicating that it is empty.
 No items should be visible in the cart.
Test Case 3: Invalid Item Addition (Out of Stock)
 Test Case ID: TC003
 Test Case Description: Verify that a user cannot add an out-of-stock item to the shopping cart.
 Preconditions:
 User is logged in to the e-commerce website.
 The item is currently out of stock.
 Test Steps:
1. Navigate to the product details page of the out-of-stock item.
2. Attempt to click on the "Add to Cart" button.
 Expected Results:
 The "Add to Cart" button should be disabled or display a message indicating that the item is out
of stock.
 The shopping cart should remain unchanged.
Test Case 4: Quantity Adjustment
 Test Case ID: TC004
 Test Case Description: Verify that a user can adjust the quantity of items in the shopping cart.
 Preconditions:
 User has already added items to the shopping cart.
 Test Steps:
1. Navigate to the shopping cart page.
2. Change the quantity of the selected item.
3. Update the cart.
 Expected Results:
 The shopping cart should reflect the updated quantity of the selected item.
 The total price should be adjusted accordingly based on the new quantity.
Test Case Attributes:
 Priority: Medium
 Complexity: Low
 Dependencies: Availability of test data (e.g., user login credentials, product details).
Test Data Considerations:
 Valid user login credentials.
 Product details including name, price, and availability status (in stock or out of stock).
Automation Considerations:
 Test Case 1 and Test Case 4 can be considered for automation to ensure quick regression testing of the
cart functionality.
Review and Validation:
 Review these test cases with relevant stakeholders (e.g., developers, product owners) to ensure
coverage and accuracy.
 Validate each test case against the specified requirements and acceptance criteria.
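
Since Test Case 1 was marked as a candidate for automation, a hedged Selenium WebDriver sketch in Python is shown below. The URL and the element IDs (add-to-cart, cart-item-name) are assumptions made for illustration, not identifiers from any real site:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_valid_item_addition():
        driver = webdriver.Chrome()
        try:
            # Precondition handling (user login) is omitted for brevity.
            driver.get("https://example.com/products/42")      # hypothetical product page
            driver.find_element(By.ID, "add-to-cart").click()  # step 2 of TC001
            driver.get("https://example.com/cart")             # step 3 of TC001
            item_name = driver.find_element(By.ID, "cart-item-name").text
            assert item_name != ""  # expected result: the added item appears in the cart
        finally:
            driver.quit()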
Test Oracles
A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of a program’s output for test cases. Conceptually, testing can be considered as a process in which the test cases are given both to the oracle and to the program under test. The outputs of the two are then compared to determine whether the program behaves correctly for the test cases.

Test oracles are required for testing. Ideally, we want an automated oracle that always gives the correct answer. However, oracles are often human beings, who mostly work out by hand what the output of the program should be. As it is often very difficult to determine whether the observed behavior corresponds to the expected behavior, human oracles may make mistakes. Consequently, when there is a discrepancy between the program’s output and the oracle’s result, we must verify the result produced by the oracle before declaring that there is a defect in the program.
Human oracles typically use the program’s specifications to decide what the correct behavior of the program should be. To help the oracle determine the correct behavior, it is important that the behavior of the system or component is explicitly specified and that the specification itself is error-free, i.e., that it actually specifies the true and correct behavior.
There are some systems where oracles are automatically generated from the specifications of programs or modules. With such oracles, we are assured that the output of the oracle conforms to the specifications. However, even this approach does not solve all our problems, as there is a possibility of errors in the specifications. An oracle generated from the specifications will produce correct results only if the specifications themselves are correct, and it will not be reliable in case of specification errors. In addition, systems that generate oracles from specifications require formal specifications, which are often not produced during design.
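
One common way to build an automated oracle is to compare the program under test against a trusted reference implementation. The sketch below assumes a hypothetical fast_sort function under test and uses Python's built-in sorted as the oracle:

    import random

    def fast_sort(items):
        # Hypothetical program under test; imagine a hand-written sort here.
        return sorted(items)

    def test_against_reference_oracle():
        for _ in range(100):
            data = [random.randint(-50, 50) for _ in range(20)]
            # Oracle: Python's built-in sort provides the expected output.
            assert fast_sort(data) == sorted(data)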

Test Techniques
Software testing techniques are methods or approaches used to design and execute test
cases with the goal of uncovering defects and ensuring the quality of software products.
These techniques help testers maximize test coverage, minimize redundancy, and
effectively validate software behavior. Here are some commonly used software testing
techniques:

1. Manual testing – Involves manual inspection and testing of the software by a human
tester.
2. Automated testing – Involves using software tools to automate the testing process.
3. Functional testing – Tests the functional requirements of the software to ensure they
are met.
4. Non-functional testing – Tests non-functional requirements such as performance,
security, and usability.
5. Unit testing – Tests individual units or components of the software to ensure they are
functioning as intended.
6. Integration testing – Tests the integration of different components of the software to
ensure they work together as a system.
7. System testing – Tests the complete software system to ensure it meets the specified
requirements.
8. Acceptance testing – Tests the software to ensure it meets the customer’s or end-
user’s expectations.
9. Regression testing – Tests the software after changes or modifications have been
made to ensure the changes have not introduced new defects.
10. Performance testing – Tests the software to determine its performance characteristics
such as speed, scalability, and stability.
11. Security testing – Tests the software to identify vulnerabilities and ensure it meets
security requirements.
12. Exploratory testing – A type of testing where the tester actively explores the
software to find defects, without following a specific test plan.
13. Boundary value testing – Tests the software at the boundaries of input values to
identify any defects.
14. Usability testing – Tests the software to evaluate its user-friendliness and ease of use.
15. User acceptance testing (UAT) – Tests the software to determine if it meets the end-
user’s needs and expectations.
Software testing techniques are the ways employed to test the application under test
against the functional or non-functional requirements gathered from business. Each
testing technique helps to find a specific type of defect. For example, Techniques that
may find structural defects might not be able to find the defects against the end-to-end
business flow. Hence, multiple testing techniques are applied in a testing project to
conclude it with acceptable quality.
Principles Of Testing
Below are the principles of software testing:
1. All the tests should meet the customer’s requirements.
2. To make our testing effective, it should be performed by a third party.
3. Exhaustive testing is not possible. As we need the optimal amount of testing based on
the risk assessment of the application.
4. All the tests to be conducted should be planned before implementing them.
5. It follows the Pareto rule (80/20 rule) which states that 80% of errors come from 20%
of program components.
6. Start testing with small parts and extend it to large parts.
Types Of Software Testing Techniques
There are two main categories of software testing techniques:
1. Static Testing Techniques are testing techniques that are used to find defects in an
application under test without executing the code. Static Testing is done to avoid
errors at an early stage of the development cycle thus reducing the cost of fixing them.
2. Dynamic Testing Techniques are testing techniques that are used to test the dynamic
behaviour of the application under test, that is by the execution of the code base. The
main purpose of dynamic testing is to test the application with dynamic inputs- some
of which may be allowed as per requirement (Positive testing) and some are not
allowed (Negative Testing).
Each testing technique has further types as showcased in the below diagram. Each one of
them will be explained in detail with examples below.

Testing Techniques

Static Testing Techniques


As explained earlier, Static Testing techniques are testing techniques that do not require
the execution of a code base. Static Testing Techniques are divided into two major
categories:
1. Reviews: They can range from purely informal peer reviews between two
developers/testers on the artifacts (code/test cases/test data) to
formal Inspections which are led by moderators who can be internal/external to the
organization.
1. Peer Reviews: Informal reviews are generally conducted without any formal
setup. It is between peers. For Example- Two developers/Testers review each
other’s artifacts like code/test cases.
2. Walkthroughs: Walkthrough is a category where the author of work (code or test
case or document under review) walks through what he/she has done and the logic
behind it to the stakeholders to achieve a common understanding or for the intent
of feedback.
3. Technical review: It is a review meeting that focuses solely on the technical
aspects of the document under review to achieve a consensus. It has less or no
focus on the identification of defects based on reference documentation. Technical
experts like architects/chief designers are required to do the review. It can vary
from Informal to fully formal.
4. Inspection: Inspection is the most formal category of reviews. Before the
inspection, the document under review is thoroughly prepared before going for an
inspection. Defects that are identified in the Inspection meeting are logged in the
defect management tool and followed up until closure. The discussion on defects
is avoided and a separate discussion phase is used for discussions, which makes
Inspections a very effective form of review.
2. Static Analysis: Static Analysis is an examination of requirement/code or design to
identify defects that may or may not cause failures. For Example- Review the code for
the following standards. Not following a standard is a defect that may or may not
cause a failure. Many tools for Static Analysis are mainly used by developers before
or during Component or Integration Testing. Even a compiler is a static analysis tool, as it points out incorrect usage of syntax, and it does not execute the code per se.
There are several aspects to the code structure – Namely Data flow, Control flow, and
Data Structure.
1. Data Flow: It means how the data trail is followed in a given program – How data
gets accessed and modified as per the instructions in the program. By Data flow
analysis, you can identify defects like a variable definition that never got used.
2. Control flow: It is the structure of how program instructions get executed i.e.
conditions, iterations, or loops. Control flow analysis helps to identify defects
such as Dead code i.e. a code that never gets used under any condition.
3. Data Structure: It refers to the organization of data irrespective of code. The
complexity of data structures adds to the complexity of code. Thus, it provides
information on how to test the control flow and data flow in a given code.
Dynamic Testing Techniques
Dynamic techniques are subdivided into three categories:
1. Structure-based Testing:
These are also called White box techniques. Structure-based testing techniques are
focused on how the code structure works and test accordingly. To understand Structure-
based techniques, we first need to understand the concept of code coverage.
Code Coverage is normally done in Component and Integration Testing. It establishes
what code is covered by structural testing techniques out of the total code written. One
drawback of code coverage is that- it does not talk about code that has not been written at
all (Missed requirement), There are tools in the market that can help measure code
coverage.
There are multiple ways to test code coverage:
1. Statement coverage: Number of Statements of code exercised/Total number of
statements. For Example, if a code segment has 10 lines and the test designed by you
covers only 5 of them then we can say that statement coverage given by the test is 50%.
2. Decision coverage: Number of decision outcomes exercised/Total number of
Decisions. For Example, If a code segment has 4 decisions (If conditions) and your test
executes just 1, then decision coverage is 25%
3. Conditional/Multiple condition coverage: It aims to ensure that each outcome of every logical condition in a program has been exercised.
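
The difference between statement and decision coverage can be seen in a short sketch; the function below is hypothetical:

    def classify(n):
        label = "small"
        if n > 100:      # one decision with two outcomes
            label = "large"
        return label

    def test_large():
        assert classify(500) == "large"

The single test above executes every statement (100% statement coverage) but exercises only the True outcome of the decision, giving 50% decision coverage; a second test such as classify(5) is needed to reach 100% decision coverage.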
2. Experience-Based Techniques:
These are techniques for executing testing activities with the help of experience gained
over the years. Domain skill and background are major contributors to this type of
testing. These techniques are used majorly for UAT/business user testing. These work on
top of structured techniques like Specification-based and Structure-based, and they
complement them. Here are the types of experience-based techniques:
1. Error guessing: It is used by a tester who has either very good experience in testing or
with the application under test and hence they may know where a system might have a
weakness. It cannot be an effective technique when used stand-alone but is helpful when
used along with structured techniques.
2. Exploratory testing: It is hands-on testing where the aim is to have maximum
execution coverage with minimal planning. The test design and execution are carried out
in parallel without documenting the test design steps. The key aspect of this type of
testing is the tester’s learning about the strengths and weaknesses of an application under
test. Similar to error guessing, it is used along with other formal techniques to be useful.
3. Specification-based Techniques:
This includes both functional and non-functional techniques (i.e. quality characteristics).
It means creating and executing tests based on functional or non-functional specifications
from the business. Its focus is on identifying defects corresponding to given
specifications. Here are the types of specification-based techniques:
1. Equivalence partitioning: It is generally used together with boundary value analysis and can be applied at any level of testing. The idea is to partition the input range of data into valid and invalid sections
such that one partition is considered “equivalent”. Once we have the partitions identified,
it only requires us to test with any value in a given partition assuming that all values in
the partition will behave the same. For example, if the input field takes the value between
1-999, then values between 1-999 will yield similar results, and we need NOT test with
each value to call the testing complete.
2. Boundary Value Analysis (BVA): This analysis tests the boundaries of the range-
both valid and invalid. In the example above, 0,1,999, and 1000 are boundaries that can
be tested. The reasoning behind this kind of testing is that more often than not,
boundaries are not handled gracefully in the code.
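
Continuing the 1-999 example, a pytest sketch can combine equivalence partitioning (one representative value per partition) with boundary value analysis (the values 0, 1, 999, and 1000). The accepts_value function is a hypothetical validator standing in for the input field:

    import pytest

    def accepts_value(n):
        # Hypothetical validator for an input field that accepts 1-999.
        return 1 <= n <= 999

    @pytest.mark.parametrize("value,expected", [
        (500, True),     # representative of the valid partition 1-999
        (-10, False),    # representative of the invalid partition below 1
        (2000, False),   # representative of the invalid partition above 999
        (0, False), (1, True), (999, True), (1000, False),  # boundary values
    ])
    def test_input_field(value, expected):
        assert accepts_value(value) == expected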
3. Decision Tables: These are a good way to test the combination of inputs. It is also
called a Cause-Effect table. In layman’s language, one can structure the conditions
applicable for the application segment under test as a table and identify the outcomes
against each one of them to reach an effective test.
1. It should be taken into consideration that there are not too many combinations; otherwise, the table becomes too big to be effective.
2. Take the example of a credit card that is issued only if both the credit score and the salary limit criteria are met. This can be illustrated in the decision table below:

Decision Table
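
The credit-card decision table can be executed directly as a table-driven test. The rule itself (issue the card only when both conditions hold) follows the example above, while the function and parameter names are assumptions for illustration:

    import pytest

    def issue_card(credit_score_ok, salary_ok):
        # Hypothetical rule derived from the decision table.
        return credit_score_ok and salary_ok

    @pytest.mark.parametrize("credit_score_ok,salary_ok,expected", [
        (True,  True,  True),   # rule 1: both conditions met -> issue card
        (True,  False, False),  # rule 2
        (False, True,  False),  # rule 3
        (False, False, False),  # rule 4
    ])
    def test_credit_card_decision(credit_score_ok, salary_ok, expected):
        assert issue_card(credit_score_ok, salary_ok) == expected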

4. Use case-based Testing: This technique helps us to identify test cases that execute the
system as a whole- like an actual user (Actor), transaction by transaction. Use cases are a
sequence of steps that describe the interaction between the Actor and the system. They
are always defined in the language of the Actor, not the system. This testing is most
effective in identifying integration defects. Use case also defines any preconditions and
postconditions of the process flow. ATM example can be tested via use case:

Use case-based Testing

5. State Transition Testing: It is used where an application under test, or a part of it, can be treated as an FSM or finite state machine. Continuing the simplified ATM example above, we can say that the ATM flow has finite states and hence can be tested with the state transition technique. There are 4 basic things to consider:
1. States a system can achieve
2. Events that cause the change of state
3. The transition from one state to another
4. Outcomes of change of state
A state event pair table can be created to derive test conditions – both positive and
negative.

State Transition
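
A hedged sketch of state transition testing for a simplified ATM is shown below. The states, events, and transition table are assumptions chosen for illustration; the table maps (state, event) pairs to expected next states, and the tests cover both a valid and an invalid transition:

    # Hypothetical (state, event) -> next-state table for a simplified ATM.
    TRANSITIONS = {
        ("IDLE", "insert_card"): "CARD_INSERTED",
        ("CARD_INSERTED", "enter_valid_pin"): "AUTHENTICATED",
        ("CARD_INSERTED", "enter_invalid_pin"): "CARD_INSERTED",
        ("AUTHENTICATED", "withdraw"): "DISPENSING",
        ("DISPENSING", "take_cash"): "IDLE",
    }

    def next_state(state, event):
        return TRANSITIONS.get((state, event), "ERROR")

    def test_valid_transition():
        assert next_state("IDLE", "insert_card") == "CARD_INSERTED"

    def test_invalid_transition():
        # Negative case: withdrawing before authentication is rejected.
        assert next_state("IDLE", "withdraw") == "ERROR"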

Advantages of software testing techniques:


1. Improves software quality and reliability – By using different testing techniques,
software developers can identify and fix defects early in the development process,
reducing the risk of failure or unexpected behaviour in the final product.
2. Enhances user experience – Techniques like usability testing can help to identify
usability issues and improve the overall user experience.
3. Increases confidence – By testing the software, developers, and stakeholders can have
confidence that the software meets the requirements and works as intended.
4. Facilitates maintenance – By identifying and fixing defects early, testing makes it
easier to maintain and update the software.
5. Reduces costs – Finding and fixing defects early in the development process is less
expensive than fixing them later in the life cycle.
Disadvantages of software testing techniques:
1. Time-consuming – Testing can take a significant amount of time, particularly if
thorough testing is performed.
2. Resource-intensive – Testing requires specialized skills and resources, which can be
expensive.
3. Limited coverage – Testing can only reveal defects that are present in the test cases,
and defects can be missed.
4. Unpredictable results – The outcome of testing is not always predictable, and defects
can be hard to replicate and fix.
5. Delivery delays – Testing can delay the delivery of the software if testing takes longer
than expected or if significant defects are identified.
6. Automated testing limitations – Automated testing tools may have limitations, such as
difficulty in testing certain aspects of the software, and may require significant
maintenance and updates.

Testing Frameworks
A testing framework is a set of guidelines or rules used for creating and designing test
cases. A framework comprises a combination of practices and tools that are designed to help QA professionals test more efficiently.
Examples:
 Linear Automation Test Framework. A linear automation test framework involves
introductory level testing. ...
 Modular-Based Test Framework. Modular-based test frameworks break down test cases
into small modules. ...
 Library Architecture Test Framework. ...
 Keyword-Driven Test Framework. ...
 Data-Driven Test Framework. ...
 Hybrid Test Framework.

A testing framework, also known as a test automation framework, is a set of guidelines, tools, and libraries that facilitate automated testing of software applications. It defines the structure of tests, provides libraries for assertions, and automates the execution of test cases.
Key Components of Testing Frameworks:
1. Test Structure: Frameworks define how tests are organized and written. This
includes naming conventions, setup and teardown methods, and test case structure.
2. Assertion Libraries: These libraries provide functions to check expected
outcomes against actual results. They help determine if the code behaves correctly
under different conditions.
3. Test Execution Management: Frameworks manage the execution of tests,
including running specific test cases, reporting results, and handling test
dependencies.
4. Fixture Management: Many frameworks support fixtures to set up preconditions
and clean up after tests. This ensures tests are isolated and repeatable.
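
As an example of fixture management, pytest fixtures set up preconditions before each test and clean up afterwards; the shopping-cart stand-in below is hypothetical:

    import pytest

    @pytest.fixture
    def cart():
        cart = []      # setup: runs before each test that requests this fixture
        yield cart
        cart.clear()   # teardown: runs after the test, cleaning up state

    def test_add_item(cart):
        cart.append("book")
        assert len(cart) == 1

    def test_cart_starts_empty(cart):
        assert cart == []  # isolation: the previous test did not leak state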
Types of Testing Frameworks:
1. Unit Testing Frameworks: Used for testing individual units or components of
code in isolation. Examples include JUnit (for Java), pytest (for Python), and
NUnit (for .NET).
2. Integration Testing Frameworks: Validate interactions between different parts of
the application. Examples include TestNG (for Java) and Test::Unit (for Ruby).
3. Functional Testing Frameworks: Simulate end-user behavior to ensure the
software functions as expected. Examples include Selenium (for web applications)
and Robot Framework.
4. Performance Testing Frameworks: Measure and evaluate performance metrics
like response times and throughput. Examples include JMeter and Gatling.
Advantages of Using Testing Frameworks:
 Automation: Reduces manual effort in executing tests, allowing for faster and
more frequent testing cycles.
 Consistency: Provides a standardized way to write and structure tests, ensuring
uniformity across the development team.
 Scalability: Enables testing of complex systems by breaking them down into
manageable test cases.
 Integration: Often integrates with continuous integration (CI) tools to automate
the testing process in the software development lifecycle.
Popular Testing Frameworks:
 JUnit: Widely used for Java unit testing.
 pytest: Popular for Python testing due to its simplicity and powerful features.
 Selenium: Dominant framework for web application testing.
 Jest: Framework for JavaScript, particularly used in testing React applications.
 Cucumber: Supports behavior-driven development (BDD) by enabling tests to be
written in plain language.
Software Testing Tools
Software testing tools are tools used to test software. They are often used to ensure thoroughness, consistency, and performance when testing software products, and they can support activities from unit testing through integration testing. Such tools are used to carry out all planned testing activities, and many are available as commercial products. Software testers evaluate the quality of the software with the help of these tools.
Types of Testing Tools
Software testing is of two types: static testing and dynamic testing. The tools used during testing are named accordingly, and can be categorized into the following two types:
1. Static Test Tools: Static test tools support the static testing process. They examine the software without executing it, so no test inputs or outputs are required. Static test tools include the following:
 Flow analyzers: check that data flows consistently from input to output.
 Path tests: find unused code and code with inconsistencies in the software.
 Coverage analyzers: ensure that all logic paths in the software are covered.
 Interface analyzers: check the effects of passing variables and data between modules.
2. Dynamic Test Tools: Dynamic test tools support the dynamic testing process: they exercise the software with real input data. Dynamic test tools include the following:
 Test driver: provides the input data to a module-under-test (MUT); a minimal driver sketch follows this list.
 Test beds: display the source code alongside the program while it executes.
 Emulators: imitate the responses of parts of the system that have not yet been developed.
 Mutation analyzers: test the fault tolerance of the system by deliberately seeding errors into the software's code.
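To show what a test driver does, here is a minimal hand-rolled sketch: it feeds a table of inputs to a module-under-test (MUT) and compares actual output against expected output, which is essentially what unit testing frameworks automate. is_leap_year is a hypothetical MUT used only for illustration.

# Hand-rolled test driver: feeds input data to a module-under-test (MUT)
# and reports PASS/FAIL per case. is_leap_year is a hypothetical MUT.

def is_leap_year(year):
    """Toy module-under-test."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def run_driver(mut, cases):
    """Run the MUT on each (input, expected) pair and count failures."""
    failures = 0
    for arg, expected in cases:
        actual = mut(arg)
        if actual == expected:
            print(f"PASS: {mut.__name__}({arg!r}) -> {actual!r}")
        else:
            failures += 1
            print(f"FAIL: {mut.__name__}({arg!r}) -> {actual!r}, expected {expected!r}")
    return failures

if __name__ == "__main__":
    cases = [(2000, True), (1900, False), (2024, True), (2023, False)]
    raise SystemExit(run_driver(is_leap_year, cases))   # exit code 0 means all passed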
There is one more categorization of software testing tools. According to this classification, software testing
tools are of 10 types:
1. Test Management Tools: Test management tools are used to store information on how testing is to be done, help to plan test activities, and report the status of quality assurance activities. For example, JIRA, Redmine, TestRail, etc.
2. Automated Testing Tools: Automated testing tools help conduct testing activities without human intervention, with more accuracy and less time and effort. For example, Appium, Cucumber, Ranorex, etc.
3. Performance Testing Tools: Performance testing tools help perform performance testing effectively and efficiently. Performance testing is a type of non-functional testing that checks the application for parameters like stability, scalability, speed, etc. For example, WebLOAD, Apache JMeter, NeoLoad, etc.
4. Cross-browser Testing Tools: Cross-browser testing tools help perform cross-browser testing, which lets the tester check whether the website works as intended when accessed through different browser-OS combinations. For example, Testsigma, Testim, Perfecto, etc.
5. Integration Testing Tools: Integration testing tools are used to test the interfaces between modules and detect the bugs there. The main purpose is to check whether the combined modules work together as the client requires. For example, Citrus, FitNesse, TESSY, etc.
6. Unit Testing Tools: Unit testing tools are used to check the functionality of individual modules and to make sure that all independent modules work as expected. For example, JUnit, PHPUnit, NUnit, etc.
7. Mobile Testing Tools: Mobile testing tools are used to test the application for compatibility on
different mobile devices. For example, Appium, Robotium, Test IO, etc.
8. GUI Testing Tools: GUI testing tools are used to test the graphical user interface of the software. For
example, EggPlant, Squish, AutoIT, etc.
9. Bug Tracking Tools: Bug tracking tools help to keep track of the various bugs that come up during the application lifecycle, and to monitor and log all the bugs detected during software testing. For example, Trello, JIRA, GitHub, etc.
10. Security Testing Tools: Security testing tools are used to detect vulnerabilities and safeguard the application against malicious attacks. For example, Netsparker, Vega, ImmuniWeb, etc.