Unit II(1)
Content
Unit II:
o Software Design, Analysis and Testing
o The Software Design Process,
o Design Concepts and Principles,
o Software Modeling and UML,
o Architectural Design,
o Architectural Views and Styles,
o User Interface Design,
o Function oriented Design,
o SA/SD Component Based Design,
o Design Metrics.
o Software Static and Dynamic analysis,
o Code inspections, Software Testing,
o Fundamentals, Software Test Process,
o Testing Levels,
o Test Criteria,
o Test Case Design,
o Test Oracles,
o Test Techniques,
o Testing Frameworks,
o Test Plan,
o Test Metrics,
o Testing Tools.
Software Design
Software design is a mechanism to transform user requirements into a suitable form that helps the
programmer in software coding and implementation. It deals with representing the client's requirements,
as described in the SRS (Software Requirement Specification) document, in a form that is easily
implementable using a programming language.
The software design phase is the first step in the SDLC (Software Development Life Cycle) that shifts the
focus from the problem domain to the solution domain. In software design, we consider the
system to be a set of components or modules with clearly defined behaviors and boundaries.
For software design, the goal is to divide the problem into manageable pieces.
Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level without
bothering about the internal details of the implementation. Abstraction can be applied to existing elements
as well as to the component being designed.
Here, there are two common abstraction mechanisms
1. Functional Abstraction
2. Data Abstraction
Functional Abstraction
i. A module is specified by the function it performs.
ii. The details of the algorithm to accomplish the functions are not visible to the user of the function.
Functional abstraction forms the basis for Function oriented design approaches.
Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis
for Object Oriented design approaches.
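As a small illustrative sketch (the Stack class and its method names are hypothetical, not from the text), data abstraction means that users of a data element see only its operations, never its internal representation:

```python
# Data abstraction: callers use push/pop/peek and never touch the
# underlying list, so the representation could later be swapped for
# a linked list without affecting any caller.
class Stack:
    def __init__(self):
        self._items = []          # hidden internal representation

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2 -- the caller never sees the internal list
```

The same separation underlies functional abstraction: a caller of `push` knows what it does, not how the append is carried out.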
Modularity
Modularity refers to the division of software into separate modules, which are differently named and
addressed and are later integrated to obtain the completely functional software. It is the only
property that allows a program to be intellectually manageable. Single large programs are difficult to
understand and read due to a large number of reference variables, control paths, global variables, etc.
The desirable properties of a modular system are:
o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.
Advantages of Modularity
There are several advantages of Modularity
o It allows large programs to be written by several or different people
o It encourages the creation of commonly used routines to be placed in the library and used by other
programs.
o It simplifies the overlay procedure of loading a large program into main storage.
o It provides more checkpoints to measure progress.
o It provides a framework for complete testing and makes the program more accessible to test.
o It produces well-designed and more readable programs.
Modular Design
Modular design reduces the design complexity and results in easier and faster implementation by
allowing parallel development of various parts of a system. We discuss a different section of modular
design in detail in this section:
1. Functional Independence: Functional independence is achieved by developing functions that
perform only one kind of task and do not excessively interact with other modules. Independence is
important because it makes implementation more accessible and faster. The independent modules are
easier to maintain, test, and reduce error propagation and can be reused in other programs as well. Thus,
functional independence is a good design feature which ensures software quality.
2. Information hiding: The principle of information hiding suggests that modules should be
characterized by the design decisions that each hides from the others. In other words, modules should be
specified and designed so that the data contained within a module is inaccessible to other modules that
have no need for such information.
The use of information hiding as a design criterion for modular systems provides the greatest
benefits when modifications are required during testing and later during software maintenance. This is
because as most data and procedures are hidden from other parts of the software, inadvertent errors
introduced during modifications are less likely to propagate to different locations within the software.
Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are easy to
develop and, later, to change. Structured design methods help developers to deal with the size and
complexity of programs. Analysts generate instructions for the developers about how code should be
composed and how pieces of code should fit together to form a program.
To design a system, there are two possible approaches:
1. Top-down Approach
2. Bottom-up Approach
1. Top-down Approach: This approach starts with the identification of the main components and then
decomposing them into their more detailed sub-components.
2. Bottom-up Approach: A bottom-up approach begins with the lowest-level details and moves up
the hierarchy. This approach is suitable when a system already exists.
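The top-down approach can be sketched in code. In this minimal example (the payroll functions are hypothetical, invented for illustration), the main component is expressed in terms of sub-components, each of which could be refined further:

```python
# Top-down decomposition: start from the main function and break it
# into sub-functions; each sub-function can be refined independently.
def compute_gross(hours, rate):
    return hours * rate

def compute_tax(gross, tax_rate=0.2):
    return gross * tax_rate

def compute_net_pay(hours, rate):
    # The top-level component is written purely in terms of its
    # more detailed sub-components.
    gross = compute_gross(hours, rate)
    return gross - compute_tax(gross)

print(compute_net_pay(40, 10))   # 320.0
```

A bottom-up design would instead start by writing `compute_gross` and `compute_tax` as reusable primitives and only later compose them into `compute_net_pay`.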
Coupling and Cohesion
Module Coupling
In software engineering, the coupling is the degree of interdependence between software modules. Two modules that are
tightly coupled are strongly dependent on each other. However, two modules that are loosely coupled are not dependent on
each other. Uncoupled modules have no interdependence at all between them.
The various types of coupling are described below.
A good design is the one that has low coupling. Coupling is measured by the number of relations between the modules. That
is, the coupling increases as the number of calls between modules increase or the amount of shared data is large. Thus, it can
be said that a design with high coupling will have more errors.
Types of Module Coupling
1. No Direct Coupling: In this case, modules are subordinate to different modules; therefore, there is no direct coupling
between them.
2. Data Coupling: When data of one module is passed to another module, this is called data coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data items such as structure,
objects, etc. When the module passes non-global data structure or entire structure to another module, they are said to be stamp
coupled. For example, passing a structure variable in C or an object in C++ to a module.
4. Control Coupling: Control Coupling exists among two modules if data from one module is used to direct the structure of
instruction execution in another.
5. External Coupling: External Coupling arises when two modules share an externally imposed data format, communication
protocols, or device interface. This is related to communication to external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through some global data items.
7. Content Coupling: Content Coupling exists among two modules if they share code, e.g., a branch from one module into
another module.
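Three of these coupling types can be contrasted in a short sketch (the `Rectangle`/`area` names are hypothetical, chosen only to illustrate the distinctions):

```python
# Data coupling: only the needed scalar values are passed.
def area_data_coupled(width, height):
    return width * height

# Stamp coupling: a whole composite object is passed, even though
# only part of it is needed.
class Rectangle:
    def __init__(self, width, height, color):
        self.width, self.height, self.color = width, height, color

def area_stamp_coupled(rect):
    return rect.width * rect.height      # color is carried along unused

# Common coupling: modules communicate through a shared global item,
# the tightest (and worst) of the three forms shown here.
CURRENT_RECT = Rectangle(3, 4, "red")

def area_common_coupled():
    return CURRENT_RECT.width * CURRENT_RECT.height
```

Of the three, the data-coupled version is easiest to test and reuse, since its behavior depends only on its explicit parameters.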
Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module belong together. Thus, cohesion
measures the strength of relationships between pieces of functionality within a given module. For example, in highly cohesive
systems, functionality is strongly related.
Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or "low cohesion."
1. Functional Cohesion: Functional cohesion is said to exist if the different elements of a module cooperate to
achieve a single function.
2. Sequential Cohesion: A module is said to possess sequential cohesion if the elements of the module form the
components of a sequence, where the output from one component of the sequence is input to the next.
3. Communicational Cohesion: A module is said to have communicational cohesion, if all tasks of the module refer
to or update the same data structure, e.g., the set of functions defined on an array or a stack.
4. Procedural Cohesion: A module is said to possess procedural cohesion if its functions are all parts of
a procedure in which a particular sequence of steps has to be carried out for achieving a goal, e.g., the algorithm for
decoding a message.
5. Temporal Cohesion: When a module includes functions that are associated by the fact that all of them must be
executed at the same time, the module is said to exhibit temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the module perform a similar
operation. For example Error handling, data input and data output, etc.
7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs a set of tasks that are
associated with each other very loosely, if at all.
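The two ends of the cohesion scale can be contrasted in a short sketch (both functions are hypothetical examples, not from the text):

```python
# Functional cohesion: every element cooperates toward one task --
# computing the (population) standard deviation.
def standard_deviation(values):
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    return variance ** 0.5

# Coincidental cohesion: unrelated tasks bundled into one "module"
# for no reason other than convenience.
def misc_utilities(text, numbers):
    print(text.upper())        # output formatting...
    return sorted(numbers)     # ...plus a completely unrelated sort
```

The first function is easy to name, test, and reuse; the second cannot even be described without "and", a classic symptom of low cohesion.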
Coupling vs. Cohesion
o Coupling shows the relationships between modules; cohesion shows the relationships within a module.
o Coupling shows the relative independence between modules; cohesion shows a module's relative functional strength.
o While designing, you should aim for low coupling (dependency among modules should be minimal) and for high
cohesion (a cohesive component/module focuses on a single function, i.e., single-mindedness, with little
interaction with other modules of the system).
o In coupling, modules are linked to other modules; in cohesion, the module focuses on a single thing.
Unified Modeling Language (UML) is a general-purpose modeling language. The main aim of UML is to define a standard
way to visualize the way a system has been designed. It is quite similar to blueprints used in other fields of engineering.
UML is not a programming language, it is rather a visual language.
We use UML diagrams to portray the behavior and structure of a system.
UML helps software engineers, businessmen, and system architects with modeling, design, and analysis.
The Object Management Group (OMG) adopted Unified Modelling Language as a standard in 1997. It’s been
managed by OMG ever since.
The International Organization for Standardization (ISO) published UML as an approved standard in 2005.
UML has been revised over the years and is reviewed periodically.
Essential UML diagrams for system modelling
Use case diagrams, which show the interactions between a system and its environment.
(Figure: Online Shopping System use case diagram)
Sequence diagrams, which show interactions between actors and the system and between system components.
Activity diagrams, which show the activities involved in a process or in the data processing.
Class diagrams, which show the object classes in the system and the associations between these classes.
State diagrams, which show how the system reacts to internal and external events.
Architectural Design
An architectural design defines:
A set of components (e.g., a database, computational modules) that will perform a function required by the
system.
The set of connectors will help in coordination, communication, and cooperation between the components.
Conditions that how components can be integrated to form the system.
Semantic models that help the designer to understand the overall properties of the system.
The use of architectural styles is to establish a structure for all the components of the system.
You can design software architectures at two levels of abstraction, which I call
architecture in the small and architecture in the large:
1. Architecture in the small is concerned with the architecture of individual programs. At this level, we are
concerned with the way that an individual program is decomposed into components. This chapter is mostly
concerned with program architectures.
2. Architecture in the large is concerned with the architecture of complex enterprise systems that include other
systems, programs, and program components. These enterprise systems are distributed over different
computers, which may be owned and managed by different companies.
Figure: The architecture of a packing robot control system (components: Vision System, Object
Identification System, Arm Controller, Gripper Controller, Packaging Selection System, Packing
System, Conveyor Controller).
Architectural style
In building construction, an architectural style is defined as a set of characteristics and features that make a
building or other structure notable or historically identifiable; the term carries over analogously to software.
An architectural style is a set of principles. You can think of it as a coarse-grained pattern that provides an abstract
framework for a family of systems. An architectural style improves partitioning and promotes design reuse by
providing solutions to frequently recurring problems.
According to Architectural Styles CS 377 Introduction to Software Engineering:
Named collection. An architectural style is a named collection of architectural design decisions that are
applicable in a given development context, constrain architectural design decisions that are specific to a
particular system within that context, and elicit beneficial qualities in each resulting system.
Recurring organizational patterns and idioms. Established, shared understanding of common design
forms. Mark of mature engineering field.
Abstraction. Abstraction of recurring composition and interaction characteristics in a set of architectures.
1] Data centered architecture: A data store resides at the center of this architecture and is accessed
frequently by the other components, which update, add, delete, or modify the data present within the store.
The figure illustrates a typical data centered style. The client software accesses a central repository. A variation
of this approach transforms the repository into a blackboard: when data related to a client, or data of
interest to a client, changes, the repository sends notifications to the client software.
This data-centered architecture promotes integrability. This means that existing components can be
changed and new client components can be added to the architecture without concern about other
clients.
Data can be passed among clients using blackboard mechanism.
Advantage of Data centered architecture
Repository of data is independent of clients
Client work independent of each other
It may be simple to add additional clients.
Modification can be very easy
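A minimal blackboard-style repository can be sketched as follows (the `Repository` class and its method names are illustrative assumptions, not a standard API): clients register callbacks and are notified whenever shared data changes.

```python
# Data-centered (blackboard) style: a central repository holds the
# shared data; client components subscribe and are notified of changes.
class Repository:
    def __init__(self):
        self._data = {}
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def put(self, key, value):
        self._data[key] = value
        for notify in self._subscribers:   # blackboard notification
            notify(key, value)

    def get(self, key):
        return self._data.get(key)

repo = Repository()
log = []
repo.subscribe(lambda k, v: log.append((k, v)))   # a client component
repo.put("temperature", 21)
print(log)   # [('temperature', 21)]
```

Note how a new client is added by one `subscribe` call, without touching the repository or the other clients, which is exactly the integrability claim above.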
2] Data flow architecture: This kind of architecture is used when input data is transformed into output data
through a series of computational or manipulative components.
The most common data flow structure is the pipe-and-filter architecture, which has a set of components
called filters connected by pipes.
Pipes are used to transmit data from one component to the next.
Each filter will work independently and is designed to take data input of a certain form and produces data output
to the next filter of a specified form. The filters don’t require any knowledge of the working of neighboring
filters.
If the data flow degenerates into a single line of transforms, then it is termed as batch sequential. This structure
accepts the batch of data and then applies a series of sequential components to transform it.
Advantages of Data Flow architecture
It encourages upkeep, repurposing, and modification.
With this design, concurrent execution is supported.
Disadvantages of Data Flow architecture
It frequently degenerates into a batch sequential system.
It does not suit applications that require greater user engagement.
It is not easy to coordinate two different but related streams.
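The pipe-and-filter idea maps naturally onto Python generators, as in this sketch (the three filters are hypothetical): each filter consumes an input stream and yields an output stream, knowing nothing about its neighbors, and the "pipes" are just iterator connections.

```python
# Pipe-and-filter: each filter is independent and only knows the
# form of its input and output streams.
def read_source(lines):
    for line in lines:
        yield line

def strip_blanks(stream):
    for line in stream:
        if line.strip():          # drop empty lines
            yield line

def to_upper(stream):
    for line in stream:
        yield line.upper()

# Composing the filters forms the pipeline; data flows lazily through it.
pipeline = to_upper(strip_blanks(read_source(["hello", "", "world"])))
print(list(pipeline))   # ['HELLO', 'WORLD']
```

Because each filter is self-contained, filters can be reordered, removed, or reused in other pipelines, which is the maintainability advantage claimed above.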
3] Call and return architecture:
Remote procedure call architecture: The components of a main program or subprogram architecture
are distributed among multiple computers on a network.
Main program or subprogram architecture: The main program structure decomposes into a number of
subprograms or functions organized into a control hierarchy; the main program invokes a number of
subprograms, which can in turn invoke other components.
4] Object Oriented architecture: The components of a system encapsulate data and the operations that must be applied
to manipulate the data. The coordination and communication between the components are established via the message
passing.
Characteristics of Object Oriented architecture
Objects protect the system's integrity.
An object is unaware of the representation of other objects.
Advantage of Object Oriented architecture
It enables the designer to decompose a problem into a collection of autonomous objects.
Other objects are unaware of an object's implementation details, allowing changes to be made without
having an impact on those other objects.
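The encapsulation and message-passing described above can be sketched briefly (the `Account`/`Teller` classes are hypothetical examples): each component hides its data and coordinates with others only through method calls.

```python
# Object-oriented style: components encapsulate data plus the
# operations on that data; coordination happens via message passing
# (method calls), never via direct access to another object's state.
class Account:
    def __init__(self, balance=0):
        self._balance = balance       # hidden state

    def deposit(self, amount):
        self._balance += amount

    def balance(self):
        return self._balance

class Teller:
    # Teller talks to Account only through its messages; it never
    # reads or writes the _balance attribute directly.
    def process_deposit(self, account, amount):
        account.deposit(amount)
        return account.balance()

acct = Account(100)
print(Teller().process_deposit(acct, 50))   # 150
```

If `Account` later stored its balance in integer cents, `Teller` would not need to change, which is the change-isolation advantage listed above.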
5] Layered architecture:
A number of different layers are defined, with each layer performing a well-defined set of operations. Moving
inward, the operations of each layer become progressively closer to the machine instruction set.
At the outer layer, components receive user interface operations; at the inner layers, components
perform operating system interfacing (communication and coordination with the OS).
Intermediate layers provide utility services and application software functions.
One common example of this architectural style is OSI-ISO (Open Systems Interconnection-International
Organisation for Standardisation) communication system.
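The layering discipline, where each layer calls only the layer directly beneath it, can be sketched as follows. The layer names loosely echo a communication stack, but the functions and message formats are invented for illustration, not a real protocol.

```python
# Layered style: each layer uses only the services of the layer
# immediately below it.
def physical_send(payload):                  # innermost layer
    return f"<bits:{payload}>"

def transport_send(message):                 # middle layer: adds framing
    return physical_send(f"[seq=1]{message}")

def application_send(text):                  # outer layer: user-facing
    return transport_send(text.strip())

print(application_send("  hello  "))   # <bits:[seq=1]hello>
```

Because `application_send` knows nothing about `physical_send`, the innermost layer could be replaced (say, with an encrypted transmission) without touching the outer layers.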
Architectural views
Software architecture views are representations of the overall architecture of a software system that are
meaningful to one or more stakeholders. They enable the architecture to be communicated to and understood by
the stakeholders, so they can verify that the system will address their concerns.
Different types of software architecture views:
Logical view: This view shows the system's functionality and how it is organized into components. It is
typically represented using diagrams such as class diagrams, sequence diagrams, and activity diagrams.
Physical view: This view shows the system's physical components, such as hardware and software, and
how they are interconnected. It is typically represented using diagrams such as deployment diagrams and
network diagrams.
Process view: This view shows the system's dynamic behavior, such as how its components interact with
each other and with the environment. It is typically represented using diagrams such as state diagrams and
sequence diagrams.
Development view: This view shows how the system will be developed and maintained. It typically
includes information such as the development process, the tools and technologies used, and the roles and
responsibilities of the different stakeholders.
Operational view: This view shows how the system will be operated and monitored. It typically includes
information such as the operating environment, the performance requirements, and the security
requirements.
4+1 View Model:
The 4+1 View Model is a popular approach to software architecture views. It defines four views:
1. Logical view: This view is the same as the logical view described above.
2. Process view: This view is also the same as the process view described above.
3. Physical view: This view is also the same as the physical view described above.
4. Development view: This view is also the same as the development view described above.
The fifth view is a set of scenarios that illustrate how the system will be used. The scenarios are used to validate
the architecture and to ensure that it meets the needs of the stakeholders.
The 4+1 architectural view model is a popular way to document software architectures because it provides a
comprehensive view of the system from different perspectives.
Here are some examples of how software architecture views can be used:
Developers can use the logical view to understand the system's functionality and how it is organized into
components. This information can help them to write code that is consistent with the architecture and to
identify potential problems.
System engineers can use the physical view to plan for the system's deployment and to ensure that it
meets the performance and reliability requirements.
Testers can use the process view to design test cases that ensure that the system's components interact
correctly with each other and with external systems.
Business analysts can use the scenarios view to understand how the system will be used in real-world
situations and to identify any potential gaps between the system's functionality and the needs of the
business.
Benefits of using software architecture views:
Improved communication: Software architecture views can help to improve communication between
different stakeholders, such as developers, system engineers, and project managers.
Reduced risk: Software architecture views can help to identify and mitigate risks early in the
development process.
Increased quality: Software architecture views can help to ensure that the system is designed in a way
that meets the needs of the stakeholders and that it is reliable, maintainable, and scalable.
Other common software architecture views include:
Context view: The context view shows the system's relationship to its external environment, such as other
systems, users, and devices.
Information view: The information view shows the system's data model and how the data is used and
exchanged between the system's components.
Concurrency view: The concurrency view shows how the system's components interact with each other
concurrently.
Security view: The security view shows how the system is protected from unauthorized access and other
security threats.
User Interface Design
User interface (UI) design is the process designers use to build interfaces in software or computerized devices,
focusing on looks or style. Designers aim to create interfaces which users find easy to use and pleasurable. UI
design covers graphical user interfaces as well as other forms, e.g., voice-controlled interfaces.
UI stands for user interface. It is the bridge between humans and computers. Anything you interact with as a user
is part of the user interface. For example, screens, sounds, overall style, and responsiveness are all elements that
need to be considered in UI design.
The user interface is the front-end application view to which the user interacts to use the software. The software
becomes more popular if its user interface is:
1. Attractive
2. Simple to use
3. Responsive in a short time
4. Clear to understand
5. Consistent on all interface screens
Types of User Interface
1. Command Line Interface: The Command Line Interface provides a command prompt, where the user
types the command and feeds it to the system. The user needs to remember the syntax of the command
and its use.
2. Graphical User Interface: A Graphical User Interface provides a simple interactive interface for working
with the system. A GUI can be a combination of both hardware and software. Using a GUI, the user
interacts with the software.
The analysis and design process of a user interface is iterative and can be represented by a spiral model. The
analysis and design process of user interface consists of four framework activities.
1. User, Task, Environmental Analysis, and Modeling
Initially, the focus is on the profile of the users who will interact with the system, i.e., their understanding, skill,
knowledge, type of user, etc. Based on these profiles, users are grouped into categories, and requirements are
gathered from each category. Based on the requirements, the developer understands how to develop the interface.
Once all the requirements are gathered, a detailed analysis is conducted. In the analysis part, the tasks that the user
performs to establish the goals of the system are identified, described and elaborated. The analysis of the user
environment focuses on the physical work environment. Among the questions to be asked are:
1. Where will the interface be located physically?
2. Will the user be sitting, standing, or performing other tasks unrelated to the interface?
3. Does the interface hardware accommodate space, light, or noise constraints?
4. Are there special human factors considerations driven by environmental factors?
2. Interface Design
The goal of this phase is to define the set of interface objects and actions i.e., control mechanisms that enable the
user to perform desired tasks. Indicate how these control mechanisms affect the system. Specify the action
sequence of tasks and subtasks, also called a user scenario. Indicate the state of the system when the user performs
a particular task. Always follow the three golden rules stated by Theo Mandel. Design issues such as response
time, command and action structure, error handling, and help facilities are considered as the design model is
refined. This phase serves as the foundation for the implementation phase.
3. Interface Construction and Implementation
The implementation activity begins with the creation of a prototype (model) that enables usage scenarios to be
evaluated. As the iterative design process continues, a user interface toolkit that allows the creation of windows,
menus, device interaction, error messages, commands, and many other elements of an interactive environment can
be used to complete the construction of the interface.
4. Interface Validation
This phase focuses on testing the interface. The interface should be able to perform tasks correctly and handle a
variety of tasks. It should achieve all the user's requirements. It should be easy to use and easy to learn, and users
should accept it as a useful one in their work.
Structured Analysis and Structured Design (SA/SD)
As the name itself implies, the SA/SD methodology involves carrying out two distinct activities:
● Structured analysis (SA)
● Structured design (SD).
The roles of structured analysis (SA) and structured design (SD) have been shown schematically in
the figure below.
● The structured analysis activity transforms the SRS document into a graphic model called the
DFD model. During structured analysis, functional decomposition of the system is achieved. The
purpose of structured analysis is to capture the detailed structure of the system as perceived by the
user
● During structured design, all functions identified during structured analysis are mapped to a
module structure. This module structure is also called the high level design or the software
architecture for the given problem. This is represented using a structure chart. The purpose of
structured design is to define the structure of the solution that is suitable for implementation
● The high-level design stage is normally followed by a detailed design stage.
Structured Analysis
During structured analysis, the major processing tasks (high-level functions) of the system are analyzed,
and the data flow among these processing tasks is represented graphically.
The structured analysis technique is based on the following underlying principles:
● Top-down decomposition approach.
● Application of the divide and conquer principle: through this, each high-level function is independently
decomposed into detailed functions.
● Graphical representation of the analysis results using data flow diagrams (DFDs).
What is DFD?
● A DFD is a hierarchical graphical model of a system that shows the different processing activities
or functions that the system performs and the data interchange among those functions.
● Though extremely simple, it is a very powerful tool to tackle the complexity of industry standard
problems.
● A DFD model only represents the data flow aspects and does not show the sequence of execution of
the different functions and the conditions based on which a function may or may not be executed.
● In DFD terminology, each function is called a process or a bubble: think of each function as a
processing station (or process) that consumes some input data and produces some output data.
● DFD is an elegant modeling technique not only to represent the results of structured analysis but
also useful for several other applications.
● Starting with a set of high-level functions that a system performs, a DFD model represents the
subfunctions performed by the functions using a hierarchy of diagrams.
Synchronous and asynchronous operations
● If two bubbles are directly connected by a data flow arrow, then they are synchronous. This means that
they operate at the same speed.
● If two bubbles are connected through a data store, then the speeds of operation of the bubbles are
independent.
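The asynchronous case can be sketched with a queue standing in for the data store (the producer/consumer functions are hypothetical): each "bubble" runs at its own pace because the store buffers data between them.

```python
# Two bubbles decoupled by a data store: the producer writes to the
# store whenever it likes, and the consumer drains the store whenever
# it is ready, so neither dictates the other's speed.
from collections import deque

store = deque()                  # the data store between the bubbles

def producer(items):
    for item in items:
        store.append(item * 2)   # first bubble's processing

def consumer():
    results = []
    while store:
        results.append(store.popleft() + 1)   # second bubble's processing
    return results

producer([1, 2, 3])
print(consumer())   # [3, 5, 7]
```

With a direct data flow arrow instead of the store, the producer's output would have to be consumed immediately, i.e., the two bubbles would be synchronous.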
Data dictionary
● Every DFD model of a system must be accompanied by a data dictionary. A data dictionary lists all
data items that appear in a DFD model.
● A data dictionary lists the purpose of all data items and the definition of all composite data items in
terms of their component data items.
● It includes all data flows and the contents of all data stores appearing on all the DFDs in a DFD
model.
● For the smallest units of data items, the data dictionary simply lists their name and their type.
● Composite data items are expressed in terms of the component data items using certain operators.
● The dictionary plays a very important role in any software development process, especially for the
following reasons:
○ A data dictionary provides a standard terminology for all relevant data for use by the
developers working on a project.
○ The data dictionary helps the developers to determine the definition of different data structures
in terms of their component elements while implementing the design.
○ The data dictionary helps to perform impact analysis. That is, it is possible to determine the
effect of some data on various processing activities and vice versa.
● For large systems, the data dictionary can become extremely complex and voluminous.
● Computer-aided software engineering (CASE) tools come handy to overcome this problem.
● Most CASE tools usually capture the data items appearing in a DFD as the DFD is drawn, and
automatically generate the data dictionary.
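A data dictionary can be modeled as a simple lookup table, as in this sketch. The entries are invented examples; the `+` (composition) and `[a | b]` (selection) notation follows the common DFD convention of expressing composite items in terms of their components.

```python
# A toy data dictionary: composite items are defined in terms of
# their component data items; the smallest units list only a type.
data_dictionary = {
    "customer":       "name + address + phone",
    "name":           "first_name + last_name",
    "payment_method": "[cash | card | cheque]",
    "first_name":     "string",     # smallest unit: just name and type
    "last_name":      "string",
}

def components(item):
    """Split a composite definition into its component data items."""
    return [part.strip() for part in data_dictionary[item].split("+")]

print(components("customer"))   # ['name', 'address', 'phone']
```

This is essentially what a CASE tool automates: as bubbles and data flows are drawn, their data items are captured into a table like this one, which then supports impact analysis queries.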
Self Study:
Data Definition
● The DFD model of a problem consists of many DFDs and a single data dictionary. The DFD model
of a system is constructed by using a hierarchy of DFDs.
● The top level DFD is called the level 0 DFD or the context diagram.
○ This is the most abstract (simplest) representation of the system (highest level). It is the easiest
to draw and understand.
● At each successive lower level DFDs, more and more details are gradually introduced.
● To develop a higher-level DFD model, processes are decomposed into their subprocesses and the
data flow among these subprocesses is identified.
● Level 0 and Level 1 consist of only one DFD each. Level 2 may contain up to 7 separate DFDs,
level 3 up to 49 DFDs, and so on.
● However, there is only a single data dictionary for the entire DFD model.
Level 1 DFD:
● The level 1 DFD usually contains three to seven bubbles.
● The system is represented as performing three to seven important functions.
● To develop the level 1 DFD, examine the high-level functional requirements in the SRS document.
● If there are three to seven high level functional requirements, then each of these can be directly
represented as a bubble in the level 1 DFD.
● Next, examine the input data to these functions and the data output by these functions as
documented in the SRS document and represent them appropriately in the diagram.
Decomposition:
● Each bubble in the DFD represents a function performed by the system.
● The bubbles are decomposed into subfunctions at the successive levels of the DFD model.
Decomposition of a bubble is also known as factoring or exploding a bubble.
● Each bubble at any level of DFD is usually decomposed to anything from three to seven bubbles.
Decomposition of a bubble should be carried on until a level is reached at which the function of the
bubble can be described using a simple algorithm.
Structured Design
● The aim of structured design is to transform the results of the structured analysis into a structure
chart.
● A structure chart represents the software architecture.
● The various modules making up the system, the module dependency (i.e. which module calls which
other modules), and the parameters that are passed among the different modules.
● The structure chart representation can be easily implemented using some programming language.
● The basic building blocks using which structure charts are designed are as following:
○ Rectangular boxes: A rectangular box represents a module.
○ Module invocation arrows: An arrow connecting two modules implies that during program
execution control is passed from one module to the other in the direction of the connecting
arrow.
○ Data flow arrows: These are small arrows appearing alongside the module invocation arrows;
they represent the fact that the named data passes from one module to the other in the direction
of the arrow.
○ Library modules: A library module is usually represented by a rectangle with double edges.
Libraries comprise the frequently called modules.
○ Selection: The diamond symbol represents the fact that one module of several modules
connected with the diamond symbol is invoked depending on the outcome of the condition
attached with the diamond symbol.
○ Repetition: A loop around the control flow arrows denotes that the respective modules are
invoked repeatedly.
● In any structure chart, there should be one and only one module at the top, called the root.
● There should be at most one control relationship between any two modules in the structure chart.
This means that if module A invokes module B, module B cannot invoke module A.
Transform Analysis:
● Transform analysis identifies the primary functional components (modules) and the input and output
data for these components.
● The first step in transform analysis is to divide the DFD into three types of parts:
○ Input (afferent branch)
○ Processing (central transform)
○ Output (efferent branch)
● In the next step of transform analysis, the structure chart is derived by drawing one functional
component each for the central transform, the afferent and efferent branches.
● These are drawn below a root module, which would invoke these modules.
● In the third step of transform analysis, the structure chart is refined by adding the subfunctions
required by each of the high-level functional components.
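The three-step mapping above can be sketched in code. The sketch below is a hypothetical "compute average" system: one module each for the afferent branch, the central transform, and the efferent branch, invoked in order by a root module (all module names are invented for the illustration).

```python
# Minimal sketch of a structure chart derived by transform analysis.
# Module names (read_numbers, compute_average, format_result) are
# hypothetical stand-ins for the afferent branch, central transform,
# and efferent branch of an imaginary "compute average" DFD.

def read_numbers(raw):
    """Afferent branch: convert raw input into internal form."""
    return [float(x) for x in raw.split()]

def compute_average(numbers):
    """Central transform: the primary processing of the system."""
    return sum(numbers) / len(numbers)

def format_result(avg):
    """Efferent branch: convert internal form into output form."""
    return f"average = {avg:.2f}"

def root(raw):
    """Root module: invokes the three branches in turn."""
    numbers = read_numbers(raw)
    avg = compute_average(numbers)
    return format_result(avg)

print(root("1 2 3"))  # average = 2.00
```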
Transaction Analysis:
● Transaction analysis is an alternative to transform analysis and is useful while designing transaction
processing programs.
● A transaction allows the user to perform some specific type of work by using the software.
● For example, ‘issue book’, ‘return book’, ‘query book’, etc., are transactions.
● As in transform analysis, first all data entering into the DFD need to be identified.
● In a transaction-driven system, different data items may pass through different computation paths
through the DFD.
● This is in contrast to a transform centered system where each data item entering the DFD goes
through the same processing steps.
● Each different way in which input data is processed is a transaction. For each identified
transaction, trace the input data to the output.
● All the traversed bubbles belong to the transaction.
● These bubbles should be mapped to the same module on the structure chart.
● In the structure chart, draw a root module and below this module draw each identified
transaction as a module.
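The resulting structure chart can be sketched as a root module that dispatches to one module per transaction. The code below follows the library example from the text ('issue book', 'return book', 'query book'); the handler bodies are placeholders invented for the illustration.

```python
# Sketch of a transaction-centered structure chart: the root module
# inspects the transaction type and invokes the module handling it.

def issue_book(book_id):
    return f"issued {book_id}"

def return_book(book_id):
    return f"returned {book_id}"

def query_book(book_id):
    return f"status of {book_id}: available"

# One entry per identified transaction.
TRANSACTIONS = {
    "issue": issue_book,
    "return": return_book,
    "query": query_book,
}

def root(transaction_type, book_id):
    """Root module: dispatch to the module for this transaction."""
    handler = TRANSACTIONS.get(transaction_type)
    if handler is None:
        raise ValueError(f"unknown transaction: {transaction_type}")
    return handler(book_id)
```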
Detailed Design:
● During detailed design the pseudo code description of the processing and the different data structures
are designed for the different modules of the structure chart.
● These are usually described in the form of module specifications (MSPEC). MSPEC is usually
written using structured English.
● The MSPEC for the non-leaf modules describe the different conditions under which the
responsibilities are delegated to the lower level modules.
● The MSPEC for the leaf-level modules should describe in algorithmic form how the primitive
processing steps are carried out.
● To develop the MSPEC of a module, it is usually necessary to refer to the DFD model and the SRS
document to determine the functionality of the module.
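As an illustration, here is a hypothetical MSPEC for a leaf-level module of a library system, written in structured English (the module name, its inputs, and the fine rule are all invented for the example):

```
MODULE: compute-fine              (leaf-level module)
INPUT:  due-date, return-date
OUTPUT: fine-amount
PROCESSING:
    IF return-date <= due-date THEN
        SET fine-amount TO 0
    ELSE
        SET days-late TO return-date - due-date
        SET fine-amount TO days-late * fine-per-day
    END IF
    RETURN fine-amount
```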
Design Review:
● After a design is complete, the design is required to be reviewed.
● The review team usually consists of members with design, implementation, testing, and
maintenance perspectives.
● The review team checks the design documents especially for the following aspects:
○ Traceability: Whether each bubble of the DFD can be traced to some module in the
structure chart and vice versa.
○ Correctness: Whether all the algorithms and data structures of the detailed design are correct.
○ Maintainability: Whether the design can be easily maintained in future.
○ Implementation: Whether the design can be easily and efficiently implemented.
● After the points raised by the reviewers are addressed by the designers, the design document
becomes ready for implementation.
Design Metrics
Design metrics quantify attributes of a software design so that its quality can be assessed before
implementation. Commonly used design metrics include measures of coupling and cohesion, the fan-in and
fan-out of modules, the depth and width of the structure chart, and complexity measures such as cyclomatic
complexity. Low coupling, high cohesion, and moderate fan-out are generally taken as indicators of a good
design.
Code inspections
The development of any software application/product goes through SDLC (Software Development Life
Cycle), where every phase is important and needs to be followed properly to develop a quality software
product. Inspection is an important element that has a great impact on the software development process.
During the coding phase, the software development team not only develops the software application but
also checks the code for any errors; this activity is called code verification. Code verification checks the
software code in all aspects and finds out the errors that exist in the code. Generally, there are two types of
code verification techniques:
1. Dynamic technique – It is performed by executing some test data and the outputs of the program are
monitored to find errors in the software code.
2. Static technique – It is performed without actually executing the program on test data; at most, the
program is executed conceptually. Code reading, static analysis, symbolic execution, code inspection,
reviews, etc. are some of the commonly used static techniques.
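A tiny illustration of a static technique: the code below inspects a source fragment without running it, using Python's standard ast module to flag functions that lack a docstring. Real static analyzers perform far richer checks; this is only a sketch, and the sample source is invented.

```python
# Static analysis sketch: examine code without executing it.
import ast

SOURCE = '''
def documented():
    """This one has a docstring."""
    return 1

def undocumented():
    return 2
'''

def functions_missing_docstrings(source):
    """Return names of functions that have no docstring."""
    tree = ast.parse(source)
    return [node.name
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and ast.get_docstring(node) is None]

print(functions_missing_docstrings(SOURCE))  # ['undocumented']
```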
Code Inspection :
Code inspection is a type of static testing that reviews the software code and examines it for any errors.
By catching errors early, it reduces defect multiplication and avoids the higher cost of later-stage error
detection. Code inspection comes under the review process of an application.
How does it work?
Moderator, Reader, Recorder, and Author are the key members of an Inspection team.
Related documents are provided to the inspection team who then plan the inspection meeting and
coordinate with inspection team members.
If the inspection team is not aware of the project, the author provides an overview of the project and
code to inspection team members.
Then each inspection team member performs code inspection by following an inspection checklist.
After completion of the code inspection, conduct a meeting with all team members and analyze the
reviewed code.
Purpose of code inspection :
1. It checks for any error that is present in the software code.
2. It identifies any required process improvement.
3. It checks whether the coding standard is followed or not.
4. It involves peer examination of codes.
5. It documents the defects in the software code.
Advantages Of Code Inspection :
Improves overall product quality.
Discovers the bugs/defects in software code.
Marks any process enhancement in any case.
Finds and removes defects efficiently and quickly.
Helps to learn from previous defects.
Disadvantages of Code Inspection:
Requires extra time and planning.
The process is a little slower.
What is Software Testing?
Software testing can be stated as the process of verifying and validating whether a software or
application is bug-free, meets the technical requirements as guided by its design and development, and meets
the user requirements effectively and efficiently by handling all the exceptional and boundary cases.
The process of software testing aims not only at finding faults in the existing software but also at
finding measures to improve the software in terms of efficiency, accuracy, and usability.
Software Testing is a method to assess the functionality of the software program. The process checks
whether the actual software matches the expected requirements and ensures the software is bug-free. The
purpose of software testing is to identify the errors, faults, or missing requirements in contrast to actual
requirements. It mainly aims at measuring the specification, functionality, and performance of a software
program or application.
Software testing can be divided into two steps:
1. Verification: It refers to the set of tasks that ensure that the software correctly implements a specific
function. It means “Are we building the product right?”.
2. Validation: It refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements. It means “Are we building the right product?”.
Importance of Software Testing:
Defects can be identified early: Software testing is important because if there are any bugs they can be
identified early and can be fixed before the delivery of the software.
Improves quality of software: Software Testing uncovers the defects in the software, and fixing them
improves the quality of the software.
Increased customer satisfaction: Software testing ensures reliability, security, and high performance
which results in saving time, costs, and customer satisfaction.
Helps with scalability: Non-functional testing helps to identify scalability issues and the point at
which an application might stop working.
Saves time and money: After the application is launched it will be very difficult to trace and resolve
the issues, as performing this activity will incur more costs and time. Thus, it is better to conduct
software testing at regular intervals during software development.
Different Types Of Software Testing
Test Criteria
In software engineering, test criteria refer to the standards or guidelines used to determine whether a software
product or system has met specified requirements and is ready for release or further development stages. Test
criteria are essential for defining the conditions under which testing is conducted and for making decisions
about the quality and readiness of the software. Here are the key types of test criteria commonly used in
software engineering:
1. Functional Test Criteria:
Requirements Coverage: Ensures that all functional requirements specified for the software
have been tested.
Boundary Conditions: Tests the software behavior at the boundaries of input ranges to verify
correct handling of edge cases.
Use Case Coverage: Verifies that all defined use cases of the software have been tested and
behave as expected.
Error Handling: Validates that the software handles expected and unexpected errors
gracefully.
2. Non-Functional Test Criteria:
Performance Criteria: Sets performance targets such as response time, throughput, and
resource utilization that the software must meet under various conditions.
Usability Criteria: Evaluates the ease of use, user interface design, and overall user experience
of the software.
Reliability Criteria: Defines reliability metrics such as mean time between failures (MTBF) or
mean time to failure (MTTF) that the software should achieve.
Security Criteria: Specifies security requirements and checks for vulnerabilities such as
unauthorized access or data breaches.
3. Code Coverage Criteria:
Statement Coverage: Measures the percentage of executable statements in the source code that
have been executed during testing.
Branch Coverage: Ensures that all possible branches and decision points in the code have been
exercised during testing.
Path Coverage: Verifies that every possible path through the code has been executed, ensuring
comprehensive testing.
4. Regression Test Criteria:
Critical Path Testing: Focuses on testing the most critical functionalities or modules of the
software to ensure they work correctly after changes.
Risk-Based Testing: Prioritizes testing efforts based on identified risks to the software,
ensuring that high-risk areas receive more attention.
Test Case Selection: Determines which test cases need to be rerun based on the impact of
changes made to the software.
5. Acceptance Test Criteria:
Acceptance Criteria: Defines the conditions that must be met for the software to be accepted
by stakeholders or customers.
Exit Criteria: Specifies the conditions that must be satisfied before exiting a testing phase,
such as passing certain test cases or meeting specific quality metrics.
6. Quality Criteria:
Defect Criteria: Sets thresholds for acceptable defect rates or severity levels that the software
must meet to be considered of high quality.
Compliance Criteria: Ensures that the software complies with relevant industry standards,
regulations, or internal policies.
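Among the criteria above, the code coverage criteria lend themselves to a concrete illustration. The sketch below shows how test inputs can be chosen so that every branch outcome of a function is exercised at least once; the function and its thresholds are hypothetical.

```python
# Branch coverage illustration: the function has two decision points,
# and the three inputs below together take every branch outcome.

def classify(temperature):
    if temperature < 0:          # decision 1: true / false
        category = "freezing"
    elif temperature < 25:       # decision 2: true / false
        category = "mild"
    else:
        category = "hot"
    return category

# Inputs exercising every branch outcome:
#   -5 -> decision 1 true             ("freezing")
#   10 -> decision 1 false, 2 true    ("mild")
#   30 -> decision 1 false, 2 false   ("hot")
assert classify(-5) == "freezing"
assert classify(10) == "mild"
assert classify(30) == "hot"
```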
3.1. Entry and Exit Criteria in Test Planning
Project requirements are important because understanding the project goals, functionalities, and constraints is
essential to formulate an effective testing strategy. The testing team should have a well-defined scope for the
testing process. It includes the software components to be tested, the testing techniques to be used, and the level
of testing to be performed.
A comprehensive test strategy outlining the overall testing approach should also be formulated in test planning.
It includes details about the testing objectives, test levels, test types, test environments, and resource allocation.
To conclude this phase, we should have a resource allocation plan. It’s a specification of how to allocate testers,
test environments, and testing tools for the upcoming activities per the test plan. That’s another exit criterion.
The test plan document should be available to all team members and stakeholders for reference and guidance
during testing.
3.2. Entry and Exit Criteria in Test Design
During the test design phase, the testing team transforms the testing strategy defined in test planning into
tangible test cases and test scripts.
This phase also has several entry and exit criteria:
The test plan, approved in the test planning phase, serves as the foundation for test design. It provides us with
essential information about testing objectives, scope, and strategies that guide the creation of test cases.
The testing team must thoroughly analyze the project requirements and ensure the test cases cover all specified
functionalities. Test data required for executing the test cases should be prepared and available in the testing
environment. This data should be relevant to the test scenarios.
The exit criteria of the test design phase include the development, review, and approval of the test cases. Each
case should undergo a thorough review process to identify and address any defects or ambiguities before
approval. The test design phase is complete when we finalize the test design document containing test cases,
test scenarios, and test data and get it ready for execution.
Meeting the entry criteria ensures that test cases and data are well-prepared and aligned with the testing
strategy before execution begins. The exit criteria in test design act as checkpoints to verify that the test cases
and associated documents have been created and reviewed according to the defined standards.
3.3. Entry and Exit Criteria in Test Execution
Test execution is the phase where the software’s actual testing occurs. During this phase, the testing team
executes the prepared test cases, records test results, and logs defects.
Test cases must be fully developed and approved in the test design phase before test execution begins. The test
data required for executing the test cases should be available and relevant to the test scenarios. Let’s look at the
visual representation of various entry and exit criteria for the test execution phase:
The testing environment must be set up correctly, and all prerequisites, including the availability of the
application under test, should be met before test execution starts. We should record test results, including
pass/fail status and metrics such as test execution time and defect density.
The test execution phase concludes when all identified test cases have been executed, and the test coverage is
adequate. All defects encountered during test execution should be logged in the defect tracking system with
appropriate descriptions and reproducing steps. Test results and metrics recorded during test execution should
be reviewed to identify trends or patterns requiring further investigation or action.
Meeting the entry criteria ensures the testing environment is set up correctly and test cases and data are ready
for execution. The exit criteria in test execution confirm that the test execution phase has been completed
according to the defined testing scope and objectives.
3.4. Entry and Exit Criteria in Test Closure
The test closure phase marks the conclusion of the software testing effort for a specific project. During this
phase, the testing team completes the testing activities, summarises the testing outcomes, and gathers
valuable insights for future improvements.
Let’s look at the prerequisites for entering the phase and the conditions for successfully completing it:
We should thoroughly review and analyze the test results and metrics we record during testing to identify
trends, patterns, and overall testing outcomes.
The test closure phase concludes when relevant stakeholders finalize, review, and approve the test closure
report. Valuable insights and lessons learned during testing should be documented to guide future testing
efforts.
Meeting the entry criteria means we have completed the testing activities successfully. The exit criteria in test
closure indicate we have all the necessary exit documents, including the test closure report and lessons.
4. Importance of Setting Clear Entry and Exit Criteria
The entry criteria act as a quality gate that ensures the testing phase begins only when we meet all the
prerequisites. This helps avoid premature or incomplete testing and ensures that the test environment, data, and
resources are in place for efficient testing.
With proper entry criteria, we can focus on actual test execution rather than dealing with unprepared or unstable
environments. This enables us to provide accurate and reliable test results and bug reports.
Exit criteria also help identify potential risks and defects we need to address before moving on to the next
phase. They ensure the software product meets the required quality standards before delivery.
5. Best Practices for Defining Entry and Exit Criteria
Defining entry and exit criteria requires careful consideration and collaboration among the testing team,
stakeholders, and project managers.
The best practices are:
Defining specific and measurable criteria
Involving stakeholders and test team collaboration
Periodic review and adaptation for improvement
Entry and exit criteria should be clear, specific, and measurable. We should avoid vague or subjective
criteria that may lead to misunderstandings or differing interpretations.
We should also incorporate input from stakeholders, developers, and other team members when defining the
criteria. Collaborative discussions can help us identify critical factors and ensure that all perspectives are
considered in the criteria definition.
We also need to review and update the entry and exit criteria regularly. As the project progresses or changes, we
might need to revise specific criteria to align with new requirements or project goals. Continuous
improvement ensures that the criteria remain relevant and effective throughout the testing process.
Test Oracles
A test oracle is a mechanism for deciding whether a program behaved correctly on a given test. Ideally, we
want an automated oracle, which always gives the correct answer. However, oracles are often human beings,
who mostly calculate by hand what the output of the program should be. Because it is often very difficult to
determine whether the observed behavior corresponds to the expected behavior, human oracles may make
mistakes. Consequently, when there is a discrepancy between the program's output and the expected result,
we must verify the result produced by the oracle before declaring that there is a defect in the program.
Human oracles typically use the program's specifications to decide what the correct behavior of the
program should be. To help the oracle determine the correct behavior, it is important that the behavior of the
system or component is explicitly specified and that the specification itself is error-free, i.e., it actually
specifies the true and correct behavior.
For some systems, oracles are automatically generated from the specifications of programs or
modules. With such oracles, we are assured that the output of the oracle conforms to the specifications.
However, even this approach does not solve all our problems, as there is a possibility of errors in the
specifications. An oracle generated from the specifications will produce correct results only if the
specifications themselves are correct; if the specifications contain errors, the oracle will be unreliable. In
addition, systems that generate oracles from specifications require formal specifications, which are often
not produced during design.
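A sketch of an automated oracle: the "program under test" is the fast implementation being checked, and the oracle is a slower but transparently correct computation derived directly from a specification ("return the input list sorted in ascending order"). Both function names and the specification are invented for the illustration.

```python
# Oracle sketch: compare the program under test against a reference
# computation taken straight from the specification.

def program_under_test(items):
    # The implementation being tested (here, Python's built-in sort).
    return sorted(items)

def oracle(items):
    # Reference computation from the specification: repeatedly remove
    # the minimum. Slow, but obviously correct.
    remaining = list(items)
    result = []
    while remaining:
        smallest = min(remaining)
        remaining.remove(smallest)
        result.append(smallest)
    return result

def run_test(items):
    """A test passes when program and oracle agree."""
    return program_under_test(items) == oracle(items)

print(run_test([3, 1, 2]))  # True when the program agrees with the oracle
```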
Test Techniques
Software testing techniques are methods or approaches used to design and execute test
cases with the goal of uncovering defects and ensuring the quality of software products.
These techniques help testers maximize test coverage, minimize redundancy, and
effectively validate software behavior. Here are some commonly used software testing
techniques:
1. Manual testing – Involves manual inspection and testing of the software by a human
tester.
2. Automated testing – Involves using software tools to automate the testing process.
3. Functional testing – Tests the functional requirements of the software to ensure they
are met.
4. Non-functional testing – Tests non-functional requirements such as performance,
security, and usability.
5. Unit testing – Tests individual units or components of the software to ensure they are
functioning as intended.
6. Integration testing – Tests the integration of different components of the software to
ensure they work together as a system.
7. System testing – Tests the complete software system to ensure it meets the specified
requirements.
8. Acceptance testing – Tests the software to ensure it meets the customer’s or end-
user’s expectations.
9. Regression testing – Tests the software after changes or modifications have been
made to ensure the changes have not introduced new defects.
10. Performance testing – Tests the software to determine its performance characteristics
such as speed, scalability, and stability.
11. Security testing – Tests the software to identify vulnerabilities and ensure it meets
security requirements.
12. Exploratory testing – A type of testing where the tester actively explores the
software to find defects, without following a specific test plan.
13. Boundary value testing – Tests the software at the boundaries of input values to
identify any defects.
14. Usability testing – Tests the software to evaluate its user-friendliness and ease of use.
15. User acceptance testing (UAT) – Tests the software to determine if it meets the end-
user’s needs and expectations.
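Boundary value testing (technique 13 above) is easy to make concrete. Suppose a hypothetical requirement says ages 18 to 65 inclusive are eligible; boundary value analysis picks test inputs at and immediately around each boundary.

```python
# Boundary value testing sketch. The eligibility rule (18..65 inclusive)
# is a made-up requirement for illustration.

def is_eligible(age):
    return 18 <= age <= 65

# Values just below, on, and just above each boundary:
boundary_cases = {
    17: False,  # just below lower boundary
    18: True,   # lower boundary
    19: True,   # just above lower boundary
    64: True,   # just below upper boundary
    65: True,   # upper boundary
    66: False,  # just above upper boundary
}

for age, expected in boundary_cases.items():
    assert is_eligible(age) == expected, f"failed at age {age}"
print("all boundary cases pass")
```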
Software testing techniques are the ways employed to test the application under test
against the functional or non-functional requirements gathered from business. Each
testing technique helps to find a specific type of defect. For example, Techniques that
may find structural defects might not be able to find the defects against the end-to-end
business flow. Hence, multiple testing techniques are applied in a testing project to
conclude it with acceptable quality.
Principles Of Testing
Below are the principles of software testing:
1. All the tests should meet the customer’s requirements.
2. To keep testing unbiased, the software should be tested by an independent third party.
3. Exhaustive testing is not possible; instead, we need the optimal amount of testing based on
the risk assessment of the application.
4. All the tests to be conducted should be planned before implementing them.
5. It follows the Pareto rule (80/20 rule) which states that 80% of errors come from 20%
of program components.
6. Start testing with small parts and extend it to large parts.
Types Of Software Testing Techniques
There are two main categories of software testing techniques:
1. Static Testing Techniques are testing techniques that are used to find defects in an
application under test without executing the code. Static Testing is done to avoid
errors at an early stage of the development cycle thus reducing the cost of fixing them.
2. Dynamic Testing Techniques are testing techniques that are used to test the dynamic
behaviour of the application under test, that is, by executing the code base. The main
purpose of dynamic testing is to test the application with dynamic inputs – some of
which may be allowed as per the requirements (positive testing) and some of which are
not allowed (negative testing).
Each testing technique has further types as showcased in the below diagram. Each one of
them will be explained in detail with examples below.
(Diagram: Testing Techniques)
(Figure: Decision Table)
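Decision table testing maps each combination of conditions to an expected action, and the table itself can then drive the tests. The login rules below are hypothetical, invented for the illustration.

```python
# Decision table sketch: conditions -> expected action.

# (valid_username, valid_password) -> expected action
decision_table = {
    (True,  True):  "grant access",
    (True,  False): "show password error",
    (False, True):  "show username error",
    (False, False): "show username error",
}

def login(valid_username, valid_password):
    """Hypothetical system under test."""
    if not valid_username:
        return "show username error"
    if not valid_password:
        return "show password error"
    return "grant access"

# The table drives one test per condition combination.
for conditions, expected in decision_table.items():
    assert login(*conditions) == expected, f"failed for {conditions}"
print("all decision table rules pass")
```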
4. Use case-based Testing: This technique helps us to identify test cases that execute the
system as a whole- like an actual user (Actor), transaction by transaction. Use cases are a
sequence of steps that describe the interaction between the Actor and the system. They
are always defined in the language of the Actor, not the system. This testing is most
effective in identifying integration defects. Use cases also define any preconditions and
postconditions of the process flow. The ATM example can be tested via a use case.
5. State Transition Testing: It is used where an application under test or a part of it can
be treated as FSM or finite state machine. Continuing the simplified ATM example
above, We can say that ATM flow has finite states and hence can be tested with the State
transition technique. There are 4 basic things to consider –
1. States a system can achieve
2. Events that cause the change of state
3. The transition from one state to another
4. Outcomes of change of state
A state event pair table can be created to derive test conditions – both positive and
negative.
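Such a state–event pair table can be written directly in code, and tests then walk event sequences and check the resulting state. The sketch below models the simplified ATM example; the state and event names are illustrative.

```python
# State transition testing sketch for a simplified ATM.
# (state, event) -> next state
TRANSITIONS = {
    ("idle",          "insert_card"):   "card_inserted",
    ("card_inserted", "enter_pin_ok"):  "authenticated",
    ("card_inserted", "enter_pin_bad"): "idle",        # card returned
    ("authenticated", "withdraw"):      "dispensing",
    ("dispensing",    "take_cash"):     "idle",
}

def run(events, state="idle"):
    """Apply a sequence of events; invalid transitions raise ValueError."""
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"invalid event {event!r} in state {state!r}")
        state = TRANSITIONS[key]
    return state

# Positive test: a full withdrawal ends back in 'idle'.
assert run(["insert_card", "enter_pin_ok", "withdraw", "take_cash"]) == "idle"

# Negative test: withdrawing before authentication is rejected.
try:
    run(["insert_card", "withdraw"])
    assert False, "expected an invalid transition"
except ValueError:
    pass
```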
(Figure: State Transition)
Testing Frameworks
A testing framework is a set of guidelines or rules used for creating and designing test
cases. A framework is comprised of a combination of practices and tools that are
designed to help QA professionals test more efficiently.
Examples:
● Linear Automation Test Framework – involves introductory-level testing.
● Modular-Based Test Framework – breaks down test cases into small modules.
● Library Architecture Test Framework
● Keyword-Driven Test Framework
● Data-Driven Test Framework
● Hybrid Test Framework
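The data-driven idea can be sketched with Python's built-in unittest framework: the test data lives in a table, and a single test method iterates over it with subTest(), so each row is reported separately. The function under test is a trivial stand-in.

```python
# Data-driven test framework sketch using the standard unittest module.
import unittest

def add(a, b):
    """Stand-in for the function under test."""
    return a + b

class DataDrivenAddTest(unittest.TestCase):
    # The data table drives the test; adding a case needs no new code.
    cases = [
        (1, 2, 3),
        (0, 0, 0),
        (-1, 1, 0),
    ]

    def test_add(self):
        for a, b, expected in self.cases:
            with self.subTest(a=a, b=b):
                self.assertEqual(add(a, b), expected)
```

Run with `python -m unittest <filename>` to execute each table row as a separately reported sub-test.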