Software Engineering R18


UNIT- I

Introduction to Software Engineering

1.1 The evolving role of software:

Software Evolution is a term which refers to the process of developing software initially and then updating it over time for various reasons, for example to add new features or to remove obsolete functionality. The evolution process includes the fundamental activities of change analysis, release planning, system implementation, and releasing a system to customers.

The cost and impact of these changes are assessed to see how much of the system is affected by the change and how much it might cost to implement the change. If the proposed changes are accepted, a new release of the software system is planned. During release planning, all the proposed changes (fault repair, adaptation, and new functionality) are considered.

A decision is then made on which changes to implement in the next version of the system. The process of change implementation is an iteration of the development process in which the revisions to the system are designed, implemented, and tested.

The necessity of Software Evolution: Software evolution is necessary for the following reasons:

a. Change in requirement with time: With the passage of time, an organization's needs and modus operandi can change substantially, so in this frequently changing time the tools (software) it uses need to change to maximize performance.

b. Environment change: As the working environment changes, the things (tools) that enable us to work in that environment must change proportionally. The same happens in the software world: as the working environment changes, organizations need to reintroduce old software with updated features and functionality to adapt to the new environment.

c. Errors and bugs: As deployed software ages within an organization, its precision decreases and its ability to bear an increasingly complex workload continually degrades. So, in that case, it becomes necessary to avoid the use of obsolete and aged software. All such obsolete software needs to undergo the evolution process in order to become robust enough for the workload complexity of the current environment.
d. Security risks: Using outdated software within an organization may put it on the verge of various software-based cyberattacks and could illegally expose confidential data associated with the software in use. So, it becomes necessary to avoid such security breaches through regular assessment of the security patches/modules used within the software. If the software isn't robust enough to withstand currently occurring cyberattacks, it must be changed (updated).

e. For having new functionality and features: In order to improve performance, speed up data processing, and add other functionality, an organization needs to continuously evolve the software throughout its life cycle so that stakeholders and clients of the product can work efficiently.

1.2 Laws used for Software Evolution:

1. Law of continuing change:
This law states that any software system that represents some real-world reality undergoes continuous change or becomes progressively less useful in that environment.

2. Law of increasing complexity:
As an evolving program changes, its structure becomes more complex unless effective efforts are made to avoid this phenomenon.
3. Law of conservation of organizational stability:
Over the lifetime of a program, the rate of development of that program is approximately constant and independent of the resources devoted to system development.
4. Law of conservation of familiarity:
This law states that during the active lifetime of the program, the changes made in successive releases are almost constant.

1.3 Changing Nature of Software:

Software is a set of instructions (computer programs) that, when executed, provide desired features, function, and performance, together with data structures that enable the programs to adequately manipulate information, and documents that describe the operation and use of the programs.

Characteristics of software:
Some characteristics of software are given below:

1. Functionality
2. Reliability
3. Usability
4. Efficiency
5. Maintainability
6. Portability

Changing Nature of Software:


Nowadays, seven broad categories of computer software present continuing challenges for software engineers, as described below:
1. System Software:
System software is a collection of programs written to service other programs. Some system software processes complex but determinate information structures; other system applications process largely indeterminate data. The system software area is characterized by heavy interaction with computer hardware, which requires scheduling, resource sharing, and sophisticated process management.
2. Application Software:
Application software is defined as programs that solve a specific business need. Applications in this area process business or technical data in a way that facilitates business operations or management/technical decision making. In addition to conventional data processing applications, application software is used to control business functions in real time.
3. Engineering and Scientific Software:
This software is used to facilitate engineering functions and tasks. However, modern applications within the engineering and scientific area are moving away from conventional numerical algorithms. Computer-aided design, system simulation, and other interactive applications have begun to take on real-time and even system software characteristics.
4. Embedded Software:
Embedded software resides within a system or product and is used to implement and control features and functions for the end user and for the system itself. Embedded software can perform limited and esoteric functions or provide significant function and control capability.

5. Product-line Software:
Designed to provide a specific capability for use by many different customers, product-line software can focus on a limited and esoteric marketplace or address the mass consumer market.
6. Web Application:
A web application is a client-server computer program in which the client runs in a web browser. In their simplest form, web apps can be little more than a set of linked hypertext files that present information using text and limited graphics. However, as e-commerce and B2B applications grow in importance, web apps are evolving into sophisticated computing environments that not only provide standalone features, computing functions, and content to the end user, but also integrate with corporate databases and business applications.
7. Artificial Intelligence Software:
Artificial intelligence software makes use of nonnumerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis. Applications within this area include robotics, expert systems, pattern recognition, artificial neural networks, theorem proving, and game playing.

1.4 Software Myths:


Most experienced experts have seen myths or superstitions (false beliefs or interpretations) and misleading attitudes that create major problems for management and technical people. The types of software-related myths are listed below.

Types of Software Myths

(i) Management Myths:

Myth 1:

We have all the standards and procedures available for software development.

Fact:

● Software experts do not know all the requirements for software development in advance.
● All existing processes are incomplete, as new software development is based on new and different problems.

Myth 2:

The addition of the latest hardware will improve software development.

Fact:

● The role of the latest hardware in standard software development is not very significant; computer-aided software engineering (CASE) tools are more important than hardware for producing quality and productivity.
● Hence, hardware resources alone are often misused.
Myth 3:

The addition of more people and programmers to software development can help meet project deadlines (if lagging behind).

Fact:
● If software is late, adding more people will merely make the problem worse. This is because the people already working on the project must spend time educating the newcomers and are thus taken away from their work. The newcomers are also far less productive than the existing software engineers, so the work put into training them does not immediately translate into a corresponding reduction in the remaining work.

(ii)Customer Myths:

The customer can be the direct users of the software, the technical team, the marketing/sales department, or another company. The customer holds myths that lead to false expectations and, ultimately, dissatisfaction with the developer.

Myth 1:

A general statement of intent is enough to start writing plans (software development); details of the objectives can be filled in over time.

Fact:

● A formal and detailed description of functions, performance, interfaces, design constraints, and the validation process is essential.
● Unambiguous requirements (usually derived iteratively) are developed only through effective and continuous communication between customer and developer.

Myth 2:

Software requirements continually change, but change can be easily accommodated because software is flexible.

Fact:


● It is true that software requirements change, but the impact of change varies
with the time at which it is introduced. When requirements changes are
requested early (before design or code has been started), the cost impact is
relatively small. However, as time passes, the cost impact grows rapidly—
resources have been committed, a design framework has been established,
and change can cause upheaval that requires additional resources and major
design modification.

Different Stages of Myths

(iii)Practitioner’s Myths:

Myth 1:

They believe that their work has been completed with the writing of the program.

Fact:

● It is true that 60-80% of all effort is expended after the software is first delivered to the customer (the maintenance phase). Considerable effort is therefore still required after the product is first delivered to customers.

Myth 2:

There is no way of assessing the quality of the software until it is actually running.


Fact:

● Systematic technical reviews are an effective software quality verification method. These reviews act as quality filters and can be more effective than testing for finding certain classes of defects.

Myth 3:

A working system is the only product that needs to be delivered for a successful project.

Fact:

● A working system is not enough; the right documents, brochures, and booklets are also required to provide guidance and software support.

Myth 4:

Software engineering will make us build voluminous and unnecessary documents and will invariably delay us.

Fact:

● Software engineering is not about creating documents. It is about creating a quality product. Better quality leads to reduced rework, and reduced rework results in faster delivery times.

1.5 A Generic view of process:

Software engineering- a layered technology:

1. Quality focus: Disciplined quality management is the backbone of software engineering technology.
2. Process layer: The foundation for software engineering is the process
layer. Process defines a framework that must be established for effective
delivery of software engineering technology.
3. Methods: Software engineering methods provide the technical how-to’s
for building software. Methods encompass a broad array of tasks that
include communication, requirements analysis, design modeling, program
construction, testing, and support.
4. Tools: Software engineering tools provide automated or semiautomated
support for the process and the methods. When tools are integrated so that
information created by one tool can be used by another, a system for the
support of software development, called computer-aided software
engineering, is established.

1.6 Software Process Framework


Software Process Framework is an abstraction of the software development process. It details the steps and chronological order of a process. Since it serves as a foundation for process models, it is utilized in most software efforts. Task sets, umbrella activities, and process framework activities all define the characteristics of the software development process.

Software process includes:

● Tasks – focus on a small, specific objective.
● Actions – a set of tasks that produce a major work product.
● Activities – groups of related tasks and actions for a major objective (see the sketch below).

Software Process Framework
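To make the task-action-activity hierarchy concrete, here is a minimal, hypothetical sketch in Python; the class names and the example breakdown of the Communication activity are assumptions made for illustration, not part of any standard.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """Smallest unit of work: focuses on a small, specific objective."""
    name: str

@dataclass
class Action:
    """A set of tasks that together produce a major work product."""
    name: str
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Activity:
    """A group of related actions and tasks aimed at a major objective."""
    name: str
    actions: List[Action] = field(default_factory=list)

# Hypothetical breakdown of the "Communication" framework activity.
communication = Activity(
    name="Communication",
    actions=[
        Action(
            name="Requirements gathering",
            tasks=[
                Task("Make a list of stakeholders"),
                Task("Conduct elicitation interviews"),
            ],
        )
    ],
)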

1.7 Process Framework Activities:

The process framework is required for representing common process activities. Five
framework activities are described in a process framework for software engineering.
Communication, planning, modeling, construction, and deployment are all examples
of framework activities. Each engineering action defined by a framework activity
comprises a list of needed work outputs, project milestones, and software quality
assurance (SQA) points.

● Communication: Through communication, customer requirement gathering is done. The team communicates with customers and stakeholders to determine the system's objectives and the software's requirements.
● Planning: Establishes the engineering work plan, describes technical risks, lists resource requirements and work products, and defines the work schedule.
● Modeling: Architectural models and designs are created to better understand the problem and to work toward the best solution. The software model is prepared by analysis of requirements followed by design.
● Construction: Creating code, testing the system, fixing bugs, and confirming that all criteria are met. The software design is mapped into code through code generation and testing.
● Deployment: In this activity, a complete or partial product is presented to the customers to evaluate and give feedback. On the basis of their feedback, the product is modified to supply a better product.

1.8 Umbrella activities:

Umbrella activities are activities that take place throughout the software development process for improved project management and tracking.
1. Software project tracking and control: This is an activity in which the

team can assess progress and take corrective action to maintain the
schedule. Take action to keep the project on time by comparing the
project’s progress against the plan.

2. Risk management: The risks that may affect project outcomes or quality

can be analyzed. Analyze potential risks that may have an impact on the
software product’s quality and outcome.

3. Software quality assurance: These are the activities required to maintain

software quality. Perform actions to ensure the product's quality.


4. Formal technical reviews: These are required to assess engineering work

products to uncover and remove errors before they propagate to the next activity. At each level of the process, errors are evaluated and fixed.

5. Software configuration management: Managing the configuration process

when any change in the software occurs.

6. Work product preparation and production: The activities to create

models, documents, logs, forms, and lists are carried out.

7. Reusability management: It defines criteria for work product reuse.

Reusable work products should be backed up, and reusable software components should be obtained.

8. Measurement: In this activity, process, project, and product measures are

defined and collected to assist the software team in delivering the required software.

1.9 Capability Maturity Model Integration (CMMI)


Capability Maturity Model Integration (CMMI) is a successor of CMM and is a
more evolved model that incorporates best components of individual disciplines of
CMM like Software CMM, Systems Engineering CMM, People CMM, etc. Since CMM is a reference model of matured practices in a specific discipline, it becomes difficult to integrate these disciplines as per the requirements. This is why CMMI is used, as it allows the integration of multiple disciplines as and when needed.

Objectives of CMMI :

1. Fulfilling customer needs and expectations.
2. Creating value for investors/stockholders.
3. Increasing market growth.
4. Improving the quality of products and services.
5. Enhancing reputation in the industry.

CMMI Representation – Staged and Continuous :

A representation allows an organization to pursue a different set of improvement objectives. There are two representations for CMMI:

Staged Representation:
 Uses a pre-defined set of process areas to define an improvement path.
 Provides a sequence of improvements, where each part in the sequence serves as a foundation for the next.
 The improvement path is defined by maturity levels.
 Maturity levels describe the maturity of processes in the organization.
 Staged CMMI representation allows comparison between different organizations at multiple maturity levels.
Continuous Representation:
 Uses capability levels that measure the improvement of an individual process area.
 Continuous CMMI representation allows comparison between different organizations on a process-area-by-process-area basis.
 Allows organizations to select the processes which require more improvement.
 In this representation, the order of improvement of various processes can be selected, which allows organizations to meet their objectives and eliminate risks.

CMMI Model – Maturity Levels :


In CMMI with staged representation, there are five maturity levels described as
follows:

1. Maturity level 1 : Initial


● Processes are poorly managed or controlled.
● Unpredictable outcomes of processes involved.
● Ad hoc and chaotic approach used.
● No KPAs (Key Process Areas) defined.
● Lowest quality and highest risk.

2. Maturity level 2 : Managed


● Requirements are managed.
● Processes are planned and controlled.
● Projects are managed and implemented according to their
documented plans.
● The risk involved is lower than at the Initial level, but it still exists.
● Quality is better than at the Initial level.
3. Maturity level 3 : Defined
● Processes are well characterized and described using standards, proper procedures, methods, tools, etc.

● Focus is process standardization.


4. Maturity level 4 : Quantitatively managed
● Quantitative objectives for process performance and quality are
set.
● Quantitative objectives are based on customer requirements,
organization needs, etc.
● Process performance measures are analyzed quantitatively.
● Higher quality of processes is achieved.
5. Maturity level 5 : Optimizing
 Continuous improvement in processes and their performance.
 Improvement has to be both incremental and innovative.
 Lower risk.
 High quality of processes.

CMMI Model – Capability Levels:

A capability level includes relevant specific and generic practices for a specific process area that can improve the organization's processes associated with that process area. For CMMI models with continuous representation, there are six capability levels, as described below:

1. Capability level 0 : Incomplete


● Incomplete process – partially performed or not performed.
● One or more specific goals of the process area are not met.
● No generic goals are specified for this level.
● This capability level is the same as maturity level 1.
2. Capability level 1 : Performed
● Process performance may not be stable.
● Objectives of quality, cost, and schedule may not be met.
● A capability level 1 process is expected to perform all specific and generic practices for this level; it is only a starting step for process improvement.
3. Capability level 2 : Managed
● Process is planned, monitored, and controlled.
● The process is managed by ensuring that objectives are achieved.
● Objectives include both model objectives and others such as cost, quality, and schedule.
● Processes are actively managed with the help of metrics.
4. Capability level 3 : Defined
● A defined process is managed and meets the organization's set of guidelines and standards.
● The focus is process standardization.
5. Capability level 4 : Quantitatively Managed
● Process is controlled using statistical and quantitative techniques.
● Process performance and quality are understood in statistical terms and metrics.
● Quantitative objectives for process quality and performance are established.

6. Capability level 5 : Optimizing


● Focuses on continually improving process performance.
● Performance is improved in both ways – incremental and innovative.
● Emphasizes studying the performance results across the organization to ensure that common causes or issues are identified.

Process Patterns:

As the software team moves through the software process, it encounters problems. It would be very useful if solutions to these problems were readily available so that the problems can be resolved quickly. A process pattern describes a process-related problem that is encountered during software engineering work, identifies the environment in which the problem is found, and suggests one or more proven solutions to the problem. By solving problems, a software team can construct a process that best meets the needs of a project.

Uses of the process pattern:


Patterns can be defined at any level of abstraction. In some situations, they can be used to describe a problem and solution associated with a framework activity, while in other situations they can describe a problem and solution associated with a complete process model.
Template:

● Pattern Name – A meaningful name must be given to a pattern within the context of the software process (e.g., Technical Reviews).
● Forces – The environment in which the pattern is encountered, and the issues that make the problem visible and may affect its solution.

Type:

It is of three types :

1. Stage pattern – Problems associated with a framework activity for the process are described by a stage pattern. Establishing Communication might be an example of a stage pattern. This pattern would incorporate the task pattern Requirements Gathering and others.

2. Task pattern – Problems associated with a software engineering action or work task and relevant to successful software engineering practice (e.g., Requirements Gathering is a task pattern) are defined by a task pattern.

3. Phase pattern – Defines the sequence of framework activities that occurs within the process, even when the overall flow of activities is iterative in nature. Spiral Model or Prototyping might be an example of a phase pattern.

Initial Context: The conditions under which the pattern applies are described by the initial context. Prior to the initiation of the pattern:

1. What organizational or team-related activities have already occurred?
2. What is the entry state for the process?
3. What software engineering information or project information already exists?

For example, the Planning pattern requires that:

● Collaborative communication has been established between customers and software engineers.
● Successful completion of a number of task patterns for the communication pattern has occurred.
● The project constraints, basic requirements, and the project scope are known.

Problem: The specific problem to be solved by the pattern.

Solution: Describes how to implement the pattern successfully. This section describes how the initial state of the process is modified as a consequence of the initiation of the pattern.

Resulting Context: Describes the conditions that result once the pattern has been successfully implemented. Upon completion of the pattern:

1. What organizational or team-related activities must have occurred?
2. What should be the exit state for the process?
3. What software engineering information has been developed?
Related patterns: Provide a list of all process patterns that are directly related to this one. They can be represented in a hierarchy or in some other diagrammatic form.

Known uses and Examples: Indicates the specific instances in which the pattern is applicable. For example, communication is mandatory at the beginning of every software project, is recommended throughout the software project, and is mandatory once the deployment activity is underway.

Example of Process Pattern:

Let’s see an example of a process pattern to understand it more clearly.


Template:

Pattern Name: Prototyping Model Design


Intent: Requirements are not clear, so the aim is to build a model iteratively to solidify the exact requirements.

Type: Phase Pattern

Initial Context: Before going into prototyping, these basic conditions should be met:

1. The stakeholder has some idea about their requirements, i.e., what they exactly want.

2. A communication medium should be established between the stakeholder and the software development team to ensure proper understanding of the requirements and the future product.

3. There is an initial understanding of other project factors such as the scope, duration, and budget of the project.

Problem: Identifying and solidifying hazy or missing requirements.

Solution: A description of the prototyping approach should be presented.

Resulting Context: A prototype model which can give a clear idea about the actual product and which needs to be agreed upon by the stakeholders.

Related Patterns: Requirement extraction, iterative design, customer communication, iterative development, customer assessment, etc.

Known Uses & Examples: When stakeholder requirements are unclear and
uncertain, prototyping is recommended.

Software Process Assessment:


Software Process Assessment is a disciplined and organized examination of the software process being used by an organization, based on a process model. A Software Process Assessment covers many areas, including identification and characterization of current practices, the ability of current practices to control or avoid significant causes of poor (software) quality, cost, and schedule, and identification of the strengths and weaknesses of the software process.

Types of Software Assessment :


● Self-assessment: This is conducted internally by people in their own organisation.
● Second-party assessment: This is conducted by an external team, or people of the organisation are supervised by an external team.
● Third-party assessment: This is conducted by an independent, external third party, for example to verify a supplier's process capability.

In an ideal case, a Software Process Assessment should be performed in a transparent, open, and collaborative environment. This is very important for the improvement of the software and the development of the product. The results of the Software Process Assessment are confidential and are only accessible to the company. The assessment team must contain at least one person from the organization that is being assessed.

Software Process Maturity Assessment:


The scope of a Software Process Assessment can cover all the processes in the organisation, a selected subset of the software processes, or a specific project. The idea of process maturity serves as the foundation for the majority of standard-based process evaluation methodologies.

Although the organisation is the assessment object, the outcomes of a process assessment may differ even when the same approach is applied again. The different results are mainly due to two reasons. First, the scope of the organization being investigated must be determined: when the company is very large, different definitions of its scope are possible, so the actual scope of appraisal may differ in successive assessments. Second, even for the same organization, the sample of projects selected to represent it may affect the scope and the result. Process maturity is important when the organisation intends to embark on a long-term improvement strategy.

Software Process Cycle:


Generally there are six different steps in the complete cycle:
● Selecting a team: The first step is to select all the team members. Everyone
must be software professionals with sound knowledge in software
engineering.
● The standard process maturity questionnaire is filled out by the
representatives of the site that will be evaluated.
● In accordance with the CMM core process areas, the assessment team
analyses the questionnaire results to determine the areas that call for
additional investigation.
● The evaluation team visits the location to learn more about the software procedures used there.
● The evaluation team compiles a set of results outlining the advantages and disadvantages of the organization's software process.
● In order to deliver the findings to the right audience, the assessment team creates a Key Process Area (KPA) profile analysis.

SCAMPI:

SCAMPI stands for Standard CMMI Assessment Method for Process Improvement.
To fulfil the demands of the CMMI paradigm, the Standard
CMMI Assessment Method for Process Improvement (SCAMPI) was created
(Software Engineering Institute, 2000). Moreover, it is based on the CBA IPI. The
CBA IPI and SCAMPI both have three steps.

1. Plan and become ready


2. Carry out the evaluation on-site
3. Report findings

The planning and preparation phase includes the following activities:

● Describe the scope of the evaluation.
● Create the assessment strategy.
● Get the evaluation crew ready and trained.
● Make a quick evaluation of the participants.
● Distribute the CMMI Appraisal Questionnaire.
● Look at the survey results.
● Perform a preliminary document evaluation.

The onsite evaluation phase includes the following activities:

● Display the results.
● Execute the findings.
● Complete/end the assessment.

Personal Software Process (PSP):


The SEI CMM, which is a reference model for raising the maturity levels of software and predicting the most expected outcome of the next project undertaken by an organization, does not tell software developers how to analyze, design, code, test, and document software products; it expects that developers use effective practices. The Personal Software Process recognizes that the process used by an individual is quite different from that required by a team.

Personal Software Process (PSP) is the skeleton or structure that assists engineers in finding a way to measure and improve their way of working to a great extent. It helps them develop their skills at a personal level, along with their planning and their estimating against those plans.

Objectives of PSP :
The aim of PSP is to provide software engineers with disciplined methods for the betterment of personal software development processes.
The PSP helps software engineers to:
● Improve their estimating and planning skills.
● Make commitments that can be fulfilled.
● Manage the quality of their projects.
● Reduce the number of faults and imperfections in their work.

Time measurement:
The Personal Software Process recommends that developers structure the way they spend their time. The developer must measure and record the time they spend on different activities during development.
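As a rough illustration of such time recording, here is a minimal, hypothetical Python sketch; the log file name and the activity labels are assumptions for this example, not part of the PSP itself.

import csv
import time
from contextlib import contextmanager

LOG_FILE = "psp_time_log.csv"  # hypothetical personal time log

@contextmanager
def track(activity: str):
    """Append one row (activity, minutes spent) to the personal time log."""
    start = time.time()
    try:
        yield
    finally:
        minutes = (time.time() - start) / 60
        with open(LOG_FILE, "a", newline="") as f:
            csv.writer(f).writerow([activity, f"{minutes:.1f}"])

# Usage: wrap each kind of development work to build a personal baseline.
with track("design"):
    pass  # ... design work goes here ...
with track("code review"):
    pass  # ... review work goes here ...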

PSP Planning :
Engineers should plan the project before developing it, because without planning a high effort may be wasted on unimportant activities, which may lead to poor and unsatisfactory quality of the result.

Levels of Personal Software Process :


Personal Software Process (PSP) has four levels:

1. PSP 0 – The first level of the Personal Software Process; it includes personal measurement, basic size measures, and coding standards.

2. PSP 1 – This level includes the planning of time and scheduling.

3. PSP 2 – This level introduces personal quality management, and design and code reviews.

4. PSP 3 – The last level of the Personal Software Process is for personal process evolution.
Process Models:
The classical waterfall model is the basic software development life cycle model. It is very simple but idealistic. Earlier this model was very popular, but nowadays it is not used directly. It is still very important, however, because all the other software development life cycle models are based on the classical waterfall model.

Why Do We Use the Waterfall Model?


The waterfall model is a software development model used in the context of large,
complex projects, typically in the field of information technology. It is characterized
by a structured, sequential approach to project management and software
development.

The waterfall model is useful in situations where the project requirements are well-
defined and the project goals are clear. It is often used for large-scale projects with
long timelines, where there is little room for error and the project stakeholders need to
have a high level of confidence in the outcome.

Features of the Waterfall Model


1. Sequential Approach: The waterfall model involves a sequential approach

to software development, where each phase of the project is completed before moving on to the next one.

2. Document-Driven: The waterfall model relies heavily on documentation to

ensure that the project is well-defined and the project team is working
towards a clear set of goals.

3. Quality Control: The waterfall model places a high emphasis on quality

control and testing at each phase of the project, to ensure that the final
product meets the requirements and expectations of the stakeholders.
4. Rigorous Planning: The waterfall model involves a rigorous planning

process, where the project scope, timelines, and deliverables are carefully defined and monitored throughout the project lifecycle.

Overall, the waterfall model is used in situations where there is a need for a highly
structured and systematic approach to software development. It can be effective in
ensuring that large, complex projects are completed on time and within budget, with a
high level of quality and customer satisfaction.

Phases of Classical Waterfall Model


Waterfall Model is a classical software development methodology that was first
introduced by Winston W. Royce in 1970. It is a linear and sequential approach to
software development that consists of several phases that must be completed in a
specific order. The phases include:

1. Requirements Gathering and Analysis: The first phase involves

gathering requirements from stakeholders and analyzing them to understand the scope and objectives of the project.

2. Design: Once the requirements are understood, the design phase begins.

This involves creating a detailed design document that outlines the software
architecture, user interface, and system components.

3. Implementation: The implementation phase involves coding the software

based on the design specifications. This phase also includes unit testing to
ensure that each component of the software is working as expected.

4. Testing: In the testing phase, the software is tested as a whole to ensure

that it meets the requirements and is free from defects.


5. Deployment: Once the software has been tested and approved, it is

deployed to the production environment.

6. Maintenance: The final phase of the Waterfall Model is

maintenance, which involves fixing any issues that arise after the software
has been deployed and ensuring that it continues to meet the requirements
over time.

The classical waterfall model divides the life cycle into a set of phases. This model assumes that one phase can start only after the completion of the previous phase; that is, the output of one phase becomes the input to the next phase. Thus, the development process can be considered as a sequential flow, like a waterfall. Here the phases do not overlap with each other. The different sequential phases of the classical waterfall model are shown in the figure below.

Phases of Classical Waterfall Model

Let us now learn about each of these phases in detail.


1. Feasibility Study

The main goal of this phase is to determine whether it would be financially and technically feasible to develop the software.
The feasibility study involves understanding the problem and then determining the various possible strategies to solve it. These different identified solutions are analyzed based on their benefits and drawbacks. The best solution is chosen, and all the other phases are carried out as per this solution strategy.

2. Requirements Analysis and Specification

The aim of the requirement analysis and specification phase is to understand the exact
requirements of the customer and document them properly. This
phase consists of two different activities.
● Requirement gathering and analysis: Firstly all the requirements
regarding the software are gathered from the customer and then the
gathered requirements are analyzed. The goal of the analysis part is to
remove incompleteness (an incomplete requirement is one in which some
parts of the actual requirements have been omitted) and inconsistencies (an
inconsistent requirement is one in which some part of the requirement
contradicts some other part).
● Requirement specification: These analyzed requirements are documented
in a software requirement specification (SRS) document. SRS document
serves as a contract between the development team and customers. Any
future dispute between the customers and the developers can be settled by
examining the SRS document.
3. Design

The goal of this phase is to convert the requirements acquired in the SRS into a format that can be coded in a programming language. It includes high-level and detailed design as well as the overall software architecture. All of this effort is documented in a Software Design Document (SDD).

4. Coding and Unit Testing

In the coding phase, the software design is translated into source code using a suitable programming language; thus each designed module is coded. The aim of the unit testing phase is to check whether each module is working properly or not.
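As a small illustration of unit testing one designed module, consider the following hypothetical Python sketch using the standard unittest module; the apply_discount function is an invented example, not taken from the text.

import unittest

def apply_discount(amount: float, percent: float) -> float:
    """Hypothetical module under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(amount * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()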

5. Integration and System testing

Integration of different modules is undertaken soon after they have been coded and
unit tested. Integration of various modules is carried out incrementally over a number
of steps. During each integration step, previously planned modules are added to the
partially integrated system and the resultant system is tested. Finally, after all the
modules have been successfully integrated and tested, the full working system is
obtained and system testing is carried out on this.
System testing consists of three different kinds of testing activities as described
below.

● Alpha testing: Alpha testing is the system testing performed by the development team.
● Beta testing: Beta testing is the system testing performed by a friendly set of customers.
● Acceptance testing: After the software has been delivered, the customer performs acceptance testing to determine whether to accept or reject the delivered software.

6. Maintenance

Maintenance is the most important phase of the software life cycle; the effort spent on maintenance is about 60% of the total effort spent developing the full software. There are basically three types of maintenance:

● Corrective Maintenance: This type of maintenance is carried out to correct errors that were not discovered during the product development phase.
● Perfective Maintenance: This type of maintenance is carried out to
enhance the functionalities of the system based on the customer’s request.
● Adaptive Maintenance: Adaptive maintenance is usually required for
porting the software to work in a new environment such as working on a
new computer platform or with a new operating system.

Advantages of the Classical Waterfall Model


The classical waterfall model is an idealistic model for software development. It is
very simple, so it can be considered the basis for other software development life
cycle models. Below are some of the major advantages of this SDLC model.

● Easy to Understand: The Classical Waterfall Model is very simple and easy to understand.
● Individual Processing: Phases in the Classical Waterfall model are
processed one at a time.
● Properly Defined: In the classical waterfall model, each stage in the model
is clearly defined.
● Clear Milestones: Classical Waterfall model has very clear and
well-understood milestones.
● Properly Documented: Processes, actions, and results are very well
documented.
● Reinforces Good Habits: Classical Waterfall Model reinforces good habits
like define-before-design and design-before-code.
● Working: Classical Waterfall Model works well for smaller projects and
projects where requirements are well understood.

Disadvantages of the Classical Waterfall Model


The Classical Waterfall Model suffers from various shortcomings; because of these, we basically cannot use it directly in real projects. Instead, we use other software development life cycle models which are based on the classical waterfall model. Below are some major drawbacks of this model.

● No Feedback Path: In the classical waterfall model, the evolution of software from one phase to another is like a waterfall. It assumes that no error is ever committed by developers during any phase. Therefore, it does not incorporate any mechanism for error correction.
● Difficult to accommodate Change Requests: This model assumes that all
the customer requirements can be completely and correctly defined at the
beginning of the project, but actually customer’s requirements keep on
changing with time. It is difficult to accommodate any change requests after
the requirements
specification phase is complete.
● No Overlapping of Phases: This model recommends that a new phase can
start only after the completion of the previous phase. But in real projects,
this can’t be maintained. To increase efficiency and reduce cost, phases
may overlap.
● Limited Flexibility: The Waterfall Model is a rigid and linear approach to
software development, which means that it is not well-suited for projects
with changing or uncertain requirements. Once a phase has been completed,
it is difficult to make changes or go back to a previous phase.
● Limited Stakeholder Involvement: The Waterfall Model is a structured
and sequential approach, which means that stakeholders are typically
involved in the early phases of the project (requirements gathering and
analysis) but may not be involved in the later phases (implementation,
testing, and deployment).
● Late Defect Detection: In the Waterfall Model, testing is typically done
toward the end of the development process. This means that defects may
not be discovered until late in the development process, which can be
expensive and time-consuming to fix.
● Lengthy Development Cycle: The Waterfall Model can result in a lengthy
development cycle, as each phase must be completed before moving on to
the next. This can result in delays and increased costs if requirements
change or new issues arise.
● Not Suitable for Complex Projects: The Waterfall Model is not well-
suited for complex projects, as the linear and sequential nature of the model
can make it difficult to manage multiple dependencies and interrelated
components.

Applications of Classical Waterfall Model


● Large-scale Software Development Projects: The Waterfall Model is
often used for large-scale software development projects, where a
structured and sequential approach is necessary to ensure that the project is
completed on time and within budget.
● Safety-Critical Systems: The Waterfall Model is often used in the
development of safety-critical systems, such as aerospace or medical
systems, where the consequences of errors or defects can be severe.
● Government and Defense Projects: The Waterfall Model is also
commonly used in government and defense projects, where a rigorous and
structured approach is necessary to ensure that the project meets all
requirements and is delivered on time.
● Projects with well-defined Requirements: The Waterfall Model is best
suited for projects with well-defined requirements, as the sequential nature
of the model requires a clear understanding of the project objectives and
scope.
● Projects with Stable Requirements: The Waterfall Model is also well-
suited for projects with stable requirements, as the linear nature of the
model does not allow for changes to be made once a phase has been
completed.

Incremental process model


The incremental process model is also known as the Successive version model.

First, a simple working system implementing only a few basic features is built and delivered to the customer. Thereafter, many successive iterations/versions are implemented and delivered to the customer until the desired system is released.

A, B, and C are modules of the software product that are incrementally developed and delivered.

Life cycle activities:


Requirements of Software are first broken down into several modules that can be
incrementally constructed and delivered. At any time, the plan is made just for the
next increment and not for any kind of long-term plan. Therefore, it is easier to
modify the version as per the need of the customer. The Development Team first
undertakes to develop core features (these do not need services from other features) of
the system.

Once the core features are fully developed, then these are refined to increase levels of
capabilities by adding new functions in Successive versions. Each incremental version
is usually developed using an iterative waterfall model of development.

As each successive version of the software is constructed and delivered, the feedback of the customer is taken and incorporated into the next version. Each version of the software has more features than the previous one.
After requirements gathering and specification, the requirements are split into several versions. Starting with version 1, each successive increment constructs the next version, which is then deployed at the customer site. After the last version (version n), the complete system is deployed at the client site.
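A minimal, hypothetical Python sketch of this idea follows; the report tool and its features are invented for illustration. The core feature ships first, and a later increment extends it without reworking what the customer already uses.

class ReportToolV1:
    """Increment 1: core feature only (depends on no other features)."""
    def generate(self, data: list) -> str:
        return f"Total: {sum(data)}"

class ReportToolV2(ReportToolV1):
    """Increment 2: refinement added after customer feedback on V1."""
    def generate(self, data: list) -> str:
        core = super().generate(data)  # reuse the already-tested core
        avg = sum(data) / len(data) if data else 0.0
        return f"{core}, Average: {avg:.2f}"

print(ReportToolV1().generate([10, 20]))  # delivered first: "Total: 30"
print(ReportToolV2().generate([10, 20]))  # next increment adds the average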

Types of Incremental model:

1. Staged Delivery Model – Construction of only one part of the project at a time.
2. Parallel Development Model – Different subsystems are developed at the same time. This can decrease the calendar time needed for development, i.e., TTM (Time to Market), if enough resources are available.
When to use this model:

1. When there is a funding schedule, risk, program complexity, or a need for early realization of benefits.
2. When requirements are known up-front.
3. When projects have lengthy development schedules.
4. For projects with new technology.

Benefits of this approach include:

● Error reduction (core modules are used by the customer from the beginning of the phase and are then tested thoroughly).
● Uses divide and conquer for a breakdown of tasks.
● Lowers initial delivery cost.
● Incremental resource deployment.

It also comes with some caveats:

● Requires good planning and design.
● The total cost is not lower.
● Well-defined module interfaces are required.

Advantages-

1. Prepares the software fast.


2. Clients have a clear idea of the project.
3. Changes are easy to implement.
4. Provides risk handling support, because of its iterations.

Disadvantages-

1. A good team and proper planned execution are required.


2. Because of its continuous iterations the cost increases.

Evolutionary model:
The evolutionary model is a combination of the iterative and incremental models of the software development life cycle. Rather than delivering the system in a single big-bang release, it is delivered incrementally over time. Some initial requirements and architecture envisioning need to be done. It is better suited for software products that have their feature sets redefined during development because of user feedback and other factors. The evolutionary development model divides the development cycle into smaller, incremental waterfall models in which users are able to get access to the product at the end of each cycle. Feedback is provided by the users on the product for the planning stage of the next cycle, and the development team responds, often by changing the product, plan, or process. Therefore, the software product evolves with time. All the other models have the disadvantage that the duration of time from the start of the project to the delivery of a solution is very high. The evolutionary model solves this problem with a different approach.

The evolutionary model suggests breaking down the work into smaller chunks, prioritizing them, and then delivering those chunks to the customer one by one. The number of chunks is large and corresponds to the number of deliveries made to the customer. The main advantage is that the customer's confidence increases, as he constantly gets quantifiable goods or services from the beginning of the project with which to verify and validate his requirements. The model allows for changing requirements, and all work is broken down into maintainable work chunks.

Application of Evolutionary Model:

1. It is used in large projects where you can easily find modules for
incremental implementation. Evolutionary model is commonly used when
the customer wants to start using the core features instead of waiting for the
full software.
2. The evolutionary model is also used in object-oriented software development because the system can be easily partitioned into units in terms of objects.

Necessary conditions for implementing this model:-

● Customer needs are clear and have been explained in depth to the developer team.
● There might be small changes required in separate parts but not a major change.
● As it requires time, there must be some time left before market constraints take effect.
● Risk is high, with continuous targets to achieve and report to the customer repeatedly.
● It is used when the technology being worked on is new and requires time to learn.

Advantages:

● In the evolutionary model, a user gets a chance to experiment with a partially developed system.
● It reduces errors because the core modules get tested thoroughly.

Disadvantages:

● Sometimes it is hard to divide the problem into several versions that would be acceptable to the customer and that can be incrementally implemented and delivered.

UNIT - II
Software Requirements

Functional Requirements: These are the requirements that the end user specifically
demands as basic facilities that the system should offer. It can be a calculation, data
manipulation, business process, user interaction, or any other specific functionality
which defines what function a system is likely to perform. All these functionalities
need to be necessarily incorporated into the system as a part of the contract. These are
represented or stated in the form of input to be given to the system, the operation
performed and the output expected. They are basically the requirements stated by the
user which one can see directly in the final product, unlike the non-functional
requirements. For example, in a hospital management system, a doctor should be able
to retrieve the information of his patients. Each high-level functional requirement may
involve several interactions or dialogues between the system and the outside world. In
order to accurately describe the functional requirements, all scenarios must be
enumerated. There are many ways of expressing functional requirements, e.g., natural language, a structured or formatted language with no rigorous syntax, and a formal specification language with proper syntax. Functional Requirements in Software Engineering are also called the Functional Specification.
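To illustrate the input/operation/output form described above, here is a small, hypothetical Python sketch that records the hospital example as structured data; the field names and wording are assumptions made for illustration.

# Hypothetical structured statement of the hospital example above.
functional_requirement = {
    "id": "FR-01",
    "input": "Doctor enters a registered patient's ID",
    "operation": "System looks up the patient's record in the database",
    "output": "The patient's personal and medical information is displayed",
}

print(functional_requirement["id"], "-", functional_requirement["operation"])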

Non-functional requirements: These are basically the quality constraints that the system must satisfy according to the project contract. Non-functional requirements are not related to the system's functionality; rather, they define how the system should perform. The priority or extent to which these factors are implemented varies from one project to another. They are also called non-behavioral requirements. They basically deal with issues like:

● Portability
● Security
● Maintainability
● Reliability
● Scalability
● Performance
● Reusability
● Flexibility

NFRs are classified into the following types:

● Interface constraints
● Performance constraints: response time, security, storage space,
etc.
● Operating constraints
● Life cycle constraints: maintainability, portability, etc.
● Economic constraints

The process of specifying non-functional requirements requires knowledge of the functionality of the system, as well as knowledge of the context within which the system will operate.

They are divided into two main categories: execution qualities like security and usability, which are observable at run time, and evolution qualities like testability, maintainability, extensibility, and scalability, which are embodied in the static structure of the software system.
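Because execution qualities are observable at run time, they can be checked by measuring the running system. The sketch below is a hypothetical Python example; the 2-second threshold and the fetch_patient_record stub are assumptions, not requirements from the text.

import time

MAX_RESPONSE_SECONDS = 2.0  # hypothetical performance NFR threshold

def fetch_patient_record(patient_id: str) -> dict:
    """Stand-in for the real lookup; assumed here for illustration only."""
    return {"id": patient_id, "name": "..."}

start = time.perf_counter()
fetch_patient_record("P-1001")
elapsed = time.perf_counter() - start
assert elapsed < MAX_RESPONSE_SECONDS, f"NFR violated: took {elapsed:.2f}s"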

1. User requirements :
User requirements simply mean the needs of users that should be fulfilled by the software system. They are documented in a User Requirement Document (URD). Overall statements are generally written in natural language, plus a description of the services the system provides and its operational constraints. A user requirement is good if it is clear and short, results in increased overall quality, increases productivity, is traceable, etc.
2. System Requirements :
System requirements simply mean the needs of the system to run smoothly and efficiently. They form a structured document that gives a detailed description of system functions, services, and operational constraints. The system requires many hardware and software resources; if these hardware and software resources are insufficient or unavailable, it may result in system failure or cause problems during performance. Between client and contractor, it is written as a contract to define all requirements that need to be implemented to increase productivity.

Interface specification:
In software engineering, an interface specification refers to a formal description of
how different software components, modules, or systems interact and communicate
with each other. It outlines the rules, protocols, data formats, and methods that govern
the exchange of information and functionality between these components. Interface
specifications play a crucial role in ensuring that diverse parts of a software system
can work together harmoniously, even if they are developed by different teams or
vendors.

Interface specifications provide a clear and unambiguous way to define how software
elements should interact. They ensure that each component knows what to expect
when communicating with other components and that potential misunderstandings or
compatibility issues are minimized.

Key elements included in an interface specification are:


1. Methods and Functions: Describes the specific operations or functions that one component can request from another. This includes the input parameters, expected outputs, and potential exceptions that might occur during the operation.

2. Data Formats: Specifies the structure and format of the data that is exchanged
between components. This could involve defining data types, encodings, and any
necessary metadata.

3. Communication Protocols: Outlines the rules for communication, such as the order in which messages are exchanged, how acknowledgments are handled, and any error recovery mechanisms.

4. API Documentation: Provides detailed documentation for the Application Programming Interfaces (APIs) that allow components to interact. This documentation typically includes descriptions of available methods, their parameters, return values, and usage examples.

5. Constraints and Assumptions: Includes any limitations, conditions, or assumptions related to the interface. This helps prevent misunderstandings between developers using the interface.

6. Versioning and Compatibility: Addresses how changes to the interface will be managed to maintain backward and forward compatibility between different versions of the software.
Interface specifications are especially important in large software systems, distributed
systems, and systems that involve integration with external services or third-party
components. They facilitate a modular and organized approach to software
development, allowing teams to work independently on their components while still
ensuring seamless collaboration and integration.
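As a minimal sketch of what such a specification can look like in code, the following Python fragment uses an abstract base class; the PatientRepository interface, its methods, and the PatientRecord data format are hypothetical examples, not a standard API.

from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class PatientRecord:
    """Data format element of the interface: fields both sides agree on."""
    patient_id: str
    name: str

class PatientRepository(ABC):
    """Hypothetical interface specification for a storage component.

    Any implementation (database, web service, in-memory stub) must
    honour these method signatures, inputs, outputs, and exceptions.
    """

    @abstractmethod
    def get(self, patient_id: str) -> PatientRecord:
        """Return the record for the ID; raise KeyError if it is unknown."""

    @abstractmethod
    def save(self, record: PatientRecord) -> None:
        """Persist the record, overwriting any existing one silently."""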

Software documentation:
Software documentation is written text that accompanies a software program. It makes the life of all the members associated with the project easier and may contain anything from API documentation and build notes to plain help content. It is a very critical process in software development.

Documentation is an integral part of any software development method, and software practitioners are typically concerned with the value, degree of usage, and quality of the documentation produced during development and maintained throughout the whole process. For instance, studies motivated by the requirements of NovAtel Inc., a world-leading company developing software in support of global navigation satellite systems, and building on earlier systematic mapping studies, have aimed at a better understanding of the usage and quality of various technical documents throughout software development and maintenance.

For example, before the development of any software product, the requirements are documented in a Software Requirement Specification (SRS). Requirement gathering is considered a stage of the Software Development Life Cycle (SDLC).

Another example is a user manual that a user refers to for installing, using, and maintaining the software application/product.
Types of Software Documentation:

1. Requirement Documentation: A description of how the software shall perform and which environment setup is appropriate to get the best out of it. These documents are generated while the software is under development and are supplied to the tester groups too.
2. Architectural Documentation: A special type of documentation that concerns the design. It contains very little code and focuses on the components of the system, their roles, and their working. It also shows the data flow throughout the system.
3. Technical Documentation: Contains the technical aspects of the software, like APIs, algorithms, etc. It is prepared mostly for software developers.
4. End-user Documentation: As the name suggests, this documentation is made for the end user and contains the support resources they need.

Purpose of Documentation:

Because of the growing importance of software requirements, the process of determining them needs to be effective in order to achieve the desired results; such determination of requirements is usually carried out under certain regulations and guidelines that are core to attaining a given goal.

Software requirements are expected to change because of the ever-changing technology in the world, and the fact that knowledge obtained during development has to be adjusted to the changing needs of users and the transformation of the environment is inevitable. Furthermore, software requirements are verified and tested through prototyping, meetings, focus groups, and observations.

For a software engineer, reliable documentation is a must. Its presence helps keep track of all aspects of an application, and it improves the quality of the product. It is central to development, maintenance, and knowledge transfer to other developers. Productive documentation makes information easily accessible, provides a limited number of user entry points, helps new users learn quickly, simplifies the product, and helps cut costs.

Importance of software documentation:

For a programmer, reliable documentation is always a must: its presence keeps track of all aspects of an application and helps in keeping the software updated.

Advantages of software documentation:

● The presence of documentation helps keep track of all aspects of an application and also improves the quality of the software product.
● The main focus is on development, maintenance, and knowledge transfer to other developers.
● Helps development teams during development.
● Helps end-users in using the product.
● Improves the overall quality of the software product.
● It cuts down duplicative work.
● Makes the code easier to understand.
● Helps in establishing internal coordination in work.

Disadvantages of software documentation:

● Documenting code is time-consuming.
● The software development process often takes place under time pressure, due to which documentation updates frequently fail to match the updated code.
● The documentation has no influence on the performance of an application.
● Documenting is not much fun, and it can be boring to a certain extent.

The agile methodology encourages engineering teams to always concentrate on delivering value to their customers. This principle should be considered when producing software documentation: good documentation should be provided, whether it is a software specification document for programmers and testers or a software manual for end users.

Requirements engineering process:

Feasibility Study:

A feasibility study in software engineering evaluates the feasibility of a proposed project or system. It is one of the four important stages of the Software Project Management Process. As the name suggests, it is an analysis of how beneficial the development of the software product will be for the organization from a practical point of view. A feasibility study is carried out to analyze whether the software product will be right in terms of development, implementation, and the contribution of the project to the organization.

Types of Feasibility Study:

The feasibility study mainly concentrates on the five areas mentioned below. Among these, the economic feasibility study is the most important part of the analysis, and the legal feasibility study is the least considered.
1. Technical Feasibility –
In technical feasibility, current resources (both hardware and software) along with the required technology are analyzed/assessed to develop the project. The study reports whether the correct resources and technologies needed for project development exist. It also analyzes the technical skills and capabilities of the technical team, whether the existing technology can be used, and whether maintenance and upgrades are easy for the chosen technology.
2. Operational Feasibility –
In operational feasibility, the degree to which the service requirements are met is analyzed, along with how easy the product will be to operate and maintain after deployment. Other operational scopes are determining the usability of the product and determining whether the solution suggested by the software development team is acceptable.
3. Economic Feasibility –
In the economic feasibility study, the cost and benefit of the project are analyzed. A detailed analysis is carried out of what the cost of developing the project will be, including all required costs for final development, such as hardware and software resources, design and development cost, and operational cost. It is then analyzed whether the project will be financially beneficial for the organization.
4. Legal Feasibility –
In the legal feasibility study, the project is analyzed from a legal point of view. This includes analyzing barriers to legal implementation of the project, data protection acts or social media laws, project certificates, licenses, copyrights, etc. Overall, the legal feasibility study determines whether the proposed project conforms to legal and ethical requirements.
5. Schedule Feasibility –
In the schedule feasibility study, the timelines/deadlines of the proposed project are analyzed, including how much time the team will take to complete the final project. This has a great impact on the organization, as the purpose of the project may fail if it cannot be completed on time.
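As an illustration of the economic feasibility analysis described above, a minimal cost-benefit sketch (all figures hypothetical) can compare the projected costs against the expected benefits to estimate a payback period:

```python
# Hypothetical figures for an economic feasibility sketch.
development_cost = 120_000   # design, coding, testing (one-time)
hardware_software = 30_000   # required resources (one-time)
operational_cost = 20_000    # per year
annual_benefit = 90_000      # expected savings/revenue per year

one_time = development_cost + hardware_software
net_per_year = annual_benefit - operational_cost

payback_years = one_time / net_per_year
print(f"Payback period: {payback_years:.1f} years")  # ~2.1 years
```

If the payback period is longer than the organization is willing to wait, the study would conclude that the project is not economically feasible.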
Aim of a feasibility study:

● Whether the overall objectives of the organization are covered and contributed to by the system.
● Whether the system can be implemented using current technology.
● Whether the system can be integrated with other systems that already exist.

Feasibility Study Process:

The steps below are carried out during the entire feasibility analysis.

1. Information assessment
2. Information collection
3. Report writing
4. General information

Need for a Feasibility Study:

The feasibility study is an important stage of the Software Project Management Process: its conclusion determines whether to go ahead with the proposed project because it is practically feasible, to stop because it is not feasible to develop, or to analyze the proposal again.

The feasibility study also helps in identifying the risk factors involved in developing and deploying the system, supports planning for risk analysis, narrows the business alternatives, and enhances the success rate by analyzing the different parameters associated with the proposed project's development.
Requirements elicitation:
Requirements elicitation is the process of gathering and defining the requirements
for a software system. The goal of requirements elicitation is to ensure that the
software development process is based on a clear and comprehensive understanding of
the customer’s needs and requirements. Requirements elicitation involves the
identification, collection, analysis, and refinement of the requirements for a software
system. It is a critical part of the software development life cycle and is typically
performed at the beginning of the project. Requirements elicitation involves
stakeholders from different areas of the organization, including business owners, end-
users, and technical experts. The output of the requirements elicitation process is a set
of clear, concise, and well-defined requirements that serve as the basis for the design
and development of the software system.
Requirements elicitation is perhaps the most difficult, most error-prone, and most communication-intensive part of software development. It can succeed only through an effective customer-developer partnership, and it is needed to find out what the users really require.

Requirements elicitation Activities:

Requirements elicitation includes the following activities. A few of them are listed below –

● Knowledge of the overall area where the system is applied.
● The details of the precise customer problem where the system is going to be applied must be understood.
● Interaction of the system with external requirements.
● Detailed investigation of user needs.
● Definition of the constraints for system development.
Requirements elicitation Methods:

There are a number of requirements elicitation methods. A few of them are listed below –

1. Interviews
2. Brainstorming Sessions
3. Facilitated Application Specification Technique (FAST)
4. Quality Function Deployment (QFD)
5. Use Case Approach

The success of an elicitation technique depends on the maturity of the analyst, developers, users, and the customer involved.

1. Interviews:

The objective of conducting an interview is to understand the customer's expectations from the software. Since it is impossible to interview every stakeholder, representatives from groups are selected based on their expertise and credibility.

Interviews may be open-ended or structured:

1. In open-ended interviews there is no pre-set agenda. Context-free questions may be asked to understand the problem.
2. In structured interviews, an agenda of fairly open questions is prepared. Sometimes a proper questionnaire is designed for the interview.
2. Brainstorming Sessions:
● It is a group technique.
● It is intended to generate lots of new ideas, hence providing a platform to share views.
● A highly trained facilitator is required to handle group bias and group conflicts.
● Every idea is documented so that everyone can see it.
● Finally, a document is prepared which consists of the list of requirements and their priority, if possible.
3. Facilitated Application Specification Technique (FAST):

Its objective is to bridge the expectation gap – the difference between what the developers think they are supposed to build and what the customers think they are going to get. A team-oriented approach is developed for requirements gathering. Each attendee is asked to make a list of objects that are:

1. Part of the environment that surrounds the system
2. Produced by the system
3. Used by the system

Each participant prepares his/her list; the different lists are then combined, redundant entries are eliminated, and the team is divided into smaller sub-teams to develop mini-specifications. Finally, a draft of the specifications is written down using all the inputs from the meeting.
4. Quality Function Deployment (QFD):

In this technique customer satisfaction is of prime concern, hence it emphasizes the requirements that are valuable to the customer. Three types of requirements are identified –

● Normal requirements – The objectives and goals of the proposed software are discussed with the customer. Example – normal requirements for a result management system may be entry of marks, calculation of results, etc.
● Expected requirements – These requirements are so obvious that the customer need not explicitly state them. Example – protection from unauthorized access.
● Exciting requirements – These include features that are beyond the customer's expectations and prove to be very satisfying when present. Example – when unauthorized access is detected, the system should back up and shut down all processes.

The major steps involved in this procedure are –

1. Identify all the stakeholders, e.g. users, developers, customers, etc.
2. List out all requirements from the customer.
3. A value indicating the degree of importance is assigned to each requirement.
4. In the end, the final list of requirements is categorized as –
● Possible to achieve
● To be deferred, with the reason for deferral
● Impossible to achieve and to be dropped
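A minimal sketch of the prioritization step just described (hypothetical requirement names and weights, not a standard QFD tool) might look like this:

```python
# Hypothetical QFD-style prioritization: each requirement gets an
# importance value from the customer; the list is then categorized.
requirements = {
    "entry of marks": 9,
    "calculation of results": 8,
    "auto-backup on intrusion": 3,
}

# Sort by degree of importance, highest first.
ranked = sorted(requirements.items(), key=lambda kv: kv[1], reverse=True)

for name, importance in ranked:
    category = "achieve now" if importance >= 7 else "defer or drop"
    print(f"{name}: importance={importance} -> {category}")
```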
5. Use Case Approach:

This technique combines text and pictures to provide a better understanding of the requirements. Use cases describe the 'what' of a system, not the 'how'; hence, they only give a functional view of the system. The components of use case design include three major things – actors, use cases, and the use case diagram.

1. Actor – An actor is an external agent that lies outside the system but interacts with it in some way. An actor may be a person, a machine, etc., and is represented as a stick figure. Actors can be primary actors or secondary actors.
● Primary actors – require assistance from the system to achieve a goal.
● Secondary actors – actors from which the system needs assistance.

2. Use cases – Use cases describe the sequence of interactions between actors and the system. They capture who (actors) does what (interaction) with the system. A complete set of use cases specifies all possible ways to use the system.

3. Use case diagram – A use case diagram graphically represents what happens when an actor interacts with a system. It captures the functional aspect of the system.
● A stick figure is used to represent an actor.
● An oval is used to represent a use case.
● A line is used to represent a relationship between an actor and a use case.
Features of requirements elicitation:

1. Stakeholder engagement: Engaging with stakeholders such as customers, end-users, project sponsors, and subject matter experts to understand their needs and requirements.
2. Gathering information: Gathering information about the system to be developed, the business processes it will support, and the end-users who will be using it.
3. Requirement prioritization: Prioritizing requirements based on their importance to the project's success.
4. Requirements documentation: Documenting the requirements in a clear and concise manner so that they can be easily understood and communicated to the development team.
5. Validation and verification: Validating and verifying the requirements with the stakeholders to ensure that they accurately represent their needs and requirements.
6. Iterative process: Requirements elicitation is an iterative process that involves continuously refining and updating the requirements based on feedback from stakeholders.
7. Communication and collaboration: Effective communication and collaboration with stakeholders, project team members, and other relevant parties to ensure that the requirements are clearly understood and implemented.
8. Flexibility: Requirements elicitation requires flexibility to adapt to changing requirements, stakeholder needs, and project constraints.
Advantages of Requirements Elicitation:

● Helps to clarify and refine customer requirements.
● Improves communication and collaboration between stakeholders.
● Increases the chances of developing a software system that meets customer needs.
● Avoids misunderstandings and helps to manage expectations.
● Supports the identification of potential risks and problems early in the development cycle.
● Facilitates the development of a comprehensive and accurate project plan.
● Increases user and stakeholder confidence in the software development process.
● Supports the identification of new business opportunities and revenue streams.

Disadvantages of Requirements Elicitation:

● Can be time-consuming and expensive.
● Requires specialized skills and expertise.
● May be impacted by changing business needs and requirements.
● Can be impacted by political and organizational factors.
● Can result in a lack of buy-in and commitment from stakeholders.
● Can be impacted by conflicting priorities and competing interests.
● May result in incomplete or inaccurate requirements if not properly managed.
● Can lead to increased development costs and decreased efficiency if requirements are not well-defined.

Requirements Validation Techniques

Requirements validation techniques are used to ensure that the software requirements are complete, consistent, and correct. Below, we discuss the requirements validation techniques in detail, along with their advantages and disadvantages.

Common Techniques used in Software Engineering

● Inspection: This technique involves reviewing the requirements document
with a group of experts, looking for errors, inconsistencies, and missing
information.
● Walkthrough: Walkthrough technique involves a group of experts
reviewing the requirements document and walking through it line by line,
discussing any issues or concerns that arise.
● Formal Verification: This technique involves mathematically proving that
the requirements are complete and consistent and that the system will meet
the requirements.
● Model-Based Verification: Model-Based Verification involves creating a
model of the system and simulating it to see if it meets the requirements.
● Prototyping: This technique involves creating a working prototype of the
system and testing it to see if it meets the requirements.

● Black-box Testing: This technique involves testing the system without any knowledge of its internal structure or implementation, to see if it meets the requirements.
● Acceptance Testing: This technique involves testing the system with real users to see if it meets their needs and requirements.
● User Feedback: This technique involves gathering feedback from the users and incorporating their suggestions and feedback into the requirements.
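As a minimal sketch of turning a requirement into a black-box test (hypothetical requirement and function names): given the requirement "the system shall reject marks outside 0–100", a test can check the behavior at the boundaries without knowing anything about the implementation.

```python
# Hypothetical black-box test for the requirement:
# "The system shall reject marks outside the range 0-100."
def enter_marks(marks: int) -> bool:
    """Stand-in for the system under test; returns True if accepted."""
    return 0 <= marks <= 100

def test_requirement_marks_range():
    assert enter_marks(0)          # boundary: lowest valid mark
    assert enter_marks(100)        # boundary: highest valid mark
    assert not enter_marks(-1)     # below range must be rejected
    assert not enter_marks(101)    # above range must be rejected

test_requirement_marks_range()
print("Requirement 'marks in 0-100' validated by the test.")
```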

What is Traceability?
This technique involves tracing the requirements throughout the entire software
development life cycle to ensure that they are being met and that any changes are
tracked and managed.

What are Agile Methodologies?

Agile methodologies such as Scrum and Kanban provide an iterative approach to validating requirements by delivering small chunks of functionality and getting feedback from the customer.

It is important to note that no single technique is sufficient on its own; a combination of different techniques is usually used to validate software requirements effectively.

What is Requirements Validation?

Requirements validation is the process of checking that the requirements defined for development actually describe the system that the customer wants. We perform requirements validation to catch issues related to requirements, usually at the initial phase of development, because an error detected later in the development process may cause excessive rework. In the requirements validation process, we perform different types of checks on the requirements mentioned in the Software Requirements Specification (SRS). These checks include:

● Completeness checks
● Consistency checks
● Validity checks
● Realism checks
● Ambiguity checks
● Verifiability

The output of requirements validation is a list of problems and the agreed-on actions for the detected problems. The list of problems indicates the problems detected during requirements validation, and the list of agreed actions states the corrective actions that should be taken to fix each detected problem. Several techniques are used, either individually or in conjunction with others, to check all or part of the system:

● Test case generation: The requirements mentioned in the SRS document should be testable; the conducted tests reveal errors present in the requirements. It is generally believed that if a test is difficult or impossible to design, the requirement will be difficult to implement and should be reconsidered.
● Prototyping: In this validation technique a prototype of the system is presented to the end-user or customer, who experiments with the presented model and checks whether it meets their needs. This type of model is generally used to collect feedback about the requirements from the user.
● Requirements Reviews: In this approach, the SRS is carefully reviewed by a group of people including people from both the contractor organization and the client side; the reviewers systematically analyze the document to check for errors and ambiguity.
● Automated Consistency Analysis: This approach is used for the automatic detection of errors, such as non-determinism, missing cases, type errors, and circular definitions, in requirements specifications. First, the requirements are structured in a formal notation; then a CASE tool is used to check the consistency of the system. All inconsistencies are reported and corrective actions are taken.
● Walk-through: A walkthrough does not have a formally defined procedure and does not require a differentiated role assignment. It is used for:
● Checking early whether the idea is feasible or not.
● Obtaining the opinions and suggestions of other people.
● Checking the approval of others and reaching an agreement.

Advantages of Requirements Validation Techniques

● Improved quality of the final product: By identifying and addressing requirements early on in the development process, validation techniques can improve the overall quality of the final product.
● Reduced development time and cost: By identifying and addressing requirements early on in the development process, validation techniques can reduce the likelihood of costly rework later on.
● Increased user involvement: Involving users in the validation process can lead to increased user buy-in and engagement in the project.
● Improved communication: Using validation techniques can improve communication between stakeholders and developers by providing a clear and visual representation of the software requirements.
● Easy testing and validation: A prototype can be easily tested and validated, allowing stakeholders to see how the final product will work and to identify any issues early on in the development process.
● Increased alignment with business goals: Using validation techniques can help to ensure that the requirements align with the overall business goals and objectives of the organization.
● Traceability: This technique can help to ensure that the requirements are being met and that any changes are tracked and managed.
● Agile methodologies: Agile methodologies provide an iterative approach to validating requirements by delivering small chunks of functionality and getting feedback from the customer.

Disadvantages of Requirements Validation Techniques

● Increased time and cost: Using validation techniques can be time-consuming and costly, especially when involving multiple stakeholders.
● Risk of conflicting requirements: Validation can surface conflicting requirements, which can make it difficult to prioritize and implement the requirements.
● Risk of changing requirements: Requirements may change over time, and it can be difficult to keep up with the changes and ensure that the project is aligned with the updated requirements.
● Misinterpretation and miscommunication: Misinterpretation and miscommunication can occur when trying to understand the requirements.
● Dependence on the tool: The team should be well-trained on the tool and its features, so that the work is driven by the requirements rather than by the tool.
● Limited validation: Validation techniques can only check the requirements that have been captured and may not identify requirements that were missed.
● Limited to functional requirements: Some validation techniques are limited to functional requirements and may not validate non-functional requirements.

Requirements management:

Requirements management is the process of analyzing, documenting, tracking, prioritizing, and agreeing on requirements and controlling communication with the relevant stakeholders. This stage takes care of the changing nature of requirements. It should be ensured that the SRS is as modifiable as possible, so as to incorporate changes in requirements specified by the end users at later stages too. Being able to modify the software as per requirements in a systematic and controlled manner is an extremely important part of the requirements engineering process.

Requirements management is the process of managing the requirements throughout the software development life cycle, including tracking and controlling changes and ensuring that the requirements are still valid and relevant. The goal of requirements management is to ensure that the software system being developed meets the needs and expectations of the stakeholders and that it is developed on time, within budget, and to the required quality.

There are several key activities involved in requirements management, including:

● Tracking and controlling changes: Monitoring and controlling changes to the requirements throughout the development process, including identifying the source of a change, assessing its impact, and approving or rejecting it.
● Version control: Keeping track of different versions of the requirements document and other related artifacts.
● Traceability: Linking the requirements to other elements of the development process, such as design, testing, and validation.
● Communication: Ensuring that the requirements are communicated effectively to all stakeholders and that any changes or issues are addressed in a timely manner.
● Monitoring and reporting: Monitoring the progress of the development process and reporting on the status of the requirements.
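As a minimal sketch of the traceability activity above (hypothetical requirement and test identifiers), a simple traceability matrix can link each requirement to the design elements and tests that cover it, making coverage gaps easy to spot:

```python
# Hypothetical traceability matrix: requirement id -> covering artifacts.
traceability = {
    "REQ-01": {"design": ["DES-03"], "tests": ["TC-11", "TC-12"]},
    "REQ-02": {"design": ["DES-05"], "tests": []},  # no test yet!
}

# Report requirements that lack test coverage.
for req_id, links in traceability.items():
    if not links["tests"]:
        print(f"{req_id} is not covered by any test case")
```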

Requirements management is a critical step in the software development life cycle, as it helps to ensure that the software system being developed meets the needs and expectations of stakeholders and that it is developed on time, within budget, and to the required quality. It also helps to prevent scope creep and to ensure that the requirements are aligned with the project goals.

ADVANTAGES AND DISADVANTAGES:

The advantages and disadvantages of the requirements engineering process in software engineering include:

Advantages:

● Helps ensure that the software being developed meets the needs and expectations of the stakeholders.
● Can help identify potential issues or problems early in the development process, allowing adjustments to be made before significant effort has been invested.
● Helps ensure that the software is developed in a cost-effective and efficient manner.
● Can improve communication and collaboration between the development team and stakeholders.
● Helps to ensure that the software system meets the needs of all stakeholders.
● Provides a clear and unambiguous description of the requirements, which helps to reduce misunderstandings and errors.
● Helps to identify potential conflicts and contradictions in the requirements, which can be resolved before the software development process begins.
● Helps to ensure that the software system is delivered on time, within budget, and to the required quality standards.
● Provides a solid foundation for the development process, which helps to reduce the risk of failure.

Disadvantages:

● Can be time-consuming and costly, particularly if the requirements gathering process is not well-managed.
● Can be difficult to ensure that all stakeholders' needs and expectations are taken into account.
● Can be challenging to ensure that the requirements are clear, consistent, and complete.
● Changes in requirements can lead to delays and increased costs in the development process.
● It can be time-consuming and expensive, especially if the requirements are complex.
● It can be difficult to elicit requirements from stakeholders who have different needs and priorities.
● Requirements may change over time, which can result in delays and additional costs.
● There may be conflicts between stakeholders, which can be difficult to resolve.
● It may be challenging to ensure that all stakeholders understand and agree on the requirements.

As a best practice, requirements engineering should be flexible and adaptable, and it should be aligned with the overall project goals.

Requirements engineering is a critical process in software engineering that involves identifying, analyzing, documenting, and managing the requirements of a software system. The requirements engineering process consists of the following stages:

1. Elicitation: In this stage, the requirements are gathered from various stakeholders such as customers, users, and domain experts. The aim is to identify the features and functionalities that the software system should provide.
2. Analysis: In this stage, the requirements are analyzed to determine their feasibility, consistency, and completeness. The aim is to identify any conflicts or contradictions in the requirements and resolve them.
3. Specification: In this stage, the requirements are documented in a clear, concise, and unambiguous manner. The aim is to provide a detailed description of the requirements that can be understood by all stakeholders.
4. Validation: In this stage, the requirements are reviewed and validated to ensure that they meet the needs of all stakeholders. The aim is to ensure that the requirements are accurate, complete, and consistent.
5. Management: In this stage, the requirements are managed throughout the software development lifecycle. The aim is to ensure that any changes or updates to the requirements are properly documented and communicated to all stakeholders.

Effective requirements engineering is crucial to the success of software development projects. It helps ensure that the software system meets the needs of all stakeholders and is delivered on time, within budget, and to the required quality standards.

System Models in Software Engineering

System models are representations that help software engineers understand, analyze,
and communicate different aspects of a software system. They provide abstractions
and visualizations that aid in the design, development, and documentation of software
projects. There are various types of system models used in software engineering:

Context Models:

Context models, also known as context diagrams, provide a high-level overview of a software system and its interactions with external entities. They show the system as a single entity and its external environment as other entities with which it interacts. Context models help define the scope of the system and identify the primary inputs and outputs.

Behavioral Models:

Behavioral models focus on depicting the dynamic behavior of a software system. They illustrate how the system functions and how its components interact over time. Behavioral models include various types:

Use Case Diagrams: Use case diagrams show the interactions between users (actors)
and the system through specific use cases. They help identify the system's
functionalities from the user's perspective.

Activity Diagrams: Activity diagrams represent the flow of activities and actions
within the system. They are especially useful for modeling business processes and
workflows.

Sequence Diagrams: Sequence diagrams showcase the interactions and messages exchanged between different components or objects over time. They provide insights into the order of events and communication patterns.

State Diagrams: State diagrams depict the different states that an object or component can be in and how it transitions between those states based on events or conditions.
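To illustrate what a state diagram captures, a minimal sketch (hypothetical order states, not from the text) can encode the states and the event-driven transitions between them directly:

```python
# Hypothetical state machine for an order: states and the events
# that move the object between them, as a state diagram would show.
transitions = {
    ("created", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("created", "cancel"): "cancelled",
}

def next_state(state: str, event: str) -> str:
    """Return the new state, or raise if the event is not allowed."""
    try:
        return transitions[(state, event)]
    except KeyError:
        raise ValueError(f"event '{event}' not allowed in state '{state}'")

state = "created"
state = next_state(state, "pay")    # created -> paid
state = next_state(state, "ship")   # paid -> shipped
print(state)                        # shipped
```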
Data Models:

Data models focus on representing the structure and organization of data within the
software system. They help in designing databases, defining data relationships, and
ensuring data integrity. Data models include:

● Entity-Relationship Diagrams (ERDs): ERDs depict the entities (objects or concepts) in the system, their attributes, and the relationships between them. They are widely used for database design.
● Class Diagrams: Class diagrams describe the structure of classes, their attributes, methods, and relationships in an object-oriented system.
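A minimal sketch of what a class diagram (or an ERD) expresses, using two hypothetical entities and a one-to-many relationship:

```python
# Hypothetical entities illustrating what a class diagram/ERD records:
# attributes on each class and a one-to-many relationship between them.
from dataclasses import dataclass, field

@dataclass
class Order:
    order_id: int
    total: float

@dataclass
class Customer:
    name: str
    email: str
    orders: list[Order] = field(default_factory=list)  # 1 customer -> many orders

alice = Customer("Alice", "alice@example.com")
alice.orders.append(Order(order_id=1, total=49.99))
print(len(alice.orders))  # 1
```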

Object Models:

Object models provide a visual representation of the objects (instances of classes) within a software system and how they interact. They are closely related to class diagrams and are used to design and communicate object-oriented systems.

Structured Methods:

Structured methods refer to systematic and disciplined approaches to software development that involve step-by-step processes, guidelines, and techniques. These methods provide a framework for analysis, design, and implementation. Examples include the Structured Systems Analysis and Design Method (SSADM) and Yourdon's Method.
Each of these system models serves a specific purpose in understanding and designing
software systems. They offer different perspectives and levels of abstraction, allowing
software engineers to tackle various aspects of the system's complexity and ensure its
successful development.
UNIT - III
Design Process in Software Engineering

The design process in software engineering is a crucial phase that bridges the gap between requirements analysis and the actual implementation of a software system. It involves transforming the requirements into a detailed blueprint or plan that guides developers in building the software. The design process aims to create a structure that meets the user's needs while also considering factors like maintainability, extensibility, and efficiency. The design process typically involves several steps:

1. Requirement Analysis: Understand and refine the user requirements to identify the functionalities, constraints, and qualities that the software needs to have.

2. Architectural Design: Create a high-level structure for the software system. This involves defining components, their relationships, and the overall organization of the system.

3. Detailed Design: Develop detailed specifications for each component, including data structures, algorithms, interfaces, and methods. This step often includes creating diagrams like class diagrams, sequence diagrams, and more.

4. Interface Design: Design the interfaces through which different components or modules will interact. This includes defining protocols, data formats, and communication methods.

5. Data Design: Design the structure of the data that the system will use and manage. This involves designing databases, defining data relationships, and ensuring data integrity.

6. Component Design: Design individual components or modules of the software, focusing on their internal structure and logic.

7. Code Generation: Translate the design specifications into actual code using appropriate programming languages.

8. Testing and Verification: Validate the design by testing the software against the requirements and design specifications. This includes unit testing, integration testing, and more.

9. Design Review: Conduct design reviews to gather feedback from stakeholders and ensure that the design meets the intended goals.

10. Documentation: Create comprehensive documentation that describes the design decisions, architectural choices, and how the software components work together.

Design Quality in Software Engineering

Design quality refers to the characteristics of a software design that determine its
effectiveness, maintainability, and overall value. A well-designed software system
exhibits certain qualities that contribute to its success throughout its lifecycle. Some
important aspects of design quality include:

1. Modularity: The design should be organized into cohesive and loosely coupled modules or components. This enhances maintainability and reusability and makes the system easier to understand.

2. Scalability: The design should accommodate future growth and changes in requirements without significant modifications to the existing system.

3. Flexibility: The design should be adaptable to changes in technology, business needs, and user requirements.

4. Reusability: Components or modules should be designed in a way that allows them to be easily reused in other parts of the system or in future projects.

5. Simplicity: A good design avoids unnecessary complexity, making it easier to understand, implement, and maintain.

6. Readability and Understandability: The design should be clear and well-documented so that developers and stakeholders can easily comprehend its structure and logic.

7. Low Coupling and High Cohesion: Components should be minimally dependent on each other (low coupling) and have a clear, focused purpose (high cohesion).

8. Security and Reliability: The design should address security concerns and ensure the reliability of the software under various conditions.

9. Performance: The design should consider performance requirements and optimize critical parts of the software for efficiency.

10. Consistency: The design should follow established coding and architectural standards to maintain consistency and reduce confusion.

Design quality directly impacts the long-term success and maintainability of a software system. A well-designed system is easier to maintain, extend, and adapt to changing needs, contributing to reduced costs and increased customer satisfaction.

Design concepts:
Design concepts in software engineering refer to fundamental principles and
guidelines that help developers create effective, maintainable, and reliable software
solutions. These concepts provide a framework for making design decisions and
ensuring that the resulting software meets user requirements and quality standards.
Here are some key design concepts in software engineering:

1. **Abstraction:** Abstraction involves focusing on essential features while ignoring unnecessary details. It simplifies complexity by representing concepts at a higher level, making the system easier to understand and manage.

2. **Modularity:** Modularity is the practice of dividing a software system into separate, independent modules or components. Each module should have a well-defined purpose and clear interfaces for communication. Modularity promotes reusability, maintainability, and parallel development.

3. **Encapsulation:** Encapsulation restricts direct access to the internal details of a module and exposes only necessary interfaces. It enhances data security, prevents unauthorized modifications, and allows changes to be made within a module without affecting other parts of the system.

4. **Separation of Concerns:** This principle advocates dividing a software system into distinct sections, each addressing a specific aspect of functionality. This separation simplifies design, development, and maintenance by isolating different concerns and reducing complexity.

5. **Information Hiding:** Information hiding restricts the visibility of certain details within a module, ensuring that implementation specifics are concealed from other modules. This promotes modularity and prevents unintended dependencies.

6. **High Cohesion:** High cohesion means that elements within a module are closely related and contribute to a single, well-defined purpose. Modules with high cohesion are easier to understand and maintain.

7. **Low Coupling:** Low coupling indicates that modules have minimal interdependence. Reducing coupling between modules makes it easier to modify one module without affecting others, enhancing flexibility and maintainability.

8. **Open-Closed Principle:** This principle states that software entities (classes, modules, functions) should be open for extension but closed for modification. This encourages adding new features without altering existing code.

9. **Single Responsibility Principle (SRP):** The SRP suggests that a class or module should have only one reason to change. It promotes a clear and focused design where each entity has a single responsibility.

10. **Liskov Substitution Principle (LSP):** The LSP states that objects of a derived class should be able to replace objects of the base class without affecting the correctness of the program. It ensures that inheritance hierarchies maintain expected behavior.

11. **Interface Segregation Principle (ISP):** The ISP recommends that clients should not be forced to depend on interfaces they don't use. It encourages designing small, focused interfaces instead of large, all-encompassing ones.

12. **Dependency Inversion Principle (DIP):** The DIP states that high-level modules should not depend on low-level modules; both should depend on abstractions. It encourages the use of interfaces or abstract classes to decouple components.

13. **Design Patterns:** Design patterns are reusable solutions to common design problems. They provide well-established approaches for addressing various design challenges and promote best practices.

14. **SOLID Principles:** The SOLID principles are a set of five design principles (Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, Dependency Inversion) that guide the creation of maintainable and extensible software.

Applying these design concepts helps software engineers create systems that are
flexible, maintainable, and robust while minimizing the risk of introducing errors or
making the system unnecessarily complex.
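To make a couple of these concepts concrete, here is a minimal sketch (hypothetical classes, not from the text) showing encapsulation, the Single Responsibility Principle, and separation of concerns together:

```python
# Hypothetical sketch: encapsulation + single responsibility.
class BankAccount:
    """Responsible only for tracking a balance (SRP)."""

    def __init__(self):
        self._balance = 0  # leading underscore: internal detail (encapsulation)

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self) -> int:
        return self._balance

class StatementPrinter:
    """Reporting lives in its own class, so changes to printing
    never force changes to BankAccount (separation of concerns)."""

    def print_statement(self, account: BankAccount) -> None:
        print(f"Balance: {account.balance}")

account = BankAccount()
account.deposit(100)
StatementPrinter().print_statement(account)  # Balance: 100
```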
Design model:
The design model in software engineering refers to a representation or blueprint that
describes the architecture, structure, and behavior of a software system. It provides a
visual and conceptual overview of how different components, modules, and
functionalities of the system are organized and interact with each other. The design
model serves as a guide for developers during the implementation phase and helps
ensure that the software meets its requirements while adhering to good design
practices.

There are several types of design models used in software engineering, each focusing
on a different aspect of the software system:

1. **Architectural Design Models:** These models provide a high-level view of the software's structure and organization. They define the major components, their relationships, and how they collaborate to achieve the system's goals. Common architectural design models include layered architecture, client-server architecture, and microservices architecture.

2. **Structural Design Models:** Structural models focus on the internal structure of the software components. They show how classes, objects, and modules are organized, including their attributes, methods, and relationships. Class diagrams and object diagrams are common structural design models.

3. **Behavioral Design Models:** Behavioral models describe how different parts of the software system interact and behave in response to various inputs and events. These models illustrate the dynamic aspects of the system. Examples include use case diagrams, activity diagrams, sequence diagrams, and state diagrams.

4. **Data Flow Models:** Data flow models depict the flow of data and information within the system. They help visualize how data is processed, transformed, and exchanged between components. Data flow diagrams (DFDs) and entity-relationship diagrams (ERDs) are examples of data flow models.

5. **User Interface (UI) Design Models:** UI design models focus on the visual and interactive aspects of the software's user interface. They outline how users will interact with the system, including layout, navigation, and visual elements.

6. **Component and Module Diagrams:** These diagrams provide a detailed view of the software's components or modules and their relationships. Component diagrams show the physical components, while module diagrams highlight the logical organization.

7. **Deployment Diagrams:** Deployment diagrams depict how software components are deployed across hardware and network infrastructure. They show the distribution of components and their relationships to physical resources.

8. **Concurrency and Interaction Diagrams:** These diagrams illustrate how components interact in a concurrent or multi-threaded environment. They help in understanding communication patterns and potential synchronization issues.

9. **Package Diagrams:** Package diagrams organize and represent the grouping of related components or modules. They help manage the complexity of larger systems by showing the hierarchical organization.

10. **Composite Structure Diagrams:** Composite structure diagrams show the internal structure of a class or component, including its parts, ports, and connectors.

The design model serves as a communication tool among developers, stakeholders, and designers. It ensures that everyone involved has a clear understanding of the software's structure, behavior, and interactions. As the design model evolves, it guides the implementation process and supports decisions related to architecture, coding practices, and system integration.

Fundamentals of Software Architecture

In the world of technology, everyone from small children to old people uses smartphones, laptops, computers, PDAs, etc. to solve simple or complex tasks online with software programs, and to the user everything looks very simple. That is the purpose of good software: to provide a good quality of service in a user-friendly environment. The overall abstraction of a software product makes it look simple and very easy for the user to use. Behind the scenes, however, building a complex software application involves complex processes, of which coding is only a small part.

After the business requirements are gathered by a business analyst, the developer team starts working from the Software Requirement Specification (SRS); the product then sequentially undergoes steps like testing, acceptance, deployment, and maintenance. Every software development process is carried out by following sequential steps that come under the Software Development Life Cycle (SDLC).

In the design phase of the Software Development Life Cycle, the software architecture is defined and documented. Here we discuss one of the significant elements of the SDLC: the software architecture.

Software Architecture:
Software architecture defines the fundamental organization of a system; more simply, it defines a structured solution. It defines how the components of a software system are assembled, their relationships, and the communication between them. It serves as a blueprint for the software application and a development basis for the developer team.

Software architecture defines a list of things that make many aspects of the software development process easier.

● A software architecture defines the structure of a system.
● A software architecture defines the behavior of a system.
● A software architecture defines component relationships.
● A software architecture defines the communication structure.
● A software architecture balances stakeholders' needs.
● A software architecture influences team structure.
● A software architecture focuses on significant elements.
● A software architecture captures early design decisions.

Characteristics of Software Architecture:

Architects separate architecture characteristics into broad categories depending upon operation, rarely appearing requirements, structure, etc. Some important characteristics which are commonly considered are listed below.

● Operational Architecture Characteristics:
1. Availability
2. Performance
3. Reliability
4. Fault tolerance
5. Scalability
● Structural Architecture Characteristics:
1. Configurability
2. Extensibility
3. Supportability
4. Portability
5. Maintainability
● Cross-Cutting Architecture Characteristics:
1. Accessibility
2. Security
3. Usability
4. Privacy
5. Feasibility

SOLID principles of Software architecture:

Each letter of the word SOLID stands for one principle of software architecture. The SOLID principles are followed to avoid product strategy mistakes; a software architecture must adhere to them to avoid architectural or developmental failure.

S.O.L.I.D PRINCIPLES

1. Single Responsibility –
Each service should have a single objective.

2. Open-Closed Principle –
Software modules should be independent and expandable.

3. Liskov Substitution Principle –
Independent services should be able to communicate with and substitute each other.

4. Interface Segregation Principle –
Software should be divided into microservices in such a way that there are no redundancies.

5. Dependency Inversion Principle –
Higher-level modules should not depend on lower-level modules, so that changes at a higher level do not affect the lower levels.
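A minimal sketch of the Dependency Inversion Principle (hypothetical classes, not from the text): the high-level service depends only on an abstraction, so a low-level module can be swapped without touching the service.

```python
# Hypothetical DIP sketch: high-level code depends on an abstraction.
from abc import ABC, abstractmethod

class Notifier(ABC):                       # the abstraction
    @abstractmethod
    def send(self, message: str) -> None: ...

class EmailNotifier(Notifier):             # low-level detail
    def send(self, message: str) -> None:
        print(f"email: {message}")

class SmsNotifier(Notifier):               # another detail, swappable
    def send(self, message: str) -> None:
        print(f"sms: {message}")

class OrderService:                        # high-level module
    def __init__(self, notifier: Notifier):
        self.notifier = notifier           # depends only on the abstraction

    def place_order(self) -> None:
        self.notifier.send("order placed")

OrderService(EmailNotifier()).place_order()  # email: order placed
OrderService(SmsNotifier()).place_order()    # sms: order placed
```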

Importance of Software Architecture:

Software architecture comes under the design phase of the software development life cycle and is one of the initial steps of the whole software development process. Proceeding to software development without a software architecture is like building a house without designing its architecture.

Software architecture is therefore an important part of software application development. From a technical and developmental point of view, the reasons software architecture is important are listed below.

● Selects the quality attributes to be optimized for a system.
● Facilitates early prototyping.
● Allows a system to be built component-wise.
● Helps in managing changes in the system.

Besides all this, software architecture is also important for many other factors, like the quality, reliability, maintainability, supportability, and performance of the software.

Advantages of Software Architecture:

● Provides a solid foundation for the software project.
● Helps in providing increased performance.
● Reduces development cost.

Disadvantages of Software Architecture:

● Sometimes getting good tools and standardization becomes a problem for software architecture.
● Initial prediction of the success of the project based on the architecture is not always possible.

From the above it is clear how important a software architecture is for the development of a software application. A good software architecture is thus also responsible for delivering a good quality software product.

Data Architecture Design and Data Management

Data architecture design is a set of standards composed of certain policies, rules, models, and conventions that manage what type of data is collected, where it is collected from, how the collected data is arranged, and how it is stored, utilized, and secured in the systems and data warehouses for further analysis.

Data is one of the essential pillars of enterprise architecture, through which an enterprise succeeds in executing its business strategy.

Data architecture design is important for creating a vision of the interactions occurring between data systems. For example, if a data architect wants to implement data integration, interaction between two systems is needed, and by using data architecture a visionary model of the data interaction during the process can be achieved.

Data architecture also describes the type of data structures applied to manage data, and it provides an easy way for data preprocessing. The data architecture is formed by dividing it into three essential models, which are then combined:

● Conceptual model – A business model which uses the Entity-Relationship (ER) model to describe the relations between entities and their attributes.
● Logical model – A model where problems are represented in the form of logic, such as rows and columns of data, classes, XML tags, and other DBMS techniques.
● Physical model – The physical model holds the database design, such as which type of database technology will be suitable for the architecture.

A data architect is responsible for all the design, creation, and deployment of the data architecture and defines how data is to be stored and retrieved; other decisions are made by internal bodies.

Factors that influence Data Architecture:

A few influences that can have an effect on data architecture are business policies, business requirements, the technology used, economics, and data processing needs.

● Business requirements – These include factors such as the expansion of the business, the performance of system access, data management, transaction management, making use of raw data by converting it into image files and records, and then storing it in data warehouses. Data warehouses are the main aspect of storing transactions in business.
● Business policies – The policies are rules that describe the way of processing data. These policies are made by internal organizational bodies and other government agencies.
● Technology in use – This includes using examples of previously completed data architecture designs as well as existing licensed software purchases and database technology.
● Business economics – Economic factors such as business growth and loss, interest rates, loans, the condition of the market, and the overall cost will also have an effect on design architecture.
● Data processing needs – These include factors such as mining of data, large continuous transactions, database management, and other data preprocessing needs.

Data Management:

● Data management is the process of managing tasks like extracting, storing, transferring, processing, and then securing data with low-cost consumption.
● The main motive of data management is to manage and safeguard people's and organizations' data in an optimal way so that they can easily create, access, delete, and update it.
● Data management is an essential process for the growth of every enterprise; without it, policies and decisions can't be made for business advancement. The better the data management, the better the productivity of the business.
● Large volumes of data, like big data, are harder to manage traditionally, so optimal technologies and tools for data management, such as Hadoop, Scala, Tableau, AWS, etc., must be utilized. These can further be used for big data analysis to achieve improvements in patterns.
● Data management can be achieved by necessary training of the employees and by maintenance by DBAs, data analysts, and data architects.

Architectural Style, Architectural Patterns and Design Patterns

Architectural Style
The architectural style shows how we organize our code, or how the system will look from a 10,000-feet helicopter view, showing the highest level of abstraction of our system design. Furthermore, when building the architectural style of our system we focus on layers and modules and how they communicate with each other. There are different types of architectural styles, and moreover, we can mix them and produce a hybrid style that consists of a mix of two or even more architectural styles. Below is a list of architectural style categories and examples for each:

● Structural architectural styles: such as layered, pipes-and-filters, and
component-based styles.
● Messaging styles: such as implicit invocation, asynchronous messaging,
and publish-subscribe styles.
● Distributed systems: such as service-oriented, peer-to-peer, object
request broker, and cloud computing styles.
● Shared memory styles: such as role-based, blackboard, and
database-centric styles.
● Adaptive system styles: such as microkernel, reflection, and
domain-specific language styles.

Architectural Patterns

The architectural pattern shows how a solution can be used to solve a recurring
problem. In other words, it reflects how code or components interact with each
other. Moreover, the architectural pattern describes the architectural style of our
system and provides solutions for the issues in our architectural style. Personally,
I prefer to define architectural patterns as a way to implement our architectural
style. For example: how do we separate the UI from the data module in our
architectural style? How do we integrate a third-party component with our system?
How many tiers will we have in our client-server architecture? Examples of
architectural patterns are microservices, message bus, service requester/consumer,
MVC, MVVM, microkernel, n-tier, domain-driven design, and
presentation-abstraction-control.

Design Patterns

Design patterns are accumulated best practices and experience that software
professionals have applied over the years, by trial and error, to solve general
problems they faced during software development. The Gang of Four (GoF: Erich
Gamma, Richard Helm, Ralph Johnson, and John Vlissides) wrote a book in 1994 titled
"Design Patterns – Elements of Reusable Object-Oriented Software" in which they
suggested that design patterns are based on two main principles of object-oriented
design:

● Program to an interface, not an implementation.
● Favor object composition over inheritance.

They also presented a set of 23 design patterns, categorized into three main sets:

1. Creational design patterns:

Provide a way to create objects while hiding the creation logic. Thus, objects are
created without being instantiated directly with the "new" keyword, which gives the
flexibility to decide which objects need to be created for a given use case. The
creational design patterns are:

● Abstract factory pattern: provides an interface to create objects without
specifying their classes.
● Singleton pattern: provides only a single instance of the class and global
access to this instance (see the sketch after this list).
● Builder pattern: separates the construction from the representation and
allows the same construction to create multiple representations.
● Prototype pattern: creates a duplicate object without affecting
performance and memory; the duplicate is built from the skeleton of an
existing object.
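As a concrete illustration of the Singleton pattern, here is a minimal Java sketch;
the class name Configuration and its contents are hypothetical examples, not a
definitive implementation:

```java
// Minimal Singleton sketch: one lazily created instance, one global access point.
public class Configuration {
    // The single shared instance, created on first use.
    private static Configuration instance;

    // A private constructor prevents direct instantiation with "new".
    private Configuration() { }

    // Global access point; synchronized here for simple thread safety.
    public static synchronized Configuration getInstance() {
        if (instance == null) {
            instance = new Configuration();
        }
        return instance;
    }

    public static void main(String[] args) {
        Configuration a = Configuration.getInstance();
        Configuration b = Configuration.getInstance();
        System.out.println(a == b); // prints "true": both calls return the same instance
    }
}
```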

2. Structural patterns:

Concerned with class and object composition. The Structural design patterns are:

● Adapter pattern: works as a bridge between two incompatible interfaces
and combines their capabilities (see the sketch after this list).
● Bridge pattern: provides a way to decouple an abstraction from its
implementation.
● Filter pattern: also known as the criteria pattern, it provides a way to
filter a set of objects using different criteria, chaining them in a
decoupled way through logical operations.
● Composite pattern: provides a way to treat a group of objects in a similar
way as a single object. It composes objects into a tree structure to
represent part as well as whole hierarchies.
● Decorator pattern: allows adding new functionality to an existing object
without altering its structure.
● Façade pattern: provides a unified interface to a set of interfaces. It
hides the complexities of the system and provides an interface through
which the client can access the system.
● Flyweight pattern: reduces the number of objects created, to decrease the
memory footprint and increase performance. It helps in reusing already
existing objects of a similar kind by storing them, and creates a new
object only when no matching object is found.
● Proxy pattern: provides a placeholder for another object to control access
to it. The proxy holds the original object and exposes its functionality
to the outside world.
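To make the structural category concrete, the following minimal Java sketch shows
the Adapter pattern bridging two incompatible interfaces; the Printer and
LegacyPrinter names are hypothetical illustrations:

```java
// Adapter sketch: the client expects the Printer interface,
// but the existing class exposes an incompatible method.
interface Printer {
    void print(String text);
}

// Existing ("legacy") class with an incompatible interface.
class LegacyPrinter {
    void printDocument(byte[] data) {
        System.out.println(new String(data));
    }
}

// The adapter implements the expected interface and delegates to the
// legacy object, converting the arguments as needed.
class PrinterAdapter implements Printer {
    private final LegacyPrinter legacy = new LegacyPrinter();

    @Override
    public void print(String text) {
        legacy.printDocument(text.getBytes());
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        Printer printer = new PrinterAdapter();
        printer.print("Hello via adapter");
    }
}
```

The client programs against Printer only; the adapter absorbs the conversion, which
is exactly the bridging role described above.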
3. Behavioral patterns:

Behavioral patterns are concerned with communication between objects. The following
is the list of behavioral patterns:

● Chain of responsibility pattern: creates a chain of receiver objects for a
request. This pattern decouples the sender and receiver of a request based
on the type of request.
● Command pattern: a data-driven pattern in which a request is wrapped under
an object as a command and passed to an invoker object.
● Interpreter pattern: provides a way to evaluate language grammar or
expressions. It involves implementing an expression interface that tells
how to interpret a particular context. This pattern is used in SQL
parsing, symbol processing engines, etc.
● Iterator pattern: provides a way to access the elements of a collection
object sequentially without any need to know its underlying
representation.
● Mediator pattern: used to reduce communication complexity between multiple
objects or classes. It provides a mediator class that normally handles all
the communication between different classes and supports easy maintenance
of the code through loose coupling.
● Memento pattern: used to restore the state of an object to a previous
state.
● Observer pattern: used when there is a one-to-many relationship between
objects, such that if one object is modified, its dependent objects are
notified automatically (see the sketch after this list).
● State pattern: used to change the class behavior based on its state.
● Null object pattern: helps to avoid null references by providing a default
object.
● Strategy pattern: provides a way to change class behavior or its algorithm
at run time.
● Template pattern: an abstract class exposes defined way(s)/template(s) to
execute its methods. Its subclasses can override the method implementation
as needed, but the invocation is to be in the same way as defined by the
abstract class.
● Visitor pattern: used to change the executing algorithm of an element
class.
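As an illustration of the behavioral category, here is a minimal Java sketch of the
Observer pattern; the Subject and Observer names follow the pattern's usual
vocabulary, while the integer state change is a hypothetical example:

```java
import java.util.ArrayList;
import java.util.List;

// Observer sketch: when the subject's state changes,
// all registered observers are notified automatically.
interface Observer {
    void update(int newState);
}

class Subject {
    private final List<Observer> observers = new ArrayList<>();

    void attach(Observer o) {
        observers.add(o);
    }

    void setState(int state) {
        // Notify every dependent object of the change.
        for (Observer o : observers) {
            o.update(state);
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.attach(s -> System.out.println("Observer A saw state " + s));
        subject.attach(s -> System.out.println("Observer B saw state " + s));
        subject.setState(42); // both observers print the new state
    }
}
```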

Two more subsets of design patterns can be added to the three categories of design
patterns above:

● J2EE patterns: patterns specifically concerned with the presentation
tier. These patterns were identified by the Sun Java Center.
● Concurrency patterns: such as the balking, join, lock, and thread pool
patterns.

The bottom line:

The architectural style is a 10,000-foot helicopter view of the system. It shows the
system design at the highest level of abstraction, as well as the high-level modules
of the application and how these modules interact. On the other hand, architectural
patterns have a huge impact on system implementation, horizontally and vertically.
Finally, design patterns are used to solve localized issues during the
implementation of the software. They also have a lower impact on the code than
architectural patterns, since a design pattern is more concerned with a specific
portion of code implementation, such as initializing objects and communication
between objects.

Architectural Design:
Introduction: Software needs an architectural design to represent its design. IEEE
defines architectural design as "the process of defining a collection of hardware
and software components and their interfaces to establish the framework for the
development of a computer system." The software that is built for computer-based
systems can exhibit one of many architectural styles.
Each style describes a system category that consists of:

● A set of components (e.g., a database, computational modules) that
perform a function required by the system.
● A set of connectors that help in coordination, communication, and
cooperation between the components.
● Conditions defining how components can be integrated to form the system.
● Semantic models that help the designer understand the overall properties
of the system.

The use of architectural styles is to establish a structure for all the components of the
system.

Taxonomy of Architectural styles:

1] Data centered architectures:

● A data store resides at the center of this architecture and is accessed
frequently by the other components, which update, add, delete, or modify
the data present within the store.
● The figure illustrates a typical data-centered style. The client software
accesses a central repository. A variation of this approach transforms the
repository into a blackboard that notifies client software when data
related to the client, or data of interest to the client, changes.
● This data-centered architecture promotes integrability. This means that
existing components can be changed and new client components can be added
to the architecture without the permission or concern of other clients.
● Data can be passed among clients using the blackboard mechanism.

Advantages of Data centered architecture

● The repository of data is independent of the clients.
● Clients work independently of each other.
● It may be simple to add additional clients.
● Modifications can be made very easily.

Data centered architecture

2] Data flow architectures:

● This kind of architecture is used when input data is transformed into
output data through a series of computational or manipulative components.
● The figure represents a pipe-and-filter architecture, since it uses both
pipes and filters: it has a set of components called filters connected by
pipes (a code sketch of this flow follows below).
● Pipes are used to transmit data from one component to the next.
● Each filter works independently and is designed to take data input of a
certain form and produce data output of a specified form for the next
filter. The filters don't require any knowledge of the workings of
neighboring filters.
● If the data flow degenerates into a single line of transforms, it is
termed batch sequential. This structure accepts a batch of data and then
applies a series of sequential components to transform it.

Advantages of Data Flow architecture

● It encourages maintenance, reuse, and modification.
● Concurrent execution is supported with this design.

Disadvantages of Data Flow architecture

● It frequently degenerates into a batch sequential system.
● Data flow architecture does not suit applications that require greater
user engagement.
● It is not easy to coordinate two different but related streams.
Data Flow architecture
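To make the pipe-and-filter flow above concrete, here is a minimal Java sketch in
which each filter is an independent transformation and the pipeline is their
composition; the specific filters (trim, lower-case, drop empty lines) are
hypothetical examples:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Pipe-and-filter sketch: each filter transforms its input independently;
// composing the filters forms the pipeline.
public class PipeAndFilterDemo {
    public static void main(String[] args) {
        // Filter 1: trim whitespace. Filter 2: lower-case.
        Function<String, String> trim = String::trim;
        Function<String, String> lower = String::toLowerCase;
        Function<String, String> pipeline = trim.andThen(lower);

        List<String> input = List.of("  Hello ", "WORLD  ", "  ");
        List<String> output = input.stream()
                .map(pipeline)                 // pipe each item through the filters
                .filter(s -> !s.isEmpty())     // final filter discards empty lines
                .collect(Collectors.toList());

        System.out.println(output); // [hello, world]
    }
}
```

Note how no filter knows about its neighbors: each is a self-contained function, and
only the composition fixes their order.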

3] Call and Return architectures: This style is used to create a program that is
easy to scale and modify. Many sub-styles exist within this category. Two of them
are explained below.
● Remote procedure call architecture: the components of a main program or
subprogram architecture are distributed among multiple computers on a
network.
● Main program or subprogram architectures: the main program structure
decomposes into a number of subprograms or functions arranged in a control
hierarchy. The main program invokes a number of subprograms, which can in
turn invoke other components.

4] Object Oriented architecture: The components of the system encapsulate data and
the operations that must be applied to manipulate that data. Coordination and
communication between the components are established via message passing.

Characteristics of Object Oriented architecture

● Objects protect the system's integrity.
● An object is unaware of the representation of other objects.

Advantages of Object Oriented architecture

● It enables the designer to decompose a problem into a collection of
autonomous objects.
● Other objects are unaware of an object's implementation details, allowing
changes to be made without having an impact on those other objects.
5] Layered architecture:
● A number of different layers are defined, with each layer performing a
well-defined set of operations. Each layer performs operations that become
progressively closer to the machine instruction set (see the sketch after
this list).
● At the outer layer, components receive the user interface operations, and
at the inner layers, components perform operating system interfacing
(communication and coordination with the OS).
● Intermediate layers provide utility services and application software
functions.
● One common example of this architectural style is the OSI-ISO (Open
Systems Interconnection-International Organisation for Standardisation)
communication system.
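The following minimal Java sketch illustrates the layered idea described above, with
each layer calling only the layer directly below it; all class and method names are
hypothetical illustrations:

```java
// Layered-architecture sketch: each layer only calls the layer directly below it.
class DataLayer {                       // innermost layer: storage access
    String load(int id) { return "record-" + id; }
}

class ServiceLayer {                    // middle layer: business logic
    private final DataLayer data = new DataLayer();
    String describe(int id) { return "Details for " + data.load(id); }
}

class UiLayer {                         // outer layer: user interface operations
    private final ServiceLayer service = new ServiceLayer();
    void show(int id) { System.out.println(service.describe(id)); }
}

public class LayeredDemo {
    public static void main(String[] args) {
        new UiLayer().show(7); // the request passes down through the layers
    }
}
```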

Conceptual Model of the Unified Modeling Language (UML)

The Unified Modeling Language (UML) is a standard visual language for describing and
modelling software blueprints. The UML is more than just a graphical language.
Stated formally, the UML is for visualizing, specifying, constructing, and
documenting the artifacts of a software-intensive system (particularly systems built
using the object-oriented style).

Three Aspects of UML:

Figure – Three Aspects of UML

Note – Language, Model, and Unified are the important aspects of UML, as described
in the map above.

1. Language:

● It enables us to communicate about a subject, which includes the
requirements and the system.
● Without a language, it is difficult for a team to communicate and
collaborate to successfully develop a system.

2. Model:

● It is a representation of a subject.
● It captures a set of ideas (known as abstractions) about its subject.

3. Unified:

● It brings together the information systems and technology industry's best
engineering practices.
● These practices involve applying techniques that allow us to successfully
develop systems.

A Conceptual Model:
A conceptual model of the language comprises three major elements:

• The Building Blocks

• The Rules

• Some Common Mechanisms

Once you understand these elements, you will be able to read and recognize the
models as well as create some of them.

Figure – A Conceptual Model of the UML

Building Blocks:
The vocabulary of the UML encompasses three kinds of building blocks:

Things:
Things are the abstractions that are first-class citizens in a model; relationships tie
these things together; diagrams group interesting collections of things.
There are 4 kinds of things in the UML:
1. Structural things

2. Behavioral things

3. Grouping things

4. Annotational things

These things are the basic object-oriented building blocks of the UML. You use them
to write well-formed models.

Relationships:
There are 4 kinds of relationships in the UML:
1. Dependency

2. Association

3. Generalization

4. Realization

These relationships are the basic relational building blocks of the UML.

Diagrams:
It is the graphical presentation of a set of elements. It is rendered as a connected graph
of vertices (things) and arcs (relationships).
1. Class diagram
2. Object diagram

3. Use case diagram

4. Sequence diagram

5. Collaboration diagram

6. Statechart diagram

7. Activity diagram

8. Component diagram

9. Deployment diagram

Rules:
The UML has a number of rules that specify what a well-formed model should look
like. A well-formed model is one that is semantically self-consistent and in harmony
with all its related models.
The UML has semantic rules for:

1. Names – What you can call things, relationships, and diagrams.

2. Scope – The context that gives specific meaning to a name.

3. Visibility – How those names can be seen and used by others.

4. Integrity – How things properly and consistently relate to one another.

5. Execution – What it means to run or simulate a dynamic model.

Common Mechanisms:
The UML is made simpler by the four common mechanisms. They are as
follows:
1. Specifications

2. Adornments

3. Common divisions

4. Extensibility mechanisms

Basic structural modeling:

Class Diagrams:

UML class diagrams: Class diagrams are the main building blocks of every
object-oriented method. A class diagram can be used to show classes, relationships,
interfaces, association, and collaboration; its notation is standardized in UML.
Since classes are the building blocks of an application based on OOP, the class
diagram has an appropriate structure to represent classes, inheritance,
relationships, and everything else that OOP involves. It describes various kinds of
objects and the static relationships between them.
The main purposes of using class diagrams are:

● It is the only UML diagram that can appropriately depict the various
aspects of the OOP concept.
● Proper design and analysis of applications can be faster and more
efficient.
● It is the base for the deployment and component diagrams.
Several software tools are available, online and offline, to draw these diagrams,
like Edraw Max, Lucidchart, etc. There are several points to keep in focus while
drawing a class diagram. These can be regarded as its syntax:

● Each class is represented by a rectangle subdivided into three
compartments: name, attributes, and operations.
● There are three types of modifiers that are used to decide the visibility
of attributes and operations:
● + is used for public visibility (for everyone)
● # is used for protected visibility (for friends and derived classes)
● – is used for private visibility (for only me)

Below is the example of an Animal class (parent) having two child classes, Dog and
Cat, each with an object (d1 and c1) inheriting the properties of the parent class;
a Java rendering of this hierarchy is sketched below.
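Assuming the diagram described above, the same hierarchy can be rendered in Java
roughly as follows; the attribute and operation names are hypothetical, chosen only
to mirror the diagram's compartments and visibility modifiers:

```java
// Java rendering of the described class diagram: Animal is the parent class;
// Dog and Cat inherit its attributes and operations.
class Animal {
    protected String name;              // '#' (protected) visibility in the diagram
    public void eat() {                 // '+' (public) operation
        System.out.println(name + " is eating");
    }
}

class Dog extends Animal {
    public Dog(String name) { this.name = name; }
    public void bark() { System.out.println(name + " says woof"); }
}

class Cat extends Animal {
    public Cat(String name) { this.name = name; }
    public void meow() { System.out.println(name + " says meow"); }
}

public class ClassDiagramDemo {
    public static void main(String[] args) {
        Dog d1 = new Dog("d1");         // the objects d1 and c1 from the example
        Cat c1 = new Cat("c1");
        d1.eat();                       // d1 inherits eat() from Animal
        d1.bark();
        c1.eat();
        c1.meow();
    }
}
```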
Sequence Diagrams:
Sequence Diagrams – A sequence diagram simply depicts interactions between objects
in sequential order, i.e., the order in which these interactions take place. We can
also use the terms event diagrams or event scenarios to refer to a sequence diagram.
Sequence diagrams describe how, and in what order, the objects in a system function.
These diagrams are widely used by businesspeople and software developers to document
and understand the requirements for new and existing systems.

Sequence Diagram Notations –

1. Actors – An actor in a UML diagram represents a type of role where it
interacts with the system and its objects. It is important to note here
that an actor is always outside the scope of the system we aim to model
using the UML diagram.
Figure – notation symbol for actor
We use actors to depict various roles, including human users and other
external subjects. We represent an actor in a UML diagram using a stick
person notation. We can have multiple actors in a sequence diagram. For
example – here the user in a seat reservation system is shown as an actor
that exists outside the system and is not a part of the system.
Figure – an actor interacting with a seat reservation system

2. Lifelines – A lifeline is a named element which depicts an individual
participant in a sequence diagram. Basically, each instance in a sequence
diagram is represented by a lifeline. Lifeline elements are located at the
top of a sequence diagram. The standard in UML for naming a lifeline
follows the format – Instance Name : class name.
Figure – lifeline
We display a lifeline as a rectangle called the head, with its name and
type. The head is located on top of a vertical dashed line (referred to as
the stem) as shown above. If we want to model an unnamed instance, we
follow the same pattern except that the name portion of the lifeline is
left blank.
Difference between a lifeline and an actor – A lifeline always portrays an
object internal to the system, whereas actors are used to depict objects
external to the system. The following is an example of a sequence diagram:
Figure – a sequence diagram

3. Messages – Communication between objects is depicted using messages. The
messages appear in sequential order on the lifeline. We represent messages
using arrows. Lifelines and messages form the core of a sequence diagram.
Messages can be broadly classified into the following categories:


Figure – a sequence diagram with different types of messages
● Synchronous messages – A synchronous message waits for a
reply before the interaction can move forward. The sender waits
until the receiver has completed the processing of the message.
The caller continues only when it knows that the receiver has
processed the previous message i.e. it receives a reply message.
A large number of calls in object oriented programming are
synchronous. We use a solid arrow head to represent a
synchronous message.

Figure – a sequence diagram using a synchronous message


● Asynchronous Messages – An asynchronous message
does not wait for a reply from the receiver. The interaction
moves forward irrespective of the receiver processing the
previous message or not. We use a lined arrow head to represent
an asynchronous message.

● Create message – We use a create message to instantiate a new
object in the sequence diagram. There are situations when a
particular message call requires the creation of an object. It is
represented with a dotted arrow with the word "create" labelled on
it to specify that it is the create message symbol. For example –
the creation of a new order on an e-commerce website would require
a new object of the Order class to be created.

Figure – a situation where create message is used


● Delete message – We use a delete message to delete an object. When
an object is deallocated from memory or is destroyed within the
system, we use the delete message symbol. It destroys the
occurrence of the object in the system. It is represented by an
arrow terminating with an X. For example – in the scenario below,
when the order is received by the user, the object of the Order
class can be destroyed.
Figure – a scenario where delete message is used
● Self message – Certain scenarios might arise where an object needs
to send a message to itself. Such messages are called self messages
and are represented with a U-shaped arrow.
Figure – self message
For example – consider a scenario where the device wants to access
its webcam. Such a scenario is represented using a self message.
Figure – a scenario where a self message is used
● Reply message – Reply messages are used to show a message being
sent from the receiver back to the sender. We represent a
return/reply message using an open arrowhead with a dotted line.
The interaction moves forward only when a reply message is sent by
the receiver.
Figure – reply message
For example – consider the scenario where the device requests a
photo from the user. Here the message which shows the photo being
sent is a reply message.
Figure – a scenario where a reply message is used
● Found message – A found message is used to represent a scenario
where an unknown source sends the message. It is represented using
an arrow directed towards a lifeline from an end point. For
example: consider the scenario of a hardware failure.
Figure – found message
The failure can be due to multiple reasons, and we are not certain
as to what caused it.
Figure – a scenario where found message is used
● Lost message – A lost message is used to represent a scenario where
the recipient is not known to the system. It is represented using
an arrow directed towards an end point from a lifeline. For
example: consider a scenario where a warning is generated.
Figure – lost message
The warning might be generated for the user or for other
software/objects that the lifeline is interacting with. Since the
destination is not known beforehand, we use the lost message
symbol.
Figure – a scenario where lost message is used


4. Guards – To model conditions we use guards in UML. They are used when we
need to restrict the flow of messages until a condition is met. Guards
play an important role in letting software developers know the constraints
attached to a system or a particular process. For example: in order to be
able to withdraw cash, having a balance greater than zero is a condition
that must be met, as shown below.

Figure – sequence diagram using a guard

A sequence diagram for an emotion based music player –

Figure – a sequence diagram for an emotion based music player

The diagram above depicts the sequence for an emotion based music player:

1. First, the application is opened by the user.


2. The device then gets access to the web cam.
3. The webcam captures the image of the user.
4. The device uses algorithms to detect the face and predict the mood.
5. It then requests database for dictionary of possible moods.
6. The mood is retrieved from the database.
7. The mood is displayed to the user.
8. The music is requested from the database.
9. The playlist is generated and finally shown to the user.

Uses of sequence diagrams –


● Used to model and visualise the logic behind a sophisticated function,
operation or procedure.
● They are also used to show details of UML use case diagrams.
● Used to understand the detailed functionality of current or future
systems.
● Visualise how messages and tasks move between objects or components in
a system.

What is a collaboration diagram?

A collaboration diagram, also known as a communication diagram, is an illustration
of the relationships and interactions among software objects in the Unified Modeling
Language (UML). Developers can use these diagrams to portray the dynamic behaviour
of a particular use case and define the role of each object.

To create a collaboration diagram, first identify the structural elements required
to carry out the functionality of an interaction. Then build a model using the
relationships between those elements. Several vendors offer software for creating
and editing collaboration diagrams.

Notations of a collaboration diagram

A collaboration diagram resembles a flowchart that portrays the roles, functionality
and behaviour of individual objects as well as the overall operation of the system
in real time. The four major components of a collaboration diagram are the
following:

1. Objects. These are shown as rectangles with naming labels inside. The
naming label follows the convention of object name : class name. If an
object has a property or state that specifically influences the
collaboration, this should also be noted.

2. Actors. These are instances that invoke the interaction in the diagram.
Each actor has a name and a role, with one actor initiating the entire
use case.

3. Links. These connect objects with actors and are depicted using a solid
line between two elements. Each link is an instance where messages can be
sent.

4. Messages between objects. These are shown as a labelled arrow placed near
a link. These messages are communications between objects that convey
information about the activity and can include the sequence number.

The most important objects are placed in the centre of the diagram, with all other
participating objects branching off. After all objects are placed, links and
messages should be added in between.


Components of a collaboration diagram

When to use a collaboration diagram

Developers should use collaboration diagrams when the relationships among objects
are crucial to display. A few examples of instances where collaboration diagrams
might be helpful include the following:

● Modelling collaborations, mechanisms or the structural organisation within
a system design.
● Providing an overview of collaborating objects within an object-oriented
system.
● Exhibiting many alternative scenarios for the same use case.
● Demonstrating forward and reverse engineering.
● Capturing the passage of information between objects.
● Visualising the complex logic behind an operation.

However, collaboration diagrams are best suited to the portrayal of simple
interactions among relatively small numbers of objects. As the number of objects and
messages grows, a collaboration diagram can become difficult to read and use
efficiently. Additionally, collaboration diagrams typically exclude descriptive
information, such as timing.

Collaboration vs. sequence diagrams

In UML, the two types of interaction diagrams are collaboration and sequence
diagrams. While both types use similar information, they display it in separate
ways. Collaboration diagrams visualize the structural organization of objects and
their interactions. Sequence diagrams, on the other hand, focus on the order of
messages that flow between objects. In most scenarios, a single figure is not
sufficient to describe the behavior of a system, and both figures are required.
Use Case Diagram
Use case diagrams are referred to as behavior models or diagrams. They simply
describe and display the relations or interactions between the users or customers
and the providers of an application service or system. They describe the different
actions that a system performs, in collaboration with one or more users of the
system, to achieve something. Use case diagrams are used a lot nowadays to model
systems.

Here, we will look at the design of a use case diagram for a library management
system. Some scenarios of the system are as follows:

1. A user who registers himself as a new user is initially regarded as staff
or a student for the library system.
● For the user to get registered as a new user, registration forms
are available that need to be filled in by the user.
● After registration, a library card is issued to the user by the
librarian. On the library card, an ID is assigned to the cardholder
or user.
2. After getting the library card, a new book is requested by the user as per
their requirement.
3. After requesting, the desired book is reserved by the user, which means no
other user can request that book.
4. Now, the user can renew a book, which means the user gets a new due date
for the desired book.
5. If the user forgets to return the book before the due date, the user pays
a fine. Likewise, if the user forgets to renew the book by the due date,
the book becomes overdue and the user pays a fine.
6. Users can fill in the feedback form available if they want to.
7. The librarian has a key role in this system. The librarian adds a record
to the library database about each student or user every time a book is
issued or returned, or a fine is paid.
8. The librarian also deletes the record of a particular student if the
student leaves or graduates from the college. If a book no longer exists
in the library, then the record of that particular book is also deleted.
9. Updating the database is an important role of the librarian.

Component Based Diagram

Component diagrams are used to show the code modules of a system in the Unified
Modeling Language (UML). They are generally used for modeling subsystems. A
component diagram represents how each component acts during the execution and
running of a system program. They are also used to show and represent the structure
and organization of all components. These code modules include application programs,
ActiveX controls, Java Beans, backend databases, or ASP programs. Component diagrams
represent the implementation view of a model, and they represent the interfaces and
dependencies among the elements of a software architecture. The word component
simply means a module of a class that usually represents an independent subsystem.
These components have the ability to interface with the rest of the system. The
component diagram is used to explain the working and behavior of the various
components of a system and is a static UML diagram. Component diagrams are also used
for subsystem modeling. The main purpose of a component diagram is simply to show
the relationships among the various components of a system.

The component and interface are as shown below:

Example –
Following is a component diagram for the 'On-line Course Registration' system. This
diagram shows a conceptual view of the server-side components.
Advantages :

● Component diagrams are very simple, standardized, and very easy to
understand.
● They are also useful in representing the implementation of a system.
● They are very useful when you want to design a device that contains an
input-output socket.
● The use of reusable components also helps in reducing overall development
cost.
● It is very easy to modify and update an implementation without causing
other side effects.

Disadvantages :

● They cannot be used for designing software like web pages, applications,
etc.
● They also require supporting equipment and actuators for each component.
UNIT - IV Testing Strategies

Strategic Approach to Software Testing Introduction:

Software testing is a critical phase in the software development lifecycle that aims to
identify defects, ensure quality, and validate that the software meets its requirements.
A strategic approach to software testing involves planning, designing, and executing
tests systematically to achieve optimal results. It ensures that testing efforts are well-
organized, efficient, and aligned with project goals.

Key Principles of a Strategic Testing Approach

1. Early Testing: Start testing as early as possible in the development
process. Identify defects and issues early, reducing the cost of fixing
them later.

2. Test Planning: Develop a comprehensive test plan that outlines testing
goals, scope, resources, schedules, and test objectives.

3. Requirement Analysis: Understand and analyze requirements thoroughly to
create test cases that cover all expected functionalities and scenarios.

4. Test Design: Design test cases that are effective, efficient, and cover
various use cases. Prioritize tests based on risk and criticality.

5. Automation: Utilize test automation to execute repetitive and
time-consuming tests, freeing up manual testers to focus on exploratory
testing.

6. Traceability: Maintain traceability between requirements, test cases, and
defects. Ensure that each requirement has corresponding test coverage.

7. Regression Testing: Regularly perform regression testing to ensure that
new changes do not adversely affect existing functionalities.

8. Test Environment: Set up and maintain test environments that closely
resemble the production environment to ensure accurate testing.

9. Test Data Management: Plan and manage test data to cover a wide range of
scenarios and ensure realistic testing.

10. Continuous Improvement: Continuously refine and improve testing processes
based on lessons learned from previous testing cycles.

TEST STRATEGIES FOR CONVENTIONAL SOFTWARE

Unit Testing : The unit test focuses on the internal processing logic and data
structures within the boundaries of a component. This type of testing can be
conducted in parallel for multiple components.
Unit-test considerations:-

1. The module interface is tested to ensure that information properly flows into
and out of the module.

2. Local data structures are examined to ensure that data stored temporarily
maintains its integrity during execution.

3. All independent paths are exercised to ensure that all statements in a module
have been executed at least once.

4. Boundary conditions are tested to ensure that the module operates properly at
its boundaries. Software often fails at its boundaries.

5. All error-handling paths are tested. If data do not enter and exit properly, all
other tests are moot.

Among the potential errors that should be tested when error handling is evaluated are:

(1) Error description is unintelligible,

(2) Error noted does not correspond to error encountered,

(3) Error condition causes system intervention prior to error handling,

(4) exception-condition processing is incorrect,

(5) Error description does not provide enough information to assist in the location of
the cause of the error

Unit-test procedures:-
The design of unit tests can occur before coding begins or after source code has
been generated. Because a component is not a stand-alone program, driver and/or stub
software must often be developed for each unit test. A driver is nothing more than a
"main program" that accepts test case data, passes such data to the component (to be
tested), and prints relevant results. Stubs serve to replace modules that are
subordinate to (invoked by) the component to be tested. A stub may do minimal data
manipulation, print verification of entry, and return control to the module
undergoing testing. Drivers and stubs represent testing "overhead": both are
software that must be written (formal design is not commonly applied) but that is
not delivered with the final software product. A sketch of a driver and a stub
follows below.
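Here is a minimal Java sketch of a driver and a stub as described above; the
component under test (OrderComponent) and its subordinate pricing module are
hypothetical illustrations:

```java
// Interface to the subordinate module that the component normally invokes.
interface PricingService {
    double priceOf(String item);
}

// The component being unit tested.
class OrderComponent {
    private final PricingService pricing;
    OrderComponent(PricingService pricing) { this.pricing = pricing; }

    double computeTotal(String[] items) {
        double total = 0;
        for (String item : items) total += pricing.priceOf(item);
        return total;
    }
}

// The "main program" test driver: it feeds test case data to the
// component and prints the relevant result.
public class OrderComponentDriver {
    public static void main(String[] args) {
        // Stub: replaces the real subordinate module with minimal behavior,
        // returning a fixed price and handing control straight back.
        PricingService stub = item -> 10.0;
        OrderComponent component = new OrderComponent(stub);

        double total = component.computeTotal(new String[] {"a", "b", "c"});
        System.out.println(total == 30.0 ? "PASS" : "FAIL: got " + total);
    }
}
```

Neither the driver nor the stub ships with the product; they exist only so the
component can be exercised in isolation.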

Integration Testing : Data can be lost across an interface; one component can have
an inadvertent, adverse effect on another; subfunctions, when combined, may not
produce the desired major function. The objective of integration testing is to take
unit-tested components and build a program structure that has been dictated by
design. The program is constructed and tested in small increments, where errors are
easier to isolate and correct. A number of different incremental integration
strategies exist:

a) Top-down integration testing is an incremental approach to construction of the
software architecture. Modules are integrated by moving downward through the control
hierarchy. Modules subordinate to the main control module are incorporated into the
structure in either a depth-first or breadth-first manner. The integration process
is performed in a series of five steps:

1. The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main control module.

2. Depending on the integration approach selected (i.e., depth or breadth first),
subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.

4. On completion of each set of tests, another stub is replaced with the real
component.

5. Regression testing may be conducted to ensure that new errors have not been
introduced. The top-down integration strategy verifies major control or decision
points early in the test process. Stubs replace low-level modules at the beginning of
top-down testing. Therefore, no significant data can flow upward in the program
structure. As a tester, you are left with three choices:

(1) Delay many tests until stubs are replaced with actual modules,

(2) Develop stubs that perform limited functions that simulate the actual module, or

(3) Integrate the software from the bottom of the hierarchy upward.

b) Bottom-up integration begins construction and testing with components at the
lowest levels in the program structure. Because components are integrated from the
bottom up, the functionality provided by components subordinate to a given level is
always available, and the need for stubs is eliminated. A bottom-up integration
strategy may be implemented with the following steps:

1. Low-level components are combined into clusters (sometimes called builds) that
perform a specific software subfunction.

2. A driver (a control program for testing) is written to coordinate test case
input and output.

3. The cluster is tested.

4. Drivers are removed and clusters are combined moving upward in the program
structure. Integration follows the pattern shown in the figure, where the D's are
drivers and the M's are modules. Drivers are removed prior to integration of the
modules.

Regression testing:- Each time a new module is added as part of integration testing,
the software changes. New data flow paths are established, new I/O may occur, and
new control logic is invoked. These changes may cause problems with functions that
previously worked flawlessly. Regression testing is the re-execution of some subset
of tests that have already been conducted, to ensure that changes have not
propagated unintended side effects.

Regression testing may be conducted manually or using automated capture/playback
tools. Capture/playback tools enable the software engineer to capture test cases and
results for subsequent playback and comparison. The regression test suite contains
three different classes of test cases:

● A representative sample of tests that will exercise all software
functions.
● Additional tests that focus on software functions that are likely to be
affected by the change.
● Tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow. A minimal
sketch of such a suite follows below.
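Here is a minimal Java sketch of a regression suite; the function under test
(discount) and its previously passing test cases are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Regression-suite sketch: previously passing test cases are re-executed
// after every change to catch unintended side effects.
public class RegressionSuite {
    // Function under test: 10% discount for amounts of 100 or more.
    static double discount(double amount) {
        return amount >= 100 ? amount * 0.9 : amount;
    }

    public static void main(String[] args) {
        // A representative sample of tests that passed before the change.
        Map<Double, Double> cases = new LinkedHashMap<>();
        cases.put(50.0, 50.0);    // below the threshold: unchanged
        cases.put(100.0, 90.0);   // on the boundary: discount applies
        cases.put(200.0, 180.0);  // above the threshold

        for (Map.Entry<Double, Double> c : cases.entrySet()) {
            double got = discount(c.getKey());
            boolean pass = Math.abs(got - c.getValue()) < 1e-9;
            System.out.println((pass ? "PASS " : "FAIL ")
                    + "discount(" + c.getKey() + ") = " + got);
        }
    }
}
```

After each new module is integrated, rerunning this whole suite (not just tests for
the new code) is what makes it a regression check.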

Smoke testing:- This is an integration testing approach that is commonly used when
product software is developed. It is designed as a pacing mechanism for
time-critical projects, allowing the software team to assess the project on a
frequent basis. In essence, the smoke-testing approach encompasses the following
activities:

1. Software components that have been translated into code are integrated into a
build. A build includes all data files, libraries, reusable modules, and engineered
components that are required to implement one or more product functions.

2. A series of tests is designed to expose errors that will keep the build from
properly performing its function. The intent should be to uncover "showstopper"
errors that have the highest likelihood of throwing the software project behind
schedule.

3. The build is integrated with other builds, and the entire product is smoke
tested daily. The integration approach may be top down or bottom up.

Smoke testing provides a number of benefits when it is applied on complex,
time-critical software projects:

● Integration risk is minimized. Because smoke tests are conducted daily,
incompatibilities and other show-stopper errors are uncovered early.
● The quality of the end product is improved. Smoke testing is likely to
uncover functional errors as well as architectural and component-level
design errors.
● Error diagnosis and correction are simplified. Errors uncovered during
smoke testing are likely to be associated with "new software increments";
that is, the software that has just been added to the build(s) is a
probable cause of a newly discovered error.
● Progress is easier to assess. With each passing day, more of the software
has been integrated and more has been demonstrated to work. This improves
team morale and gives managers a good indication that progress is being
made.

Strategic options:- The major disadvantage of the top-down approach is the need for
stubs and the attendant testing difficulties that can be associated with them. The
major disadvantage of bottom-up integration is that "the program as an entity does
not exist until the last module is added".

Software Testing:
Software Testing can be majorly classified into two categories:

1. Black Box Testing is a software testing method in which the internal
structure/design/implementation of the item being tested is not known to
the tester. Only the external design and structure are tested.

2. White Box Testing is a software testing method in which the internal
structure/design/implementation of the item being tested is known to the
tester. The implementation and impact of the code are tested.

Black box testing and white box testing are two different approaches to software
testing, and their differences are as follows:

Black box testing is a testing technique in which the internal workings of the
software are not known to the tester. The tester only focuses on the input and
output of the software. White box testing, in contrast, is a testing technique in
which the tester has knowledge of the internal workings of the software and can test
individual code snippets, algorithms, and methods.

Testing objectives: Black box testing is mainly focused on testing the functionality of
the software, ensuring that it meets the requirements and specifications. White box
testing is mainly focused on ensuring that the internal code of the software is correct
and efficient.

Knowledge level: Black box testing does not require any knowledge of the internal
workings of the software and can be performed by testers who are not familiar with
programming languages. White box testing requires knowledge of programming
languages, software architecture, and design patterns.

Testing methods: Black box testing uses methods like equivalence partitioning,
boundary value analysis, and error guessing to create test cases (boundary value
analysis is sketched below). White box testing uses methods like control flow
testing, data flow testing, and statement coverage.
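As an illustration of one black-box technique named above, here is a minimal Java
sketch of boundary value analysis; the function under test (isEligible) and its
valid range of 18 to 65 are hypothetical:

```java
// Boundary value analysis sketch: for an input range of 18..65, test cases
// are chosen just below, on, and just above each boundary, because faults
// cluster at boundaries rather than in the middle of the range.
public class BoundaryValueDemo {
    // Hypothetical function under test.
    static boolean isEligible(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        int[] boundaryCases = {17, 18, 19, 64, 65, 66};
        boolean[] expected  = {false, true, true, true, true, false};

        for (int i = 0; i < boundaryCases.length; i++) {
            boolean got = isEligible(boundaryCases[i]);
            System.out.println("age " + boundaryCases[i] + ": "
                    + (got == expected[i] ? "PASS" : "FAIL"));
        }
    }
}
```

Note that the test needs no knowledge of how isEligible is implemented, which is
what makes this a black-box technique.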

Scope: Black box testing is generally used for testing the software at the functional
level. White box testing is used for testing the software at the unit level, integration
level and system level.

Advantages and disadvantages:

Black box testing is easy to use, requires no programming knowledge and is effective
in detecting functional issues. However, it may miss some important internal defects
that are not related to functionality. White box testing is effective in detecting internal
defects, and ensures that the code is efficient and maintainable. However, it requires
programming knowledge and can be
time-consuming.

In conclusion, both black box testing and white box testing are important for software
testing, and the choice of approach depends on the testing
objectives, the testing stage, and the available resources.
Differences between Black Box Testing vs White Box Testing:

Black Box Testing | White Box Testing
It is a way of software testing in which the internal structure, program, or code is hidden and nothing is known about it. | It is a way of testing the software in which the tester has knowledge about the internal structure, code, or program of the software.
Implementation of code is not needed for black box testing. | Code implementation is necessary for white box testing.
It is mostly done by software testers. | It is mostly done by software developers.
No knowledge of implementation is needed. | Knowledge of implementation is required.
It can be referred to as outer or external software testing. | It is the inner or internal software testing.
It is a functional test of the software. | It is a structural test of the software.
This testing can be initiated based on the requirement specifications document. | This type of testing is started after a detailed design document.
No knowledge of programming is required. | It is mandatory to have knowledge of programming.
It is the behaviour testing of the software. | It is the logic testing of the software.
It is applicable to the higher levels of software testing. | It is generally applicable to the lower levels of software testing.
It is also called closed testing. | It is also called clear box testing.
It is the least time consuming. | It is the most time consuming.
It is not suitable or preferred for algorithm testing. | It is suitable for algorithm testing.
It can be done by trial-and-error ways and methods. | Data domains along with inner or internal boundaries can be better tested.
Example: searching something on Google by using keywords. | Example: checking and verifying loops by input.
Black-box test design techniques: decision table testing, all-pairs testing, equivalence partitioning, error guessing. | White-box test design techniques: control flow testing, data flow testing, branch testing.
Types of Black Box Testing: functional testing, non-functional testing, regression testing. | Types of White Box Testing: path testing, loop testing, condition testing.
It is less exhaustive as compared to white box testing. | It is comparatively more exhaustive than black box testing.

Verification and Validation Testing:


In this section, we will learn about verification and validation testing and their major
differences.
Verification testing
Verification testing includes different activities such as reviews of business
requirements and system requirements, design reviews, and code walkthroughs while
developing a product.

It is also known as static testing, where we ensure that "we are developing the
right product". It also checks that the developed application fulfils all the
requirements given by the client.

Validation Testing:
Validation testing is testing in which the tester performs functional and
non-functional testing. Here, functional testing includes Unit Testing (UT),
Integration Testing (IT), and System Testing (ST), and non-functional testing
includes User Acceptance Testing (UAT).
Validation testing is also known as dynamic testing, where we ensure that "we have
developed the product right". It also checks that the software meets the business
needs of the client.

Note: The verification and validation processes are done under the V model of the
software development life cycle.

Difference between verification and validation testing

Verification | Validation
We check whether we are developing the right product or not. | We check whether the developed product is right.
Verification is also known as static testing. | Validation is also known as dynamic testing.
Verification includes different methods like inspections, reviews, and walkthroughs. | Validation includes testing like functional testing, system testing, integration testing, and user acceptance testing.
It is a process of checking the work-products (not the final product) of a development cycle to decide whether the product meets the specified requirements. | It is a process of checking the software during or at the end of the development cycle to decide whether the software follows the specified business requirements.
Quality assurance comes under verification testing. | Quality control comes under validation testing.
The execution of code does not happen in verification testing. | In validation testing, the execution of code happens.
In verification testing, we can find bugs early in the development phase of the product. | In validation testing, we can find those bugs which are not caught in the verification process.
Verification testing is executed by the quality assurance team to make sure that the product is developed according to customers' requirements. | Validation testing is executed by the testing team to test the application.
Verification is done before validation testing. | After verification testing, validation testing takes place.
In this type of testing, we can verify whether the inputs follow the outputs or not. | In this type of testing, we can validate whether the user accepts the product or not.

System Testing:

INTRODUCTION:

System Testing is a type of software testing that is performed on a complete,
integrated system to evaluate the compliance of the system with the corresponding
requirements. In system testing, the components that passed integration testing are
taken as input. The goal of integration testing is to detect any irregularity
between the units that are integrated together; system testing detects defects
within both the integrated units and the whole system. The result of system testing
is the observed behaviour of a component or a system when it is tested. System
testing is carried out on the whole system in the context of system requirement
specifications, functional requirement specifications, or both. It tests the design
and behaviour of the system as well as the expectations of the customer. It is
performed to test the system beyond the bounds mentioned in the software
requirements specification (SRS). System testing is basically performed by a testing
team that is independent of the development team, which helps to test the quality of
the system impartially. It includes both functional and non-functional testing.
System testing is black-box testing, performed after integration testing and before
acceptance testing.

System Testing Process: System Testing is performed in the following steps:

● Test Environment Setup: Create a testing environment for better-quality
testing.
● Create Test Cases: Generate test cases for the testing process.
● Create Test Data: Generate the data that is to be tested.
● Execute Test Cases: After the generation of the test cases and the test
data, the test cases are executed.
● Defect Reporting: Defects in the system are detected and reported.
● Regression Testing: Carried out to test for side effects of the testing
process.
● Log Defects: Defects are logged and fixed in this step.
● Retest: If a test is not successful, the test is performed again.

Types of System Testing:

● Performance Testing: Performance Testing is a type of software testing


that is carried out to test the speed, scalability, stability and
reliability of the software product or application.
● Load Testing: Load Testing is a type of software Testing which is carried
out to determine the behaviour of a system or software product under
extreme load.
● Stress Testing: Stress Testing is a type of software testing performed to
check the robustness of the system under the varying
loads.
● Scalability Testing: Scalability Testing is a type of software testing
which is carried out to check the performance of a software application or
system in terms of its capability to scale up or scale down with the
number of user requests.

Tools used for System Testing :

1. JMeter
2. Galen Framework
3. Selenium

Here are a few more common tools used for System Testing:

1. HP Quality Center/ALM
2. IBM Rational Quality Manager
3. Microsoft Test Manager
4. Selenium
5. Appium
6. LoadRunner
7. Gatling
8. JMeter
9. Apache JServ
10. SoapUI
Note: The choice of tool depends on various factors like the technology
used, the size of the project, the budget, and the testing
requirements.

Advantages of System Testing :

● The testers do not require deep programming knowledge to carry out this
testing.
● It tests the entire product or software, so defects that cannot be
identified during unit testing and integration testing are easily
detected.
● The testing environment is similar to that of the real production or
business environment.
● It checks the entire functionality of the system with different test
scripts, and it also covers the technical and business requirements of
the clients.
● After this testing, the product will cover almost all possible bugs or
errors, and hence the development team can confidently go ahead with
acceptance testing.

Here are some advantages of System Testing:

● Verifies the overall functionality of the system.


● Detects and identifies system-level problems early in the development
cycle.
● Helps to validate the requirements and ensure the system meets the user
needs.
● Improves system reliability and quality.
● Facilitates collaboration and communication between development
and testing teams.
● Enhances the overall performance of the system.
● Increases user confidence and reduces risks.
● Facilitates early detection and resolution of bugs and defects.
● Supports the identification of system-level dependencies and inter-module
interactions.
● Improves the system’s maintainability and scalability.

Disadvantages of System Testing :

● This testing is a more time-consuming process than other testing
techniques, since it checks the entire product or software.
● The cost of the testing will be high, since it covers the testing of the
entire software.
● It needs a good debugging tool; otherwise, hidden errors will not be
found.
Here are some disadvantages of System Testing:

● Can be time-consuming and expensive.


● Requires adequate resources and infrastructure.
● Can be complex and challenging, especially for large and complex
systems.
● Dependent on the quality of requirements and design documents.
● Limited visibility into the internal workings of the system.
● Can be impacted by external factors like hardware and network
configurations.
● Requires proper planning, coordination, and execution.
● Can be impacted by changes made during development.
● Requires specialized skills and expertise.
● May require multiple test cycles to achieve desired results.
Debugging
Debugging is the process of identifying and resolving errors, or bugs, in a software
system. It is an important aspect of software engineering because bugs can cause a
software system to malfunction, and can lead to poor performance or incorrect results.
Debugging can be a time-consuming and complex task, but it is essential for ensuring
that a software system is
functioning correctly.

There are several common methods and techniques used in debugging, including:

1. Code Inspection: This involves manually reviewing the source code of a software system to identify potential bugs or errors.

2. Debugging Tools: There are various tools available for debugging, such as debuggers, trace tools, and profilers, that can be used to identify and resolve bugs.

3. Unit Testing: This involves testing individual units or components of a software system to identify bugs or errors.

4. Integration Testing: This involves testing the interactions between different components of a software system to identify bugs or errors.

5. System Testing: This involves testing the entire software system to identify bugs or errors.

6. Monitoring: This involves monitoring a software system for unusual behavior or performance issues that can indicate the presence of bugs or errors.

7. Logging: This involves recording events and messages related to the software system, which can be used to identify bugs or errors (see the sketch after this list).
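As an illustration of the logging method above, a minimal sketch in Python (the file name, format string, and function are illustrative assumptions, not from the original text):

```python
import logging

# Configure a log file that records the time, severity, and function name
# for every event, so the sequence of events before a failure can be traced.
logging.basicConfig(
    filename="app.log",
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(funcName)s: %(message)s",
)
log = logging.getLogger(__name__)

def divide(a, b):
    log.debug("divide called with a=%r, b=%r", a, b)
    try:
        return a / b
    except ZeroDivisionError:
        # log.exception records the full traceback along with the message.
        log.exception("division failed")
        raise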

It is important to note that debugging is an iterative process, and it may take multiple
attempts to identify and resolve all bugs in a software system. Additionally, it is
important to have a well-defined process in place for reporting and tracking bugs, so
that they can be effectively managed and resolved.

In summary, debugging is an important aspect of software engineering: it is the process of identifying and resolving errors, or bugs, in a software system. There are several common methods and techniques used in debugging, including code inspection, debugging tools, unit testing, integration testing, system testing, monitoring, and logging. It is an iterative process that may take multiple attempts to identify and resolve all bugs in a software system.

In the context of software engineering, debugging is the process of fixing a bug in the
software. In other words, it refers to identifying, analyzing, and removing errors. This
activity begins after the software fails to execute properly and concludes by solving
the problem and successfully testing the software. It is considered to be an extremely
complex and tedious task because errors need to be resolved at all stages of
debugging.

A better approach is to run the program within a debugger, which is a specialized environment for controlling and monitoring the execution of a program. The basic functionality provided by a debugger is the insertion of breakpoints within the code. When the program is executed within the debugger, it stops at each breakpoint. Many IDEs, such as Visual C++ and C++Builder, provide built-in debuggers.
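For instance, a minimal sketch of breakpoint-based debugging using Python's built-in debugger, pdb (the function and data are hypothetical):

```python
# breakpoint() (Python 3.7+) drops execution into the pdb debugger, where
# variables can be inspected and the program stepped line by line.

def total(prices, discount):
    breakpoint()  # execution pauses here; type 'n' to step, 'p subtotal' to inspect
    subtotal = sum(prices)
    return subtotal * (1 - discount)

if __name__ == "__main__":
    print(total([10.0, 20.0, 5.5], discount=0.1))
```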

Debugging Process: Steps involved in debugging are:


● Problem identification and report preparation.
● Assigning the report to a software engineer to verify that the defect is genuine.
● Defect analysis using modeling, documentation, finding and testing candidate flaws, etc.
● Defect resolution by making required changes to the system.
● Validation of corrections.

The debugging process will always have one of two outcomes :

1. The cause will be found and corrected.


2. The cause will not be found.

Later, the person performing debugging may suspect a cause, design a test case to
help validate that suspicion and work toward error correction in an
iterative fashion.

During debugging, we encounter errors that range from mildly annoying to catastrophic. As the consequences of an error increase, the amount of pressure to find the cause also increases. Often, this pressure forces a software developer to fix one error and at the same time introduce two more.

Debugging Approaches/Strategies:

1. Brute Force: Study the system for a long duration in order to understand the system. It helps the debugger to construct different representations of the system to be debugged, depending on the need. A study of the system is also done actively to find recent changes made to the software.

2. Backtracking: Backward analysis of the problem, which involves tracing the program backward from the location of the failure message in order to identify the region of faulty code. A detailed study of the region is conducted to find the cause of the defect.

3. Forward analysis: Tracing the program forward using breakpoints or print statements at different points in the program and studying the results. The region where the wrong outputs are obtained is the region that needs to be focused on to find the defect.

4. Using past experience: Debug the software using past experience with problems similar in nature. The success of this approach depends on the expertise of the debugger.

5. Cause elimination: Introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes.

6. Static analysis: Analyzing the code without executing it to identify potential bugs or errors. This approach involves analyzing code syntax, data flow, and control flow.

7. Dynamic analysis: Executing the code and analyzing its behavior at runtime to identify errors or bugs. This approach involves techniques like runtime debugging and profiling.

8. Collaborative debugging: Involves multiple developers working together to debug a system. This approach is helpful in situations where multiple modules or components are involved and the root cause of the error is not clear.

9. Logging and Tracing: Using logging and tracing tools to identify the sequence of events leading up to the error. This approach involves collecting and analyzing logs and traces generated by the system during its execution.

10. Automated Debugging: The use of automated tools and techniques to assist in the debugging process. These tools can include static and dynamic analysis tools, as well as tools that use machine learning and artificial intelligence to identify errors and suggest fixes.

Debugging Tools:

A debugging tool is a computer program that is used to test and debug other programs. A lot of public-domain software like gdb and dbx is available for debugging; these offer console-based command-line interfaces. Examples of automated debugging tools include code-based tracers, profilers, interpreters, etc. Some of the widely used debuggers are:

● Radare2
● WinDbg
● Valgrind

Difference Between Debugging and Testing:

Debugging is different from testing. Testing focuses on finding bugs, errors, etc., whereas debugging starts after a bug has been identified in the software. Testing is used to ensure that the program is correct and does what it was supposed to do with a certain minimum success rate. Testing can be manual or automated. There are several different types of testing: unit testing, integration testing, alpha and beta testing, etc. Debugging requires a lot of knowledge, skill, and expertise. It can be supported by some automated tools but is more of a manual process, as every bug is different and requires a different technique, unlike a predefined testing mechanism.
Advantages of Debugging:

Several advantages of debugging in software engineering:

1. Improved system quality: By identifying and resolving bugs, a software system can be made more reliable and efficient, resulting in improved overall quality.

2. Reduced system downtime: By identifying and resolving bugs, a software system can be made more stable and less likely to experience downtime, which can result in improved availability for users.

3. Increased user satisfaction: By identifying and resolving bugs, a software system can be made more user-friendly and better able to meet the needs of users, which can result in increased satisfaction.

4. Reduced development costs: Identifying and resolving bugs early in the development process can save time and resources that would otherwise be spent on fixing bugs later in the development process or after the system has been deployed.

5. Increased security: By identifying and resolving bugs that could be exploited by attackers, a software system can be made more secure, reducing the risk of security breaches.

6. Facilitates change: With debugging, it becomes easy to make changes to the software, as it becomes easy to identify and fix bugs that would have been caused by the changes.

7. Better understanding of the system: Debugging can help developers gain a better understanding of how a software system works and how different components of the system interact with one another.

8. Facilitates testing: By identifying and resolving bugs, it becomes easier to test the software and ensure that it meets the requirements and specifications.

In summary, debugging is an important aspect of software engineering, as it helps to improve system quality, reduce system downtime, increase user satisfaction, reduce development costs, increase security, facilitate change, build a better understanding of the system, and facilitate testing.

Disadvantages of Debugging:

While debugging is an important aspect of software engineering, there are also some
disadvantages to consider:

1. Time-consuming: Debugging can be a time-consuming process, especially if the bug is difficult to find or reproduce. This can cause delays in the development process and add to the overall cost of the project.

2. Requires specialized skills: Debugging can be a complex task that requires specialized skills and knowledge. This can be a challenge for developers who are not familiar with the tools and techniques used in debugging.

3. Can be difficult to reproduce: Some bugs may be difficult to reproduce, which can make it challenging to identify and resolve them.

4. Can be difficult to diagnose: Some bugs may be caused by interactions between different components of a software system, which can make it challenging to identify the root cause of the problem.

5. Can be difficult to fix: Some bugs may be caused by fundamental design flaws or architecture issues, which can be difficult or impossible to fix without significant changes to the software system.

6. Limited insight: In some cases, debugging tools can only provide limited insight into the problem and may not provide enough information to identify the root cause.

7. Can be expensive: Debugging can be an expensive process, especially if it requires additional resources such as specialized debugging tools or additional development time.

Product metrics:

Software Quality
The quality of a software product is defined in terms of its fitness of purpose. That is, a quality product does precisely what the users want it to do. For software products, fitness of use is generally explained in terms of satisfaction of the requirements laid down in the SRS document. Although "fitness of purpose" is a satisfactory interpretation of quality for many devices such as a car, a table fan, a grinding machine, etc., for software products "fitness of purpose" is not a wholly satisfactory definition of quality.

The modern view associates several quality attributes with a software product, such as the following:
Portability: A software device is said to be portable, if it can be freely made to work in
various operating system environments, in multiple machines, with other software products,
etc.

Usability: A software product has better usability if various categories of users can easily
invoke the functions of the product.

Reusability: A software product has excellent reusability if different modules of the product
can quickly be reused to develop new products.

Correctness: A software product is correct if the various requirements as specified in the SRS document have been correctly implemented.

Maintainability: A software product is maintainable if bugs can be easily corrected as and when they show up, new tasks can be easily added to the product, the functionalities of the product can be easily modified, etc.

Software Quality Management System


Software Quality Management System contains the methods that are used by the
authorities to develop products having the desired quality.

Managerial Structure: The quality system is responsible for managing the structure as a whole. Every organization has a managerial structure.

Individual Responsibilities: Each individual present in the organization must have some responsibilities that should be reviewed by the top management, and each individual present in the system must take this seriously.

Quality System Activities: The activities which each quality system must perform are:

● Project auditing
● Review of the quality system
● Development of methods and guidelines
Evolution of Quality Management System
Quality systems have evolved over the past several years. The evolution of a Quality Management System is a four-step process.

The main task of quality control is to detect defective devices and it also helps in
finding the cause that leads to the defect. It also helps in the correction of bugs.

Quality Assurance helps an organisation in making good quality products. It also helps in improving the quality of the product by passing the products through security checks.

Total Quality Management(TQM) checks and assures that all the procedures must be
continuously improved regularly through process measurements.

Metrics for the Design Model of the Product:
Metrics are quantitative assessments that focus on countable values, most commonly used for comparing and tracking the performance of a system. Metrics are used in different scenarios, such as the analysis model, design model, source code, testing, and maintenance. Metrics for design modeling allow developers or software engineers to evaluate or estimate the quality of the design, and they include various architecture and component-level designs.

Metrics by Glass and Card :


In designing a product, it is very important to have efficient management of
complexity. Complexity itself means very difficult to understand. We know that
systems are generally complex as they have many interconnected components that
make it difficult to understand. Glass and Card are two scientists who have suggested
three design complexity measures. These are given below :

1. Structural Complexity –
Structural complexity depends upon the fan-out of modules. It can be defined as:

S(k) = fout(k)^2

Where fout(k) represents the fan-out of module k (fan-out means the number of modules that are directly subordinate to module k).

2. Data Complexity –
Data complexity is the complexity within the interface of an internal module. It is the size and intricacy of the data. For some module k, it can be defined as:

D(k) = tot_var(k) / [fout(k) + 1]

Where tot_var(k) is the total number of input and output variables going to and coming from module k.

3. System Complexity –
System complexity is the combination of structural and data complexity. It can be denoted as:

Sy(k) = S(k) + D(k)

When structural, data, and system complexity increase, the overall architectural complexity also increases.
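As a small illustration, the three measures can be computed as follows (a minimal Python sketch; the fan-out and variable counts are hypothetical):

```python
# A minimal sketch of the Glass and Card design complexity measures.
# fan_out = number of modules subordinate to the module;
# tot_var = total input/output variables of the module.

def structural_complexity(fan_out: int) -> int:
    return fan_out ** 2                        # S(k) = fout(k)^2

def data_complexity(tot_var: int, fan_out: int) -> float:
    return tot_var / (fan_out + 1)             # D(k) = tot_var(k) / [fout(k) + 1]

def system_complexity(fan_out: int, tot_var: int) -> float:
    return structural_complexity(fan_out) + data_complexity(tot_var, fan_out)

# Example: a module with fan-out 3 and 8 input/output variables.
print(structural_complexity(3))     # 9
print(data_complexity(8, 3))        # 2.0
print(system_complexity(3, 8))      # 11.0
```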
Complexity metrics –
Complexity metrics are used to measure the complexity of the overall software. The computation of complexity metrics can be done with the help of a flow graph; the resulting measure is called cyclomatic complexity. Cyclomatic complexity is a useful metric for indicating the complexity of a software system. Without complexity metrics, it is very difficult and time-consuming to determine the complexity of the products being designed, risks and costs emanate from unmanaged complexity, and it becomes difficult for the project team and management to solve problems. Measuring software complexity helps to improve code quality, increase productivity, meet architectural standards, reduce overall cost, increase robustness, etc. To calculate cyclomatic complexity, the following equation is used:

Cyclomatic complexity = E - N + 2

Where E is the total number of edges and N is the total number of nodes in the flow graph.
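For example, the flow graph of a function containing a single if-else decision has 4 nodes (the decision, the two branches, and the merge/exit point) and 4 edges, so its cyclomatic complexity is 4 - 4 + 2 = 2: one decision point plus one. A minimal sketch:

```python
def cyclomatic_complexity(edges: int, nodes: int) -> int:
    # V(G) = E - N + 2 for a connected flow graph.
    return edges - nodes + 2

# A single if-else: 4 nodes, 4 edges -> complexity 2.
print(cyclomatic_complexity(edges=4, nodes=4))  # 2
```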

Metrics:

A metric is a measurement of the degree to which any attribute belongs to a system, product, or process.

Software metrics will be useful only if they are characterized effectively and validated
so that their worth is proven. There are 4 functions related to software metrics:

1. Planning
2. Organizing
3. Controlling
4. Improving

Characteristics of software Metrics:


1. Quantitative: Metrics must possess a quantitative nature. It means metrics can be expressed in values.

2. Understandable: Metric computation should be easily understood, and the method of computing metrics should be clearly defined.

3. Applicability: Metrics should be applicable in the initial phases of the development of the software.

4. Repeatable: The metric values should be the same when measured repeatedly and consistent in nature.

5. Economical: The computation of metrics should be economical.

6. Language Independent: Metrics should not depend on any programming language.

Classification of Software Metrics:

There are 3 types of software metrics:


1. Product Metrics: Product metrics are used to evaluate the state of the product, tracing risks and uncovering prospective problem areas. The ability of the team to control quality is evaluated. Examples include lines of code, cyclomatic complexity, code coverage, defect density, and code maintainability index.

2. Process Metrics: Process metrics pay particular attention to enhancing the long-term process of the team or organization. Examples include effort variance, schedule variance, defect injection rate, and lead time.

3. Project Metrics: Project metrics describe the project characteristics and execution process. Examples include effort estimation accuracy, schedule deviation, cost variance, and productivity. Other project measures include:

● Number of software developers
● Staffing patterns over the life cycle of the software
● Cost and schedule
● Productivity

Advantages of Software Metrics :

1. Reduction in cost or budget.
2. It helps to identify particular areas for improvement.
3. It helps to increase product quality.
4. Managing the workloads and teams.
5. Reduction in the overall time to produce the product.
6. It helps to determine the complexity of the code and to test the code with resources.
7. It helps in providing effective planning, controlling, and managing of the entire product.

Disadvantages of Software Metrics :

1. It is expensive and difficult to implement the metrics in some cases.
2. The performance of the entire team or an individual from the team cannot be determined; only the performance of the product is determined.
3. Sometimes the quality of the product does not meet expectations.
4. It may lead to measuring unwanted data, which is a waste of time.
5. Measuring incorrect data leads to wrong decision-making.
UNIT - V

Metrics for Process and Products

Software measurement

Software Measurement and Metrics


Software Measurement: A measurement is a manifestation of the size, quantity, amount, or dimension of a particular attribute of a product or process. Software measurement is a quantified attribute of a characteristic of a software product or the software process. It is an essential activity within software engineering. The software measurement process is defined and governed by ISO standards.

Software Measurement Principles:

The software measurement process can be characterized by five activities-

1. Formulation: The derivation of software measures and metrics appropriate for the representation of the software that is being considered.

2. Collection: The mechanism used to accumulate the data required to derive the formulated metrics.

3. Analysis: The computation of metrics and the application of mathematical tools.

4. Interpretation: The evaluation of metrics, resulting in insight into the quality of the representation.

5. Feedback: Recommendations derived from the interpretation of product metrics, transmitted to the software team.

Need for Software Measurement:

Software is measured to:

● Assess the quality of the current product or process.
● Anticipate future qualities of the product or process.
● Enhance the quality of a product or process.
● Regulate the state of the project in relation to budget and schedule.
● Regulate the state of the project in relation to budget and schedule.
● Enable data-driven decision-making in project planning and control.
● Identify bottlenecks and areas for improvement to drive process
improvement activities.
● Ensure that industry standards and regulations are followed.
● Give software products and processes a quantitative basis for
evaluation.
● Enable the ongoing improvement of software development practices.

Classification of Software Measurement:

There are 2 types of software measurement:

1. Direct Measurement: In direct measurement, the product, process, or thing is measured directly using a standard scale.

2. Indirect Measurement: In indirect measurement, the quantity or quality to be measured is measured using related parameters, i.e., by use of a reference.

Measuring Software Quality using Quality Metrics
In Software Engineering, Software Measurement is done based on some Software
Metrics where these software metrics are referred to as the measure of various
characteristics of a Software.

In software engineering, Software Quality Assurance (SQA) assures the quality of the software. A set of SQA activities is continuously applied throughout the software process. Software quality is measured based on some software quality metrics.

There are a number of metrics available based on which software quality is measured, but among them there are a few of the most useful metrics which are essential in software quality measurement. They are:

1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security

Now let’s understand each quality metric in detail –

1. Code Quality – Code quality metrics measure the quality of code used for
software project development. Maintaining the software code quality by writing Bug-
free and semantically correct code is very important for good software project
development. In code quality, both Quantitative metrics like the number of lines,
complexity, functions, rate of bugs generation, etc, and Qualitative metrics like
readability, code clarity, efficiency, and maintainability, etc are measured.
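One common quantitative code quality metric mentioned above is defect density, usually expressed as defects per thousand lines of code (KLOC). A minimal sketch with hypothetical numbers:

```python
def defect_density(defects_found: int, lines_of_code: int) -> float:
    # Defects per KLOC (thousand lines of code).
    return defects_found / (lines_of_code / 1000)

print(defect_density(defects_found=30, lines_of_code=15_000))  # 2.0 defects/KLOC
```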

2. Reliability – Reliability metrics express the reliability of software in different conditions. Whether the software is able to provide the exact service at the right time or not is checked. Reliability can be checked using Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR).
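As a worked sketch of these two reliability measures (the uptime, repair time, and failure counts are hypothetical), MTBF and MTTR also combine into steady-state availability, MTBF / (MTBF + MTTR):

```python
uptime_hours = 990.0   # total operating time between failures
repair_hours = 10.0    # total time spent repairing
failures = 3

mtbf = uptime_hours / failures        # Mean Time Between Failures
mttr = repair_hours / failures        # Mean Time To Repair
availability = mtbf / (mtbf + mttr)   # fraction of time the system is up

print(f"MTBF={mtbf:.1f} h, MTTR={mttr:.2f} h, availability={availability:.3f}")
```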

3. Performance – Performance metrics are used to measure the performance of


the software. Each software has been developed for some specific purposes.
Performance metrics measure the performance of the software by determining
whether the software is fulfilling the user requirements or not, by analyzing how
much time and resource it is utilizing for providing the service.

4. Usability – Usability metrics check whether the program is user-friendly or not. Every software product is used by end-users, so it is important to measure whether the end-user is happy or not using this software.

5. Correctness – Correctness is one of the important software quality metrics, as it checks whether the system or software is working correctly, without any error, and satisfies the user. Correctness gives the degree to which each function works as it was developed to.

6. Maintainability – Each software product requires maintenance and up-


gradation. Maintenance is an expensive and time-consuming process. So if the
software product provides easy maintainability then we can say software quality is up
to mark. Maintainability metrics include the time required to adapt to new
features/functionality, Mean Time to Change (MTTC), performance in changing
environments, etc.
7. Integrity – Software integrity is important in terms of how easy it is to integrate the software with other required software, which increases software functionality, and how well integration with unauthorized software is controlled, since such integration increases the chances of cyberattacks.

8. Security – Security metrics measure how secure the software is. In the age of
cyber terrorism, security is the most essential part of every software. Security assures
that there are no unauthorized changes, no fear of cyber attacks, etc when the software
product is in use by the end-user.

Risk management

Reactive and Proactive RCA :


The main question that arises is whether RCA is reactive or proactive. Some people think that RCA is only required to solve problems or failures that have already occurred, but that is not true. One should know that RCA can be both, i.e., reactive and proactive, as given below –

1. Reactive RCA :
The main question that arises in reactive RCA is "What went wrong?". Before investigating or identifying the root cause of a failure or defect, the failure needs to have already occurred. One can only identify the root cause and perform the analysis when a problem or failure that causes malfunctioning in the system has occurred. Reactive RCA is a root cause analysis that is performed after the occurrence of the failure or defect.

It is done to control and to reduce the impact and severity of a defect that has occurred. It is also known as reactive risk management. It reacts quickly as soon as a problem occurs by simply treating the symptoms. RCA is generally reactive, but it has the potential to be proactive. RCA is reactive at first, and it can only become proactive if one addresses and identifies the small things that can cause a problem as well as exposing the hidden causes of the problem.

Advantages :

● Helps one to prioritize tasks according to their severity and then resolve them.
● Increases teamwork and knowledge.

Disadvantages :

● Sometimes, repairing equipment after failure can be more costly than preventing the failure from occurring.
● Failed equipment can cause greater damage to the system and interrupt production activities.

2. Proactive RCA :
The main question that arises in proactive RCA is "What could go wrong?". RCA can also be used proactively to mitigate failure or risk. The main importance of RCA can be seen when it is applied to events that have not occurred yet. Proactive RCA is a root cause analysis that is performed before any occurrence of failure or defect. It is done to prevent the defect from occurring. As both reactive and proactive RCA are important, one should move from reactive to proactive RCA.

It is better to prevent issues from its occurrence rather than correcting it after its
occurrence. In simple words, Prevention is better than correction. Here, prevention
action is considered as proactive and corrective action is considered as reactive. It is
also known as proactive risk management. It identifies the root cause of problem to
eliminate it from reoccurring. With help of proactive RCA, we can identify the main
root cause that leads to the occurrence of problem or failure, or defect. After knowing
this, we can take various measures and implement actions to prevent these causes
from the occurrence.

Advantages :

● Future chances of failure occurrence can be minimized.
● Reduces the overall cost required to resolve failures, by simply preventing them from occurring.
● Increases overall productivity by minimizing the chances of interruption due to failure.

Disadvantages :

● Sometimes, preventing equipment from failing can be more costly than resolving the failure after it occurs.
● Many resources and tools are required to prevent failures from occurring, which can affect the overall cost.
● Requires highly skilled technicians to perform maintenance tasks.
Risk Management:
A risk is a probable problem; it might happen or it might not. There are two main characteristics of risk:

Uncertainty – the risk may or may not happen, which means there are no 100% certain risks.
Loss – if the risk occurs in reality, undesirable results or losses will occur.

Risk management is a sequence of steps that help a software team to understand, analyze, and manage uncertainty. Risk management consists of:

● Risk Identification
● Risk analysis
● Risk Planning
● Risk Monitoring
A software project may be affected by a large variety of risks. In order to be able to systematically identify the important risks which might affect a software project, it is necessary to categorize risks into different classes. The project manager can then examine which risks from each class are relevant to the project. There are mainly three classes of risks that may affect a software project:

1. Project Risks:

Project risks concern various forms of budget, schedule, personnel, resource, and customer-related issues. An important project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project. It is very difficult to control something that cannot be seen. For any manufacturing project, such as manufacturing cars, the project manager can see the product taking shape. For example, he can see that the engine is fitted, after that the doors are fitted, the car is getting painted, etc. Thus he can easily assess the progress of the work and control it. The invisibility of the product being developed is an important reason why many software projects suffer from the risk of schedule slippage.

2. Technical Risks:

Technical risks concern potential design, implementation, interfacing, testing, and maintenance problems. Technical risks also include ambiguous specifications, incomplete specifications, changing specifications, technical uncertainty, and technical obsolescence. Most technical risks occur due to the development team's insufficient knowledge about the project.

3. Business Risks:

This type of risk includes the risk of building an excellent product that no one wants, losing budgetary or personnel commitments, etc.

Classification of Risk in a project:

Example: Let us consider a satellite-based mobile communication project. The project manager can identify several risks in this project. Let us classify them appropriately.

● What if the project cost escalates and overshoots the estimate? – Project Risk
● What if the mobile phones that are developed become too bulky to carry conveniently? – Business Risk
● What if call hand-off between satellites becomes too difficult to implement? – Technical Risk

Methods for Identifying Risks


1. Checklist Analysis – Checklist analysis is a technique generally used to identify and manage risks. The checklist is developed by listing items, steps, or even tasks, which are then analyzed against criteria to determine whether a procedure is completed correctly or not. It is a list of risks that are found to occur regularly in software development projects. Below is the list of software development risks (a modified version) by Barry Boehm.

Risk: Personnel shortfalls
Risk Reduction Techniques: training and career development, job-matching, team building, etc.

Risk: Unrealistic time and cost estimates
Risk Reduction Techniques: incremental development, standardization of methods, recording and analysis of past projects, etc.

Risk: Development of wrong software functions
Risk Reduction Techniques: formal specification methods, user surveys, etc.

Risk: Development of the wrong user interface
Risk Reduction Techniques: user involvement, prototyping, etc.

2. Brainstorming – This technique provides a free and open approach that usually encourages everyone on the project team to participate. It also results in a greater sense of ownership of project risks, and the team is generally committed to managing risk for the given time period of the project. It is a creative and unique technique for gathering risks spontaneously from team members. The team members identify and determine risks in a 'no wrong answer' environment. This technique also provides the opportunity for team members to build on each other's ideas. It is also used to determine the best possible solution to problems and issues that arise and emerge.

3. Causal Mapping – Causal mapping is a method that builds or develops on reflection and review of failure factors in cause-and-effect diagrams. It is very useful for facilitating learning within an organization or system, simply as a method of project post-evaluation. It is also a key tool for risk assessment.

4. SWOT Analysis – Strengths-Weaknesses-Opportunities-Threats (SWOT) analysis is a very helpful technique for identifying risks within the greater organizational context. It is generally used as a planning tool for analyzing a business, its resources, and its environment, simply by looking at internal strengths and weaknesses and at opportunities and threats in the external environment. It is a technique often used in the formulation of strategy. Appropriate time and effort should be spent on thinking seriously about the weaknesses and threats of the organization for the SWOT analysis to be more effective and successful in risk identification.
5. Flowchart Method – This method allows a dynamic process to be represented diagrammatically on paper. It is generally used to represent the activities of a process graphically and sequentially, to simply identify the risk.

Risk Projection in Software Engineering

Introduction:

Risk projection in software engineering involves assessing potential risks that could
impact the success of a software project in the future. It goes beyond identifying
current risks and aims to predict how those risks might evolve over time. By
anticipating the progression of risks, development teams can take proactive measures
to mitigate or manage them effectively.

Steps in Risk Projection

1. Identify Current Risks: Start by identifying and analyzing existing risks that could impact the project. These risks could include technical challenges, scope changes, resource constraints, and more.

2. Evaluate Risk Factors: Understand the factors that contribute to the occurrence and impact of each risk. This could involve considering the probability of occurrence, potential impact on the project's objectives, and the ability to detect and address the risk.

3. Determine Risk Trends: Study historical data and patterns to identify how risks have evolved in similar past projects. This can provide insights into how risks might progress in the current project.

4. Consider External Influences: Take into account external factors such as market trends, technological advancements, regulatory changes, and economic shifts that could influence the project's risks.

5. Scenario Analysis: Create different scenarios to model the potential outcomes of various risk trajectories. This can help estimate the project's potential exposure to risks.

6. Predict Risk Impact: Project the potential impact of risks on project schedules, budgets, resource allocations, and overall project objectives (a small worked sketch follows this list).

7. Mitigation Strategies: Develop strategies to mitigate or manage the projected risks. This could involve contingency plans, risk avoidance, risk transfer, or risk acceptance.

8. Update Risk Management Plan: Update the risk management plan based on the projected risks and strategies. Ensure that the plan remains relevant and adaptable to changing circumstances.
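One standard way to quantify step 6 is the risk exposure formula RE = P × C (probability of occurrence times the cost incurred if the risk materializes). A minimal sketch in Python; the risk names, probabilities, and cost figures are hypothetical:

```python
# Risk exposure, RE = P * C, used to rank projected risks so that mitigation
# effort goes to the biggest threats first. All figures are illustrative.

risks = [
    ("Key staff turnover",      0.30, 40_000),   # (name, probability, cost)
    ("Scope creep",             0.50, 25_000),
    ("Third-party API changes", 0.10, 60_000),
]

for name, p, cost in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: exposure = {p * cost:,.0f}")
```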

Benefits of Risk Projection:

1. Proactive Management: Risk projection allows teams to take proactive measures to address potential risks before they escalate.

2. Informed Decision-Making: Projecting risks provides insights for making informed decisions regarding resource allocation, project scheduling, and risk mitigation strategies.

3. Resource Planning: Anticipating risks helps allocate resources effectively to address potential challenges, minimizing disruptions.

4. Stakeholder Communication: Communicating projected risks to stakeholders fosters transparency and builds trust by demonstrating the team's readiness to handle challenges.

5. Enhanced Control: Teams gain better control over the project by anticipating challenges and having strategies in place to address them.

Risk Refinement in Software Engineering

Introduction:

Risk refinement in software engineering is the process of iteratively analyzing,


reassessing, and enhancing the understanding of identified risks throughout the project
lifecycle. It involves elaborating on initial risk assessments, updating risk information,
and adapting risk management strategies to ensure that the project remains aligned
with its objectives and can effectively respond to potential challenges.

Steps in Risk Refinement


1. Initial Risk Identification: Begin by identifying and categorizing potential
risks that could affect the project's success. This involves considering
technical, organizational, and external factors.

2. Risk Assessment: Evaluate each identified risk's likelihood of occurrence,


potential impact on the project's objectives, and the ability to detect and
mitigate the risk.

3. Risk Prioritization: Prioritize risks based on their severity, considering both


their potential impact and likelihood. Focus on addressing high-priority risks
first.

4. Monitoring and Tracking: Continuously monitor and track identified risks as


the project progresses. This involves keeping an eye on changes in risk factors and
project conditions.

5. Data Collection: Gather data and feedback from the project team, stakeholders, and other relevant sources to refine the understanding of each risk.

6. Risk Refinement Workshops:Conduct workshops or meetings to engage


stakeholders and team members in discussing and refining risks. Brainstorm potential
new risks that might have emerged.
7. Updating Risk Information: Keep risk documentation up to date with the
latest information, including changes in risk factors, mitigation strategies, and
potential impacts.

8. Risk Mitigation Strategies: Adapt risk mitigation strategies based on the


evolving understanding of each risk. Develop new strategies or modify existing ones
to ensure their effectiveness.

9. Scenario Analysis: Explore different scenarios to understand how risks might


manifest under various circumstances. This helps in refining risk
projections.

10. Communication: Continuously communicate risk updates to stakeholders,


keeping them informed about potential challenges and
mitigation efforts.

Benefits of Risk Refinement:

1. Real-time Adaptation: Risk refinement enables project teams to adapt to changing circumstances and emerging risks in real time.

2. Enhanced Risk Management: By continuously improving the understanding of risks, teams can develop more effective risk management strategies.

3. Preventive Action: Early identification and refinement of risks allow teams to take preventive actions before risks escalate.

4. Improved Decision-making: Refined risk information leads to better decision-making by providing accurate insights into potential challenges.

5. Transparency: Regularly updating and communicating risk information promotes transparency among stakeholders.

Risk Mitigation, Monitoring, and Management (RMMM) plan:

RMMM Plan :
A risk management technique is usually seen in the software project plan. This can be divided into the Risk Mitigation, Monitoring, and Management (RMMM) plan. In this plan, all work is done as part of risk analysis. The project manager generally uses this RMMM plan as part of the overall project plan.

In some software teams, risk is documented with the help of a Risk Information Sheet (RIS). The RIS is controlled using a database system for easier management of information, i.e., creation, priority ordering, searching, and other analysis. After documentation of the RMMM plan and the start of the project, the risk mitigation and monitoring steps begin.

Risk Mitigation :
It is an activity used to avoid problems (risk avoidance). The steps for mitigating risks are as follows:

1. Finding out the risk.
2. Removing causes that are the reason for risk creation.
3. Controlling the corresponding documents from time to time.
4. Conducting timely reviews to speed up the work.

Risk Monitoring :
It is an activity used for project tracking. It has the following primary objectives:

1. To check whether predicted risks occur or not.
2. To ensure proper application of the risk-aversion steps defined for each risk.
3. To collect data for future risk analysis.
4. To determine which problems are caused by which risks throughout the project.

Risk Management and planning :

It assumes that the mitigation activity has failed and the risk has become a reality. This task is done by the project manager when a risk becomes a reality and causes severe problems. If the project manager effectively uses project mitigation to remove risks successfully, then it is much easier to manage the risks. This determines the response that will be taken for each risk by a manager. The main output of the risk management plan is the risk register, which describes and focuses on the predicted threats to a software project.

Example:

Let us understand RMMM with the help of an example of high staff turnover.

Risk Mitigation:

To mitigate this risk, project management must develop a strategy for reducing
turnover. The possible steps to be taken are:

● Meet the current staff to determine causes for turnover (e.g., poor working
conditions, low pay, competitive job market).
● Mitigate those causes that are under our control before the project
starts.
● Once the project commences, assume turnover will occur and develop
techniques to ensure continuity when people leave.
● Organize project teams so that information about each development
activity is widely dispersed.
● Define documentation standards and establish mechanisms to ensure that
documents are developed in a timely manner.
● Assign a backup staff member for every critical technologist.

Risk Monitoring:

As the project proceeds, risk monitoring activities commence. The project manager
monitors factors that may provide an indication of whether the risk is becoming more
or less likely. In the case of high staff turnover, the following factors can be
monitored:

● General attitude of team members based on project pressures.


● Interpersonal relationships among team members.
● Potential problems with compensation and benefits.
● The availability of jobs within the company and outside it.

Risk Management:

Risk management and contingency planning assume that mitigation efforts have failed and that the risk has become a reality. Continuing the example, the project is well underway, and a number of people announce that they will be leaving. If the mitigation strategy has been followed, backup is available, information is documented, and knowledge has been dispersed across the team. In addition, the project manager may temporarily refocus resources (and readjust the project schedule) to those functions that are fully staffed, enabling newcomers who must be added to the team to "get up to speed".

Drawbacks of RMMM:

● It incurs additional project costs.


● It takes additional time.
● For larger projects, implementing an RMMM may itself turn out to be
another tedious project.
● RMMM does not guarantee a risk-free project; in fact, risks may also come up after the project is delivered.

Quality Management:

Software Quality Assurance:


Software Quality Assurance (SQA) is simply a way to assure quality in the software.
It is the set of activities which ensure processes, procedures as well as standards are
suitable for the project and implemented correctly.

Software Quality Assurance is a process which works in parallel with the development of software. It focuses on improving the process of development of software so that problems can be prevented before they become major issues. Software Quality Assurance is a kind of umbrella activity that is applied throughout the software process.

Generally, the quality of the software is verified by a third-party organization, such as an international standards organization.

Software quality assurance focuses on:

● software’s portability
● software’s usability
● software’s reusability
● software’s correctness
● software’s maintainability
● software’s error control

Software Quality Assurance has:

1. A quality management approach


2. Formal technical reviews
3. Multi testing strategy
4. Effective software engineering technology
5. Measurement and reporting mechanism

Major Software Quality Assurance Activities:

1. SQA Management Plan:

Make a plan for how you will carry out SQA throughout the project. Think about which set of software engineering activities is best for the project, and check the skill level of the SQA team.

2. Set the Checkpoints:

The SQA team should set checkpoints and evaluate the performance of the project on the basis of the data collected at the different checkpoints.

3. Multi-testing Strategy:

Do not depend on a single testing approach. When you have a lot of testing approaches available, use them.

4. Measure Change Impact:

The changes made to correct an error sometimes reintroduce more errors. Keep a measure of the impact of each change on the project, and retest the changed code to check the compatibility of the fix with the whole project.

5. Manage Good Relations:

In the working environment, managing good relations with the other teams involved in the project development is mandatory. Bad relations between the SQA team and the programmers' team will impact the project directly and badly. Don't play politics.

Benefits of Software Quality Assurance (SQA):

1. SQA produces high-quality software.
2. A high-quality application saves time and cost.
3. SQA is beneficial for better reliability.
4. SQA means the software needs little maintenance for a long time.
5. High-quality commercial software increases the market share of the company.
6. It improves the process of creating software.
7. It improves the quality of the software.
8. It cuts maintenance costs. Get the release right the first time, and your company can forget about it and move on to the next big thing. Release a product with chronic issues, and your business bogs down in a costly, time-consuming, never-ending cycle of repairs.

Disadvantage of SQA:
There are a number of disadvantages of quality assurance. Some of them include
adding more resources, employing more workers to help maintain quality and so
much more.

Software Review
Software Review is a systematic inspection of software by one or more individuals who work together to find and resolve errors and defects in the software during the early stages of the Software Development Life Cycle (SDLC). Software review is an essential part of the SDLC that helps software engineers validate the quality, functionality, and other vital features and components of the software. It is a whole process that includes testing the software product and making sure that it meets the requirements stated by the client.

Usually performed manually, software review is used to verify various documents like
requirements, system designs, codes, test plans and test
cases.

Objectives of Software Review:


The objective of software review is:

1. To improve the productivity of the development team.


2. To make the testing process time and cost effective.

3. To make the final software with fewer defects.

4. To eliminate the inadequacies.

Process of Software Review: (the accompanying process diagram is not reproduced in this text)

Types of Software Reviews:
There are mainly 3 types of software reviews:

1. Software Peer Review:

Peer review is the process of assessing the technical content and quality of
the product and it is usually conducted by the author of the work product
along with some other developers.
Peer review is performed in order to examine or resolve the defects in the
software, whose quality is also checked by other members of the team.
Peer Review has following types:
● (i) Code Review:
Computer source code is examined in a systematic way.

● (ii) Pair Programming:


It is a code review where two developers develop code together
at the same platform.

● (iii) Walkthrough:
Members of the development team are guided by the author and other interested parties, and the participants ask questions and make comments about defects.

● (iv) Technical Review:


A team of highly qualified individuals examines the software
product for its client’s use and identifies technical defects from
specifications and standards.
● (v) Inspection:
In inspection the reviewers follow a well-defined process to
find defects.

2. Software Management Review:

Software Management Review evaluates the work status. In this review, decisions regarding downstream activities are taken.

3. Software Audit Review:


Software Audit Review is a type of external review in which one or more
critics, who are not a part of the development team, organize an
independent inspection of the software product and its processes to assess
their compliance with stated specifications and standards.
This is done by managerial level people.

Advantages of Software Review:

● Defects can be identified at an earlier stage of development (especially in formal review).

● Earlier inspection also reduces the maintenance cost of the software.

● It can be used to train technical authors.

● It can be used to remove process inadequacies that encourage defects.

Formal Technical Review (FTR)


Formal Technical Review (FTR) is a software quality control activity performed by
software engineers.

Objectives of formal technical review (FTR): Some of these are:

● To uncover errors in logic, function, and implementation for any representation of the software.
● To verify that the software meets specified requirements.
● To ensure that the software is represented according to predefined standards.
● To achieve software that is developed in a uniform manner.
● To make the project more manageable.

In addition, the purpose of FTR is to enable junior engineers to observe the analysis, design, coding, and testing approaches more closely. FTR also serves to promote backup and continuity, as a number of people become familiar with parts of the software that they might not have seen otherwise. Actually, FTR is a class of reviews that includes walkthroughs, inspections, round-robin reviews, and other small-group technical assessments of software. Each FTR is conducted as a meeting and is considered successful only if it is properly planned, controlled, and attended.

Example:

Suppose that during the development of the software, without FTR, the design costs 10 units, coding costs 15 units, and testing costs 10 units; then the total cost so far is 35 units, without maintenance. But suppose there was a quality issue because of a bad design; to fix it, we have to redesign the software, and the final cost becomes 70 units. That is why FTR is so helpful while developing software.

The review meeting: Each review meeting should be held considering the following constraints regarding the involvement of people:

1. Between three and five people should be involved in the review.
2. Advance preparation should occur, but it should require no more than two hours of work for each person.
3. The duration of the review meeting should be less than two hours.

Given these constraints, it should be clear that an FTR focuses on a specific (and small) part of the overall software.

At the end of the review, all attendees of the FTR must decide what to do:

1. Accept the product without any modification.
2. Reject the product due to serious errors (once corrected, another review will be performed), or
3. Accept the product provisionally (minor errors have been encountered and should be corrected, but no additional review will be required).

Once the decision is made, all FTR attendees complete a sign-off, indicating their participation in the review and their agreement with the findings of the review team.

Review reporting and record keeping :-

1. During the FTR, the reviewer actively records all issues that have been raised.
2. At the end of the meeting, all the issues raised are consolidated and a review issues list is prepared.
3. Finally, a formal technical review summary report is prepared.

It answers three questions:

1. What was reviewed?
2. Who reviewed it?
3. What were the findings and conclusions?
Review guidelines :- Guidelines for conducting formal technical reviews should be established in advance. These guidelines must be distributed to all reviewers, agreed upon, and then followed. An uncontrolled review can often be worse than no review at all. The following is a minimum set of guidelines for FTR:

1. Review the product, not the producer.
2. Take written notes (for the record).
3. Limit the number of participants and insist upon advance preparation.
4. Develop a checklist for each product that is likely to be reviewed.
5. Allocate resources and a time schedule for FTRs in order to maintain the overall schedule.
6. Conduct meaningful training for all reviewers in order to make reviews effective.
7. Review earlier reviews, which serve as the base for the current review being conducted.
8. Set an agenda and maintain it.
9. Note the problem areas, but do not attempt to solve every problem noted.
10. Limit debate and rebuttal.

Statistical Software Quality Assurance

Introduction

Statistical Software Quality Assurance (SQA) is an approach that uses statistical techniques to measure, assess, and improve the quality of software products and processes. It involves collecting and analyzing quantitative data to make informed decisions about software quality, identify areas for improvement, and predict the behaviour of software systems.

Key Concepts of Statistical SQA

1. Process Control: Monitor and control the software development processes using statistical methods to ensure consistency and predictability.

2. Data Collection: Gather relevant data about the software development and testing processes, including defect counts, test results, and other performance metrics.

3. Statistical Analysis: Apply various statistical techniques to analyze the collected data and gain insights into software quality and process performance.

4. Quality Metrics: Define and use quality metrics to quantify the quality attributes of software products, such as reliability, performance, and maintainability.

5. Root Cause Analysis: Use statistical analysis to identify the root causes of defects and process inefficiencies, enabling targeted corrective actions.

6. Process Improvement: Continuously improve software development processes based on statistical findings to enhance quality and efficiency.

Statistical Techniques in SQA

1. Defect Analysis: Analyze defect data to identify trends, patterns, and critical areas for improvement. Techniques like Pareto analysis help prioritize issues.

2. Process Capability Analysis: Assess the capability of a process to consistently produce products within specified limits. This helps identify areas for process improvement.

3. Control Charts: Monitor process stability and identify variations using control charts, which display data points over time along with control limits (see the sketch after this list).

4. Regression Analysis: Determine relationships between variables and predict future outcomes. Useful for understanding the impact of various factors on software quality.

5. Failure Mode and Effects Analysis (FMEA): Quantify the risk associated with potential failures and prioritize them based on their impact and likelihood.

6. Statistical Sampling: Use statistical sampling techniques to analyze a subset of data instead of the entire dataset, saving time and resources.
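As an illustration of control charts (item 3 above), a minimal sketch in Python; the defect counts are hypothetical, and the control limits are set from an assumed in-control baseline at mean ± 3σ:

```python
import statistics

# Baseline (assumed in-control) weekly defect counts used to set the limits.
baseline = [12, 11, 13, 12, 11, 12, 13, 12, 14, 11]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl = mean + 3 * sigma            # upper control limit
lcl = max(0.0, mean - 3 * sigma)  # lower control limit (counts cannot be negative)

# New observations are checked against the baseline limits.
for week, count in enumerate([12, 13, 25, 11], start=1):
    status = "out of control" if not (lcl <= count <= ucl) else "in control"
    print(f"week {week}: {count} defects -> {status}")
```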

Benefits of Statistical SQA

1. Objective Decision-making: Statistical analysis provides objective insights for making informed decisions about process improvements and corrective actions.

2. Early Detection: Statistical techniques can detect quality issues early in the development lifecycle, allowing for timely intervention.

3. Efficiency: By focusing efforts on the critical areas identified through statistical analysis, resources can be utilized more efficiently.

4. Continuous Improvement: Statistical SQA fosters a culture of continuous improvement by providing quantifiable insights into the effectiveness of process changes.

5. Predictive Capability: Statistical analysis can predict future outcomes based on historical data, aiding in risk assessment and planning.

Software Reliability:
Software reliability means operational reliability. It is described as the ability of a system or component to perform its required functions under stated conditions for a specified period of time.

Software reliability is also defined as the probability that a software system fulfills its assigned task in a given environment for a predefined number of input cases, assuming that the hardware and the inputs are free of error.

Software reliability is an essential facet of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software reliability is hard to achieve because the complexity of software tends to be high. While any system with a high degree of complexity, including software, will be hard to bring to a certain level of reliability, system developers tend to push complexity into the software layer, given the speedy growth of system size and the ease of doing so by upgrading the software.

ISO 9000 Certification in Software Engineering

The International Organization for Standardization is a worldwide federation of national standards bodies. The International Standards Organization (ISO) standard serves as a reference for contracts between independent parties. It specifies guidelines for the development of a quality system.

The quality system of an organization means the various activities related to its products or services. The ISO standard addresses both operational and organizational aspects, which include responsibilities, reporting, etc. An ISO 9000 standard contains a set of guidelines for the production process without considering the product itself.

ISO 9000 Certification

Why is ISO certification required by the software industry?

There are several reasons why the software industry must get ISO certification. Some of the reasons are as follows:

● This certification has become a standard for international bidding.
● It helps in designing high-quality repeatable software products.
● It emphasizes the need for proper documentation.
● It facilitates the development of optimal processes and total quality measurements.

Features of ISO 9001 Requirements:

● Document control – All documents concerned with the development of a software product should be properly managed and controlled.
● Planning – Proper plans should be prepared and monitored.
● Review – For effectiveness and correctness, all important documents across all phases should be independently checked and reviewed.
● Testing – The product should be tested against its specification.
● Organizational aspects – Various organizational aspects should be addressed, e.g., management reporting of the quality team.

Advantages of ISO 9000 Certification :


Some of the advantages of the ISO 9000 certification process are following :

● ISO 9000 certification forces a corporation to focus on "how they do business". Each procedure and work instruction must be documented and thus becomes a springboard for continuous improvement.
● Employee morale is increased, as employees are asked to take control of their processes and document their work processes.
● Better products and services result from the continuous improvement process.
● Increased employee participation, involvement, awareness, and systematic employee training reduce problems.

Shortcomings of ISO 9000 Certification :


Some of the shortcoming of the ISO 9000 certification process are following :

● ISO 9000 does not give any guidelines for defining an appropriate process and does not guarantee a high-quality process.
● No international accreditation agency exists for the ISO 9000 certification process.
