
SOFTWARE ENGINEERING

MODULE-4

Software Reliability:

Software reliability in software engineering refers to the probability of a software system performing its intended functions without failure under specific conditions and for a specified period. It is a crucial aspect of software quality assurance and encompasses various attributes related to the dependability, consistency, and effectiveness of software systems.
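
A minimal illustration (not part of the original notes): under the common simplifying assumption of a constant failure rate, reliability over an operating interval of length t can be estimated as R(t) = e^(-lambda*t). A small Python sketch:

    import math

    def reliability(failure_rate_per_hour: float, hours: float) -> float:
        # R(t) = exp(-lambda * t), assuming a constant failure rate (exponential model)
        return math.exp(-failure_rate_per_hour * hours)

    # Example: 0.001 failures/hour over a 100-hour mission -> about 0.905
    print(reliability(0.001, 100))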

Hardware Reliability vs Software Reliability:

Reliability in software means that the software operates without failure for a specified time period in a specified environment. Hardware reliability is the probability of the absence of any hardware-related system malfunction for a given mission; software reliability, on the other hand, is the probability that the software will provide failure-free operation in a fixed environment for a fixed interval of time.

Hardware Reliability:
Hardware reliability is the probability that the hardware will perform its function for a given period of time. It may change during certain periods such as initial burn-in or the end of useful life.
 It is expressed as Mean Time Between Failures (MTBF).
 Hardware faults are mostly physical faults.
 Thorough testing of all components cuts down on the number of faults.
 Hardware failures are mostly due to wear and tear.
 It follows the bathtub curve principle for failure rates.
Software Reliability:
Software reliability is the probability that the software will operate failure-free for a
specific period of time in a specific environment. It is measured per some unit of time.
 Software reliability starts with many faults in the system when it is first created.
 After testing and debugging, the software enters its useful life.
 The useful life includes upgrades made to the system, which introduce new faults.
 The system then needs to be tested again to reduce these faults.
 Software reliability cannot be predicted from any physical basis, since it depends
completely on the human factors in design.

Software Reliability measurement Techniques:

Software reliability measurement techniques aim to quantify the dependability and effectiveness of software systems by assessing various aspects such as failure rates, mean time between failures (MTBF), availability, and defect density.

The current methods of software reliability measurement can be divided into the following categories:
Product Metrics:

i. Function point metric: a technique to measure the functionality of proposed software development based on the count of inputs, outputs, master files, inquiries, and interfaces.

ii. Test coverage metric: estimates faults and reliability by performing tests on software products, assuming that software reliability is a function of the portion of the software that has been successfully verified or tested.
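
A minimal sketch of an unadjusted function point count; the weights are the commonly cited average-complexity values and should be treated as assumptions rather than figures from these notes:

    # Unadjusted function points = sum of (count * weight) over the five element types.
    WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

    def unadjusted_function_points(counts: dict) -> int:
        return sum(counts[name] * weight for name, weight in WEIGHTS.items())

    # Example system with hypothetical counts
    print(unadjusted_function_points(
        {"inputs": 10, "outputs": 8, "inquiries": 5, "files": 3, "interfaces": 2}))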

Project Management Metrics:


Project management metrics are quantitative measures used to assess the performance, progress, and
effectiveness of projects throughout their lifecycle. These metrics provide insights into various aspects
of project management, including schedule, cost, quality, scope, and team productivity. Here are some
common project management metrics:

1. Schedule Performance Index (SPI): SPI measures the efficiency of project schedule
performance by comparing the actual progress of the project to the planned schedule. It is
calculated as the ratio of earned value (EV) to planned value (PV). An SPI value greater than 1
indicates that the project is ahead of schedule, while a value less than 1 indicates that the
project is behind schedule.

2. Cost Performance Index (CPI): CPI measures the efficiency of project cost performance by
comparing the actual costs incurred to the planned costs. It is calculated as the ratio of earned
value (EV) to actual cost (AC). A CPI value greater than 1 indicates that the project is under
budget, while a value less than 1 indicates that the project is over budget.

3. Schedule Variance (SV): SV measures the deviation of actual progress from the planned
schedule. It is calculated as the difference between earned value (EV) and planned value (PV). A
positive SV indicates that the project is ahead of schedule, while a negative SV indicates that the
project is behind schedule.

4. Cost Variance (CV): CV measures the deviation of actual costs from the planned costs. It is
calculated as the difference between earned value (EV) and actual cost (AC). A positive CV
indicates that the project is under budget, while a negative CV indicates that the project is over
budget.
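
A minimal sketch of these earned value calculations; the figures in the example are made up for illustration:

    def evm_metrics(ev: float, pv: float, ac: float) -> dict:
        # SPI = EV / PV, CPI = EV / AC, SV = EV - PV, CV = EV - AC (as defined above)
        return {"SPI": ev / pv, "CPI": ev / ac, "SV": ev - pv, "CV": ev - ac}

    # Example: EV = 90, PV = 100, AC = 80 (in thousands)
    # -> SPI = 0.9 (behind schedule), CPI = 1.125 (under budget), SV = -10, CV = +10
    print(evm_metrics(90, 100, 80))
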
Fault and Failure Metrics:
A fault is a defect in a program which appears when the programmer makes an error, and it causes a failure when executed under particular conditions. These metrics are used to assess the failure-free execution of the software. Here are some common fault and failure metrics:

1. Mean Time between Failures (MTBF): MTBF is the average time elapsed between
consecutive failures of a software system. It represents the expected time interval
between failures and is calculated by dividing the total operational time by the number
of failures observed. A higher MTBF value indicates higher reliability and longer intervals
between failures.

2. Mean Time to Failure (MTTF): MTTF is similar to MTBF but focuses on the average time
until the first failure occurs in the software system. It represents the expected time until
the software system experiences its initial failure and is calculated by dividing the total
operational time by the number of failures observed. A higher MTTF value indicates
higher reliability and longer intervals until the first failure.

3. Fault Removal Efficiency (FRE): FRE measures the effectiveness of fault removal
activities in eliminating faults or defects from the software codebase. It is calculated as
the ratio of the number of faults removed during testing to the total number of faults
identified. A higher FRE value indicates higher effectiveness in identifying and removing
faults during testing.

4. Fault Detection Rate (FDR): FDR measures the rate at which faults or defects are
detected and identified during testing or maintenance activities. It is calculated as the
ratio of the number of faults detected to the total number of faults present in the
software codebase. A higher FDR value indicates higher efficiency in detecting and
addressing faults.

5. Fault Injection Rate (FIR): FIR measures the frequency at which faults or defects are
intentionally injected into the software system for testing or validation purposes. It
helps assess the robustness and resilience of the software system to various types of
faults and failures.

6. Mean Time to Repair (MTTR): MTTR measures the average time required to repair or
recover from a failure in the software system. It includes the time to detect, diagnose,
and fix the failure. A lower MTTR value indicates faster recovery and shorter downtime,
contributing to higher reliability and availability of the software system.
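
A minimal sketch relating some of these measures, using the usual simplifying definitions (an illustration, not formulas given in the notes):

    def mtbf(total_operational_hours: float, failures: int) -> float:
        # Mean Time Between Failures = total operational time / number of failures observed
        return total_operational_hours / failures

    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        # Steady-state availability = MTBF / (MTBF + MTTR)
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # Example: 1,000 hours of operation, 4 failures, 2-hour average repair time
    m = mtbf(1000, 4)             # 250 hours
    print(m, availability(m, 2))  # about 0.992
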
Reliability Growth Modeling:

A reliability growth model is a mathematical model of software reliability which predicts how software reliability should improve over time as errors are discovered and repaired. These models help the manager decide how much effort should be devoted to testing. The objective of the project manager is to test and debug the system until the required level of reliability is reached.

1) Jelinski and Moranda Model:

According to Jelinski and Moranda, reliability growth follows a step function: it is assumed that reliability increases by a constant increment each time an error is detected and repaired.

Characteristics of JM Model:

Following are the characteristics of JM-Model:

1. It is a Binomial type model

2. It is the earliest and one of the most well-known black-box models.

3. J-M model always yields an over-optimistic reliability prediction.

4. The JM model assumes perfect debugging, i.e., each detected fault is removed with certainty.
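
A minimal sketch of the standard Jelinski-Moranda failure (hazard) rate, stated here as an assumption since the notes do not give the formula: after i - 1 faults have been removed, the failure rate is proportional to the number of faults still remaining, lambda_i = phi * (N - i + 1), where N is the initial number of faults and phi is the contribution of each fault.

    def jm_failure_rate(N: int, phi: float, i: int) -> float:
        # Hazard rate before the i-th failure, with (i - 1) faults already removed.
        # Each repair lowers the rate by the constant step phi.
        return phi * (N - i + 1)

    # Example: 50 initial faults, phi = 0.02 -> rate drops from 1.0 to 0.98 after one repair
    print(jm_failure_rate(50, 0.02, 1), jm_failure_rate(50, 0.02, 2))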

2) Basic Execution Time Model:


The basic execution model is the most popular and generally used reliability growth model,
mainly because:

o It is practical, simple, and easy to understand.

o Its parameters clearly relate to the physical world.

o It can be used for accurate reliability prediction.
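
A minimal sketch of the model's standard mean-value function, stated as an assumption (the notes do not give the formula): the expected number of failures after execution time tau is mu(tau) = nu0 * (1 - e^(-(lambda0 / nu0) * tau)), where lambda0 is the initial failure intensity and nu0 the total expected number of failures.

    import math

    def expected_failures(tau: float, lam0: float, nu0: float) -> float:
        # Basic execution time model: mu(tau) = nu0 * (1 - exp(-(lam0 / nu0) * tau))
        return nu0 * (1 - math.exp(-(lam0 / nu0) * tau))

    # Example: lam0 = 10 failures per CPU-hour, nu0 = 100 total expected failures
    print(expected_failures(5, 10, 100))  # failures expected in the first 5 CPU-hours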


Musa-Okumoto Logarithmic Model:

The Musa-Okumoto model, developed by J. D. Musa and K. Okumoto, is a widely used software reliability growth model, also known as the logarithmic Poisson model. It is based on the assumption that software failures occur due to the presence of latent defects in the software. The model predicts the number of failures that will occur during testing or operation based on the number of remaining defects and the effectiveness of the testing process; because the failure intensity is assumed to decay exponentially with the number of failures experienced, the expected number of failures grows logarithmically with execution time.
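
A minimal sketch of the standard logarithmic Poisson mean-value function, again stated as an assumption since the notes do not give the formula: mu(tau) = (1 / theta) * ln(lambda0 * theta * tau + 1).

    import math

    def mo_expected_failures(tau: float, lam0: float, theta: float) -> float:
        # Musa-Okumoto logarithmic model: mu(tau) = ln(lam0 * theta * tau + 1) / theta
        # lam0 = initial failure intensity, theta = failure intensity decay parameter
        return math.log(lam0 * theta * tau + 1) / theta

    # Example: lam0 = 10 failures per CPU-hour, theta = 0.05
    print(mo_expected_failures(5, 10, 0.05))  # expected failures in the first 5 CPU-hours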

Littlewood and Verrall's Model:

This model allows for negative reliability growth to reflect the fact that when a repair is carried out, it may introduce additional errors. It also models the fact that, as errors are repaired, the average improvement in reliability per repair decreases.

The Littlewood and Verrall model is based on the assumption that the rate at which faults are
detected during testing follows a non-homogeneous Poisson process (NHPP). This means that
the rate of fault detection changes over time as the testing process evolves. The model takes
into account factors such as fault introduction rate, fault detection rate, and fault removal rate.

In this model, the number of faults remaining in the software system at any given time can be
estimated using mathematical techniques based on the observed fault detection rate and other
relevant parameters. By analyzing the historical fault detection data, the model can provide
insights into the reliability of the software and help project managers make decisions about
resource allocation and testing strategies.

Software Quality:
The quality of a product is defined in terms of its fitness of purpose. For software products, fitness of purpose is usually interpreted in terms of satisfaction of the requirements laid down in the SRS document.

Quality Factors:

1. Portability: A software product is said to be portable if it can easily be made to work in different operating system environments, on different machines, with other software products, etc.

2. Usability: A software product has good usability if different categories of users can easily invoke the functions of the product.

3. Reusability: A software product has good reusability if different modules of the product can easily be reused to develop new products.

4. Correctness: A software product is correct if the different requirements specified in the SRS document have been correctly implemented.

5. Maintainability: A software product is maintainable if errors can be easily corrected as and when they show up, new functions can be easily added to the product, and the functionalities of the product can be easily modified, etc.

SOFTWARE QUALITY MANAGEMENT SYSTEM:

A quality management system is the principal methodology used by organizations to ensure that the products they develop have the desired quality. A quality system consists of the following:

a) Managerial structure and individual responsibilities: A quality system is the responsibility of the organization as a whole and should have the full support of the top management.

b) Quality system activities: The quality system activities encompass the following:

a. Auditing of projects

b. Review of the quality system

c. Development of standards, procedures, and guidelines, etc.

d. Production of reports for the top management.

ISO 9000: ISO (International Standards Organization) is a consortium of 63 countries established to formulate and foster standardization. ISO published its 9000 series of standards in 1987.

 ISO 9000 certification serves as a reference for contract between independent parties and
also specifies the guidelines for maintaining a quality system.

 ISO 9001: This standard applies to organizations engaged in the design, development, production, and servicing of goods, and is applicable to most software organizations.

 ISO 9002: This standard is applicable to organizations that do not design products but are only involved in production.

 ISO 9003: This standard applies to organizations involved in the installation and testing of products.
Quality SEI CMM:

Capability Maturity Model (CMM):

The Software Engineering Institute (SEI) Capability Maturity Model (CMM) specifies an increasing series
of levels of a software development organization. The higher the level, the better the software
development process, hence reaching each level is an expensive and time-consuming process.

 Level One: Initial - The software process is characterized as inconsistent, and occasionally even
chaotic. Defined processes and standard practices that exist are abandoned during a crisis.
Success of the organization majorly depends on an individual effort, talent, and heroics. The
heroes eventually move on to other organizations taking their wealth of knowledge or lessons
learnt with them.

 Level Two: Repeatable - This level of Software Development Organization has a basic and
consistent project management processes to track cost, schedule, and functionality. The
process is in place to repeat the earlier successes on projects with similar applications. Program
management is a key characteristic of a level two organization.

 Level Three: Defined - The software process for both management and engineering activities
are documented, standardized, and integrated into a standard software process for the entire
organization and all projects across the organization use an approved, tailored version of the
organization's standard software process for developing, testing and maintaining the
application.

 Level Four: Managed - Management can effectively control the software development effort using precise measurements. At this level, the organization sets quantitative quality goals for both the software process and software maintenance. At this maturity level, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable.

 Level Five: Optimizing - The key characteristic of this level is a focus on continually improving process performance through both incremental and innovative technological improvements. At this level, changes to the process are made to improve process performance while maintaining statistical predictability, in order to achieve the established quantitative process-improvement objectives.

Software maintenance:
Software maintenance refers to the process of modifying and updating software after it
has been deployed to fix defects, improve performance, enhance functionality, adapt to
changes in the operating environment, and meet new user requirements. It's an
essential phase in the software development lifecycle (SDLC) that ensures the long-term
viability and usefulness of software systems.

Several Types of Software Maintenance

1. Corrective Maintenance: This involves fixing errors and bugs in the software system.

2. Patching: This is an emergency fix implemented mainly due to pressure from management. Patching is done for corrective maintenance, but it gives rise to unforeseen future errors due to the lack of proper impact analysis.

3. Adaptive Maintenance: This involves modifying the software system to adapt it to changes in the environment, such as changes in hardware or software, government policies, and business rules.

4. Perfective Maintenance: This involves improving functionality, performance, and reliability, and restructuring the software system to improve changeability.

5. Preventive Maintenance: This involves taking measures to prevent future problems, such as optimization, updating documentation, reviewing and testing the system, and implementing preventive measures such as backups.
Characteristics of software maintenance:

1. Continuous Process: Software maintenance is an ongoing process that begins as soon as the software is deployed and continues throughout its operational life. It involves regular monitoring, updating, and improving the software to ensure its effectiveness and reliability over time.

2. Evolutionary Nature: Software maintenance is inherently evolutionary, as it involves adapting the software to changes in the environment, technology, and user requirements. This may include adding new features, fixing defects, or optimizing performance based on feedback and emerging needs.

3. Multifaceted Activities: Maintenance activities encompass a wide range of tasks, including corrective, adaptive, perfective, and preventive maintenance. These activities address different aspects of the software, such as functionality, usability, performance, and compatibility, to ensure its continued usefulness and relevance.

4. Dependency on Documentation: Effective software maintenance relies heavily on comprehensive documentation that describes the software's architecture, design, functionality, and dependencies. This documentation serves as a reference for understanding the software and implementing changes correctly without introducing new issues.

5. User-Centric Approach: Software maintenance is driven by user needs and feedback. It involves close collaboration between developers, users, and other stakeholders to identify issues, prioritize changes, and ensure that the software meets the evolving needs of its users.

6. Resource Intensive: Maintenance activities often require significant resources in terms of time, effort, and expertise. This includes personnel dedicated to monitoring and updating the software, as well as tools and infrastructure for testing, deployment, and version control.

7. Risk Management: Software maintenance involves managing risks associated with making changes to the software. This includes assessing the impact of proposed changes, testing them thoroughly to identify potential issues, and implementing them carefully to minimize disruptions and ensure the continued stability of the software.
Software Reverse Engineering:
Software reverse engineering is the process of analyzing a software system to understand its
inner workings, design principles, functionality, and implementation details without access to
its original source code or documentation. It involves extracting information from the software.

Why Reverse Engineering?


 Providing proper system documentation.
 Recovery of lost information.
 Assisting with maintenance.
 The facility of software reuse.
 Discovering unexpected flaws or faults.
 Implementing innovative processes for specific uses.
 Making it easier to document how efficiency and power can be improved.
Steps of Software Reverse Engineering:
1. Collecting Information: This step focuses on collecting all possible information (i.e., source code, design documents, etc.) about the software.
2. Examining the Information: The information collected in step-1 is studied so as
to get familiar with the system.
3. Extracting the Structure: This step concerns identifying program structure in the
form of a structure chart where each node corresponds to some routine.
4. Recording the Functionality: During this step, the processing details of each module of the structure chart are recorded using a structured language, such as decision tables, etc.
5. Recording Data Flow: From the information extracted in step-3 and step-4, a set
of data flow diagrams is derived to show the flow of data among the processes.
6. Recording Control Flow: The high-level control structure of the software is
recorded.
7. Review Extracted Design: The design document extracted is reviewed several
times to ensure consistency and correctness. It also ensures that the design
represents the program.
8. Generate Documentation: Finally, in this step, the complete documentation
including SRS, design document, history, overview, etc. is recorded for future
use.

Software Re-engineering:
Software Re-engineering is a process of software development that is done to
improve the maintainability of a software system. Re-engineering is the
examination and alteration of a system to reconstitute it in a new form. This
process encompasses a combination of sub-processes like reverse engineering, forward engineering, restructuring, etc.
The process of software re-engineering involves the following steps:

1. Planning: The first step is to plan the re-engineering process, which involves
identifying the reasons for re-engineering, defining the scope, and
establishing the goals and objectives of the process.

2. Analysis: The next step is to analyze the existing system, including the code,
documentation, and other artifacts. This involves identifying the system’s
strengths and weaknesses, as well as any issues that need to be addressed.

3. Design: Based on the analysis, the next step is to design the new or updated
software system. This involves identifying the changes that need to be
made and developing a plan to implement them.

4. Implementation: The next step is to implement the changes by modifying the existing code, adding new features, and updating the documentation and other artifacts.

5. Testing: Once the changes have been implemented, the software system
needs to be tested to ensure that it meets the new requirements and
specifications.

6. Deployment: The final step is to deploy the re-engineered software system and make it available to end-users.

Steps involved in Re-engineering


1. Inventory Analysis
2. Document Restructuring
3. Reverse Engineering
4. Code Restructuring
5. Data Restructuring
6. Forward Engineering

Advantages of Re-engineering:
1. Reduced Risk: As the software already exists, the risk is lower than in new software development. Development problems, staffing problems, and specification problems are among the many problems that may arise in new software development.
2. Reduced Cost: The cost of re-engineering is less than the costs of developing new
software.
3. Revelation of Business Rules: As a system is re-engineered, business rules that are
embedded in the system are rediscovered.
4. Better use of Existing Staff: Existing staff expertise can be maintained and extended to accommodate new skills during re-engineering.
5. Improved efficiency: By analyzing and redesigning processes, re-engineering can lead to
significant improvements in productivity, speed, and cost-effectiveness.
6. Increased flexibility: Re-engineering can make systems more adaptable to changing
business needs and market conditions.
7. Better customer service: By redesigning processes to focus on customer needs, re-
engineering can lead to improved customer satisfaction and loyalty.
8. Increased competitiveness: Re-engineering can help organizations become more
competitive by improving efficiency, flexibility, and customer service.
9. Improved quality: Re-engineering can lead to better quality products and services by
identifying and eliminating defects and inefficiencies in processes.
10. Increased innovation: Re-engineering can lead to new and innovative ways of doing
things, helping organizations to stay ahead of their competitors.
11. Improved compliance: Re-engineering can help organizations to comply with industry
standards and regulations by identifying and addressing areas of non-compliance.

Disadvantages of Re-engineering:
1. High costs: Re-engineering can be a costly process, requiring significant investments in time,
resources, and technology.

2. Disruption to business operations: Re-engineering can disrupt normal business operations and
cause inconvenience to customers, employees and other stakeholders.

3. Resistance to change: Re-engineering can encounter resistance from employees who may be
resistant to change and uncomfortable with new processes and technologies.

4. Risk of failure: Re-engineering projects can fail if they are not planned and executed properly,
resulting in wasted resources and lost opportunities.

5. Lack of employee involvement: Re-engineering projects that are not properly communicated and do not involve employees may lead to a lack of employee engagement and ownership, resulting in failure of the project.

6. Difficulty in measuring success: The success of re-engineering can be difficult to measure, making it hard to justify the cost and effort involved.

7. Difficulty in maintaining continuity: Re-engineering can lead to significant changes in processes and systems, making it difficult to maintain continuity and consistency in the organization.

Software Reuse:

Software reuse in software engineering refers to the practice of utilizing existing software
components, modules, or systems to build new software applications. Instead of developing
software from scratch, developers leverage reusable assets to accelerate development,
improve quality, reduce costs, and enhance productivity. Software reuse encompasses various
forms and levels of reuse, including:

1. Code Reuse: Reusing code involves incorporating existing source code, libraries, or
modules into new software projects. This can range from simple code snippets and
functions to entire libraries or frameworks. Code reuse helps in avoiding redundant
development efforts, reducing errors, and maintaining consistency across projects.

2. Component Reuse: Components are self-contained, reusable units of software functionality that can be easily integrated into different applications. Examples include user interface components, data access modules, and communication libraries. By using pre-built components, developers can focus on higher-level design and functionality, rather than low-level implementation details.

3. Object-Oriented Reuse: Object-oriented programming (OOP) facilitates software reuse through concepts such as inheritance, polymorphism, and encapsulation. Developers can create reusable classes and objects that encapsulate functionality and can be extended or customized for specific requirements. Inheritance allows subclasses to inherit behavior and attributes from a superclass, promoting code reuse and modularity (a small sketch follows this list).

4. Design Patterns: Design patterns are reusable solutions to common software design
problems. They encapsulate best practices and proven solutions for designing software
systems in a reusable and maintainable way. By applying design patterns, developers
can leverage established solutions to recurring design challenges, reducing the need for
reinventing the wheel.
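
A minimal sketch of inheritance-based reuse, as mentioned in the object-oriented reuse item above; the class and method names are made up for illustration:

    class ReportExporter:
        """Reusable base class: the shared formatting logic lives here once."""
        def export(self, rows):
            header = ", ".join(rows[0].keys()) if rows else ""
            body = "\n".join(", ".join(str(v) for v in row.values()) for row in rows)
            return header + "\n" + body

    class AuditReportExporter(ReportExporter):
        """Subclass reuses the base behaviour and customizes only what differs."""
        def export(self, rows):
            return "CONFIDENTIAL\n" + super().export(rows)

    print(AuditReportExporter().export([{"id": 1, "action": "login"}]))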

Client Server Software Engineering:

Client/server software engineering is a methodology or approach to software


development that revolves around the concept of dividing the software application into
two distinct parts: the client and the server. This architectural pattern is widely used in
distributed computing environments where tasks or workload are divided between
clients and servers.
Here's an explanation of client/server software engineering:

1. Client: The client is the part of the application that interacts directly with the end-user.
It typically runs on the user's device, such as a computer, smart phone, or tablet. The
client's primary responsibility is to provide a user interface through which users can
interact with the application and initiate requests for services or resources from the
server.

2. Server: The server is the part of the application that provides services, resources, or data
to clients upon request. It typically runs on a remote computer or server, accessible over
a network such as the internet. The server's primary responsibility is to handle client
requests, process them, and return results or data back to the clients. Servers can
provide various services, including data storage, computation, application logic,
messaging, and authentication.

Software Components for Client/Server Systems

 Instead of viewing software as a monolithic application to be implemented on one machine, the software that is appropriate for a c/s architecture has several distinct subsystems that can be allocated to the client, the server, or distributed between both machines:
 User interaction/presentation subsystem: This subsystem implements all functions that are typically associated with a graphical user interface.
 Application subsystem: This subsystem implements the requirements defined by the application.
 Database management subsystem: This subsystem performs the data manipulation and management required by an application. Data manipulation and management may be as simple as the transfer of a record or as complex as the processing of SQL transactions.
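
A minimal client/server sketch using Python's standard library; the port number and message are arbitrary illustrative choices:

    import socket
    import threading
    import time

    def server(port):
        # Server side: accepts a request and returns a result (a trivial "application subsystem").
        with socket.socket() as s:
            s.bind(("localhost", port))
            s.listen(1)
            conn, _ = s.accept()
            with conn:
                request = conn.recv(1024).decode()
                conn.sendall(("echo: " + request).upper().encode())

    def client(port):
        # Client side: the user-facing part that sends requests to the server.
        with socket.socket() as c:
            c.connect(("localhost", port))
            c.sendall(b"hello")
            return c.recv(1024).decode()

    threading.Thread(target=server, args=(5050,), daemon=True).start()
    time.sleep(0.2)      # give the server a moment to start listening
    print(client(5050))  # -> ECHO: HELLO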

Service Oriented Architecture (SOA):

Service-oriented architecture (SOA) is a method of software development that uses software components called services to create business applications. Each service provides a business capability, and services can also communicate with each other across platforms and languages. Developers use SOA to reuse services in different systems or combine several independent services to perform complex tasks.
Components in Service-Oriented Architecture:

There are four main components in service-oriented architecture (SOA).

Service

Services are the basic building blocks of SOA. They can be private—available only to internal
users of an organization—or public—accessible over the internet to all. Individually, each
service has three main features.

Service implementation
The service implementation is the code that builds the logic for performing the specific service
function, such as user authentication or bill calculation.

Service contract

The service contract defines the nature of the service and its associated terms and conditions,
such as the prerequisites for using the service, service cost, and quality of service provided.

Service interface

In SOA, other services or systems communicate with a service through its service interface. The
interface defines how you can invoke the service to perform activities or exchange data. It
reduces dependencies between services and the service requester. For example, even users
with little or no understanding of the underlying code logic can use a service through its
interface.
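
A minimal sketch of a service interface kept separate from its implementation; all names here are illustrative assumptions, not part of any real SOA product:

    from abc import ABC, abstractmethod

    class BillingService(ABC):
        """Service interface: requesters depend only on this contract."""
        @abstractmethod
        def calculate_bill(self, customer_id: str) -> float: ...

    class SimpleBillingService(BillingService):
        """Service implementation: the logic behind the interface can change freely."""
        def calculate_bill(self, customer_id: str) -> float:
            return 42.0  # placeholder business logic

    def monthly_run(service: BillingService) -> None:
        # The requester invokes the service only through its interface.
        print(service.calculate_bill("C-1001"))

    monthly_run(SimpleBillingService())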

Advantages of SOA:
 Service reusability: In SOA, applications are made from existing services. Thus, services
can be reused to make many applications.
 Easy maintenance: As services are independent of each other they can be updated and
modified easily without affecting other services.
 Platform independent: SOA allows making a complex application by combining services
picked from different sources, independent of the platform.
 Availability: SOA facilities are easily available to anyone on request.
 Reliability: SOA applications are more reliable because it is easier to debug small services than huge code bases.
 Scalability: Services can run on different servers within an environment; this increases scalability.
Disadvantages of SOA:
 High overhead: A validation of input parameters of services is done whenever services interact; this decreases performance as it increases load and response time.
 High investment: A huge initial investment is required for SOA.
 Complex service management: When services interact, they exchange messages for tasks; the number of messages may run into millions. It becomes a cumbersome task to handle such a large number of messages.

Software as a Service:

Software as a Service (SaaS) is a delivery model for software where instead of purchasing and
installing software on individual computers or servers, users access the software via the
internet, usually through a web browser. In the context of software engineering, SaaS shapes how software applications are developed, deployed, and maintained:

1. Development: SaaS applications are typically developed using modern software engineering practices and technologies. This includes agile development methodologies, continuous integration and deployment (CI/CD), and scalable architecture patterns like microservices.

2. Deployment: SaaS applications are deployed on cloud infrastructure provided by platforms like Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). This allows for easy scalability, reliability, and global accessibility.

3. Subscription-based pricing: SaaS applications are typically offered on a subscription basis, where customers pay a recurring fee to access the software. This pricing model provides predictable revenue for the provider and flexibility for customers to scale usage according to their needs.

4. Maintenance and Updates: SaaS providers are responsible for maintaining and updating
the software, including security patches, bug fixes, and feature enhancements. This
relieves customers from the burden of managing infrastructure and allows them to
focus on using the software to achieve their business objectives.

5. Integration: SaaS applications often need to integrate with other systems and services,
such as third-party APIs, databases, or internal systems within an organization. Software
engineers design and implement these integrations to ensure seamless communication
and data flow between different components.
