
Chapter - 01

Overview of Software
Engineering & The Software
Development Process
Marks-20
Definition of Software
 Software is a set of instructions that, when
executed, provide desired features, functions
and performance.
 It includes data structures that enable the
programs to manipulate information.
 It includes documents that describe the
operation and use of the programs.
Characteristics
1.Software is developed or engineered; it is not
manufactured.
2.Software doesn't “wear out”.
3.Although the industry is moving towards
component-based construction, most
software continues to be custom built.
Bathtub Curve for Hardware failure
Failure Curve for Software
Types/Categories of Software
1.System Software
2.Application Software
3.Engineering/Scientific Software
4.Embedded Software
5.Product-line Software
6.Web applications
7.Artificial Intelligence Software
System Software
 It is a collection of programs written to service
other programs.
 System software area is characterized by
heavy interaction with computer hardware.
 Ex: Operating system, Compilers, Editors, File
management utilities, Drivers, Networking
Software, Telecommunication Processors.
Application Software
 It consists of standalone programs that solve
specific business needs.
 Application software is used to control
business functions in real time.
 Ex: Microsoft office suite, Google docs,
Browser
Engineering/Scientific Software
 These applications range from astronomy to
volcanology, automotive stress analysis to
space shuttle orbital dynamics, and molecular
biology to automated manufacturing.
Embedded Software
 It resides within a product or system and is
used to implement and control features and
functions for the end users and for the system
itself.
 Ex: Keypad control for microwave oven, Digital
functions, Dashboard displays etc
Product-line Software
 Designed to provide a specific capability for
use by many different customers.
 It can focus on a limited marketplace or
address mass-consumer markets.
 Ex: Word processing, Spread sheets,
Computer graphics, Entertainment,
Multimedia, Database management, Business
financial application.
Web applications
 Span a wide area of applications.
 In their simplest form, WebApps can be little
more than a set of linked hypertext files that
present information using text and limited
graphics.
Artificial Intelligence Software
 AI software makes use of non-numerical
algorithms to solve complex problems that are
not amenable to computation or straightforward
analysis.
 Ex: Robotics, pattern recognition, artificial
neural networks, etc.
Definition of Software Engineering

“The establishment and use of sound
engineering principles in order to obtain
economically software that is reliable and
works on real machines.”
Need of Software Engineering
1.A scientific and engineering approach to development is needed.
2.The project has to be divided into processes:
framework activities, activities, tasks, etc.
3.Scheduling and controlling are major
activities guided by the software project plan.
4.Different models are required for designing
and analysis.
5.Management of large amounts of resources.
6.Continuously deal with time and new
technology challenges.
Relationship between System
Engineering & Software Engineering
 System engineering takes place before
Software engineering. It mainly focuses on
system.
 System engineering understands role of
people, procedures, database, hardware,
software and other components.
 It involves modeling, validating and managing
operational requirements.
 Software engineering is derived from System
engineering.
 It mainly focuses on software product
engineering and development process.
 It is a part of System engineering.
 System engineering is overall study of a
system where software is going to be placed.
Software Engineering – A Layered
Technology Approach
 Any engineering approach must rest on an
organizational commitment to quality.
 Total Quality Management, Six Sigma and
similar philosophies foster a continuous
process improvement culture.
 This culture, in turn, develops increasingly more
effective approaches to software engineering.
 The bedrock that supports SE is a quality
focus.
 The foundation for SE is the process layer.
 The SE process is the glue that holds the
technology layers together.
 Process defines a framework that must be
established for effective delivery of SE
technology.
 SE methods provide the technical “how to's”
for building software.
 Methods include communication, requirements
analysis, design modeling, program
construction, testing and support.
 SE tools provide automated or semiautomated
support for the process and methods.
 When tools are integrated so that information
created by one tool can be used by another,
computer-aided software engineering (CASE)
results.
Software Process
 A software process is a framework for the
tasks that are required to build high-quality
software.
 A software process defines the approach that
is taken as software is engineered. But SE
also encompasses technologies.
Process framework
 A process framework establishes the
foundation for a complete software process by
identifying a small number of framework
activities that are applicable to all software
projects.
 SE actions: a collection of related tasks that
produces a major SE work product.
 Each action is populated with individual work
tasks that accomplish some part of work
implied by action.
Generic process framework
activities
1.Communication
2.Planning
3.Modeling
4.Construction
5.Deployment
The framework described in the generic view of
SE is complemented by a number of umbrella
activities.
Umbrella Activities
1.Software project tracking and control.
2.Risk management.
3.Software quality assurance .
4.Formal Technical reviews.
5.Measurement.
6.Software configuration management.
7.Reusability management.
8.Work product preparation and production.
1.Software project tracking and control: assess
progress against the plan and take actions to
maintain the schedule.
2.Risk management: assesses risks that may
affect the outcome of project or quality of
product.
3.Software quality assurance: defines and
conducts the activities required to ensure s/w
quality.
4.Formal Technical reviews: assesses SE work
products to uncover and remove errors before
going to the next activity.
5. Measurement: defines and collects process, project,
and product measures to help deliver s/w that meets
the customer's needs.
6. Software configuration management: manage the
effects of change throughout the software process.
7. Reusability management: defines criteria for work
product reuse and establishes mechanisms to achieve
reusable components.
8. Work product preparation and production: create
work products such as models, documents, logs,
forms and lists.
Personal and Team process Models
Personal Software Process:-
The PSP model defines five framework activities:
1.Planning
2.High level Design
3.High level Design Review
4.Development
5.Postmortem
Team Software Process
 Build self-directed teams that plan and track
their work, establish goals, and own their
processes and plans.
 Show managers how to coach and motivate
their teams and how to help them.
 Accelerate software process improvement.
 Provide improvement guidance to high maturity
organisations.
 Facilitate university teaching of industrial grade
team skills.
 A self-directed team has a consistent
understanding of its overall goals and objectives.
 It defines roles and responsibilities for each
member and tracks project data.
 It continually assesses risk, reacts to it, and
manages project status.
Prescriptive process model
These are called “prescriptive” because they prescribe
 a set of process framework activities
 software engineering actions,
 tasks,
 work products,
 quality assurance and
 change control mechanism for each project.
The Waterfall model
 The Waterfall Model was first Process Model to be introduced.
 It is also referred to as a linear-sequential life cycle model.
 It is very simple to understand and use.
 In a waterfall model, each phase must be completed fully before
the next phase can begin.
 This type of model is basically used for projects
which are small and have no uncertain requirements.
 At the end of each phase, a review takes place to determine if the
project is on the right path and whether or not to continue or
discard the project.
 In this model the testing starts only after the development is
complete. In waterfall model phases do not overlap.
Communication
In communication the major task is requirements gathering,
which helps to find out the exact needs of the customer.
Planning
It includes major activities such as planning the schedule and
tasks, tracking the process, and making estimates related to the project.
Modelling
Modelling is used to analyse the data, and as per the analysis the
data and processes are designed.
Construction
Construction is based on the design of the project. According to
the design of the project, coding and testing are done.
Deployment
The product is actually delivered, i.e. installed at the customer's site.
Feedback is also taken from the customer to ensure the
quality of the product.
When to Use
• Requirements of the complete system are clearly
defined and understood.
• Major requirements must be defined.
• There is a need to get a product to the market
early.
• A new technology is being used.
• Resources with needed skill set are not available
• There are some high risk features and goals.
Advantages of waterfall model:
1.This model is simple and easy to understand and use.
2.It is easy to manage due to the rigidity of the model –
each phase has specific deliverables and a review
process.
3.In this model phases are processed and completed one
at a time. Phases do not overlap.
4.Waterfall model works well for smaller projects where
requirements are very well understood.
Disadvantages of waterfall model:
1.Once an application is in the testing stage, it is very
difficult to go back and change something that was not
well-thought out in the concept stage.
2.No working software is produced until late during the
life cycle.
3.High amounts of risk and uncertainty.
4.Major design problems may not be detected early.
5.The model implies that once the product is finished,
everything else is maintenance.
Incremental process Model
C – Communication
P – Planning
M – Modelling
C – Construction
D – Deployment

Delivers software in small but usable pieces; each piece builds
on pieces already delivered.
Incremental process Model
 This combines elements of the waterfall model applied in parallel
process flows.
 Each linear sequence produces deliverable “increments” of
the s/w product.
 It produces the s/w product as a series of incremental releases.
 When an incremental model is used, the first increment is
often a core product, i.e. basic requirements.
 The core product is used by the customer. As a result of use, a
plan is developed for the next increment.
 This process is repeated following the delivery of each
increment, until the complete product is produced.
For example: if word processing s/w is developed using the
incremental paradigm, then
1) In the 1st: basic file management, editing & document
production functions are delivered.
2) In the 2nd: more sophisticated editing & document
production capabilities are delivered.
3) In the 3rd: spelling & grammar checking functions are
delivered.
4) In the last: advanced web page layout capabilities
are delivered.
Advantages
• Customers get usable functionality earlier than with
waterfall.
• Early feedback improves likelihood of producing a product
that satisfies customers.
• The quality of the final product is better
• The core functionality is developed early and tested
multiple times (during each release)
• Only a relatively small amount of functionality added
in each release: easier to get it right and test it
thoroughly
• Detect design problems early and get a chance to
redesign
Disadvantages
• Needs good planning and design.
• Needs a clear and complete definition of the
whole system before it can be broken down and
built incrementally.
• Total cost is higher than waterfall.
Rapid Application Development Model (RAD)
• RAD is an incremental software process model
that emphasizes a short development cycle.
• It uses a component-based construction
approach.
Rapid Application Development Model(RAD)
The RAD approach activities :
1. Communication: Works to understand the business
problems
2. Planning : Is essential because multiple s/w teams
work in parallel on different system function.
3. Modelling : 3 major phases - business modelling,
data modelling & process modelling.
4. Construction :Emphasizes on the use of pre-existing
s/w components & application.
5. Deployment : Changes & innovations are done if
required for customer satisfaction.
Advantages
1.Useful when the time limit for development is too short.
2.Since reusable components are used, many of the program
components are already tested. This reduces overall testing
time.
3.All functions are modularized, so it is easy to work with.
Disadvantages
• For large projects RAD require highly skilled engineers
in the team.
• Both end customer and developer should be committed to
complete the system in time, if commitment is lacking
RAD will fail.
• RAD is based on an object-oriented approach, and if it is
difficult to modularize the project, RAD may not work
well.
Evolutionary Models: Prototyping

[Figure: the prototyping cycle - Communication → Quick plan →
Modeling (quick design) → Construction of prototype →
Deployment, delivery & feedback]
Evolutionary Models: Prototyping
• Business and product requirements often change as
development proceeds.
• Software engineers need a process model that has been
explicitly designed to accommodate a product that
evolves over time.
• Evolutionary models are iterative.
• They enable software engineers to develop increasingly
more complete versions of the software.
There are two types of evolutionary development:

– Exploratory development
• Start with requirements that are well defined
• Add new features when customers propose new requirements
– Throw-away prototyping
• Objective is to understand customer’s requirements (i.e. they
often don’t know what they want), hence poor requirements to
start
• Use means such as prototyping to focus on poorly
understood requirements, redefine requirements as you
progress
Steps in Prototyping
• Begins with requirement Gathering
• Identify whatever requirements are known.
• Outline areas where further definition is mandatory.
• A quick design occurs.
• Quick design leads to the construction of prototype.
• Prototype is evaluated by the customer.
• Requirements are refined
• The prototype is tuned to satisfy the needs of the customer.
Advantages
1.The risk factor is very low.
2.With less investment of finance & time, the
requirements are confirmed.
Disadvantages
1. Leads to an implement-and-then-repair
way of building systems.
2. Practically, this methodology may increase
the complexity of the system.
3. An incomplete application may cause the
application not to be used as the full
system.
When to use Prototype model:
 This model should be used when the desired system
needs to have a lot of interaction with the end users.
 Typically, online systems and web interfaces, which have a
very high amount of interaction with end users, are best suited
for the Prototype model.
 Prototyping ensures that the end users constantly work
with the system and provide a feedback which is
incorporated in the prototype to result in a usable
system.
 They are excellent for designing good human computer
interface systems.
Spiral Model
 Spiral model is a combination of iterative
development process model and sequential
linear development model i.e. waterfall model
 It allows for incremental releases of the product, or
incremental refinement through each iteration
around the spiral.
Advantages of Spiral model:
1) High amount of risk analysis; hence, avoidance of
risk is enhanced.
2) Good for large and mission-critical projects.
3) Strong approval and documentation control.
4) Additional functionality can be added at a later
date.
5) Software is produced early in the software life
cycle.
Disadvantages of Spiral model:
1.Can be a costly model to use.
2. Project’s success is highly dependent on the
risk analysis phase.
3. Doesn’t work well for smaller projects.
When to use Spiral model:
 When cost and risk evaluation is important
 For medium to high-risk projects
 Users are unsure of their needs
 Requirements are complex
 Significant changes are expected (research
and exploration)
Agile Software Development
 It focuses on the rapid development of the s/w product
by considering the current market requirements and time
limits.
 Today's market is rapidly changing and unpredictable.
 Agile solves the problems of long development times and
heavy documentation in the s/w development process.
 Agile focuses on face-to-face or interactive processes
rather than documentation.
 It doesn't believe in more and more documentation,
because it makes it difficult to find the required
information.
 It supports team to work together with
management for supporting technical decision
making.
 This method focuses mainly on coding because it
is directly deliverable to the users.
 Agile saves man power, cost, documentation and
time.
Features of the Agile Software Development
Approach
Modularity: Modularity allows a process to be broken into components called
activities. A set of activities capable of transforming the vision of the software
system into reality.
Iterative: Agile software processes acknowledge that we get things wrong
before we get them right. Therefore, they focus on short cycles. Within each
cycle, a certain set of activities is completed.
Time-Bound: Iterations become the perfect unit for planning the software
development project. We can set time limits on each iteration and schedule
them accordingly.
Parsimony: Agile software processes focus on parsimony. That is, they require a
minimal number of activities necessary to mitigate risks and achieve their
goals.
Adaptive: During an iteration, new risks may be exposed which require
activities that were not planned. The agile process adapts to
attack these newfound risks.
Incremental: An agile process does not try to build the entire system at
once. Instead, it partitions the nontrivial system into increments which
may be developed in parallel, at different times, and at different rates.
Convergent: Convergence states that we are actively attacking all of the
risks worth attacking. As a result, the system becomes closer to the
reality that we seek with each iteration.
People-Oriented: Agile processes favor people over process and
technology. Developers that are empowered raise their productivity,
quality, and performance.
Collaborative: Communication is a vital part of any software development
project. When a project is developed in pieces, understanding how the
pieces fit together is vital to creating the finished product.
Difference between Prescriptive Process Model and
Agile Process Model
Extreme programming
 The best-known and most widely used agile method.
 Extreme Programming (XP) takes an ‘extreme’ approach
to iterative development:
 New versions may be built several times per day;
 Increments are delivered to customers every 2 weeks;
 All tests must be run for every build, and the build is only
accepted if the tests run successfully.
 XP is a disciplined approach to software development
based on the values of simplicity, communication and
feedback.
 It empowers developers to confidently respond to the
changing needs of customers, even late in the life cycle.
Chapter 2

Software Engineering Practices and Software
Requirements Engineering

Marks-16
 Software engineering practice is a collection of
concepts, principles, methods and tools that
a software engineer calls upon on a daily basis.
 Practice allows managers to manage software
projects and software engineers to build
computer programs.
Essence of practice
1.Understand the problem (communication and
analysis).
2.Plan a solution (modeling and software
design).
3.Carry out the plan (code generation).
4.Examine the result for accuracy (testing and
quality assurance).
Understand the problem
 Who are the stakeholders?
 What data, functions, features and behavior
are required to properly solve the problem?
 Is it possible to represent smaller problems
that may be easier to understand?
 Can the problem be represented graphically?

Problem: Write a function which takes two numbers and returns their sum.
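One possible solution to this practice problem, sketched in Python (the function name is our own choice):

```python
def add(a, b):
    """Return the sum of two numbers."""
    return a + b

print(add(2, 3))  # prints 5
```

Even for a problem this small, the questions above apply: the stakeholder is whoever calls the function, the required data are the two numbers, and the required behavior is returning their sum.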
Plan a solution
 Have you seen similar problems before?
 Has a similar problem been solved?
 Can sub-problems be defined?
 Can you represent a solution in a manner that
leads to effective implementation?
 Can a design model be created?
Carry out the plan.
 Does the solution conform to the plan?
 Is each component part of the solution
provably correct?
 Has the design and code been reviewed?
Examine the result
 Is it possible to test each component part of the
solution?
 Has a reasonable testing strategy been
implemented?
 Has the software been validated against all
stakeholder requirements?
Core principles of Software
Engineering
1.The Reason It All Exists.
2.Keep It Simple, Stupid!
3.Maintain The Vision.
4.What You Produce, Others Will Consume.
5.Be Open To The Future.
6.Plan Ahead For Reuse.
7.Think!
The First Principle: The Reason It All
Exists
 The software exists for one reason: to provide
value to its users.
 Before specifying a system requirement,
functionality, hardware platform, or development
process, ask questions such as:
“Does this add real value to the system?”
 If the answer is no, don't do it.
The Second Principle: Keep It
Simple, Stupid!
 All design should be as simple as possible,
but no simpler.
 This facilitates having a more easily understood
and easily maintained system.
 Simple does not mean that features should be
discarded in the name of simplicity.
 Indeed, the more elegant designs are usually the
more simple ones.
The Third Principle: Maintain The Vision
 A clear vision is essential to the success of a
software project.
 Compromising the architectural vision of a
software system weakens and will eventually
break even a well designed system.
 Having an empowered architect who can hold
the vision and enforce compliance helps ensure
a very successful software project.
The Fourth Principle: What You
Produce, Others Will Consume
 Always specify, design and implement knowing
someone else will have to understand what you
are doing.
 Someone may have to debug the code you
write, and that makes them a user of your code.
 Making their job easier adds value to the system.
The Fifth Principle: Be open to the
Future
 A system with a long lifetime has more value.
 Software lifetimes are typically measured in
months instead of years.
 The system should be ready to adapt to changes.
 Systems that adapt to change have been designed
that way from the start.
 Never design yourself into a corner.
 Always keep asking “what if?”, and prepare for all
possible answers.
The Sixth Principle: Plan ahead for
Reuse
 Reuse saves time and effort.
 Achieving a high level of reuse is arguably the
hardest goal to accomplish in developing a
software system.
 The reuse of code and designs has been
proclaimed as a major benefit of using object
oriented technologies.
The Seventh Principle: Think!
 Placing clear, complete thought before action
almost always produces better results.
 When you think about something, you are more
likely to do it right.
 You also gain knowledge about how to do it right
again.
 When clear thought has gone into a system, value
comes out.
Communication Practices
 Before customer requirements can be
analyzed, modeled, or specified they must be
gathered through a communication activity.
 Effective communication is among the most
challenging activities that confront a software
engineer.
Principles
Principle #1: Listen. Try to focus on the speaker's
words. If something is unclear, ask for clarification.
Principle #2: Prepare before you communicate.
Spend some time to understand the problem before
you meet with others. If you are responsible for
conducting a meeting, prepare an agenda in
advance of the meeting.
Principle #3:Someone should facilitate the activity.
Every communication meeting should have a leader
to keep conversation moving in a productive direction
Principle #4: Face-to-face communication is best,
but it usually works better when some other
representation of the relevant information is present.
For example, a participant may create a drawing.
Principle #5: Take notes and document decisions.
Someone participating in the communication should
serve as a “recorder”
Principle #6: Strive for collaboration. Collaboration
builds trust among team members and creates a
common goal for the team.
Principle #7: Stay focused; modularize your
discussion. Otherwise, discussion will bounce from
one topic to the next.
Principles
Principle #8:Draw a picture to clear your idea: A
sketch or drawing can often provide clarity when
words fail to do the job.
Principle #9: Know when to “move on”:
1. once there is an agreement to do something;
2. if you cannot agree on something;
3. if a feature or function is not clear.
Principle #10:Negotiation is successful when
both parties win.
Planning practices
 Good planning leads to successful results.
 The planning activity encompasses a set of
management and technical practices that enable
the software team to define a road map as it
travels towards its strategic goal and tactical
objectives.
 Planning includes complete cost estimation,
resources, scheduling and also risk analysis.
Principle 1: Understand the scope of
the project.
It's impossible to use a road map if you don't know
where you are going. Scope provides the
software team with a destination.
Principle 2: Involve the customer in
the planning activity.
The customer defines priorities and establishes
project constraints.
To accommodate these realities, software
engineers must often negotiate the order of
delivery, timelines and other project-related
issues.
Principle 3: Recognize that planning is iterative.
Principle 4:Estimate based on what you know
Principle 5:Consider risk as you define the plan
Principle 6: Be realistic.
Principle 7: Adjust granularity as you define the
plan.
Principle 8: Define how you intend to ensure
Quality.
Principle 9: Describe how you intend to
accommodate change.
Principle 10: Track the plan frequently and make
adjustment as required.
Modeling Practice
 Models are created for a better understanding of
the actual entity to be built or designed.
 When the entity is a physical thing, we can build a
model that is identical in form and shape but
smaller in scale.
 When the entity is software, our model must take a
different form. It must be capable of representing the
information, architecture, functions, features
and behavior of the system.
In SE work, two classes of models are created:
1. Analysis model.
2. Design model.
Principle 1: The information domain of the problem
must be clearly represented.
The information domain encompasses the data that flow
into the system (from end users, external
devices), the data that flow out of the system (via user
interfaces, network interfaces, graphics), and data
stores, i.e. collections of objects (data that are
maintained permanently).
Principle 2: The function of the software must
be defined clearly.
 Functions are the processes that transform the
input flow into the output flow.
 The process specification (for example,
algorithms) provides function details. The
specification must be clearly defined.
Principle 3: The behavior of the software must
be defined clearly.
 The analysis model uses state transition diagrams to
represent the behavior of the system clearly.
 They show how the system makes a transition from
one state to another on the occurrence of some
external event.
Principle 4: The clear hierarchy among
information, functions and behavior must be
shown.
 A proper hierarchy in the analysis model leads to
easy design. Hence the information, functions and
behavior of the system must be represented
using a proper hierarchy, i.e. levels or layers.
Principle 5: Analysis should be clear enough to
convert it into a design model.
 If the analysis of requirements is clear and simple,
then design and implementation in the
construction step will be easy. Hence the
requirement analysis should be clear enough.
Design modeling
Principle 1: Design should be traceable from
analysis model.
Principle 2: Consider the architecture of the system
to be built.
Principle 3: Design of data is as important as
design of function.
Principle 4: Internal as well as external interfaces
must be designed.
Principle 5: User interface design must satisfy all
needs of the end user.
Principle 6: Component level design should be
functionally independent.
Principle 7: Components should be loosely
coupled to one another and to the external
environment.
Principle 8: Designed modules should be easy to
understand.
Principle 9: Accept that design is iterative.
Construction Practices
 The construction activity encompasses a set of
coding and testing tasks that lead to
operational software that is ready for delivery
to the customer or end user.
 The initial focus of testing is at the component
level, often called unit testing. Other levels of
testing include integration testing, validation
testing and acceptance testing.
Coding Principles & Concepts
Preparation Principles: Before you write one line of
code, be sure you:
1.Understand the problem you're trying to solve.
2.Understand basic design principles & concepts.
3.Pick a programming language that meets the needs
of the software to be built & the environment in
which it will operate.
4.Select a programming environment that provides
tools that will make your work easier.
5.Create a set of unit tests that will be applied once
the component you code is completed.
Coding Principles: As you begin writing code, be sure you:
1.Constrain your algorithms by following structured
programming practice.
2.Select data structures that will meet the needs of the
design.
3.Understand the software architecture and create
interfaces that are consistent with it.
4.Keep conditional logic as simple as possible.
5.Create nested loops in a way that makes them easily
testable.
6.Select meaningful variable names and follow other
local coding standards.
7.Write code that is self documenting.
8.Create a visual layout that aids understanding.
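As an illustrative sketch of a few of these principles (meaningful variable names, simple conditional logic, self-documenting code), consider the Python function below; the scenario and the discount rule are hypothetical, not from the text:

```python
def discount_rate(total_price, is_member):
    """Return the discount rate for an order.

    Names describe intent and the conditional logic is kept flat,
    so the code largely documents itself.
    """
    LARGE_ORDER_THRESHOLD = 100.0  # assumed business rule

    if is_member and total_price >= LARGE_ORDER_THRESHOLD:
        return 0.15  # best rate: a member placing a large order
    if is_member or total_price >= LARGE_ORDER_THRESHOLD:
        return 0.10  # either condition alone earns a smaller rate
    return 0.0       # no discount otherwise

print(discount_rate(120.0, True))  # prints 0.15
```

Note how the named constant and the flat sequence of `if` statements make the logic testable case by case, in line with principles 4, 6 and 7 above.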
Validating principles
After you've completed your first coding pass, be
sure you:
1.Conduct a code walk through when
appropriate.
2.Perform unit tests and correct errors you've
uncovered.
3.Refactor the code.
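A small hypothetical example of refactoring: the behavior is unchanged, but nested conditionals are flattened into guard clauses that are easier to read, review and unit test:

```python
# Before refactoring: deeply nested, harder to follow
def can_ship_v1(order):
    if order["paid"]:
        if order["in_stock"]:
            if not order["flagged"]:
                return True
    return False

# After refactoring: same behavior, expressed as flat guard clauses
def can_ship(order):
    """An order ships only when it is paid, in stock, and not flagged."""
    if not order["paid"]:
        return False
    if not order["in_stock"]:
        return False
    return not order["flagged"]

order = {"paid": True, "in_stock": True, "flagged": False}
print(can_ship_v1(order), can_ship(order))  # prints: True True
```

Because both versions are pure functions of the same input, the unit tests from step 2 can be re-run after the refactoring to confirm nothing broke.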
Testing principles
Testing rules or objectives:
• Testing is a process of executing a program with
the intent of finding an error.
• A good test is one that has a high probability of
finding an as-yet-undiscovered error.
• A successful test is one that uncovers an
as-yet-undiscovered error.
Principle #1: All tests should be traceable to
customer requirements.
Principle #2: Tests should be planned long
before testing begins.
Principle #3: The Pareto principle applies to
software testing.
Principle #4: Testing should begin “in the small”
and progress toward testing “in the large”.
Principle #5: Exhaustive testing is not possible.
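As a sketch of these principles (the function is our own example, not from the text), each unit test below is traceable to the stated rule and targets a distinct branch of a small component, giving it a high probability of exposing an undiscovered error while testing "in the small":

```python
def is_leap_year(year):
    """Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Tests planned from the requirement itself; each case attacks a
# branch where an error is most likely to hide.
assert is_leap_year(2024)        # divisible by 4
assert not is_leap_year(2023)    # ordinary year
assert not is_leap_year(1900)    # century year not divisible by 400
assert is_leap_year(2000)        # century year divisible by 400
print("all leap-year tests passed")
```

Exhaustive testing of every year is pointless; a handful of boundary cases chosen per branch (the Pareto idea: most errors cluster in a few places) gives most of the benefit.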
Deployment
 The deployment activity encompasses 3 actions:
delivery, support and feedback.
 Because modern software process models are
evolutionary in nature, deployment happens
not once, but a number of times as the software
moves towards completion.
 Each delivery cycle provides the customer and
end users with an operational software
increment that provides usable functions and
features.
 Each support cycle provides documentation
& human assistance for all functions and
features introduced during all deployment cycles
to date.
 Each feedback cycle provides the software
team with important guidance that results in
modifications to the functions, features and
approach taken for the next increment.
Principle #1: Customer expectations for the
software must be managed.
Principle #2: A complete delivery package should
be assembled and tested.
Principle #3: A support regime must be
established before the software is delivered.
Principle #4: Appropriate instructional materials
must be provided to end users.
Principle #5: Buggy software should be fixed first,
delivered later.
Requirement Engineering
 Requirements engineering, like all other
software engineering activities, must be
adapted to the needs of the process, the
project , the product, and the people doing
the work.
 From a software process perspective, requirements
engineering is a software engineering action
that begins during the communication
activity and continues into the modeling
activity.
The requirements engineering process is
accomplished through the execution of seven
distinct functions:
1.Inception
2.Elicitation
3.Elaboration
4.Negotiation
5.Specification
6.Validation
7.Management
Inception
 Inception-Starting point, beginning.
 At project inception, software engineers ask a set of
context-free questions.
 The intent is to establish a basic understanding of
 the problem,
 the people who want a solution,
 the nature of the solution that is desired and
 effectiveness of preliminary communication and
collaboration between the customers and the
developer.
Elicitation - collecting intelligence information
 Ask the customer, the user and others:
 What are the objectives for the system?
 What is to be accomplished?
 How does the system fit into the needs of the business?
 How is the system or product to be used on a day-to-day
basis?
 Christel and Kang identified a number of problems that help
us understand why requirements elicitation is difficult
1. Problem of scope.
2. Problem of understanding
3. Problem of volatility
Elaboration
 It means to work out in detail.
 The information obtained from the customer during
inception and elicitation is expanded and refined in
elaboration.
 Software engineering focuses on developing a refined
technical model of software functions, features
and constraints.
 It describes how the end user will interact with the
system.
 The end result is an analysis model that defines
the informational, functional, behavioral domain of
the problem.
Negotiation
 The requirements engineer must reconcile conflicts
through process of negotiation.
 Customers, users & stakeholders are asked to rank
requirements and discuss conflicts in priority.
 Risks associated with each requirements are
identified and analyzed.
 Rough “guesstimates” of development effort are
made and used to assess the impact of each
requirement on project cost and delivery time.
 Using an iterative approach, requirements are
eliminated, combined, and /or modified so that
each party achieves some measure of satisfaction.
Specification
A “standard template” should be developed and used
for a specification, since this leads to
requirements that are presented in a consistent
and therefore more understandable manner.
The specification is a final work product produced
by the requirements engineer. It serves as the
foundation for subsequent software engineering
activities.
Validation
The work products produced as a consequence of
requirements engineering are assessed for
quality during a validation step.
Requirements validation examines the specification
to ensure that all software requirements have
been stated unambiguously.
The review team that validates requirements
includes software engineers, customers, users
and other stakeholders.
Requirements Management
Requirements management is a set of activities that
help the project team identify, control and track
requirements and changes to requirements at any
time as the project proceeds.
Requirement management begins with identification.
Each requirement is assigned a unique identifier.
Once requirements have been identified, traceability
tables are developed.
Each traceability table relates requirements to one or more
aspects of the system.
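A traceability table can be sketched as a simple lookup structure; the requirement IDs, descriptions and feature names below are invented for illustration only.

```python
# each requirement gets a unique identifier and is related to the
# system features that realize it
traceability_table = {
    "REQ-001": {"description": "User can log in",
                "features": ["login screen", "session manager"]},
    "REQ-002": {"description": "User can reset password",
                "features": ["e-mail service", "login screen"]},
}

def features_for(requirement_id):
    """Return the system features a requirement traces to."""
    return traceability_table[requirement_id]["features"]

print(features_for("REQ-002"))
```

When a requirement changes, such a table shows which parts of the system are affected, which is exactly the "control and track" goal above.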
Software requirements specification
(SRS)
A requirements specification for a software system
is a description of the behavior of a system to be
developed and may include a set of use cases that
describe interactions the users will have with the
software.
SRS format
There is no single precise template for writing good
Software Requirement Specifications. The contents
of an SRS document depends on the software
product being developed and also on the expertise
of the people doing the requirement elicitation.
1.Project scope section
2.Functional requirements
3.Requirement analysis models
4.External interface requirements
5.Non functional requirements
The importance of SRS documents
Establish the basis for agreement: SRS helps in
establishing agreement between the customers and the
suppliers on what the software product is to do. The
complete description of the functions to be performed by
the software specified in the SRS will assist the potential
users to determine if the software specified meets their
needs or how the software must be modified to meet their
needs.
Reduce the development effort.
Provide a basis for estimating costs and schedules. The
description of the product to be developed as given in the
SRS is a realistic basis for estimating project costs and
can be used to obtain approval for bids or price estimates.
Provide a baseline for validation and
verification. Organizations can develop their
validation and Verification plans much more
productively from a good SRS.
Facilitate transfer. The SRS makes it easier to
transfer the software product to new users or
new machines. Customers thus find it easier to
transfer the software to other parts of their
organization, and suppliers find it easier to
transfer it to new customers.
Serve as a basis for enhancement. Because
the-SRS discusses the product but not the
project that developed it, the SRS serves as a
basis for later enhancement of the finished
product.
Chapter - 03
Analysis and design modeling
Analysis Modeling
 Basic aim of analysis modeling is to create the
model that represents the information, functions
and behavior of the system to be built.
 Afterwards these all are translated into
architectural, interface and component level
designs in design modeling.
 Analysis model acts as a bridge between system
description and design model.
Objectives of analysis modeling
1.To state clearly what the customer wants.
2.To establish the basis of the design model.
3.To define the set of requirements.
Analysis modeling approaches
 Analysis modeling is the first technical
representation of system
 Methods for analysis modeling are:-
1.Structured analysis
2.Object oriented analysis.
Structured analysis
 The structure of the structured analysis model is as shown in
the figure above.
 At the center of model is a data dictionary.
 Data dictionary is a repository that contains descriptions
or information of all data objects used or created by the
software.
 Surrounding area of the core is occupied by different
diagrams such as:-
1.E-R diagram
2.Data Flow Diagram
3.State transition diagram
1. E-R diagram
E-R diagram is mainly used to represent the
relationship between two entities or data objects.
 This diagram is used to execute data modeling
activity.
 The additional information about data objects can
be given with the help of data objects description.
2. Data flow diagram
Data Flow Diagram used for following reasons:-
 Representing data transformation through the
system.
 To show the functions with its sub functions those
are responsible for transforming the data flow.
 The description of every function is written using
process specification
3. State transition diagram
 It indicates the behavior of the system as an
outcome of external events.
 It represents the different modes of behavior
called states of the system.
 It also shows the fashion in which transitions are
made from one state to another state.
 The additional information that is required for
control attribute is written using control
specification
Object oriented analysis
 The basis of object oriented analysis is classes
and members, objects and attributes.
 Classes are collections of data members and the
operations to be performed on those data members.
 Objects are run time entities that encapsulate
data members and member functions.
 The objective of OOA is to define all classes that
are related to the problem; the operations and
attributes of each class, and the relationships between
classes, operations, and attributes need to be presented.
Steps to perform object oriented
analysis
1.Find out the exact customer requirement.
2.Prepare scenarios or use cases.
3.Selection of classes and objects based on
requirement.
4.Defines attributes and operations for every system
object.
5.Design structure and hierarchies that will help
organizing classes.
6.Construct an object relationship and behavior
model.
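As a rough sketch of steps 3 and 4 above for a hypothetical reservation scenario (the class, attribute and operation names are assumptions, not from the text), selecting a class and defining its attributes and operations might look like:

```python
class Seat:
    """A class selected from a hypothetical reservation use case (step 3)."""

    def __init__(self, number):
        self.number = number       # attributes identified from the scenario (step 4)
        self.reserved = False

    def reserve(self):             # an operation defined for the object (step 4)
        if self.reserved:
            return False           # reject a double booking
        self.reserved = True
        return True

seat = Seat(12)
print(seat.reserve())   # first reservation succeeds
print(seat.reserve())   # second attempt is rejected
```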
Domain analysis.
 Software domain analysis can be defined as a
process of recognizing, analyzing and specifying
common requirements from a specific application
domain.
 It finds the common requirements in the project.
 Application domain common objects, common
classes, common frameworks can be identified and
can be reused.
Eg: analysis done for the specific application domain 'bus
reservation system' can be reused for a 'railway
reservation system'.
Technical Domain
 Technical domain of the software is related to the
common technical requirements which can be
shared by many products.
 Ex: most of the mobile applications use common
facilities called calling, sending messages, access
to the Internet etc.
 Many applications can be developed where we do
not write above requirements again and again.
 They can be used by any applications once
installed on the mobile phone. These activities use
specific technical requirements that combine
hardware with software.
Application Domain
 The application domain is the common library that
contains the classes that can be used by other
products to minimize their work.
 Domain analysis helps in finding out common
requirements of the software and its domain is
created. It is called specific application domain.
 Ex: In finance and banking, different financial
products are offered to the customers such as
different types of accounts, fixed deposits,
mutual funds, insurance , loans, etc., comes
under specific application domain.
 Once it is created, many other software products
can use it.
Goals of Domain Analysis
1.Find out common requirement specification.
2.To save the time.
3.Reduce the repeated or duplicate work.
4.Reduction in the complications of the project.
5.To make library of classes available.
6.To enhance the portability.
Input and Output of domain analysis
 Figure shows the flow of the input and the output
data in the domain analysis module.
 The main goal is to create the analysis classes and
common functions.
 The input consists of the knowledge domain.
 The input is based on the technical survey,
customer survey and expert advice.
 This data is then analyzed, and meaningful information
is extracted from it.
 The output domain consists of reusable classes,
standards, functional models and domain language.
Elements of the analysis model
Scenario based Elements
The system is described from the user's point of
view using this approach. This is often the first
part of the analysis model that is developed to
serve as input for the creation of other
modeling elements.
Class-based Elements
Each usage scenario implies a set of objects that
are manipulated as an actor interacts with the
system. These objects are categorized into
classes – a collection of things that have
similar attributes and common behaviors.
Behavioral Elements
The behavior of the system can have a profound
effect on the design that is chosen. The
analysis model must provide modeling
elements that depict the behavior. The state
diagram is one of the methods for representing
the behavior of a system.
Flow-Oriented Elements
The information is transformed as it flows
through the computer based system. The
system accepts inputs in a variety of forms,
applies functions to transform them, and produces
output in different forms. The transforms may
comprise a single logical comparison, a
complex numerical algorithm or an expert
system. The elements of the information flow
are included here.
Data modeling concepts
It includes :-
1.Data objects
2.Data attributes
3.Data relationship
4.Cardinality and Modality
1. Data objects
A data object is a representation of almost any
composite information that must be understood
by software.
A data object can be an external entity, a thing,
an occurrence or event, a role, an
organizational unit, a place or a structure.
For ex, a person or a car can be viewed as a
data object in the sense that either can be
defined in terms of a set of attributes.
A data object encapsulates data only—there is
no reference within a data object to operations
that act on the data.
Data Attributes
Data attributes define the properties of a data object
and take on one of three different characteristics.
They can be used to
1. name an instance of data object.
2. describe the instance.
3. make reference to another instance in another table.
In addition, one or more attributes must serve as an
identifier. Referring to the data object car, the identifier
might be the ID number.
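The car example above can be sketched as a Python dataclass: the object encapsulates attributes only (no operations), with the ID number as the identifying attribute. The attribute values below are invented.

```python
from dataclasses import dataclass

@dataclass
class Car:
    id_number: str    # the identifier attribute
    make: str         # descriptive attributes
    model: str
    body_type: str

car = Car(id_number="ID-1001", make="ExampleMake",
          model="ExampleModel", body_type="sedan")
print(car.id_number)
```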
Data Relationship
Relationships indicate how data objects are
connected to one another in different ways.
Ex. Consider two data objects, person and car.
A person purchases the car; here purchase is the
relation.
These objects can be represented using a
simple notation.
A connection is established between person and car
because the two objects are related.
But what are the relationships?
We can define a set of object/relationship pairs that
define the relevant relationships.
For ex:
> A person owns a car.
> A person is insured to drive a car.
The arrows provide important information about the
directionality of the relationship and reduce
confusion.
Cardinality and Modality
A simple pair that states that objectX relates to
objectY does not provide enough information
for software engineering purposes.
We must understand how many occurrences of
objectX are related to how many occurrences
of objectY. This leads to data modeling
concept called cardinality.
“Cardinality is the specification of the number of
occurrences of one object that can be related
to the number of occurrences of another
object.”
For ex,
• one object can relate to only one other object (1:1
relationship)- a college is having only one
principal;
• one object can relate to many objects (1:N
relationship)-one class may have many students;
• some number of occurrences of an object can
relate to some other number of occurrences of
another object (M:N relationship) -an uncle may
have many nephews while a nephew may have
many uncles.
Cardinality also defines ”the maximum number of
objects that can participate in a relationship”.
The modality of a relationship is 0 if there is no
explicit need for the relationship to occur or the
relationship is optional. The modality is 1 if an
occurrence of the relationship is mandatory.
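The 1:N example above (one class, many students) can be sketched in code; the class and student names are invented for illustration.

```python
class SchoolClass:
    def __init__(self, name):
        self.name = name
        self.students = []          # the "many" side of the 1:N relationship

    def enroll(self, student_name):
        self.students.append(student_name)

first_year = SchoolClass("First Year")   # the "one" side
first_year.enroll("Asha")
first_year.enroll("Ravi")
print(len(first_year.students))          # many students relate to one class
```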
Flow oriented modeling
• Data flow oriented modeling continues to be one
of the most widely used analysis notations
today.
• The DFD takes an input-process-output view of
a system.
• Data object flow into the software, are
transformed by processing elements, and
resultant data objects flow out of the software.
• Data objects are represented by labeled
arrows and transformation are represented by
circles.
Data Flow Diagram
A data flow diagram is a graphical
representation that depicts information flow
and the transforms that are applied as data move
from input to output.
A DFD shows what kinds of data will be input
to and output from the system, where the data
will come from and go to, and where the data
will be stored.
A few simple guidelines can aid immeasurably during
derivation of a data flow diagram
1. The level 0 data flow diagram should depict the
software/system as single bubble.
2. Primary input and output should be carefully noted.
3. Refinement should begin by isolating processes, data
objects and data stores to be represented at the next level.
4. All arrows and bubbles should be labeled with
meaningful names.
5. Information flow continuity must be maintained from level
to level.
6. One bubble at a time should be refined.
 Rectangle represents Entity
 A circle (bubble) represents a process
or transform that is applied to data (or
control).
 An arrow represents one or more data
items (data objects).
 All arrows on a data flow diagram
should be labeled.
 The double line represents a data store –
stored information that is used by the
software.
Data Flow Diagram
Level 1 DFD
Level 2 DFD for Monitor Sensor
Data flow diagram level 0 and 1 for
a Book publishing House
Level 1 DFD for Book Publishing House
• Draw Level 0 and Level1 DFD for Food
Ordering System.
• Draw level 0 and Level 1 DFD for Online
Shopping System
Data Dictionary
• Data dictionary is the centralized collection of information
about data.
• It stores meaning and origin of data, its relationship
with other data, data format for usage etc.
• Data dictionary is often referenced as meta-data (data
about data) repository.
• It is created along with DFD (Data Flow Diagram) model
of software program and is expected to be updated
whenever DFD is changed or updated.
• Data dictionary provides a way of documentation for the
complete database system in one place. Validation of
DFD is carried out using data dictionary.
Requirement of Data Dictionary
• The data is referenced via the data dictionary while
designing and implementing software.
• The data dictionary removes any chances of
ambiguity.
• It helps keep the work of programmers and
designers synchronized by using the same object
reference everywhere in the program.
Contents
Data dictionary should contain information about
the following:
1. Data Flow
2. Data Structure
3. Data Elements
4. Data Stores
5. Data Processing
Data Flow is described by means of DFDs as
studied earlier and represented in algebraic
form as described.
= Composed of
{} Repetition
() Optional
+ And
[ / ] Or
Example:
Address = House No + (Street / Area) + City + State
Course ID = Course Number + Course Name + Course
Level + Course Grades
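As a rough sketch, the algebraic notation above can be mirrored in code; here a tuple stands for the "( a / b )" optional-choice construct, and the `render` helper (an invented name) rebuilds the notation string.

```python
def render(parts):
    """Join required parts with '+'; a tuple models an '(a / b)' choice."""
    out = []
    for part in parts:
        if isinstance(part, tuple):
            out.append("(" + " / ".join(part) + ")")
        else:
            out.append(part)
    return " + ".join(out)

# Address = House No + (Street / Area) + City + State
address = ["House No", ("Street", "Area"), "City", "State"]
print(render(address))
```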
Data Elements
Data elements consist of names and descriptions
of Data and Control Items, Internal or External
data stores etc. with the following details:
1. Primary Name
2. Secondary Name (Alias)
3. Use-case (How and where to use)
4. Content Description (Notation etc.)
5. Supplementary Information (preset values,
constraints etc.)
Example
• Name: Mobile Number
• Alias: Mob-no
• Where used/How used:
Dial Phone and send Msg.
• Description:
Mob-no=country code+ mobile number.
Data Store: It stores the information from where the
data enters into the system and exits out of the
system. The Data Store may include -
• Files
– Internal to software.
– External to software but on the same machine.
– External to software and system, located on different
machine.
• Tables
– Naming convention
– Indexing property
Data Processing
There are two types of Data Processing:
Logical: As the user sees it
Physical: As the software sees it
The Control specification
The Control specification (CSPEC) represents the
behavior of a system in two different ways but it
gives no information about the inner working of
the processes that are activated as a result of
this behavior.
1.State diagram - sequential specification of
behavior
2.Program activation table - combinatorial
specification of behavior.
The process specification
The process specification (PSPEC) is used to
describe all flow model processes that appear
at the final level of refinement.
The content of a process specification can include
narrative text, program design language
(PDL), mathematical equations, tables, diagrams,
charts etc.
Using the process specification, the engineer creates a
mini specification that can serve as a guide for
the s/w component that will implement the
process.
Scenario Based Modeling
If software engineers understand how end users want
to interact with a system, the software team will be
better able to properly characterize
requirements and build meaningful analysis and
design models.
It begins with the creation of scenarios with the
help of:-
1. Use case diagram
2. Activity diagram
3. Swim lane diagram.
Developing/writing use cases
The use case captures the interaction that occur
between producers and consumers of
information and system itself.
Purpose of Use Case
 Use cases are used to model the system from
the point of view of end user.
 Person or thing that are involved are called as
actors and the operations that take place are
called as actions.
 Use case helps in understanding the exact
product requirements
 Providing a clear and unmistakable description
of how system and end user interact with each
other.
 Provide basis for the purpose of validation
testing.
Activity Diagram
The UML (Unified Modeling Language) activity
diagram supplements the use case by
providing a graphical representation of the flow
of interaction within a specific scenario.
An activity diagram uses rounded rectangles to
imply a specific system function, arrows to
represent flow through the system, decision
diamonds to depict a branching decision and
solid horizontal lines to indicate that parallel
activities are occurring.
Behavioral model
The behavior model indicates how s/w will respond to
external events.
To create the behavior model analyst must perform
following steps:-
1.Evaluate all use cases to fully understand the
sequence of interaction within system.
2.Identify the events and understand how these events
relates to specific classes.
3.Create a sequence for each use case.
4.Build a state diagram for the system.
5.Review the model to verify accuracy and consistency.
An event occurs whenever the system and an
actor exchange information.
It is important to note that an event is not the
information that has been exchanged, but
rather the fact that information has been
exchanged.
Example: Homeowner enters a 4-digit password;
here the object homeowner transmits an
event to the object control panel. Here the
event is "password entered". The
information transferred is the 4 digits that
constitute the password, but this is not an
essential part of the behavioral model.
It is important to note that some events have an
explicit impact on the flow of control of the
use case, while others have no direct
impact on the flow of control.
The event "password entered" does not explicitly
change the flow of control, but the result of the
event "compare password" will have an
explicit impact on the information and
control flow of the SafeHome system.
State Representation
Two different characterizations of states must be
considered.
I) It shows how change proceed over time, it
shows the dynamic nature of a system.
II) Clarify following things for state diagram
- identify object
- identify state
- identify event.
State diagram for control panel class
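The state behavior described above can be sketched as a small transition table; the state and event names below are assumptions for illustration, not a transcription of the figure.

```python
# (current state, event) -> next state
TRANSITIONS = {
    ("reading commands", "password entered"): "comparing password",
    ("comparing password", "password ok"): "selecting user action",
    ("comparing password", "password bad"): "reading commands",
}

class ControlPanel:
    def __init__(self):
        self.state = "reading commands"    # initial state

    def handle(self, event):
        # look up the (state, event) pair; stay put if no transition exists
        self.state = TRANSITIONS.get((self.state, event), self.state)

panel = ControlPanel()
panel.handle("password entered")
print(panel.state)
panel.handle("password ok")
print(panel.state)
```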
Sequence Diagrams
From examining the use case diagram for events, the
modeler creates a sequence diagram - a
representation of how events cause flow from
one object to another as a function of time.
It represents key classes and the events that
cause behavior to flow from class to class.
Design modeling
Software design is an iterative process that is
used to translate requirements into a design
model or blueprint for the construction of the
software.
Throughout the design process the quality of the
evolving design is assessed with a series of
formal technical reviews and code
walkthroughs.
Each element of the analysis model provides
information that is necessary to create the four design
models:
 The data/class design transforms analysis classes
into design classes along with the data structures
required to implement the software
 The architectural design defines the relationship
between major structural elements of the software;
architectural styles and design patterns help achieve
the requirements defined for the system
 The interface design describes how the software
communicates with systems that interoperate with it
and with humans that use it
 The component-level design transforms structural
elements of the software architecture into a
procedural description of software components
Three characteristics for the evaluation of good
design:-
1. The design must implement all of the explicit
requirements contained in the analysis
model, and it must include all of the implicit
requirements desired by the customer.
2. The design must be readable and understandable
to everyone.
3. The design needs to give a complete idea or
picture of the software, addressing the data, functional
and behavioral domains.
Design quality guidelines.
1.A design should show architecture i.e. be
developed with the help of understandable
patterns, styles and components that have the
characteristics of good design.
2.A design should be modular, so that the
s/w can be partitioned logically into elements.
3.A design should contain different
representations of components, interfaces,
architectures and data.
4.A design should have appropriate classes and
data structures to be implemented, sourced
from recognizable data patterns.
5. A design should lead to components that
exhibit independent functional characteristics.
6. A design should lead to interfaces that reduce
the complexity of connection between
components and with the external
environment.
7. A design should be derived using a repeatable
method that is driven by information obtained
during analysis.
8. Design should be presented using a notation
that effectively communicates its meaning.
Design concepts
1. Abstraction
2. Architecture
3. Patterns
4. Modularity
5. Information hiding
6. Functional independence
7. Refinement
8. Refactoring
Abstraction
-Procedural abstraction – a sequence of
instructions that have a specific and limited
function
-Data abstraction – a named collection of data
that describes a data object
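A sketch contrasting the two kinds of abstraction, using an invented door example: `Door` is a data abstraction (a named collection of data describing an object), and `open_door` is a procedural abstraction (a named sequence of instructions with a specific, limited function).

```python
from collections import namedtuple

# data abstraction: "Door" names a collection of descriptive data
Door = namedtuple("Door", ["width", "height", "is_open"])

# procedural abstraction: "open_door" names the steps and hides the details
def open_door(door):
    return door._replace(is_open=True)

front_door = Door(width=90, height=210, is_open=False)
print(open_door(front_door).is_open)
```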
Architecture
-The overall structure of the software and the
ways in which the structure provides conceptual
integrity for a system
-Consists of components, connectors, and the
relationship between them
Patterns
-A design structure that solves a particular
design problem within a specific context
-It provides a description that enables a
designer to determine whether the pattern is
applicable, whether the pattern can be reused,
and whether the pattern can serve as a guide
for developing similar patterns
Modularity
-Separately named and addressable
components (i.e., modules) that are integrated
to satisfy requirements (divide and conquer
principle)
-Makes software intellectually manageable so
as to reduce overall complexity like the control
paths, span of reference, number of variables
etc.
Information hiding
-The designing of modules so that the
algorithms and local data contained within them
are inaccessible to other modules
-This enforces access constraints to both
procedural (i.e., implementation) detail and local
data structures
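A minimal sketch of information hiding (the example is invented): the module's local data sits behind a small interface, so other modules cannot depend on the internal representation.

```python
class Counter:
    def __init__(self):
        self._count = 0            # local data hidden behind the interface

    def increment(self):           # the only way other modules change state
        self._count += 1

    def value(self):
        return self._count

counter = Counter()
counter.increment()
counter.increment()
print(counter.value())
```

Because callers only use `increment()` and `value()`, the internal `_count` could later be replaced (for example by a log of events) without touching any other module.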
Functional independence
Independence is assessed using two criteria:
cohesion and coupling.
Cohesion (connection or bond) represents the
relative functional strength of a module.
Coupling is the relative interdependence
among modules.
Stepwise refinement
-Development of a program by successively refining
levels of procedural detail
-Complements abstraction, which enables a designer
to specify procedure and data.
 Refactoring
-A reorganization technique that simplifies the design
(or internal code structure) of a component without
changing its function or external behaviour.
-Removes redundancy, unused design elements,
inefficient or unnecessary algorithms, poorly
constructed or inappropriate data structures, or any
other design failures
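A tiny before/after sketch of refactoring (the pricing example is invented): the external behaviour is identical, but the duplicated discount logic is factored out.

```python
# before: the discount expression is written out in each branch
def price_before(amount, is_member):
    if is_member:
        return amount - amount // 10     # 10% member discount
    else:
        return amount - 0

# after: the duplication is removed; external behaviour is unchanged
def price_after(amount, is_member):
    discount = amount // 10 if is_member else 0
    return amount - discount

print(price_before(200, True) == price_after(200, True))
```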
Design Model
Data elements
Data model --> data structures
Data model --> database architecture
Architectural elements - give us an overall
view of the s/w,
"similar to the floor plan of a house" (layout,
shape, size, placement of windows and doors).
Application domain, analysis classes, their
relationships, collaborations and behaviors
are transformed into design realizations.
Patterns and "styles"
Interface elements - "The way in which utility
connections come into the house and are
distributed among the rooms":
the user interface (UI); external interfaces to
other systems, devices, networks or other
producers or consumers of information;
internal interfaces between various design
components.
Component elements
It is equivalent to a set of detailed drawings
and specifications
for each room in a house.
The component-level design for software fully
describes the internal detail of each software
component.
Deployment elements
Indicates how software functionally and
subsystem terms will be allocated within the
physical computing environment that will
support the software.
Chapter - 04
Software Testing Strategies and
Methods
• The process of executing the programs with the
intention to find out the errors is software
testing.
• A software engineer must develop or design
software which is easy to test.
Objectives of software testing
1.Find out the errors and fix them.
2.Good test is able to find out undiscovered errors.
3.To fulfill customers requirement regarding
software product.
4.To enforce standards while developing software
product.
5.To support software quality assurance.
6.Check operability, flexibility, reliability of software
product.
Characteristics / attributes:-
1.Operability
2.Observability
3.Controllability
4.Simplicity
5.Stability
6.Understandability
7.Reliability
8.Safety and security.
Operability
• Ability to be operated easily.
• The greater the operability, the more efficiently the s/w can
be tested.
• It makes sure that:
• The system will have fewer bugs.
• Bugs won't block the execution of tests.
Observability
• It is nothing but "what you can see is what you
can test".
• Points observed are:-
– A distinct o/p is generated for each and every i/p.
– Check system states and variables and their values.
– Each and every factor that affects the o/p is clearly
visible.
– Wrong o/p and internal errors can be easily found.
Concept of Good Test
• A good test has a high probability of finding an
error.
• A good test is not redundant, every test should
have a unique purpose.
• A good test should be best of breed.(best test
selected from a group of tests.)
• A good test should be neither too simple nor too
complex.
Concept of Successful Test
• If testing is conducted successfully i.e. by
following the all objectives, it will uncover almost
all errors in the software.
• The benefit of successful testing is that software
functions appear to be working according to
design and specification of s/w.
• It leads to increase in the reliability and quality
of the s/w.
Testing strategies
• Testing strategy is a way or method using which
the testing is done.
• This strategy plays very imp role as it is a major
contributor in SQA activities.
• As per the type and scope of the software, the
testing strategy is designed and implemented.
• In testing strategy we define test plan, test cases,
test data.
Test plan
• A s/w test plan is a document that contains the
strategy used to verify that the s/w product or
system adheres to its design specification and
other requirement.
Test plan may contain following test:-
1.Design verification test
2.Development or production test
3.Acceptance or commissioning test
4.Service and repair test
5.Regression test.
• The test plan format varies from organization to
organization.
• A test plan should contain three important
elements: test coverage, test methods, and test
responsibilities.
Test cases
A test case is a set of conditions or variables under
which a testing team will determine whether an
application or s/w system is working correctly or not.
The test case is classified into three categories:-
1. Formal test case -(i/p is known and o/p is
expected)
2. Informal test case-(multiple steps, not written)
3. Typical written test case-
(contain test case data in detail)
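A formal test case can be captured as code: the input is known and the output is expected. A minimal sketch, assuming a hypothetical discount() function as the application under test (the names and values are illustrative, not from any real system):

```python
# A minimal formal test case: known input, expected output.
# discount() is a hypothetical function standing in for the
# application under test.

def discount(order_total):
    """Return the discount rate for a given order total."""
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0

# Test case TC-01: the i/p is known and the o/p is expected.
test_case = {
    "id": "TC-01",
    "description": "Order of 750 should earn the 5% discount",
    "input": 750,
    "expected": 0.05,
}

actual = discount(test_case["input"])
assert actual == test_case["expected"], f"{test_case['id']} failed"
print(f"{test_case['id']}: PASS")
```

A typical written test case records the same fields (id, description, input, expected result) in a document instead of code; the structure is identical.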
Test data
• Test data are data which have been specifically
identified for use in tests.
• The test data may have different purpose as
follows :-
– It can be used for inputting data to the tests to produce
the expected o/p.
– To check an ability or behavior of a s/w program to deal
with unexpected, unusual, extreme and exceptional I/p.
– Test data may include data which describes details
about the testing strategy, i/p, o/p in general.
Characteristics of testing strategies
• The requirements of the product need to be
specified in a quantifiable (measurable) way
before the testing strategy begins.
• Testing objectives should be explicitly mentioned
in the testing strategy.
• Develops a profile for every category of the user
by understanding the users of the software.
• It should develop a testing plan that focuses on
“rapid cycle testing”
• A robust software has to be built that is capable of
testing itself.
• Formal technical reviews need to be conducted
to check the test strategy and test cases.
• A continuous improvement approach should be
used by the testing strategy for the testing process.
Software Verification and
Validation (V&V)
• Verification refers to the set of activities that
ensures that s/w correctly implements a specific
function.
• Validation refers to a different set of activities that
ensures that the s/w that has been built is
traceable to customer requirements.
• Verification- Are we building the product right?
• Validation- Are we building the right product?
• V&V encompasses a wide array of SQA
activities that include
– formal technical reviews,
– quality audits,
– performance monitoring,
– simulation,
– feasibility study,
– usability testing,
– database review,
– algorithm analysis,
– development testing,
– usability testing,
– qualification testing,
– installation testing etc.
Verification vs Validation:
1. Verification answers the question: Am I building the
product right? Validation answers the question: Am I
building the right product?
2. Verification is a static practice of verifying documents,
design, code and program. Validation is a dynamic
mechanism of validating and testing the actual product.
3. Verification does not involve executing the code.
Validation always involves executing the code.
4. Verification is human-based checking of documents and
files. Validation is computer-based execution of the
program.
5. Verification uses methods like inspections, reviews,
walkthroughs, and desk-checking. Validation uses
methods like black box (functional) testing, gray box
testing, and white box (structural) testing.
6. Verification checks whether the software conforms to
specifications. Validation checks whether the software
meets the customer's expectations and requirements.
7. Verification can catch errors that validation cannot
catch, and validation can catch errors that verification
cannot catch.
4.4 Testing Strategies
Unit Testing
Integration Testing
Top-Down Approach
Bottom-up Approach
Regression Testing
Smoke Testing
Testing Strategies
 Testing begins at the component level and works
outward toward the integration of the entire computer-
based system.
 Different testing techniques are appropriate at different
points in time.
 The developer of the software conducts testing and
may be assisted by independent test groups (ITGs)
for large projects.
 The role of the independent tester is to remove the
conflict of interest when the builder is testing his or her
own product.
Unit Testing
• Unit Testing is a level of the software testing
process where individual units/components of a
software/system are tested.
• The purpose is to validate that each unit of the
software performs as designed.
• A unit is the smallest testable part of software.
• It usually has one or a few inputs and usually a
single output.
• In procedural programming a unit may be an
individual program, function, procedure, etc.
• In OOP, the smallest unit is a method, which may
belong to a base/super class, abstract class or
derived/child class.
• The module interface is tested to ensure that
information properly flows into and out of the
program unit under test.
• Local data structures are examined to ensure that
data stored temporarily maintains its integrity
during execution.
• All independent paths are exercised to ensure that all
statements in a module have been executed at least once.
• Boundary conditions are tested to ensure the
module operates properly at boundaries
established to limit or restrict processing.
• Finally all error handling paths are tested.
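The checks listed above (boundary conditions, independent paths, error-handling paths) can be sketched with Python's unittest module; clamp() here is a hypothetical unit under test, and the boundary values are illustrative:

```python
import unittest

def clamp(value, low, high):
    """Hypothetical unit under test: restrict value to [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

class ClampUnitTest(unittest.TestCase):
    def test_boundaries(self):
        # Boundary conditions: the module must operate properly
        # exactly at the limits that restrict processing.
        self.assertEqual(clamp(10, 0, 10), 10)
        self.assertEqual(clamp(0, 0, 10), 0)

    def test_independent_paths(self):
        # Exercise both the below-range and above-range paths.
        self.assertEqual(clamp(-5, 0, 10), 0)
        self.assertEqual(clamp(15, 0, 10), 10)

    def test_error_handling(self):
        # Error-handling path: an invalid range is rejected.
        with self.assertRaises(ValueError):
            clamp(5, 10, 0)

# Run the suite explicitly so the script can continue afterwards.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClampUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun, "failures:", len(result.failures))
```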
Unit test procedures
• In most applications a driver is nothing more than
a main program that accepts test case data,
passes it to the component to be tested, and
prints the results.
• Stubs serve to replace modules that are called
by the component to be tested.
• A stub, or dummy subprogram, uses the
subordinate module's interface, may do minimal data
manipulation, provides verification of entry, and returns
control to the module undergoing testing.
• Drivers and stubs are kept simple, so actual
overhead is kept low.
An example of stubs and drivers is given below:
• For example, we have 3 modules: login, home, and
user.
• The login module is ready and needs to be tested, but
it calls functions from home and user (which are not
ready).
• To test such a module in isolation, we write a short
dummy piece of code which simulates home
and user and returns values to login. This
piece of dummy code is called a stub, and it
is used in top-down integration.
• Considering the same example:
• If the home and user modules are ready
and the login module is not, and we need to
test home and user,
• which expect values from the login module, then to
supply those values we write a
short piece of dummy code for login which
returns values to home and user. Such a piece
of code is called a driver, and it is used in
bottom-up integration.
• Conclusion: it is clear from the above example
that stubs act as "called" functions in top-down
integration, while drivers are "calling" functions in
bottom-up integration.
• Suppose you have a function (Function A) that calculates the total
marks obtained by a student in a particular academic year.
• Suppose this function derives its values from another function
(Function B) which calculates the marks obtained in a particular
subject.
• You have finished working on Function A and want to test it.
• But the problem you face here is that you can't run
Function A without input from Function B; Function B is still
under development.
• In this case, you create a dummy function to act in place of
Function B to test your function. This dummy function gets called by
the function under test. Such a dummy is called a stub.
• To understand what a driver is, suppose you have finished Function
B and are waiting for Function A to be developed. In this case you
create a dummy to call Function B. This dummy is called the
driver.
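The stub and driver ideas above can be sketched in a few lines of Python. The module names (login, home) follow the example in the text, but the functions themselves are hypothetical stand-ins:

```python
# Sketch of a driver and a stub, following the login/home example.
# login is the component under test; home is not ready yet.

def home_stub():
    """Stub: stands in for the unfinished 'home' module and
    returns a canned value to the module under test."""
    return {"page": "home", "status": "ok"}

def login(fetch_home):
    """Component under test: calls a subordinate module (replaced
    here by the stub) and reports whether login reached home."""
    result = fetch_home()
    return result["status"] == "ok"

def driver():
    """Driver: a minimal main program that feeds the component
    under test and prints the result."""
    passed = login(home_stub)
    print("login test:", "PASS" if passed else "FAIL")

driver()
```

In top-down integration the stub replaces the called module; in bottom-up integration the driver plays the role of the not-yet-written calling module.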
Integration Testing
• It is a systematic technique for constructing the
software architecture while, at the same time,
conducting tests to uncover errors associated with
interfacing.
• The objective is to take unit tested components
and build program structure that has been
dictated by design.
• The entire program is tested as a whole.
Top down integration
• It is an incremental approach to construction of
the s/w architecture.
• Modules are integrated by moving downward ,
beginning with the main control module(main
program).
• Modules subordinate to the main control module
are incorporated into the structure in either a
DFS or BFS manner.
Steps to perform top down integration:-
• The main control module is used as a test driver;
stubs are substituted, one at a time, for all
components directly subordinate to the main
module.
• Tests are conducted as each component is
integrated.
• On completion of each set of tests, another stub is
replaced with the real component.
Bottom up integration
• As its name implies, begins construction and
testing with lowest level component in the
program structure.
• Since components are integrated bottom-up, the
need for stubs is eliminated.
Steps for bottom-up integration:-
• Low-level components are combined into
clusters (builds) that perform a specific s/w sub-
function.
• A driver is written to coordinate test case input
and output.
• The cluster is tested.
• Drivers are removed and clusters are combined
moving upward in structure.
• Components are combined to form clusters.
• Each cluster is tested using a driver (shown as
a dashed block); components in clusters 1 and 2 are
subordinate to Ma.
• Drivers D1 and D2 are removed and the clusters are
interfaced directly to Ma.
• As integration moves upward, the need for
separate test drivers lessens.
Top-Down integration vs Bottom-Up integration:
1. Top-down is an incremental approach to construction
of the software architecture. Bottom-up integration
begins construction and testing with sub-modules
(atomic modules).
2. In top-down integration, modules are integrated by
moving downward through the control hierarchy,
beginning with the main control module (main
program). In bottom-up integration, low-level
components are combined into clusters that perform
a specific software sub-function.
3. Top-down integration requires stubs for test cases.
Bottom-up integration requires drivers for test cases.
4. Top-down depth-first integration integrates all
components on a major control path of the program
structure. In bottom-up integration, different clusters
are formed for the testing.
Regression testing
• Each time a new module is added as part of
integration testing, the s/w changes.
• New data flow paths are established, new i/o
may occur, new control logic is invoked. These
changes may cause problems with functions that
previously worked flawlessly.
• Regression testing is the re-execution of some
subset of tests that have already been
conducted, to ensure that changes have not
propagated unintended side effects.
• It is the activity that helps to ensure that
changes do not introduce unintended behavior
and additional errors.
• Regression testing may be conducted manually
or by using automated tools.
• As integration testing proceeds, the number of
regression tests can grow quite large.
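A regression suite is essentially a saved set of (input, expected output) pairs that is re-executed after every change. A minimal sketch, assuming a hypothetical parse_price() function that was recently modified:

```python
# A tiny regression suite: previously passing cases are re-executed
# after each change to confirm no unintended side effects.
# parse_price() is a hypothetical, recently changed function.

def parse_price(text):
    """Convert a price string like '$1,250.50' to a float."""
    return float(text.replace("$", "").replace(",", ""))

# Cases recorded when the feature first passed testing.
REGRESSION_SUITE = [
    ("$10", 10.0),
    ("$1,250.50", 1250.50),
    ("99.99", 99.99),
]

def run_regression():
    """Return the list of cases the current code now gets wrong."""
    return [(inp, exp) for inp, exp in REGRESSION_SUITE
            if parse_price(inp) != exp]

assert run_regression() == [], "change introduced a regression"
print("regression suite: all", len(REGRESSION_SUITE), "cases pass")
```

Automated regression tools do exactly this at scale: record the suite, re-run it on every build, and flag any case whose output has drifted.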
Smoke testing
• Smoke testing is an integration testing approach that is
commonly used when software products are being
developed.
• It is a mechanism designed for time-critical projects,
allowing the s/w team to test their projects on a frequent
basis.
 S /w components that have been translated into code are
integrated into a “Build” - build includes all data files,
libraries, reusable modules, engineered components that
are required to implement one or more product functions.
 A series of tests is designed to expose errors that will
keep the build from performing its function.
• The build is integrated with other build and the
entire product is smoke tested daily.
• It does not have to be exhaustive, but it should
be capable of exposing major problems.
• If the build passes ,you can assume that it is
stable enough to be tested more completely.
Benefits of smoke testing:-
1. Integration risk is minimized.
2. The quality of the end product is improved.
3. Error diagnosis and correction are
simplified.
4. Progress is easier to assess.
Validation Testing
Alpha testing:-
• This test takes place at the developer’s site. Developers
observe the users and note problems.
• Alpha testing is testing of an application when
development is about to complete.
• Alpha testing is typically performed by a group that is
independent of the design team, but still within the
company, e.g. in-house software test engineers, or
software QA engineers.
• Alpha testing is final testing before the
software is released to the general public.
• It has two phases:
1. In the first phase, the software is tested by
in-house developers. The goal is to catch
bugs quickly.
2. In the second phase, the software is
handed over to the software QA staff,
for additional testing in an environment
that is similar to the intended use.
Beta Testing
• It is also known as field testing. It takes place at
customer’s site. It sends the system to users who install
it and use it under real-world working conditions.
• A beta test is the second phase of software testing in
which a sampling of the intended audience tries the
product out. (Beta is the second letter of the Greek
alphabet.)
• Originally, the term alpha test meant the first phase of
testing in a software development process.
• The first phase includes unit testing, component testing,
and system testing.
• Beta testing can be considered “pre-release testing”.
The goal of beta testing is to place your
application in the hands of real users outside of
your own engineering team to discover any flaws
or issues from the user’s perspective that you
would not want to have in your final, released
version of the application.
ALPHA TESTING vs BETA TESTING:
1. Alpha testing is conducted at the developer's site by
the end user. Beta testing is conducted at the user's
site by the end user.
2. Alpha testing is conducted in a controlled
environment, as the developer is present. Beta
testing is conducted in an uncontrolled environment,
as the developer is absent.
3. In alpha testing there are fewer chances of finding an
error, as the developer usually guides the user. In
beta testing there are more chances of finding an
error, as the user can use the system in any way.
4. Alpha testing is a kind of mock-up testing. In beta
testing the system is tested as a real application.
5. In alpha testing an error/problem may be solved
quickly if possible. In beta testing the user has to
report difficulties to the developer, who then corrects
them.
6. Alpha testing is a short process. Beta testing is a
lengthy process.
System testing
System testing is actually a series of different
tests whose primary purpose is to fully exercise
the computer-based system.
Although each test has a different purpose, all
work to verify that system elements have been
properly integrated and perform their allocated
functions.
Types of system testing
1.Recovery testing
2.Security testing
3.Stress testing
4.Performance testing
1. Recovery testing
The system must be fault tolerant: faults must not
bring overall system function down.
Recovery testing forces the s/w to fail in a variety
of ways and verifies that recovery is properly
performed.
If recovery is automatic (performed by the system
itself), re-initialization, checkpointing, data
recovery and restart are evaluated for
correctness.
If recovery requires human intervention, the
mean-time-to-repair (MTTR) is evaluated to
determine whether it is within acceptable limits.
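Evaluating MTTR amounts to averaging observed repair times and comparing the result against an acceptable limit. A sketch with illustrative numbers (the repair log and the limit are assumed, not from any real requirement):

```python
# Sketch of evaluating mean-time-to-repair (MTTR) against an
# acceptable limit; repair times (in minutes) are illustrative data.

repair_times_min = [12, 30, 9, 45, 24]   # observed manual recoveries
ACCEPTABLE_MTTR_MIN = 30                 # assumed limit from requirements

# MTTR = total repair time / number of repairs.
mttr = sum(repair_times_min) / len(repair_times_min)
print(f"MTTR = {mttr:.1f} min ->",
      "acceptable" if mttr <= ACCEPTABLE_MTTR_MIN else "too slow")
```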
Security testing
• Security testing is mainly concerned about
overall security of computer based system, that
can be targeted for misuse or gaining sensitive
information or causes actions that can
improperly harm individuals by unauthorized
person.
• Security testing verifies that protection
mechanism built into a system, protecting it from
improper penetration.
• During security testing, the tester plays the role of
an individual who desires to penetrate the system.
• The tester may attempt to acquire passwords,
may attack the system, may purposely cause
errors, hoping to penetrate during recovery,
may browse through insecure data, hoping to
find the key to system entry.
Stress testing
• Stress tests are designed to confront programs
with abnormal situations
• Tester who performs stress testing asks “how
high can we crank this up before it fails?”
• Stress testing executes a system in a manner
that demands resources in abnormal quantity,
frequency, or volume.
Example:-
Produce 10-15 interrupts /sec when average rate
is 1-2/sec.
Performance Testing
• Software in real-time and embedded systems
needs to conform to performance requirements,
not only provide the required function.
• Performance testing is necessary as it is designed
to test the run time performance of software within
the context of an integrated system.
• Performance testing occurs throughout all steps in
the testing process.
• Even at the unit level, the performance of an
individual module may be assessed as tests are
conducted.
White Box Testing
 White Box Testing is also known as glass box
testing or clear box testing.
 In White Box Testing code is tested and it is
done by s/w developers.
 In this testing test cases are written to check :-
1. Independent paths at least once
2. All logical decision(true or false)
3. All loops( boundaries)
4. All internal data structures.
White Box Testing helps to remove the logical
errors, logical path related errors.
The types of White Box Testing are:
– Statement testing,
– Condition coverage and
– Decision coverage.
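Decision (branch) coverage means choosing inputs so that every decision in the code evaluates both true and false at least once. A sketch against a small hypothetical grade() function:

```python
# White-box sketch: grade() has two decisions, and the test cases
# below drive each decision to both of its outcomes.
# The function and pass mark are hypothetical.

def grade(score):
    if score < 0 or score > 100:   # decision 1: range check
        raise ValueError("score out of range")
    if score >= 40:                # decision 2: pass mark
        return "pass"
    return "fail"

# Decision coverage: each decision evaluated both ways.
assert grade(40) == "pass"       # decision 1 false, decision 2 true
assert grade(39) == "fail"       # decision 1 false, decision 2 false
try:
    grade(101)                   # decision 1 true -> error path
except ValueError:
    pass
print("all decisions exercised in both directions")
```

Writing these cases requires reading the code, which is exactly what distinguishes white box from black box testing.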
Black box testing
• Black box testing ignores the internal mechanism
of the system or components and focuses only on
the o/p generated.
• Here system is considered as a black box.
• I/p are given and o/p are compared against
specification.
• Test cases are based on requirements
specification.
• Here testers do not requires the knowledge of
code in which system is developed.
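In black-box testing, cases are derived from the specification alone, typically one value per equivalence class plus the boundaries. A sketch, assuming a hypothetical specification "accept an age from 18 to 60 inclusive" (the function is a stand-in for the system under test):

```python
# Black-box sketch: the tester knows only the specification --
# "accept an integer age from 18 to 60 inclusive" -- not the code.
# is_eligible() stands in for the system under test.

def is_eligible(age):
    return 18 <= age <= 60

# Test cases derived purely from the specification:
# one value per equivalence class plus the boundary values.
spec_cases = [
    (17, False),  # below valid range
    (18, True),   # lower boundary
    (40, True),   # middle of valid class
    (60, True),   # upper boundary
    (61, False),  # above valid range
]

for age, expected in spec_cases:
    assert is_eligible(age) == expected, f"spec violated for age={age}"
print("output matches specification for all classes")
```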
Concept and need of debugging
• A testing strategy is planned, test cases are
designed, and the results are evaluated by
comparing them with prescribed expectations.
• Successful testing leads to the process
called debugging.
• When a test case uncovers an error, debugging
is the process used to locate and remove its
cause.
Debugging process:-
• Debugging is not testing, but it always occurs
as an outcome or consequence of testing.
• In above fig process begins with the execution of
a test cases and results are assessed.
• It is found that in many cases it is difficult to
match the symptom and its causes to fix errors.
• The process of debugging can find out the
symptom as well as its cause and results into
error correction.
Characteristics of bugs:-
1.The symptom may appear in one part of a program,
while the cause may actually be located at a site
that is far removed; highly coupled
components make the situation worse.
2.The symptom may disappear temporarily when
another error is corrected.
3.The symptom may actually be caused by non-
errors (e.g., round-off inaccuracies).
4.The symptom may be caused by human error that is
not easily traced.
5.The symptom may be a result of a timing problem
rather than a processing problem.
6. It may be difficult to accurately reproduce the input
conditions.
7. The symptom may be intermittent; this situation is
common in embedded systems that combine s/w and h/w.
8. The cause of the symptom may be spread across a number
of tasks running on different processors at the same time.
Debugging strategies:-
• Debugging is the straightforward application of
methods to find and correct the cause of a s/w error
by locating the problem source.
There are 3 debugging strategies:-
1. brute force
2. backtracking
3. cause elimination.
1. Brute Force:
Brute Force:
• This category of debugging is probably the most common
and least efficient method for isolating the cause of a
software error.
• Brute force debugging methods are applied when all else
fails.
• Using a "let the computer find the error“ philosophy,
memory dumps are taken, run-time traces are invoked,
and the program is loaded with WRITE statements. In the
morass of information that is produced a clue is found that
can lead us to the cause of an error.
• Although the mass of information produced may ultimately
lead to success, it more frequently leads to wasted effort
and time. Thought must be expended first.
2. Backtracking:
Backtracking:
• It is a fairly common debugging strategy that can
be used successfully in small programs.
• Beginning at the site where a symptom has been
uncovered, the source code is traced backward
(manually) until the site of the cause is found.
• Unfortunately, as the number of source lines
increases, the number of potential backward
paths may become unmanageably large.
3. Cause Elimination:
• It is manifested by induction or deduction and
introduces the concept of binary partitioning.
• Data related to the error occurrence are organized
to isolate potential causes.
• A "cause hypothesis" is devised and the
aforementioned data are used to prove or disprove
the hypothesis.
• Alternatively, a list of all possible causes is
developed and tests are conducted to eliminate
each.
• If initial tests indicate that a particular cause
hypothesis shows promise, data are refined in an
attempt to isolate the bug.
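The binary-partitioning idea behind cause elimination can be sketched as a bisection over a list of suspect changes. The change list and the is_broken() check are hypothetical stand-ins for rebuilding the software and re-running the failing test:

```python
# Binary partitioning in cause elimination: repeatedly halve the
# search space of suspect changes until the one that introduced
# the failure is isolated. All names here are illustrative.

changes = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8"]
FIRST_BAD = "c6"  # unknown to the debugger; used to simulate testing

def is_broken(upto):
    """Pretend to rebuild with changes[:upto+1] and run the test."""
    return FIRST_BAD in changes[: upto + 1]

def bisect_cause():
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(mid):
            hi = mid               # cause is at or before mid
        else:
            lo = mid + 1           # cause is after mid
    return changes[lo]

print("first bad change:", bisect_cause())
```

Eight suspects are narrowed to one in three tests instead of eight; this is the same search that tools like git bisect automate.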
Chapter- 5
Software Project Management
Software Project Management
• Software Project Management is to be done in
scientific way.
• It involves the knowledge, techniques and tools
necessary to manage the software development.
• It starts before any activity starts.
• The Software Project Management includes
basic function such as scoping, planning,
estimating, scheduling, organizing, directing,
coordinating, controlling and closing.
Management Spectrum
• The management spectrum describes the
management/hierarchy of people associated with
a software project.
• It describes how to make a software project successful.
• Effective Software Project Management focuses
on the four P’s:
– People
– Product
– Process
– Project
• The order is not arbitrary.
The People
• People factor is very much important in the
process of software development.
• Key practice areas for software people include
recruiting, performance management,
training, compensation, career development,
workgroup development, and team/culture
development.
• Organizations achieve high levels of maturity in
the people management area.
Stakeholders
• Senior managers who define the business issues that often have
significant influence on the project.
• Project (technical) managers who must plan, motivate, organize,
and control the practitioners who do software work.
• Practitioners who deliver the technical skills that are necessary to
engineer a product or application.
• Customers who specify the requirements for the software to be
engineered
• End-users who interact with the software once it is released for
production use.
Project Manager
The following characteristics define an effective
project manager:
1. Problem solving: An effective software manager
can diagnose the technical and organizational
issues and systematically structure a solution or
motivate other practitioners to apply the solution.
2. Managerial Identity: A good manager must take
charge of the project and must have confidence to
assume control when necessary.
3. Achievement: To optimize the productivity of a
project team, a manager must reward initiative
and accomplishment of the practitioners.
4. Influence and Team Building: An effective
project manager must be able to read people, to
understand verbal and non verbal signals and
react to the needs of the people sending these
signals. The manager must remain under control
in high stress situation.
Software Teams
How to lead?
How to organize?
How to collaborate?
How to motivate?
How to create good ideas?
Product
1. Before a project can be planned, the product
objectives and scope should be identified.
2. Objectives identify the overall goals for the
product without considering how these goals will be
achieved.
3. Scope identifies the primary data, functions and
behaviors that characterize the product.
4. It identifies cost, risk and realistic breakdowns of
project tasks.
Process
• The process model is the plan to be selected
depending on following factors
– a) Customers and developers.
– b) Characteristics of product itself.
– c) Project environment of software team.
• Regardless of the size and type of project, there
are a small number of framework activities that
are applicable to all of them.
• There are also umbrella activities, like SQA, that
occur throughout the process.
Project
• We conduct planned and controlled software
projects for one primary reason-it is the only
known way to manage complexity.
• A software project manager who builds the product
must avoid a set of common warning signs,
understand the critical success factors that
lead to good project management,
• And develop a common sense approach for
planning, monitoring and controlling the project.
Project Scheduling
1. In project management, a schedule consists of a list of
project terminal elements with intended start and
finish dates.
2. The s/w project schedule distributes estimated effort
across the planned project period by allocating the
effort to specific s/w engineering tasks.
3. There are many tasks in a s/w project. The project
manager defines all the task and generates the
schedule.
4. Initially a macroscopic schedule is developed,
identifying all major process framework activities and then
the detailed schedule of specific tasks are identified and
scheduled.
Factors that delay Project
Schedule
Although there are many reasons why software is
delivered late, most can be traced to one or more of the
following root causes:
1. An unrealistic deadline established by
someone outside the software team and forced on
managers and practitioners.
2. Changing customer requirements that are not
reflected in schedule changes.
3. An honest underestimate of the amount of
effort and/or the number of resources that will be
required to do the job.
4. Predictable and/or unpredictable risks that
were not considered when project commenced.
5. Technical difficulties that could not have been
foreseen in advance.
6. Human difficulties that could not have been
foreseen in advance.
7. Miscommunication among project staff that
results in delays.
8. A failure by project management to recognize
that the project is falling behind schedule and a
lack of action to correct the problem.
Principles of Project Scheduling
i. Compartmentalization: The project must be
compartmentalized into a number of manageable
activities and tasks.
ii. Interdependency: The interdependency of each
compartmentalized activity or task must be
determined.
iii. Time allocation: Each task to be scheduled
must be allocated some number of work units
(e.g., person‐days of effort).
iv. Effort validation: The project manager must
ensure that no more than the allocated number
of people have been scheduled at any given
time.
v. Defined responsibilities: Every task should be
assigned to a specific team member
vi. Defined outcomes: Every task should have a
defined outcome.
vii. Defined milestones: Every task or group of
tasks should be associated with a project
milestone.
viii. A milestone is accomplished when one or
more work products has been reviewed for
quality and has been approved.
Project schedule can be tracked by using different
scheduling tools and techniques.
Program Evaluation Review Technique (PERT)
i. PERT, is used in projects that have unpredictable
tasks and activities such as in research and
development projects.
ii. It utilizes three estimates of the time to complete
the project: the most probable, the most
promising (optimistic), and the most unfavorable
(pessimistic).
iii. It is a probabilistic tool.
iv. It uses several estimates to determine the time to
complete the project and controls activities so
that it will be completed faster and at a lower cost.
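The three PERT estimates are commonly combined with the weighted average te = (o + 4m + p) / 6, with standard deviation (p − o) / 6 (the weighting reflects PERT's beta-distribution assumption). A sketch with illustrative durations:

```python
# PERT expected time from three estimates:
# o = optimistic (most promising), m = most probable,
# p = pessimistic (most unfavorable). Values are illustrative.

def pert_expected(o, m, p):
    """Weighted average: the most probable estimate counts 4x."""
    return (o + 4 * m + p) / 6

def pert_std_dev(o, p):
    """Spread of the estimate under the beta-distribution assumption."""
    return (p - o) / 6

# Illustrative task: 4 to 10 days, most likely 6.
te = pert_expected(4, 6, 10)
sd = pert_std_dev(4, 10)
print(f"expected duration = {te:.1f} days, std dev = {sd:.1f} days")
```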
Critical Path Method (CPM)
i. CPM is a technique that is used in projects that
have predictable activities and tasks such as in
construction projects.
ii. It allows project planners to decide which aspect
of the project to reduce or increase when a
trade-off is needed.
iii. It is a deterministic tool and provides an
estimate on the cost and the amount of time to
spend in order to complete the project.
iv. It allows planners to control both the time and
cost of the project.
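The critical path is the longest-duration path through the task network, and its length fixes the minimum project duration. A sketch over a small hypothetical network of four tasks:

```python
# CPM sketch: the critical path is the longest-duration path through
# the task network. Tasks, durations and dependencies are illustrative.

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
depends_on = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def earliest_finish(task, memo={}):
    """Earliest finish = latest finish among predecessors + duration."""
    if task not in memo:
        start = max((earliest_finish(d) for d in depends_on[task]),
                    default=0)
        memo[task] = start + durations[task]
    return memo[task]

# The project cannot finish before its latest-finishing task.
project_length = max(earliest_finish(t) for t in durations)
print("minimum project duration:", project_length)
```

Here A-C-D (3 + 4 + 2 = 9) is the critical path; delaying any task on it delays the whole project, while B has slack.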
Difference PERT Vs CPM
1. The Program Evaluation and Review Technique (PERT)
is suitable for projects that have unpredictable activities
while the Critical Path Method (CPM) is suitable for
projects that have predictable activities.
2. CPM uses a single estimate for the time that a project
can be completed while PERT uses three estimates for
the time that it can be completed.
3. CPM is a deterministic project management tool while
PERT is a probabilistic project management tool.
4. CPM allows project management planners to determine
which aspect of the project to sacrifice when a trade-off is
needed in order to complete the project while PERT does
not.
Time-Line Charts or Gantt chart
• A time-line chart can be developed for the
entire project or separate charts can be
developed for each project function or for
each individual working on the project.
• All project tasks (for concept scoping) are listed
in the left-hand column.
• The horizontal bars indicate the duration of each
task.
• When multiple bars occur at the same time on
the calendar, task concurrency is implied.
• The diamonds indicate milestones.
Timeline Charts
(Figure: Tasks 1 through 12 are listed down the left-hand
column; Weeks 1 to n run across the top; horizontal bars
against the calendar show the duration of each task.)
Concept of Task Network
• A task network, also called an activity network, is a
graphic representation of the task flow for a project.
• It is the mechanism through which task sequence and
dependencies are input to an automated project
scheduling tool.
• In its simplest form, the task network depicts major
software engineering tasks.
• The concurrent nature of software engineering activities
leads to a number of important scheduling requirements.
• In addition, the project manager should be aware of
those tasks that lie on the critical path.
• That is, tasks that must be completed on schedule if the
project as a whole is to be completed on schedule.
Ways of Project Tracking
Tracking can be accomplished in different ways:
 Conducting periodic project status meetings in which
each team member reports progress and problems.
 Evaluating the results of all reviews conducted
throughout the software engineering process.
 Determining whether formal project milestones have
been accomplished by the scheduled date.
 Comparing actual start-date to planned start-date for
each project task.
 Meeting informally with practitioners to obtain their
subjective assessment of progress to date and
problems on the horizon.
Project Risks
What can go wrong?
What is the likelihood?
What will the damage be?
Risk Management
• Software risk: A software risk is anything that can
cause a delay in the software, stop the progress
of a system, or even terminate the software
project.
• Risk is an expectation of loss, a potential
problem that may or may not occur in the future.
• It is generally caused due to lack of information,
control or time.
• A possibility of suffering from loss.
• Loss can be anything, increase in production
cost, development of poor quality software, not
being able to complete the project on time.
Concept of Proactive and Reactive
risk strategies
Reactive:
1. Majority of the software teams and managers
rely on this approach i.e., nothing is done about
risks until something actually goes wrong.
2. Here, when something goes wrong, the team
flies into action and attempts to correct the
problem.
3. Here, crisis (disaster) management is the chosen
management technique.
Proactive:
1. Here the primary objective is to avoid risk and
to have a contingency plan in place to handle
unavoidable risk in controlled and effective manner.
2. Here the proactive strategy start in the early of
project development.
3. Possible risks and their types are identified, then
checked and ranked according to their priority
and impact.
4. After ranking the risks, the project team's main
plan is to avoid them.
Types of Software Risks
There are two basic types of risks:
(a) Generic Risk : Generic Risk is the general
purpose possible threat to every software
product.
(b) Product Specific Risk: Product-specific risks
can be identified only by those with a clear
understanding of the technology to be
used for that project.
Different Categories of Risks: -
(a) Project Risk: Threatens the project plan. If a
project risk becomes real, it is likely that the project schedule
will slip and that costs will increase. Project risks
identify potential budgetary, schedule, personnel,
resource, customer, and requirement problems.
(b) Technical Risk: Threatens the quality and
timeliness of the software to be produced. If a technical
risk becomes real, implementation may become difficult
or impossible.
(c) Business Risk: Threatens the viability of the
software to be built. Business risks often jeopardize the
product or the project.
(d) Market Risk: Building a product that no one wants
(e) Strategic Risk: Building a product that no
longer fits into the business strategy
(f) Management Risk: Building a product that the
sales force doesn't understand
(g) Budget Risk: Losing budgetary or personnel
commitment
Risk Assessment
• Risk assessment involves the evaluation of the
risk. We consider a triplet [ri, li, xi] for
understanding risk assessment in detail.
• In the triplet, ri stands for the risk, li stands for the
likelihood (probability) of the risk, and xi
represents the impact of the risk.
• In the initial stages of project planning a risk is stated
generally. As time passes, it is important to
divide that risk into subtypes and refine it.
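The triplet above lends itself to a simple calculation: a risk's exposure is conventionally estimated as likelihood times impact cost. The sketch below assumes that convention and uses invented values; neither comes from the slide itself.

```python
# Sketch of the [ri, li, xi] triplet described above.
# Computing "risk exposure" as likelihood x impact cost is a common
# convention (an assumption here; the slide gives no formula).

def risk_exposure(likelihood, impact_cost):
    """Expected loss for one risk: probability times cost if it occurs."""
    return likelihood * impact_cost

# [ri = risk description, li = likelihood, xi = impact cost]
triplet = ["key staff may leave mid-project", 0.30, 50_000]
ri, li, xi = triplet
print(f"{ri}: exposure = {risk_exposure(li, xi):.0f}")  # exposure = 15000
```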
Risk Management Paradigm
The paradigm is a continuous cycle of activities:
identify → analyze → plan → track → control
Risk identification
• Risk identification can be defined as a systematic
attempt to specify or find threats to the
project plan. The project plan includes estimation,
resource distribution, etc.
• By finding well-known and
predictable risks, the project manager or leader
can take the first step to deal with or avoid them.
• These risks need to be controlled or avoided as
appropriate.
Risk Analysis
• After identification of the risks, risk analysis has to
be done.
• Risk analysis examines each potential
risk along with its type and details.
• The time and effort needed for risk analysis
are considerable.
• Analysis starts with what risks exist, their
probability, and their consequences and impact
on the project.
Risk refinement: In the initial stages of project
planning a risk is stated generally. As time
passes it is important to divide that risk into
subtypes and refine it.
One way to refine a risk is to represent it in
condition-transition-consequence, i.e., CTC
format. The risk can be stated as follows:
<Condition>(Possibly)<Consequence>
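A minimal sketch of the CTC idea above: a helper that renders a risk as a condition-consequence statement. The sentence template is an illustrative assumption; only the <Condition>(Possibly)<Consequence> shape comes from the text.

```python
# Hypothetical formatter for the CTC risk statement shown above.
# The exact wording of the template is an assumption for illustration.

def ctc(condition, consequence):
    return f"Given that <{condition}>, there is concern that (possibly) <{consequence}>."

print(ctc("only 70 percent of the planned components are reusable",
          "custom development effort will exceed the estimate"))
```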
Risk Prioritization
• To decide the priority of each
risk, a process called risk prioritization is used.
• When the first four columns of the risk table are filled
with the risk-related data, the table is
sorted by probability and by impact.
• Risks with high probability and high impact
are located at the top of the table.
• Risks with low probability go to the bottom of
the table after sorting.
• This is called as first order prioritization of the
risks.
• The project manager analyses the sorted risk
table and defines the cutoff line.
• It is a horizontal line at some point in the table.
The risk above this line will be given more
attention. The risks below the line will be again
reviewed and evaluated.
• After this, again second order prioritization of the
risk below cut off line is done.
RMMM strategy
• The goal is to assist the project team in developing a
strategy for dealing with risk.
• An effective strategy must consider three issues:
risk avoidance, risk monitoring, and risk
management and contingency planning.
• If a software team adopts a proactive approach
to risk, avoidance is always the best strategy.
• This is achieved by developing a plan for risk
mitigation.
Risk Mitigation
• To mitigate this risk, you would develop a strategy for
reducing turnover.
• Among the possible steps to be taken are:
– Meet with current staff to determine causes for
turnover (e.g., poor working conditions, low pay,
competitive job market).
– Mitigate those causes that are under your control
before the project starts.
– Once the project commences, assume turnover will
occur and develop techniques to ensure continuity
when people leave.
– Organize project teams so that information about
each development activity is widely dispersed.
• Define work product standards and establish
mechanisms to be sure that all models and documents
are developed in a timely manner.
• Conduct peer reviews of all work (so that more than one
person is "up to speed").
• Assign a backup staff member for every critical
technologist
Risk Monitoring
• As the project proceeds, risk-monitoring activities
commence.
• The project manager monitors factors that may
provide an indication of whether the risk is
becoming more or less likely.
• In the case of high staff turnover, the general
attitude of team members based on project
pressures, the degree to which the team has
jelled, interpersonal relationships among team
members, potential problems with compensation
and benefits, and the availability of jobs within the
company and outside it are all monitored.
Risk Management
• In addition to monitoring these factors, a project
manager should monitor the effectiveness of risk
mitigation steps. Risk management and contingency
planning assumes that mitigation efforts have failed and
that the risk has become a reality.
• It is important to note that risk mitigation, monitoring,
and management (RMMM) steps incur additional
project cost. For example, spending the time to back up
every critical technologist costs money. Part of risk
management, therefore, is to evaluate when the
benefits accrued by the RMMM steps are outweighed
by the costs associated with implementing them.
Software Configuration
Management (SCM)
• Software Configuration Management (SCM) is also known as
change management.
• Changes can occur at any point in time, and for any
reason, throughout the life cycle.
• Sources of change can be:
 Change in a product requirement or business rule because
of new business or market conditions.
 Modification in data, functionality, services, etc.
 Reorganization or business growth.
 Redefinition of the product because of budgetary and scheduling
constraints.
Elements of software configuration
management system
Four important elements that should exist when a
configuration management system is developed:
Component elements -a set of tools coupled within a file
management system (e.g., a database) that enables
access to and management of each software
configuration item.
Process elements -a collection of actions and tasks that
define an effective approach to change management
(and related activities) for all constituencies involved
in the management, engineering, and use of
computer software.
Construction elements -a set of tools that
automate the construction of software by
ensuring that the proper set of validated
components (i.e., the correct version) have
been assembled.
Human elements-a set of tools and process
features (encompassing other CM elements)
used by the software team to implement
effective SCM.
Need of SCM
1. Identify change or changes.
2. Keep control over changes.
3. Ensure that changes are properly
implemented.
4. Report changes to the people
who have an interest in them.
Benefits of SCM
1. Changes can be made with ease throughout the software
development process.
2. Changes are systematically recorded.
3. Changes are available whenever required for review.
4. Different versions of a software project can be created and the
process can be continued for a long time.
5. A repository is available in the form of a database.
6. Every object's details can be stored with its attributes.
7. Changes can be traced in forward and backward directions.
8. Customer requirements are fulfilled by accepting changes.
9. It is one of the activities of software quality
assurance that supports validation as well as verification.
10. Avoids a lot of confusion caused by changes in a software
project.
Software Configuration
Management Activities/Process
Identification of change
To control and manage configuration items, each
must be named and managed using an object-
oriented approach
Basic objects are created by software
engineers during analysis, design, coding, or
testing
Aggregate objects are collections of basic
objects and other aggregate objects
An entity-relationship (E-R) diagram can be
used to show the interrelationships among the
objects
Version Control
Combines procedures and tools to manage
the different versions of configuration objects
created during the software process
An entity is composed of objects at the same
revision level
A variant is a different set of objects at the
same revision level and coexists with other
variants
A new version is defined when major changes
have been made to one or more objects
Change Control
A change request is submitted and evaluated to assess
technical merit and impact on the other configuration objects
and the budget.
A change report contains the results of the evaluation.
The change control authority (CCA) makes the final decision on the
status and priority of the change based on the change report.
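One way to picture the flow above is as a small state machine over a change request: evaluation produces the change report, and the CCA decision moves the request forward. The state names and fields are assumptions for illustration.

```python
# Hypothetical change-control state machine. Only the roles (change
# request, change report, CCA decision) come from the text; the state
# names and transitions are illustrative.

ALLOWED = {
    "submitted":   {"evaluated"},             # technical merit/impact assessed
    "evaluated":   {"approved", "rejected"},  # CCA decides from the change report
    "approved":    {"implemented"},
    "implemented": set(),
    "rejected":    set(),
}

def advance(change_request, new_state):
    current = change_request["state"]
    if new_state not in ALLOWED[current]:
        raise ValueError(f"cannot move {current} -> {new_state}")
    change_request["state"] = new_state
    return change_request

cr = {"id": "CR-101", "state": "submitted"}
advance(cr, "evaluated")  # evaluation yields the change report
advance(cr, "approved")   # CCA decision on status and priority
print(cr["state"])        # approved
```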
Software Configuration Audit
A software configuration audit complements the formal
technical review by assessing a configuration object for
characteristics that are generally not considered during review.
The audit asks and answers questions such as:
Has the change specified in the ECO been made? Have any
additional modifications been incorporated?
Has a formal technical review been conducted to assess
technical correctness?
Status Reporting
Configuration status reporting (sometimes called
status accounting) is an SCM task that
answers the following questions:
1. What happened?
2. Who did it?
3. When did it happen?
4. What else will be affected?
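The four questions above map naturally onto a log record kept per change; the structure below is an illustrative assumption, not part of any standard.

```python
# Hypothetical status-accounting record answering the four questions above.
from datetime import date

def status_record(what, who, when, affected):
    return {
        "what happened": what,
        "who did it": who,
        "when": when,
        "what else is affected": affected,
    }

entry = status_record(
    what="login module promoted to version 2.3",
    who="a.kumar",                        # illustrative name
    when=date(2024, 1, 15).isoformat(),
    affected=["user guide", "integration test suite"],
)
print(entry["when"])  # 2024-01-15
```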
SCM Repository Functions
Data integrity: It ensures consistency among related objects.
Information sharing: Sharing information among multiple
developers, multiple tools, manages and controls
multiuser access to data.
Tool integration: Establishes data model that can accessed
by many software engineering tools, controls access to
data.
Data integration: Provides data base functions that allow
various SCM task to be performed on one or more
Software Configuration Items(information as part of
project) .
Methodology enforcement: Defines an entity relationship
model stored in repository for software engineering.
Document standardization: It is a standard approach for
creation of software engineering documents.
SCM Tool Features
Versioning - control changes to all work products before and after
release to customer.
Dependency tracking and change management - tracking
relationships among multiple versions of work products to
enable efficient changes (link management)
Requirements tracing – depends on link management; provides
the ability to track all work products that result from a specific
requirements specification (forward tracing) and to identify
which requirement generated any given work product
(backward tracing)
Configuration management – works closely with link
management and versioning facilities to keep track of a series of
configurations representing project milestones or production
releases
Audit trails - establishes additional information about when,
where, why, and by whom changes were made
Chapter - 06
Software Quality Management
Basic Quality Concepts
1. Quality
2. Quality control
3. Quality assurance
4. Cost of quality
Quality
We define quality as a characteristic or attribute of something;
e.g., characteristics of a program include complexity, number of
functions, lines of code, etc.
Two kinds of quality are:
Quality of design: the characteristics that designers specify for an item. It
encompasses the requirements, specifications, and design of the
system.
Quality of conformance: the degree to which the design
specifications are followed during manufacturing. It is an issue
focused on implementation.
If the implementation follows the design and the resulting system meets its
requirements and performance goals, conformance quality will
be high.
User satisfaction = compliant product + good quality + delivery within
budget and schedule
Quality control
Quality control involves the series of
inspections, reviews, and tests used throughout the
process.
It includes a feedback loop to the process.
A key concept of quality control is that all work
products have defined specifications to which their
output can be compared; the feedback loop is essential to
minimize the defects produced.
Quality assurance
Quality assurance assesses the effectiveness and
completeness of quality control activities. The
goal of quality assurance is to provide
management with the data necessary about
product quality, giving confidence that product
quality is meeting its goals.
If not, it is management's responsibility to address
the problem and apply the necessary
resources to resolve quality issues.
Cost of quality
It is divided into:
1. Prevention cost - includes quality
planning, formal technical reviews, test
equipment, and training.
2. Appraisal cost - includes in-process
inspection, equipment calibration, and maintenance.
3. Failure cost -
 Internal failure cost - incurred when we detect a defect
in our product prior to shipment. It includes
rework, repair, etc.
 External failure cost - associated with defects found
after the product has been shipped, e.g., complaint
resolution, product return and replacement, warranty work.
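The cost-of-quality breakdown above can be tallied as a simple sum; the figures below are invented purely for illustration.

```python
# Illustrative cost-of-quality tally (all amounts are made up).
cost_of_quality = {
    "prevention": {"quality planning": 5000, "technical reviews": 8000, "test equipment": 3000},
    "appraisal":  {"in-process inspection": 4000, "equipment calibration": 2000},
    "failure":    {"internal (rework, repair)": 9000},
}

totals = {category: sum(items.values()) for category, items in cost_of_quality.items()}
total_coq = sum(totals.values())
print(totals)     # per-category subtotals
print(total_coq)  # 31000
```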
Software Quality Assurance
 Software quality assurance is composed of a
variety of tasks associated with two different
aspects - the software engineers who do
technical work and an SQA group that has
responsibility for quality assurance planning,
oversight, record keeping, analysis, and
reporting.
 Software engineers address quality (and
perform quality assurance and quality control
activities) by applying solid technical methods
and measures, conducting formal technical
reviews, and performing well-planned software
testing.
Activities of SQA
1) Prepare an SQA plan for a project: The plan
is developed during project planning and is
reviewed by all interested parties. Quality
assurance activities performed by the software
engineering team and the SQA group are
governed by the plan. The plan identifies
> evaluations to be performed
> audits and reviews to be performed
> standards that are applicable to the project
> procedures for error reporting and tracking
> documents to be produced by the SQA group
> amount of feedback provided to the software project team
2) Participate in the development of the project’s
software process description:- The software
team selects a process for the work to be
performed. The SQA group reviews the
process description for compliance with
organizational policy, internal software
standards, externally imposed standards (e.g.,
ISO-9001), and other parts of the software
project plan.
3) Review software engineering activities to
verify compliance with the defined software
process. The SQA group identifies, documents,
and tracks deviations from the process and
verifies that corrections have been made.
4) Audit designated software work products to
verify compliance with those defined as part
of the software process. The SQA group verifies
that corrections have been made and periodically
reports the results of its work to the project manager.
5) Ensure that deviations in software work and
work products are documented and handled
according to a documented procedure.
Deviations may be encountered in the project
plan, process description, applicable
standards, or technical work products.
6) Record any noncompliance and report it to
senior management. Noncompliance items are
tracked until they are resolved.
Concept of Statistical SQA
Statistical SQA reflects a growing trend throughout
industry to become more quantitative about
quality.
1. Information about software defects is collected and
categorized.
2. The fundamental causes of the defects are tracked
(design error, violation of standards, poor
communication, inaccurate documentation).
3. The Pareto principle is used (80% of defects can be
traced to 20% of all possible causes).
4. Once the causes have been identified, move to
correct the problems that have caused the
defects.
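Steps 1-4 above can be sketched with a defect count per cause and a Pareto cut; the counts and the 80% threshold placement are illustrative.

```python
# Collect/categorize defect counts by cause (steps 1-2), then apply the
# Pareto principle (step 3) to isolate the "vital few" causes.
# The counts are invented for illustration.
from collections import Counter

defect_causes = Counter({
    "incomplete/erroneous specification": 45,
    "design error": 25,
    "violation of standards": 10,
    "poor communication with customer": 8,
    "inaccurate documentation": 7,
    "other": 5,
})

total = sum(defect_causes.values())
cumulative, vital_few = 0, []
for cause, count in defect_causes.most_common():
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.80:  # ~80% of defects reached
        break

print(vital_few)  # the causes to correct first (step 4)
```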
Quality Evaluation Standards
1. Six Sigma for software engineering
2. ISO 9000 for software
Six sigma for software
 Six Sigma is the most widely used strategy for
statistical quality assurance in industry today,
originally popularized by Motorola in the 1980s.
 The Six Sigma strategy "is a rigorous and
disciplined methodology that uses data and statistical
analysis to measure and improve a company's
operational performance by identifying and
eliminating defects in manufacturing and service-
related processes."
 The term Six Sigma is derived from six standard
deviations: 3.4 instances (defects) per million
occurrences, implying an extremely high quality
standard.
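The "defects per million" idea above is usually quantified as DPMO (defects per million opportunities). The formula is a standard industry convention rather than something defined on the slide, and the sample figures are invented.

```python
# DPMO: defects per million opportunities (standard Six Sigma metric;
# the example figures are made up).

def dpmo(defects, units, opportunities_per_unit):
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 17 defects found across 5,000 modules, 10 defect opportunities each
print(dpmo(17, 5_000, 10))  # about 340; Six Sigma quality targets ~3.4
```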
DMAIC and DMADV
The Six Sigma methodology defines three core steps.
Together with the additional improvement steps, these are
sometimes referred to as the DMAIC (define, measure,
analyze, improve, and control) method.
 Define customer requirements and deliverables and
project goals via well-defined methods of customer
communication.
 Measure the existing process and its output to
determine current quality performance (collect defect
metrics).
 Analyze defect metrics and determine the vital few
causes.
If an existing software process is in place but improvement
is required, Six Sigma suggests two additional steps:
 Improve the process by eliminating the root causes
of defects.
 Control the process to ensure that future work does
not reintroduce the causes of defects.
If an organization is developing a software process
(rather than improving an existing process), the
steps are as follows:-
Design the process to avoid the root causes of defects
and to meet customer requirement.
Verify that the process model will avoid defects and
meet customer requirement.
This variation is sometimes called the DMADV
(define, measure, analyze, design, and verify) method.
ISO 9000 for software
 International set of standards for quality
management
 Quality standards and procedures must be
documented in an organizational quality
manual
 An external body is often used to certify that
the quality manual conforms to ISO 9000
standards
ISO principles/standards with
benefits
1. Customer focus
2. Leadership
3. Involvement of people
4. Process approach
5. System approach
6. Continual improvement
7. Factual approach to decision making
8. Mutually beneficial supplier relationships
CMMI
 Definition: Capability Maturity Model Integration
(CMMI) is a process improvement approach that
helps organizations improve their performance.
 CMMI is a proven industry framework for
improving product quality and development
efficiency, for both hardware and software.
Objectives of CMMI:
Specific Objectives
> Establish Estimates
> Develop a Project Plan
> Obtain Commitment to the Plan
Generic Objectives:
> Achieve Specific Goals
> Institutionalize a Managed Process
>Institutionalize a Defined Process
> Institutionalize a Quantitatively Managed
Process
>Institutionalize an Optimizing Process
 CMMI maturity levels:
 Level 1: Initial. The software process is characterized as
ad hoc and occasionally even chaotic. Few processes
are defined, and success depends on individual effort.
 Level 2: Repeatable. Basic project management
processes are established to track cost, schedule, and
functionality. The necessary process discipline is in place
to repeat earlier successes on projects with similar
applications.
 Level 3: Defined. The software process for both
management and engineering activities is documented,
standardized, and integrated into an organization wide
software process. All projects use a documented and
approved version of the organization's process for
developing and supporting software. This level includes
all characteristics defined for level 2.
 Level 4: Managed. Detailed measures of the
software process and product quality are
collected. Both the software process and
products are quantitatively understood and
controlled using detailed measures. This level
includes all characteristics defined for level 3.
 Level 5: Optimizing. Continuous process
improvement is enabled by quantitative
feedback from the process and from testing
innovative ideas and technologies. This level
includes all characteristics defined for level 4.
CMMI Vs ISO
McCall’s Quality factors
The factors that affect S/W quality can be categorized in
two broad groups:
1. Factors that can be directly measured (defects
uncovered during testing)
2. Factors that can be measured only indirectly
(Usability and maintainability)
The S/W quality factors shown above focus on three
important aspects of a S/W product:
i. Its operational characteristics
ii. Its ability to undergo change
iii. Its adaptability to new environments
The various factors of quality are:
(a) Correctness: The extent to which a program satisfies
its specifications and fulfills the customer's mission
objectives.
(b) Reliability: The extent to which a program can be
expected to perform its intended function with
required precision.
(c) Efficiency: The amount of computing resources and
code required by a program to perform its function.
(d) Integrity: The extent to which access to S/W or data
by unauthorized persons can be controlled.
(e) Usability: The effort required to learn, operate,
prepare input for, and interpret output of a program.
(f) Maintainability: The effort required to locate and fix
errors in a program.
(g) Flexibility: The effort required to modify an
operational program.
(h) Testability: The effort required to test a program to
ensure that it performs its intended function.
(i) Portability: The effort required to transfer the program
from one hardware and/or software system
environment to another.
(j) Reusability: The extent to which a program can be
reused in other applications; related to the packaging
and scope of the functions that the program performs.
(k) Interoperability: The effort required to couple one
system to another.