UNIT I
SOFTWARE PROCESS AND PROJECT MANAGEMENT
Introduction to Software Engineering; Software Process; Perspective and Specialized Process Models. Software Project Management: Estimation (LOC- and FP-based estimation, COCOMO model); Project Scheduling (scheduling, earned value analysis); Risk Management.
Fig1: The relationship, often called the bath-tub curve, indicates that hardware exhibits
relatively high failure rates early in its life (these failures are often attributable to design or
manufacturing defects); defects are corrected and the failure rate drops to a steady-state level
(hopefully, quite low) for some period of time. As time passes, however, the failure rate rises
again as hardware components suffer from the cumulative effects of dust, vibration, abuse,
temperature extremes, and many other environmental maladies. Stated simply, the hardware
begins to wear out.
Fig2: The failure rate curve for software should take the form of the idealized curve.
Undiscovered defects will cause high failure rates early in the life of a program. However,
these are corrected and the curve flattens as shown. The idealized curve is a gross oversimplification of actual failure models for software. However, software doesn't wear out.
3. Although the industry is moving toward component-based construction, most software
continues to be custom built.
A software component should be designed and implemented so that it can be
reused in many different programs. Modern reusable components encapsulate
both data and the processing that is applied to the data, enabling the software
engineer to create new applications from reusable parts.
The software must be adapted to meet the needs of new computing environments or
technology.
The software must be enhanced to implement new business requirements.
The software must be extended to make it interoperable with other, more modern systems or databases.
The software must be re-architected to make it viable within a network environment.
SOFTWARE ENGINEERING
Definition:
Software Engineering: The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software (IEEE definition).
A process is a collection of activities, actions, and tasks that are performed when some work
product is to be created.
An activity strives to achieve a broad objective (e.g., communication with stakeholders) and
is applied regardless of the application domain, size of the project, complexity of the effort,
or degree of rigor with which software engineering is to be applied.
An action (e.g., architectural design) encompasses a set of tasks that produce a major work
product (e.g., an architectural design model).
A task focuses on a small, but well-defined objective (e.g., conducting a unit test) that
produces a tangible outcome.
A process framework establishes the foundation for a complete software engineering
process by identifying a small number of framework activities that are applicable to all
software projects, regardless of their size or complexity.
o In addition, the process framework encompasses a set of umbrella activities that are
applicable across the entire software process.
o A generic process framework for software engineering encompasses five activities:
1. Communication:
It is critically important to communicate and collaborate with the customer (and
other stakeholders).
The intent is to understand stakeholders' objectives for the project and to gather requirements that help define software features and functions.
2. Planning:
A software project is a complicated journey, and the planning activity creates a
map that helps guide the team as it makes the journey.
The map, called a software project plan, defines the software engineering work by describing the technical tasks to be conducted, the risks that are likely, the resources that will be required, the work products to be produced, and a work schedule.
3. Modeling:
You create a sketch of the thing so that you'll understand the big picture: what it will look like architecturally, how the constituent parts fit together, and many other characteristics.
If required, you refine the sketch into greater and greater detail in an effort to better understand the problem and how you're going to solve it.
A software engineer does the same thing by creating models to better understand
software requirements and the design that will achieve those requirements.
4. Construction:
This activity combines code generation (either manual or automated) and the testing that is required to uncover errors in the code.
5. Deployment:
The software (as a complete entity or as a partially completed increment) is delivered to the customer, who evaluates the delivered product and provides feedback based on the evaluation.
Agile process models emphasize project agility and follow a set of principles that lead to a
more informal (but, proponents argue, no less effective) approach to software process. These
process models are generally characterized as agile because they emphasize maneuverability
and adaptability. They are appropriate for many types of projects and are particularly useful
when Web applications are engineered.
SOFTWARE ENGINEERING PRACTICE
The Essence of Practice
1. Understand the problem (communication and analysis).
Who has a stake in the solution to the problem? That is, who are the stakeholders?
What are the unknowns? What data, functions, and features are required to properly solve the problem?
Can the problem be compartmentalized? Is it possible to represent smaller problems that may be easier to understand?
Can the problem be represented graphically? Can an analysis model be created?
The First Principle: The Reason It All Exists: A software system exists for one
reason: to provide value to its users.
The Second Principle: KISS (Keep It Simple, Stupid!): All design should be as
simple as possible, but no simpler.
The Third Principle: Maintain the Vision: A clear vision is essential to the success
of a software project.
The Fourth Principle: What You Produce, Others Will Consume: always specify,
design, and implement knowing someone else will have to understand what you are
doing.
The Fifth Principle: Be Open to the Future: Never design yourself into a corner. Always ask "what if?", and prepare for all possible answers by creating systems that solve the general problem, not just the specific one.
The Sixth Principle: Plan Ahead for Reuse: Reuse saves time and effort.
The Seventh principle: Think!: Placing clear, complete thought before action
almost always produces better results.
PERSPECTIVE AND SPECIALIZED PROCESS MODELS
A linear process flow executes each of the five framework activities in sequence,
beginning with communication and culminating with deployment (Figure 2.2a).
An iterative process flow repeats one or more of the activities before proceeding to the
next (Figure 2.2b).
An evolutionary process flow executes the activities in a circular manner. Each
circuit through the five activities leads to a more complete version of the software
(Figure 2.2c).
A parallel process flow (Figure 2.2d) executes one or more activities in parallel with
other activities (e.g., modeling for one aspect of the software might be executed in
parallel with construction of another aspect of the software).
Prescriptive process models were originally proposed to bring order to the chaos of
software development.
The Waterfall Model:
The waterfall model, sometimes called the classic life cycle, suggests a systematic, sequential approach to software development that begins with customer specification of requirements and progresses through planning, modeling, construction, and deployment, culminating in ongoing support of the completed software.
The incremental model combines elements of linear and parallel process flows.
The incremental model applies linear sequences in a staggered fashion as calendar time
progresses.
Each linear sequence produces deliverable increments of the software in a manner that
is similar to the increments produced by an evolutionary process flow.
EXAMPLE:
For example, word-processing software developed using the incremental paradigm
might deliver basic file management, editing, and document production functions in
the first increment;
More sophisticated editing and document production capabilities in the second
increment; spelling and grammar checking in the third increment; and advanced page
layout capability in the fourth increment.
It should be noted that the process flow for any increment can incorporate the
prototyping paradigm.
When an incremental model is used, the first increment is often a core product.
That is, basic requirements are addressed but many supplementary features remain undelivered. The core product is used by the customer.
As a result of use and/or evaluation, a plan is developed for the next increment.
The plan addresses the modification of the core product to better meet the needs of the
customer and the delivery of additional features and functionality.
This process is repeated following the delivery of each increment, until the complete product
is produced.
The incremental process model focuses on the delivery of an operational product with
each increment.
Early increments are stripped-down versions of the final product, but they do provide
capability that serves the user and also provide a platform for evaluation by the user.
Incremental development is particularly useful when staffing is unavailable for a complete
implementation by the business deadline that has been established for the project.
Early increments can be implemented with fewer people. If the core product is well
received, then additional staff (if required) can be added to implement the next increment. In
addition, increments can be planned to manage technical risks.
PROTOTYPING:
Often, a customer defines a set of general objectives for software, but does not identify detailed requirements for functions and features.
In other cases, the developer may be unsure of the efficiency of an algorithm, the
adaptability of an operating system, or the form that human-machine interaction should
take.
In these, and many other situations, a prototyping paradigm may offer the best approach.
Although prototyping can be used as a stand-alone process model, it is more commonly used as a technique that can be implemented within the context of any one of the process models.
The prototyping paradigm assists you and other stakeholders to better understand what is
to be built when requirements are fuzzy.
Steps:
o The prototyping paradigm begins with communication.
o You meet with other stakeholders to define the overall objectives for the
software, identify whatever requirements are known, and outline areas where further
definition is mandatory.
o A prototyping iteration is planned quickly, and modeling occurs.
o A quick design focuses on a representation of those aspects of the software that
will be visible to end users (e.g., human interface layout or output display formats).
o The quick design leads to the construction of a prototype.
o The prototype is deployed and evaluated by stakeholders, who provide feedback
that is used to further refine requirements.
o Iteration occurs as the prototype is tuned to satisfy the needs of various
stakeholders, while at the same time enabling you to better understand what needs to
be done.
o Ideally, the prototype serves as a mechanism for identifying software
requirements.
o PROBLEMS:
Stakeholders see what appears to be a working version of the software,
unaware that the prototype is held together haphazardly, unaware that in
the rush to get it working you haven't considered overall software quality or long-term maintainability. When informed that the product must be rebuilt so that high levels of quality can be maintained, stakeholders cry foul and demand that "a few fixes" be applied to make the prototype a working product.
As a software engineer, you often make implementation compromises in
order to get a prototype working quickly. An inappropriate operating
system or programming language may be used simply because it is available
and known; an inefficient algorithm may be implemented simply to
demonstrate capability. After a time, you may become comfortable with these choices and forget all the reasons why they were inappropriate. The less-than-ideal choice has now become an integral part of the system.
THE SPIRAL MODEL:
The spiral model is an evolutionary software process model that couples the iterative
nature of prototyping with the controlled and systematic aspects of the waterfall
model.
It provides the potential for rapid development of increasingly more complete versions
of the software.
CONCURRENT MODELS:
The concurrent development model, sometimes called concurrent engineering, allows a software team to represent iterative and concurrent elements of any of the process models.
The modeling activity may be in any one of the states noted at any given time.
Similarly, other activities, actions, or tasks (e.g., communication or construction) can be represented in an analogous manner.
All software engineering activities exist concurrently but reside in different states.
For example, early in a project the communication activity has completed its first iteration and exists in the "awaiting changes" state.
The modeling activity, which existed in the "inactive" state while initial communication was completed, now makes a transition into the "under development" state.
If, however, the customer indicates that changes in requirements must be made, the modeling activity moves from the "under development" state into the "awaiting changes" state.
Concurrent modeling defines a series of events that will trigger transitions from state to
state for each of the software engineering activities, actions, or tasks.
For example, during early stages of design, an inconsistency in the requirements model is uncovered. This generates the event "analysis model correction," which will trigger the requirements analysis action from the "done" state into the "awaiting changes" state.
Concurrent modeling is applicable to all types of software development and provides an
accurate picture of the current state of a project. Each activity, action, or task on the
network exists simultaneously with other activities, actions, or tasks. Events generated at one
point in the process network trigger transitions among the states.
The formal methods model encompasses a set of activities that leads to formal
mathematical specification of computer software.
Formal methods enable you to specify, develop, and verify a computer-based system by
applying a rigorous, mathematical notation.
A variation on this approach, called cleanroom software engineering, is currently applied
by some software development organizations.
When formal methods are used during development, they provide a mechanism for
eliminating many of the problems that are difficult to overcome using other software
engineering paradigms.
When formal methods are used during design, they serve as a basis for program
verification and therefore enable you to discover and correct errors that might otherwise
go undetected.
The formal methods model offers the promise of defect-free software.
LOC is the simplest among all metrics available to estimate project size. It is a popular
metric.
This metric measures the size of a project by counting the number of source instructions in
the developed program.
Obviously, while counting the number of source instructions, lines used for commenting the
code and the header lines are ignored.
Determining the LOC count at the end of a project is very simple. However, accurate
estimation of the LOC count at the beginning of a project is very difficult.
Project managers usually divide the problem into modules, each module into sub-modules, and so on, until the sizes of the different leaf-level modules can be approximately predicted.
Past experience in developing similar products is very helpful in predicting the LOC counts for the various leaf-level modules sufficiently accurately.
By summing the estimates of the lowest-level modules, project managers arrive at the total size estimate.
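The bottom-up roll-up described above can be sketched in a few lines. The module names and leaf-level sizes below are invented purely for illustration:

```python
# Hypothetical illustration of bottom-up LOC estimation: the problem is
# decomposed into modules and sub-modules, leaf-level sizes are predicted
# from past experience, and the estimates are summed to get the total.
leaf_loc_estimates = {
    "ui/login": 400,
    "ui/reports": 900,
    "db/schema": 300,
    "db/queries": 700,
    "core/billing": 1200,
}

# Total project size is the sum of the leaf-level module estimates.
total_loc = sum(leaf_loc_estimates.values())
print(f"Estimated project size: {total_loc} LOC")
```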
LOC gives a numerical value of problem size that can vary widely with individual coding style, since different programmers lay out their code in different ways. One suggested remedy is to count language tokens rather than lines of code, but the situation does not improve much even when tokens are counted.
LOC is a measure of the coding activity alone. It computes the number of source lines in
the final program.
LOC measure correlates poorly with the quality and efficiency of the code. Larger code
size does not necessarily imply better quality or higher efficiency.
LOC metric penalizes use of higher-level programming languages , code reuse, etc.,
The LOC metric measures the lexical complexity of a program and does not address the deeper issues of logical or structural complexity.
It is very difficult to estimate LOC in the final product from the problem specification.
The LOC count can be accurately computed only after the code has been fully developed.
Each parameter is weighted by its complexity class (the weights below are Albrecht's standard values):

Types                    Simple   Average   Complex
Input (I)                   3        4         6
Output (O)                  4        5         7
Inquiry (E)                 3        4         6
Number of Files (F)         7       10        15
Number of Interfaces        5        7        10
A technical complexity factor (TCF) for the project is computed and the TCF is multiplied
with UFP to yield FP.
It expresses the overall impact of various project parameters that can influence the
development effort such as high transaction rates, response time requirements, scope of
reuse, etc.
Albrecht identified 14 parameters that can influence the development effort. Each of these 14
factors is assigned a value from 0(not present or no influence) to 6 (strong influence).
The resulting numbers are summed, yielding the total degree of influence (DI). The technical complexity factor is then computed as TCF = 0.65 + 0.01 × DI.
FP = UFP × TCF
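A minimal sketch of the computation, using Albrecht's standard complexity weights and the usual adjustment formula TCF = 0.65 + 0.01 × DI; all the counts and the DI value are invented for the example:

```python
# Standard Albrecht weights per parameter: (simple, average, complex).
WEIGHTS = {
    "inputs":     (3, 4, 6),
    "outputs":    (4, 5, 7),
    "inquiries":  (3, 4, 6),
    "files":      (7, 10, 15),
    "interfaces": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts: {parameter: (n_simple, n_average, n_complex)}"""
    return sum(
        n * w
        for param, ns in counts.items()
        for n, w in zip(ns, WEIGHTS[param])
    )

# Invented counts for a hypothetical system.
counts = {
    "inputs":     (10, 5, 2),
    "outputs":    (8, 4, 1),
    "inquiries":  (6, 2, 0),
    "files":      (3, 1, 0),
    "interfaces": (2, 0, 0),
}

ufp = unadjusted_fp(counts)
di = 38                        # sum of the 14 ratings (each 0..6)
tcf = 0.65 + 0.01 * di         # technical complexity factor
fp = ufp * tcf                 # adjusted function points
```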
FEATURE POINT METRIC:
Disadvantages of function point:
It does not take into account the algorithmic complexity of software.
It implicitly assumes that the effort required to design and develop any two different functionalities of the system is the same.
It takes only the number of functions that the system supports into consideration, without distinguishing the difficulty of developing the various functionalities.
To overcome this problem, extension of the function point metric called feature point metric has
been proposed.
The feature point metric incorporates algorithm complexity as an extra parameter. This ensures that the computed size reflects the fact that the more complex a function is, the greater the effort required to develop it, and therefore its size should be larger than that of simpler functions.
COCOMO MODEL
CLASSIFICATION
1. Organic:
If the project deals with developing a well-understood application program, the size of the development team is reasonably small, and the team members are experienced in developing similar types of projects, the project is classified as organic.
2. Semidetached:
If the development team consists of a mixture of experienced and inexperienced staff, and team members may have limited experience of related systems but may be unfamiliar with some aspects of the system being developed, the project is classified as semidetached.
3. Embedded:
If the software being developed is strongly coupled to complex hardware, or if stringent regulations on the operational procedures exist, the project is classified as embedded.
According to Boehm, software cost estimation should be done through three stages:
Basic COCOMO
Intermediate COCOMO
Complete COCOMO
Basic COCOMO
The basic COCOMO model gives an approximate estimate of the project parameters. The basic COCOMO estimation model is given by the following expressions:
Effort = a1 × (KLOC)^a2 PM
Tdev = b1 × (Effort)^b2 months
where
KLOC is the estimated size of the software product expressed in Kilo Lines of Code,
a1, a2, b1, b2 are constants for each category of software products,
Tdev is the estimated time to develop the software, expressed in months,
Effort is the total effort required to develop the software product, expressed in person
months (PMs).
The effort estimation is expressed in units of person-months (PM). It is the area under the
person-month plot. It should be carefully noted that an effort of 100 PM does not imply that
100 persons should work for 1 month nor does it imply that 1 person should be employed for 100
months, but it denotes the area under the person-month curve.
According to Boehm, every line of source text should be counted as one LOC irrespective of the actual number of instructions on that line. Thus, if a single instruction spans several lines (say n lines), it is counted as n LOC.
The values of a1, a2, b1, b2 for the different categories of products (i.e., organic, semidetached, and embedded) as given by Boehm [1981] are summarized below. He derived the above expressions by examining historical data collected from a large number of actual projects.
Estimation of development effort
For the three classes of software products, the formulas for estimating the effort based on the code size are:
Organic: Effort = 2.4 (KLOC)^1.05 PM
Semidetached: Effort = 3.0 (KLOC)^1.12 PM
Embedded: Effort = 3.6 (KLOC)^1.20 PM
For the three classes of software products, the formulas for estimating the development time based on the effort are:
Organic: Tdev = 2.5 (Effort)^0.38 months
Semidetached: Tdev = 2.5 (Effort)^0.35 months
Embedded: Tdev = 2.5 (Effort)^0.32 months
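The basic model can be sketched directly from Boehm's published coefficients for the three product classes; the 32 KLOC input is an invented example:

```python
# Basic COCOMO sketch. For each mode:
#   Effort = a1 * KLOC**a2   (person-months)
#   Tdev   = b1 * Effort**b2 (months)
COEFFS = {  # mode: (a1, a2, b1, b2), Boehm (1981)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a1, a2, b1, b2 = COEFFS[mode]
    effort = a1 * kloc ** a2      # nominal effort in person-months
    tdev = b1 * effort ** b2      # nominal development time in months
    return effort, tdev

# A hypothetical 32 KLOC organic product.
effort, tdev = basic_cocomo(32, "organic")
```

Note that the development time is computed from the effort, not directly from the size; this is why the nominal schedule grows only slowly with project size.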
From the effort estimation, the project cost can be obtained by multiplying the required
effort by the manpower cost per month. But, implicit in this project cost computation is the
assumption that the entire project cost is incurred on account of the manpower cost alone. In
addition to manpower cost, a project would incur costs due to hardware and software required for
the project and the company overheads for administration, office space, etc.
It is important to note that the effort and the duration estimates obtained using the COCOMO model are called the nominal effort estimate and the nominal duration estimate. The term nominal implies that if anyone tries to complete the project in a time shorter than the estimated duration, then the cost will increase drastically. But if the project is completed over a longer period of time than estimated, then there is almost no decrease in the estimated cost value.
Intermediate COCOMO
The basic COCOMO model assumes that effort and development time are functions of the
product size alone. However, a host of other project parameters besides the product size
affect the effort required to develop the product as well as the development time. Therefore,
in order to obtain an accurate estimation of the effort and project duration, the effect of
all relevant parameters must be taken into account.
The intermediate COCOMO model recognizes this fact and refines the initial estimate
obtained using the basic COCOMO expressions by using a set of 15 cost drivers
(multipliers) based on various attributes of software development.
For example, if modern programming practices are used, the initial estimates are scaled
downward by multiplication with a cost driver having a value less than 1. If there are
stringent reliability requirements on the software product, this initial estimate is scaled
upward.
Boehm requires the project manager to rate these 15 different parameters for a particular
project on a scale of one to three.
Then, depending on these ratings, he suggests appropriate cost driver values which should be
multiplied with the initial estimate obtained using the basic COCOMO.
In general, the cost drivers can be classified as being attributes of the following items:
Product:
The characteristics of the product that are considered include the inherent
complexity of the product, reliability requirements of the product, etc.
Computer:
Characteristics of the computer that are considered include the execution speed
required, storage space required etc.
Personnel:
The attributes of development personnel that are considered include the
experience level of personnel, programming capability, analysis capability, etc.
Development Environment:
Development environment attributes capture the development facilities available
to the developers. An important parameter that is considered is the sophistication
of the automation (CASE) tools used for software development.
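The refinement step can be sketched as multiplying the nominal estimate by the product of the cost-driver multipliers (often called the effort adjustment factor). The nominal effort and the particular driver values below are invented for the example, though they are in the range of Boehm's published multipliers:

```python
# Intermediate COCOMO sketch: scale the basic (nominal) estimate by the
# product of cost-driver multipliers. Values below are illustrative.
nominal_effort = 91.3            # PM, e.g., from a basic COCOMO step

cost_drivers = {
    "required_reliability": 1.15,   # stringent reliability -> scale upward
    "modern_practices":     0.91,   # modern programming practices -> downward
    "analyst_capability":   0.86,   # highly capable analysts -> downward
}

# Effort adjustment factor is the product of all driver multipliers.
eaf = 1.0
for multiplier in cost_drivers.values():
    eaf *= multiplier

adjusted_effort = nominal_effort * eaf
```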
Complete COCOMO
A major shortcoming of both the basic and intermediate COCOMO models is that they consider a software product as a single homogeneous entity. However, most large systems are made up of several smaller subsystems with widely different characteristics. For example, some sub-systems may be considered as organic type, some semidetached, and some embedded.
Not only that the inherent development complexity of the subsystems may be different, but
also for some subsystems the reliability requirements may be high, for some the development
team might have no previous experience of similar development, and so on.
The complete COCOMO model considers these differences in characteristics of the
subsystems and estimates the effort and development time as the sum of the estimates
for the individual subsystems.
The cost of each subsystem is estimated separately.
This approach reduces the margin of error in the final estimate.
The following development project can be considered as an example application of the
complete COCOMO model.
A distributed Management Information System (MIS) product for an organization
having offices at several places across the country can have the following sub-components:
Database part
Graphical User Interface (GUI) part
Communication part
o Of these, the communication part can be considered as embedded software.
o The database part could be semi-detached software, and the GUI part organic
software.
o The costs for these three components can be estimated separately, and summed up
to give the overall cost of the system.
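The MIS example can be sketched by estimating each subsystem with the basic COCOMO expression for its own mode and summing the results. The per-subsystem KLOC figures are invented for illustration:

```python
# Complete COCOMO sketch: per-subsystem basic COCOMO effort, summed.
COEFFS = {  # mode: (a1, a2), Boehm (1981) effort coefficients
    "organic":      (2.4, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (3.6, 1.20),
}

# The distributed MIS product's sub-components, with invented sizes.
subsystems = {
    "GUI":           ("organic",      5),   # 5 KLOC
    "Database":      ("semidetached", 10),  # 10 KLOC
    "Communication": ("embedded",     8),   # 8 KLOC
}

total_effort = 0.0
for name, (mode, kloc) in subsystems.items():
    a1, a2 = COEFFS[mode]
    total_effort += a1 * kloc ** a2   # person-months for this subsystem
```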
PROJECT SCHEDULING
Software project scheduling is an action that distributes estimated effort across the
planned project duration by allocating the effort to specific software engineering tasks.
It is important to note, however, that the schedule evolves over time.
During early stages of project planning, a macroscopic schedule is developed.
This type of schedule identifies all major process framework activities and the product
functions to which they are applied.
As the project gets under way, each entry on the macroscopic schedule is refined into a
detailed schedule.
Here, specific software actions and tasks (required to accomplish an activity) are identified
and scheduled.
Scheduling for software engineering projects can be viewed from two rather different
perspectives.
o In the first, an end date for release of a computer-based system has already (and
irrevocably) been established. The software organization is constrained to distribute
effort within the prescribed time frame.
o The second view of software scheduling assumes that rough chronological bounds
have been discussed but that the end date is set by the software engineering
organization. Effort is distributed to make best use of resources, and an end date is
defined after careful analysis of the software.
Basic Principles:
o The implication here is that delaying project delivery can reduce costs
significantly.
o The number of delivered lines of code (source statements), L, is related to effort and
development time by the equation:
where E is the effort expended (in person-years) over the entire life cycle for
software development and maintenance and t is the development time in years. The
equation for development effort can be related to development cost by the inclusion
of a burdened labor rate factor ($/person-year).
Effort Distribution:
Each of the software project estimation techniques leads to estimates of work units (e.g.,
person-months) required to complete software development.
A recommended distribution of effort across the software process is often referred to as the 40-20-40 rule.
o Forty percent of all effort is allocated to front-end analysis and design.
o Twenty percent of effort is allocated to coding, which is deemphasized.
o A similar 40 percent is applied to back-end testing.
This effort distribution should be used as a guideline only.
The characteristics of each project dictate the distribution of effort.
o Work expended on project planning rarely accounts for more than 2 to 3 percent of
effort, unless the plan commits an organization to large expenditures with high risk.
o Customer communication and requirements analysis may comprise 10 to 25
percent of project effort.
o Effort expended on analysis or prototyping should increase in direct proportion
with project size and complexity.
o A range of 20 to 25 percent of effort is normally applied to software design.
o Time expended for design review and subsequent iteration must also be
considered.
o Because of the effort applied to software design, code should follow with relatively
little difficulty.
o A range of 15 to 20 percent of overall effort can be achieved.
o Testing and subsequent debugging can account for 30 to 40 percent of software
development effort.
o The criticality of the software often dictates the amount of testing that is required.
o If software is human rated (i.e., software failure can result in loss of life), even higher
percentages are typical.
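Given a total effort estimate, the guideline split above can be sketched in a few lines; the 60 person-month total is an invented figure, and a real distribution would be tuned to the project's characteristics:

```python
# Distributing total estimated effort using the 40-20-40 guideline.
total_pm = 60.0   # total estimated effort in person-months (invented)

distribution = {
    "analysis_and_design": 0.40,
    "coding":              0.20,
    "testing":             0.40,
}

# Person-months allocated to each phase.
allocation = {phase: total_pm * share for phase, share in distribution.items()}
```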
DEFINING A TASK SET FOR THE SOFTWARE PROJECT
A task set is a collection of software engineering work tasks, milestones, work products,
and quality assurance filters that must be accomplished to complete a particular project.
The task set must provide enough discipline to achieve high software quality.
But, at the same time, it must not burden the project team with unnecessary work.
To develop a project schedule, a task set must be distributed on the project time line. The
task set will vary depending upon the project type and the degree of rigor with which the
software team decides to do its work.
DEFINING A TASK NETWORK
A task network, also called an activity network, is a graphic representation of the task
flow for a project.
It is sometimes used as the mechanism through which task sequence and dependencies are
input to an automated project scheduling tool.
The task network depicts major software engineering actions.
The concurrent nature of software engineering actions leads to a number of important
scheduling requirements. Because parallel tasks occur asynchronously, you should
determine intertask dependencies to ensure continuous progress toward completion. In addition, you should be aware of those tasks that lie on the critical path: that is, tasks that must be completed on schedule if the project as a whole is to be completed on schedule.
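The critical path through a task network is the dependency chain with the longest total duration. A minimal sketch, with an invented five-task network:

```python
# Find the critical path (longest-duration chain) in a task network.
# Tasks and durations are invented for illustration.
durations = {"A": 3, "B": 5, "C": 2, "D": 4, "E": 1}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}

def critical_path(durations, predecessors):
    finish = {}   # memoized earliest finish time of each task
    via = {}      # predecessor chosen on the longest path into each task

    def earliest_finish(task):
        if task not in finish:
            preds = predecessors[task]
            start = 0
            if preds:
                # The task can start only after its slowest predecessor.
                best = max(preds, key=earliest_finish)
                via[task] = best
                start = earliest_finish(best)
            finish[task] = start + durations[task]
        return finish[task]

    # The project length is the largest earliest-finish time overall.
    end = max(durations, key=earliest_finish)
    path = [end]
    while path[-1] in via:            # walk the chain backwards
        path.append(via[path[-1]])
    return list(reversed(path)), finish[end]

path, length = critical_path(durations, predecessors)
```

Here the chain A → B → D → E (3 + 5 + 4 + 1 = 13 time units) dominates A → C → D → E, so delaying any task on it delays the whole project.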
SCHEDULING
Scheduling of a software project does not differ greatly from scheduling of any
multitask engineering effort. Therefore, generalized project scheduling tools and
techniques can be applied with little modification for software projects.
Program evaluation and review technique (PERT) and the critical path method (CPM) are
two project scheduling methods that can be applied to software development.
Both techniques are driven by information already developed in earlier project planning activities:
estimates of effort,
a decomposition of the product function,
the selection of the appropriate process model and task set, and
decomposition of the tasks.
Interdependencies among tasks may be defined using a task network.
Tasks, sometimes called the project work breakdown structure (WBS), are defined for the
product as a whole or for individual functions.
Both PERT and CPM provide quantitative tools that allow you to
(1) Determine the critical paththe chain of tasks that determines the duration of
the project,
(2) Establish most likely time estimates for individual tasks by applying statistical
models,
(3) Calculate boundary times that define a time window for a particular task.
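Point (2) refers to PERT's standard three-point estimate, where each task gets an optimistic (a), most likely (m), and pessimistic (b) duration; the example numbers are invented:

```python
# PERT three-point estimate: expected duration te = (a + 4m + b) / 6,
# with (b - a) / 6 as the conventional measure of the estimate's spread.
def pert_estimate(optimistic, most_likely, pessimistic):
    te = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return te, std_dev

# A task estimated at 2 to 12 days, most likely 4 days (invented).
te, sd = pert_estimate(2, 4, 12)
```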
Time-Line Charts
o When creating a software project schedule, you begin with a set of tasks (the work
breakdown structure).
o If automated tools are used, the work breakdown is input as a task network or
task outline.
o Effort, duration, and start date are then input for each task.
o In addition, tasks may be assigned to specific individuals.
o As a consequence of this input, a time-line chart, also called a Gantt chart, is
generated.
o A time-line chart can be developed for the entire project.
o Alternatively, separate charts can be developed for each project function or for each
individual working on the project.
A typical time-line chart depicts part of a software project schedule that emphasizes
the concept scoping task for a word-processing (WP) software product.
All project tasks (for concept scoping) are listed in the left-hand column.
The horizontal bars indicate the duration of each task.
When multiple bars occur at the same time on the calendar, task concurrency is
implied.
The diamonds indicate milestones.
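The structure of such a chart can be sketched with a few lines of code; the task names, start days, and durations below are hypothetical, chosen only to show how overlapping bars imply task concurrency:

```python
# Hypothetical concept-scoping tasks: (name, start day, duration in days).
schedule = [
    ("identify needs",   0, 3),
    ("define functions", 2, 4),  # overlaps the previous task: concurrency
    ("risk assessment",  6, 2),
]

# Render one horizontal bar per task, aligned on a shared calendar axis.
width = max(start + dur for _, start, dur in schedule)
lines = []
for name, start, dur in schedule:
    bar = " " * start + "=" * dur + " " * (width - start - dur)
    lines.append(f"{name:<18}|{bar}|")
print("\n".join(lines))
```

Reading down any column of the output shows which tasks are active on that day; days 2 and 3 have two bars, so those two tasks run concurrently.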
Tracking the Schedule
o The project schedule becomes a road map that defines the tasks and milestones to be
tracked and controlled as the project proceeds. Tracking can be accomplished in a
number of different ways:
Conducting periodic project status meetings in which each team member
reports progress and problems
Evaluating the results of all reviews conducted throughout the software
engineering process
Determining whether formal project milestones have been accomplished by
the scheduled date
Comparing the actual start date to the planned start date for each project
task listed in the resource table
Meeting informally with practitioners to obtain their subjective assessment of
progress to date and problems on the horizon
Using earned value analysis to assess progress quantitatively
o In reality, all of these tracking techniques are used by experienced project managers.
The cost performance index, CPI = BCWP/ACWP, compares the budgeted cost of the work
performed to date with its actual cost. A CPI value close to 1.0 provides a strong
indication that the project is within its defined budget. The cost variance,
CV = BCWP - ACWP, is an absolute indication of cost savings (against planned costs) or
shortfall at a particular stage of a project.
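As a small worked example with made-up effort figures (in person-days), the standard earned-value indices can be computed as follows, where BCWS is the budgeted cost of work scheduled, BCWP the budgeted cost of work performed (the earned value), and ACWP the actual cost of work performed:

```python
# Hypothetical earned-value figures at a project checkpoint (person-days).
bcws = 50.0  # budgeted cost of work scheduled (planned value to date)
bcwp = 45.0  # budgeted cost of work performed (earned value)
acwp = 48.0  # actual cost of work performed

cpi = bcwp / acwp  # cost performance index; close to 1.0 means on budget
cv  = bcwp - acwp  # cost variance; negative => over budget
spi = bcwp / bcws  # schedule performance index; < 1.0 means behind schedule
sv  = bcwp - bcws  # schedule variance

print(f"CPI={cpi:.2f}  CV={cv:+.1f}  SPI={spi:.2f}  SV={sv:+.1f}")
```

Here CPI is about 0.94 and CV is -3.0, so the project has spent slightly more than the value it has earned, while SPI of 0.90 signals it is also somewhat behind its planned schedule.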
RISK MANAGEMENT
A risk is any anticipated unfavorable event or circumstance that can occur while a project is
underway.
Risk management aims at reducing the impact of risks of all kinds that might affect a project.
It consists of three essential activities:
Risk identification
Risk assessment
Risk containment
Risk Identification:
o The project manager needs to anticipate the risks in the project as early as possible so
that the impact of the risks can be minimized by making effective risk management
plans.
o So early risk identification is important.
o For example, we might worry whether the vendors who have been asked to develop
certain modules will complete their work in time, whether they will turn in
poor-quality work, or whether some key personnel might leave the organization. All
such risks that can affect a project must be identified and listed.
o There are the three categories of risk which can affect a software project as follows:
Project risks
These concern various forms of budgetary, schedule, personnel, resource,
and customer-related problems.
An important project risk is schedule slippage. Since software is
intangible, it is very difficult to monitor and control a software project.
Technical risks
These concern potential design, implementation, interfacing, testing, and
maintenance problems.
They also include ambiguous specifications, incomplete specifications,
technical uncertainty, and technical obsolescence.
Most technical risks occur due to the development team's insufficient
knowledge about the product.
Business risks
This type of risk includes the risks of building an excellent product that
no one wants, and losing budgetary or personnel commitments.
Risk Assessment:
o The objective is to rank the risks in terms of their damage causing potential.
o For risk assessment, first each risk should be rated in two ways:
The likelihood of a risk coming true ( r )
The consequence of the problems associated with the risks (s)
o Based on these two factors, the priority of each risk can be computed:
p = r * s
where p is the priority with which the risk must be handled, r is the
probability of the risk becoming true, and s is the severity of the
damage caused if the risk becomes true.
o If all identified risks are prioritized, then the most likely and damaging risks can be
handled first and more comprehensive risk abatement procedures can be designed for
these risks.
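The prioritization step can be sketched directly from p = r * s; the risks below and their ratings (probability r, severity s on an illustrative 1-10 scale) are hypothetical:

```python
# Hypothetical identified risks: (description, probability r, severity s).
risks = [
    ("vendor module delivered late", 0.6, 8),
    ("key developer leaves",         0.3, 9),
    ("ambiguous specification",      0.5, 6),
]

# Rank risks by priority p = r * s, highest first.
prioritized = sorted(risks, key=lambda risk: risk[1] * risk[2], reverse=True)
for name, r, s in prioritized:
    print(f"p={r * s:4.1f}  {name}")
```

The late vendor delivery (p = 4.8) outranks the more severe but less likely loss of a key developer (p = 2.7), which is exactly the kind of trade-off the r * s product is meant to expose.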
Risk Containment:
o After all the identified risks of a project are assessed, plans must be made to contain
the most damaging and the most likely risks.
o Different risks require different containment procedures.
o In fact, most risks require ingenuity on the part of the project manager in tackling
them.
o There are three main strategies to plan for risk containment:
Avoid the risk:
This may take several forms such as discussing with the customer to change
the requirements to reduce the scope of the work, giving incentives to the
engineers to avoid the risk of manpower turnover, etc.
Transfer the risk:
This strategy involves getting the risky component developed by a third party,
buying insurance cover, etc.
Risk reduction:
This involves planning ways to contain the damage due to a risk. For example,
if there is a risk that some key personnel might leave, new recruitment may be
planned.
o Risk leverage
To choose between the different strategies of handling a risk, the project
manager must consider the cost of handling the risk and the corresponding
reduction of risk. For this the risk leverage of the different risks can be
computed.
Risk leverage is the difference in risk exposure divided by the cost of reducing
the risk.
More formally,
risk leverage = (risk exposure before reduction - risk exposure
after reduction) / (cost of reduction)
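The formula above can be put to work on an illustrative case; the exposure and cost figures below are hypothetical, with risk exposure taken as probability times expected loss:

```python
def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    """Reduction in risk exposure per unit cost of the reduction."""
    return (exposure_before - exposure_after) / cost_of_reduction

# Hypothetical attrition risk on a 90 person-day module: cross-training a
# second engineer for 6 person-days cuts the loss probability from 0.3 to 0.1.
before = 0.3 * 90  # exposure before reduction: 27 person-days
after  = 0.1 * 90  # exposure after reduction:   9 person-days
print(risk_leverage(before, after, 6))  # 3.0
```

A leverage of 3.0 means every person-day spent on the containment measure buys back three person-days of expected loss, so this measure would rank ahead of any alternative with a lower leverage.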