
CSE3001 SOFTWARE ENGINEERING

L T P J C
2 0 2 4 4

Dr. S M SATAPATHY
Associate Professor,
School of Computer Science and Engineering,
VIT Vellore, TN, India – 632 014.
Module – 2

Software Project Management

1. Project Planning

2. Project Monitoring and Control

3. Risk Management

4. Metrics and Measurements

Software Project Management (SPM)
 Many software projects fail due to faulty project management
practices.
 Goal of software project management:
enable a group of engineers to work efficiently towards successful
completion of a software project.
 To conduct a successful software project, we must understand:
 Scope of work to be done
 The risks to be incurred
 The resources required
 The tasks to be accomplished
 The cost to be expended
 The schedule to be followed
Project Manager’s Responsibilities
 A project manager’s activities can be broadly classified into:
 project planning
 project monitoring and control activities.
 Activities:
 Project proposal writing,
 Project cost estimation,
 Scheduling,
 Project staffing,
 Project monitoring and control,
 Software configuration management,
 Risk management,
 Managerial report writing and presentations, etc.
 Once a project is found to be feasible, the project manager undertakes project planning.
PROJECT PLANNING

Project Planning Activities
 Software planning begins before technical work starts, continues as
the software evolves from concept to reality, and culminates only
when the software is retired.
 Activities:
 Size Estimation
 Effort Estimation
 Cost Estimation
 Duration Estimation
 Staffing Estimation
 Scheduling
Sliding Window Planning
 It involves project planning over several stages:
 protects managers from making big commitments too early.
 More information becomes available as project progresses.
 Facilitates accurate planning

 After planning is complete, document the plans in a Software Project Management Plan (SPMP) document.
SPMP Document
 Organization of SPMP Document:
 Introduction (Objectives, Major Functions, Performance Issues, Management and Technical Constraints)
 Project Estimates (Historical Data, Estimation Techniques, Effort, Cost, and Project Duration Estimates)
 Schedules (Work Breakdown Structure, Task Network, Gantt Chart Representation, PERT Chart Representation)
 Project Resources Plan (People, Hardware and Software, Special Resources)
 Staff Organization (Team Structure, Management Reporting)
 Risk Management Plan (Risk Analysis, Risk Identification, Risk Estimation, Abatement Procedures)
 Project Tracking and Control Plan (Metrics to be Tracked, Tracking Plan, Control Plan)
 Miscellaneous Plans (Process Tailoring, Quality Assurance Plan, Configuration Management Plan, Validation and Verification, System Testing Plan, Delivery, Installation and Maintenance Plan)
Software Cost Estimation
 Determine size of the product.
 From the size estimate,
o determine the effort needed.
 From the effort estimate,
o determine project duration, and cost.

 Three main approaches to estimation:
 Empirical
 Heuristics
 Analytical
Software Cost Estimation
 Three Main approaches to estimation
 Empirical
 An educated guess based on past experience.
 Ex.: Expert Judgement, Delphi Estimation
 Heuristics
 Assume that the characteristics to be estimated can be expressed in terms of some mathematical expression.
 Ex.: Function Point, COCOMO
 Analytical
 Derive the required results starting from certain simple assumptions.
 Ex.: Halstead’s Software Science
Lines of Code (LOC)
 Simplest Metric
 Comments and blank lines should not be counted.
 For the sample program shown on the slide:
 LOC = 18
 Uncommented LOC = 17
 Executable statements = 13 (lines 5–17)
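The counting rule above can be sketched as a small Python utility. The sample program below is hypothetical, standing in for the listing shown on the slide, and assumes C-style `//` comments:

```python
def count_loc(source: str) -> int:
    """Count lines of code: every line that is neither blank nor a
    pure comment line (here, '//' comments in a C-like language)."""
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("//"):
            loc += 1
    return loc

# Hypothetical C program used only to demonstrate the counting rule
program = """\
int main() {
    // compute sum of first n numbers
    int n = 10, sum = 0;

    for (int i = 1; i <= n; i++)
        sum += i;
    return sum;
}"""
print(count_loc(program))  # 6 non-blank, non-comment lines
```

A real counter would also have to handle block comments and comments trailing code on the same line, which this sketch ignores.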

Software Size Estimation

Software Cost Estimation
 LOC Definition
A line of code is any line of program text that is not a comment or a blank line, regardless of the number of statements or fragments of statements on the line. This specifically includes all lines containing program headers, declarations, and executable and non-executable statements.

 This is the predominant definition for lines of code used by researchers.
 By this definition, the program shown earlier has 17 LOC.
Disadvantages of LOC
 Size can vary with coding style.
 Focuses on coding activity alone.
 Correlates poorly with quality and efficiency of code.
 Penalizes higher level programming languages, code reuse, etc.
 Measures lexical / textual complexity only.
o does not address the issues of structural or logical complexity.
 Difficult to estimate LOC from problem description.
o So not useful for project planning

Software Size Estimation
 Function Point
 COCOMO
Function Point Example

(Worked examples shown as slide images.)
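As a stand-in for the slide images, here is a sketch of the standard function point computation using the textbook complexity weights; the project counts and the value-adjustment-factor sum are hypothetical:

```python
# Standard function point weights (simple, average, complex)
WEIGHTS = {
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}

def function_points(counts, vaf_sum):
    """counts: {type: (n_simple, n_average, n_complex)};
    vaf_sum: sum of the 14 value adjustment factors (each rated 0-5)."""
    ufp = sum(n * w for t, ns in counts.items()
                    for n, w in zip(ns, WEIGHTS[t]))
    return ufp * (0.65 + 0.01 * vaf_sum)

# Hypothetical project: all items of average complexity
counts = {"EI": (0, 5, 0), "EO": (0, 4, 0), "EQ": (0, 2, 0),
          "ILF": (0, 3, 0), "EIF": (0, 1, 0)}
fp = function_points(counts, vaf_sum=42)  # 42 -> adjustment factor 1.07
print(round(fp, 2))  # UFP = 85, FP = 85 * 1.07 = 90.95
```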
COCOMO Example

COCOMO Practice Questions
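Similarly, a basic COCOMO calculation can be sketched as follows. The coefficients are Boehm's published basic COCOMO constants; the 32 KLOC product size is a hypothetical input:

```python
# Basic COCOMO coefficients: effort = a*(KLOC)^b person-months,
# development time = c*(effort)^d months
COCOMO = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COCOMO[mode]
    effort = a * kloc ** b   # person-months
    tdev = c * effort ** d   # months
    staff = effort / tdev    # average team size
    return effort, tdev, staff

# Hypothetical 32 KLOC organic-mode product
effort, tdev, staff = basic_cocomo(32, "organic")
print(f"effort={effort:.1f} PM, tdev={tdev:.1f} months, staff={staff:.1f}")
```

For 32 KLOC this gives roughly 91 person-months of effort and a nominal development time of about 14 months.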
Scheduling
 Scheduling is an important activity for project managers.
 To determine project schedule:
 Identify tasks needed to complete the project.
 Determine the dependency among different tasks.
 Determine the most likely estimates for the duration of the
identified tasks.
 Plan the starting and ending dates for various tasks.

Work Breakdown Structure
 Work Breakdown Structure (WBS) provides a notation for
representing task structure:
 Activities are represented as nodes of a tree.
 The root of the tree is labelled by the problem name.
 Each task is broken down into smaller tasks and represented
as children nodes.
 It is not useful to subdivide tasks into units which take less than a
week or two to execute.
 Finer subdivisions mean that a large amount of time must be
spent on estimating and chart revision.

Work Breakdown Structure

Compiler Project
  Requirements
  Design
  Code
    Lexer
    Parser
    Code Generator
  Test
  Write Manual
Work Breakdown Structure

Activity Network & Critical Path Method (CPM)
 Minimum time to complete project (MT) = Maximum of all paths
from start to finish
 Earliest start time (ES) of a task = Maximum of all paths from
start to this task
 Earliest finish time (EF) of a task = ES + duration of the task
 Latest finish time (LF) of a task = MT - Maximum of all paths
from this task to finish
 Latest start time (LS) of a task = LF - duration of the task
 Slack time = LS - ES = LF - EF
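These definitions translate directly into a forward pass and a backward pass over the activity network. A minimal sketch on a hypothetical four-task network:

```python
# Activities: name -> (duration, list of predecessors). Hypothetical network.
tasks = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
}

def cpm(tasks):
    order = list(tasks)  # assumes the dict is already topologically ordered
    es, ef = {}, {}
    for t in order:                       # forward pass: ES, EF
        dur, preds = tasks[t]
        es[t] = max((ef[p] for p in preds), default=0)
        ef[t] = es[t] + dur
    mt = max(ef.values())                 # minimum project completion time
    lf, ls = {}, {}
    for t in reversed(order):             # backward pass: LF, LS
        dur, _ = tasks[t]
        succs = [s for s in tasks if t in tasks[s][1]]
        lf[t] = min((ls[s] for s in succs), default=mt)
        ls[t] = lf[t] - dur
    slack = {t: ls[t] - es[t] for t in order}
    critical = [t for t in order if slack[t] == 0]
    return mt, slack, critical

mt, slack, critical = cpm(tasks)
print(mt, critical)  # 12 ['A', 'B', 'D']
```

Tasks with zero slack form the critical path; here only C (slack 2) can slip without delaying the project.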

Activity Network & Critical Path Method (CPM)
 A forward pass (from the start node to the finish node) is used to compute ES and EF, whereas a backward pass (from the finish node to the start node) is used for LS and LF.
 To determine the critical path and the project schedule, the approach
consists of calculating, respectively, the starting time and the
completion time for each activity as well as identifying the
corresponding slack.

Activity Network & Critical Path Method (CPM)
 The critical path is A-B-E-F-G-I-K-M-N-P-Q-R-Finish.

 The expected project completion time is 44 weeks, given by the maximum EFT (or LFT) at the finish node.
PERT Chart
 PERT (Program Evaluation and Review Technique) is a
variation of CPM:
 incorporates uncertainty about duration of tasks.
 Gantt charts can be derived automatically from PERT charts.
 Gantt chart representation of schedule is helpful in planning
the utilization of resources,
 while PERT chart is more useful for monitoring the timely
progress of activities.

PERT Chart

 What is the probability of completing the project within 46 weeks?

 The approach consists of converting X into a standard normal variable and determining the area under the normal curve.

 To that end, the z value is computed as z = (X − TE) / σ, where TE is the expected project completion time and σ is the standard deviation of the critical-path duration.
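Using the 44-week expected completion time found earlier and a hypothetical critical-path variance of 4 (σ = 2 weeks), the calculation can be sketched as:

```python
from math import erf, sqrt

def pert_estimate(a, m, b):
    """Expected duration and variance for one activity from its
    optimistic (a), most likely (m), and pessimistic (b) times."""
    te = (a + 4 * m + b) / 6
    var = ((b - a) / 6) ** 2
    return te, var

def prob_completion_by(deadline, te_project, var_project):
    """P(project finishes by deadline), assuming a normal distribution."""
    z = (deadline - te_project) / sqrt(var_project)
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

te, var = pert_estimate(3, 5, 13)   # one activity: te = 6.0, var ~ 2.78

# Hypothetical project: TE = 44 weeks, path variance = 4 (sigma = 2)
p = prob_completion_by(46, 44, 4.0)
print(round(p, 4))   # z = 1.0 -> 0.8413

x95 = 44 + 1.645 * sqrt(4.0)  # 95% quantile: z(0.95) ~ 1.645
print(round(x95, 2))          # 47.29 weeks
```

The same z-table logic, run in reverse, answers the later slide's question: the deadline giving a 95% chance of completion is TE + 1.645σ.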

PERT Chart

 What completion time gives the project a 95% probability of completing on time?
Risk Management
Process and Project Metrics
 Software process and project metrics are the management tools
used for quantitative measures.
 They offer insight into the effectiveness of the software process and
the projects that are conducted using the process as a framework.
 Basic quality and productivity data are collected. These data are
analysed, compared against past averages, and assessed.
 The goal is to determine whether quality and productivity
improvements have occurred.
 The data can also be used to pinpoint problem areas.
 Remedies can then be developed, and the software process can be improved.
Process and Project Metrics
 Measurement can be applied to the software process with the intent
of improving it on a continuous basis.
 Measurement can be used throughout a software project to assist in
estimation, quality control, productivity assessment, and project
control.
 Measurement can be used by software engineers to help assess the
quality of technical work products and to assist in tactical decision
making as a project proceeds.

Quote
 “When you can measure what you are speaking about and express
it in numbers, you know something about it; but when you cannot
measure, when you cannot express it in numbers, your knowledge is
of a meager and unsatisfactory kind; it may be the beginning of
knowledge, but you have scarcely, in your thoughts, advanced to the
stage of science.”

LORD WILLIAM KELVIN (1824 – 1907)

Why do we measure?
 To characterize in order to
 Gain an understanding of processes, products, resources, and
environments
 Establish baselines for comparisons with future assessments
 To evaluate in order to
 Determine status with respect to plans
 To predict in order to
 Gain understanding of relationships among processes and
products
 Build models of these relationships
 To improve in order to
 Identify roadblocks, root causes, inefficiencies, and other
opportunities for improving product quality and process
performance
Indicator
 An indicator is a metric or combination of metrics that provides insight into the software process, a software project, or the product itself. An indicator enables the project manager or software engineers to adjust the process, the project, or the product to make things better.
Process and Project Metrics
 Metrics should be collected so that process and product indicators can be ascertained.
 Process metrics are used to provide indicators that lead to long-term process improvement.
 Project metrics enable the project manager to:
 Assess the status of an ongoing project
 Track potential risks
 Uncover problem areas before they go critical
 Adjust work flow or tasks
 Evaluate the project team’s ability to control the quality of software work products
Metrics in the Process Domain
 Process metrics are collected across all projects and over long
periods of time and are used for making strategic decisions
 The only way to know how/where to improve any process is to
 Measure specific attributes of the process
 Develop a set of meaningful metrics based on these attributes
 Use the metrics to provide indicators that will lead to a strategy
for improvement.
 Measure effectiveness based on outcomes of the process:
 Errors uncovered before release of the software
 Defects delivered to and reported by the end users
 Work products delivered
 Human effort expended
 Calendar time expended
 Conformance to the schedule
 Time and effort to complete each generic activity
Etiquette of Process Metrics
 Use common sense and organizational sensitivity when interpreting
metrics data
 Provide regular feedback to the individuals and teams who collect
measures and metrics
 Don’t use metrics to evaluate individuals
 Work with practitioners and teams to set clear goals and metrics that
will be used to achieve them
 Never use metrics to threaten individuals or teams
 Metrics data that indicate a problem should not be considered
“negative”
 Such data are merely an indicator for process improvement
 Don’t obsess on a single metric to the exclusion of other important
metrics.
Statistical Software Process Improvement
 All errors and defects are categorized by origin (flaw in spec, flaw in
logic, non-conformance to standards).
 The cost to correct each error and defect is recorded.
 The number of errors and defects in each category is counted and
ranked in descending order.
 The overall cost of errors and defects in each category is computed.
 Resultant data are analyzed to uncover the categories that result in
the highest cost to the organization.
 Plans are developed to modify the process with the intent of
eliminating (or reducing the frequency of) the class of errors and
defects that is most costly.

Software Measurement
 Two categories of software measurement
 Direct measures of the
 Software process (cost, effort, etc.)
 Software product (lines of code produced, execution speed,
defects reported over time, etc.)
 Indirect measures of the
 Software product (functionality, quality, complexity,
efficiency, reliability, maintainability, etc.)
 Project metrics can be consolidated to create process metrics for an
organization

Size oriented Metrics
 Derived by normalizing quality and/or productivity measures by
considering the size of the software produced
 Thousand lines of code (KLOC) are often chosen as the
normalization value
 Metrics include:
 Errors per KLOC
 Defects per KLOC
 Dollars per KLOC
 Pages of documentation per KLOC
 Errors per person-month
 KLOC per person-month
 Dollars per page of documentation
 Size-oriented metrics are not universally accepted as the best way
to measure the software process
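A sketch of how these normalized metrics are derived from raw project data; the project record below is hypothetical:

```python
# Hypothetical raw measures for one completed project
project = {"kloc": 12.1, "effort_pm": 24, "cost": 168_000,
           "doc_pages": 365, "errors": 134, "defects": 29}

kloc = project["kloc"]
metrics = {
    "errors_per_kloc":    project["errors"] / kloc,
    "defects_per_kloc":   project["defects"] / kloc,
    "cost_per_kloc":      project["cost"] / kloc,
    "doc_pages_per_kloc": project["doc_pages"] / kloc,
    "kloc_per_pm":        kloc / project["effort_pm"],
}
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```

Collecting the same record for every project builds the historical baseline against which new projects are compared.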

Function oriented Metrics
 Function-oriented metrics use a measure of the functionality
delivered by the application as a normalization value
 Most widely used metric of this type is the function point:
FP = count total * [0.65 + 0.01 * sum (value adj. factors)]
 Function point values on past projects can be used to compute, for
example, the average number of lines of code per function point
(e.g., 60)
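That conversion can be sketched as follows; the LOC-per-FP averages are illustrative figures of the kind an organization would take from its own historical baseline, not authoritative constants:

```python
# Hypothetical historical averages: LOC needed to implement one
# function point in each language (illustrative figures only)
LOC_PER_FP = {"C": 128, "Java": 53, "C++": 64}

def estimate_loc(fp: float, language: str) -> float:
    """Back-of-the-envelope size estimate from a function point count."""
    return fp * LOC_PER_FP[language]

print(estimate_loc(320, "Java"))  # 16960.0 -> roughly 17 KLOC
```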

Function oriented Metrics Controversy
 Proponents claim that
 FP is programming language independent
 FP is based on data that are more likely to be known in the early
stages of a project, making it more attractive as an estimation
approach
 Opponents claim that
 FP requires some “sleight of hand” because the computation is
based on subjective data
 Counts of the information domain can be difficult to collect after
the fact
 FP has no direct physical meaning…it’s just a number

Typical Function oriented Metrics
 errors per FP
 defects per FP
 $ per FP
 pages of documentation per FP
 FP per person-month

Extended Function oriented Metrics
 A function point extension called feature points is a superset of the function point measure that can be applied to systems and engineering software applications.
 It accommodates applications in which algorithmic complexity is high.
 The feature point metric counts a new software characteristic: algorithms.
 Another function point extension, developed by Boeing, integrates the data dimension of software with the functional and control dimensions: the “3D function point”.
 Function points, feature points, and 3D function points represent the same thing: the “functionality” or “utility” delivered by software.
Reconciling LOC and FP Metrics
 Relationship between LOC and FP depends upon
 The programming language that is used to implement the
software
 The quality of the design
 FP and LOC have been found to be relatively accurate predictors of
software development effort and cost
 However, a historical baseline of information must first be
established
 LOC and FP can be used to estimate object-oriented software
projects
 However, they do not provide enough granularity for the
schedule and effort adjustments required in the iterations of an
evolutionary or incremental process
Object-oriented Metrics
 Number of scenario scripts (i.e., use cases)
 This number is directly related to the size of an application and
to the number of test cases required to test the system
 Number of key classes (the highly independent components)
 Key classes are defined early in object-oriented analysis and are
central to the problem domain
 This number indicates the amount of effort required to develop
the software
 It also indicates the potential amount of reuse to be applied
during development

Object-oriented Metrics
 Number of support classes
 Support classes are required to implement the system but are
not immediately related to the problem domain (e.g., user
interface, database, computation)
 This number indicates the amount of effort and potential reuse
 Average number of support classes per key class
 Key classes are identified early in a project (e.g., at
requirements analysis)
 Estimation of the number of support classes can be made from
the number of key classes

Object-oriented Metrics
 GUI applications have between two and three times as many support classes as key classes.
 Non-GUI applications have between one and two times as many support classes as key classes.
 Number of subsystems
 A subsystem is an aggregation of classes that supports a function visible to the end user of a system.
Use Case-Oriented Metrics
 Use cases describe (indirectly) user-visible functions and features in a language-independent manner.
 The number of use cases is directly proportional to the LOC size of the application and to the number of test cases needed.
 However, use cases do not come in standard sizes, so their use as a normalization measure is suspect.
 Use case points have been suggested as a mechanism for estimating effort.
WebApp Project Metrics
 Number of static Web pages (Nsp)
 Number of dynamic Web pages (Ndp)
 Customization index: C = Nsp / (Ndp + Nsp)
 Number of internal page links
 Number of persistent data objects
 Number of external systems interfaced
 Number of static content objects
 Number of dynamic content objects
 Number of executable functions
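For instance, the customization index from the list above can be computed as follows (page counts hypothetical):

```python
def customization_index(n_static: int, n_dynamic: int) -> float:
    """C = Nsp / (Ndp + Nsp): the fraction of all pages that are static."""
    return n_static / (n_dynamic + n_static)

# Hypothetical WebApp with 30 static and 90 dynamic pages
c = customization_index(30, 90)
print(round(c, 2))  # 0.25 -> mostly dynamic pages
```

A low index signals that most content is generated dynamically, which usually implies more construction and testing effort than a largely static site.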

Software Quality Metrics
 Correctness: defects per KLOC
 Maintainability: the ease with which a program can be corrected, adapted, and enhanced; measured in terms of time or cost.
 Time-oriented metric: mean-time-to-change (MTTC)
 Cost-oriented metric: spoilage, the cost to correct defects encountered
 Integrity: the ability to withstand attacks
 Threat: the probability that an attack of a specific type will occur within a given time.
 Security: the probability that an attack of a specific type will be repelled.
Integrity = Σ [(1 − threat) × (1 − security)]
Measuring Quality
 Usability: attempt to quantify “user-friendliness” in terms of four
characteristics:
 The physical/intellectual skill to learn the system
 The time required to become moderately efficient in the use of
the system
 The net increase of productivity
 A subjective assessment of user attitude toward the system
(e.g., use of questionnaire).

Defect Removal Efficiency
 DRE is a measure of the filtering ability of quality assurance and control activities as they are applied throughout all process framework activities.
DRE = errors / (errors + defects)
where
errors = problems found before release
defects = problems found after release
 The ideal value for DRE is 1: no defects are found after release.
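A minimal sketch of the DRE calculation; the error and defect counts are hypothetical:

```python
def dre(errors_before_release: int, defects_after_release: int) -> float:
    """Defect removal efficiency: the fraction of all problems that
    were caught before the software reached the end user."""
    return errors_before_release / (errors_before_release + defects_after_release)

print(round(dre(134, 29), 2))  # 0.82 -> 82% of problems filtered pre-release
```

Tracking DRE per framework activity (analysis, design, code, test) shows which filters in the process are leaking the most defects downstream.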

Software Metrics Baseline Process

Metrics for Small Organization
 Most software organizations have fewer than 20 software engineers.
 Best advice is to choose simple metrics that provide value to the
organization and don’t require a lot of effort to collect.
 Even small groups can expect a significant return on the investment
required to collect metrics, if this activity leads to process
improvement.

Establishing a Software Metrics Program
 Identify business goal
 Identify what you want to know
 Identify subgoals
 Identify subgoal entities and attributes
 Formalize measurement goals
 Identify quantifiable questions and indicators related to subgoals
 Identify the data elements that need to be collected to construct the indicators
 Define measures to be used and create operational definitions for them
 Identify actions needed to implement the measures
 Prepare a plan to implement the measures
Thank You for Your Attention !

