
6.

Project Planning

1
Agenda
 Background
 Work breakdown structure (Task
Analysis)
 Software cost estimation
 Gantt Chart
 PERT – Program Evaluation and
Review Technique

2
Software Project
 Goal: Build a software system to meet
commitments on cost, schedule, quality
 Worldwide - many projects fail
 one-third are runaways with cost or schedule
overrun of more than 125%

3
Project Failures
 Major reasons for project runaways
 unclear objectives
 bad planning
 no project management methodology
 new technology
 insufficient staff
 All of these relate to project management
 Effective project management is key to
successfully executing a project
4
Why improve PM?
 Better predictability leading to
commitments that can be met
 Lower cost through reduced rework,
better resource mgmt, better
planning,..
 Improved quality through proper quality
planning and control
 Better control through change control,
CM, monitoring, etc.
5
Why improve PM ….
 Better visibility into project health and state
leading to timely intervention
 Better handling of risks reducing the
chances of failure
 All this leads to higher customer satisfaction
 And self and organization improvement

6
Work Breakdown Structure

 Is a depiction of the project in terms of the discrete pieces of work needed to complete the project and the ordering of those pieces of work.
 Performed by software project team.
 Focuses on the tasks required to
produce the artifacts that are to be
delivered.
7
Work Breakdown Structure

 As the details of the software development and support process are realized, subtasks are identified.
 There are a number of tools used to
support the WBS activity:
 Graphical – MS Visio, Smartdraw, ArgoUML.
 Scheduling – MS Project, Primavera
TeamPlay
8
Work Breakdown Structure: WBS
 Hierarchical list of project’s work activities
 2 Formats
 Outline (indented format)
 Graphical Tree (Organizational Chart)
 Uses a decimal numbering system
 Ex: 3.1.5
 0 is typically top level
 Includes
 Development, Mgmt., and project support tasks
 Shows “is contained in” relationships
 Does not show dependencies or durations 9
WBS: Contract vs. Project
 Contract WBS (CWBS)
 First 2 or 3 levels
 High-level tracking
 Project WBS (PWBS)
 Defined by PM and team members
 Tasks tied to deliverables
 Lowest level tracking

10
A Full WBS Structure
 Up to six levels (3–6 usually)
 Upper 3 levels can be used by the customer for reporting (if part of RFP/RFQ)
 Different levels can be applied to different uses
 Ex: Level 1: authorizations; 2: budgets; 3: schedules
11
WBS Chart Example

12
WBS Outline Example
0.0 Retail Web Site
1.0 Project Management
2.0 Requirements Gathering
3.0 Analysis & Design
4.0 Site Software Development
4.1 HTML Design and Creation
4.2 Backend Software
4.2.1 Database Implementation
4.2.2 Middleware Development
4.2.3 Security Subsystems
4.2.4 Catalog Engine
4.2.5 Transaction Processing
4.3 Graphics and Interface
4.4 Content Creation
5.0 Testing and Production
13
Sample WBS

14
WBS Types
 Process WBS
 a.k.a. Activity-oriented
 Ex: Requirements, Analysis, Design, Testing
 Typically used by PM
 Product WBS
 a.k.a. Entity-oriented
 Ex: Financial engine, Interface system, DB
 Typically used by engineering manager
 Hybrid WBS: both of the above
 This is not unusual
 Ex: Lifecycle phases at high level with component- or feature-specifics within phases
 Rationale: processes produce products
15
Product WBS

16
Process WBS

17
Outline WBS

18
Outline WBS w/Gantt

19
WBS by Process Groups

20
WBS Example
 Show projectplan WBS pdf

21
Work Packages
 Generic term for discrete tasks with definable end
results
 Typically the “leaves” on the tree
 The “one-to-two” rule
 Often at: 1 or 2 persons for 1 or 2 weeks
 Basis for monitoring and reporting progress
 Can be tied to budget items (charge numbers)
 Resources (personnel) assigned
 Ideally shorter rather than longer
 Longer requires in-progress estimates
 These are more subjective than “done”
 2-3 weeks maximum for software projects
 1 day minimum (occasionally a half day)
 Not too small as to micro-manage
22
WBS
 List of Activities, not Things
 List of items can come from many sources
 SOW, Proposal, brainstorming, stakeholders, team
 Describe activities using “bullet language”
 Meaningful but terse labels
 Not all WBS paths have to go to the same
level
 Do not plan more detail than you can manage

23
WBS & Methodology
 PM must map activities to chosen lifecycle
 Each lifecycle has different sets of activities
 Integral process activities occur for all
 Planning, configuration, testing
 Operations and maintenance phases are not
normally in plan (considered post-project)
 Some models are “straightened” for WBS
 Spiral and other iterative models
 Linear sequence several times
 Deliverables of tasks vary by methodology
24
WBS Techniques
 Top-Down
 Bottom-Up
 Analogy
 Rolling Wave
 1st pass: go 1-3 levels deep
 Gather more requirements or data
 Add more detail later
 Post-its on a wall
25
WBS Techniques
 Top-down
 Start at highest level
 Systematically develop increasing level of
detail
 Best if
 The problem is well understood
 Technology and methodology are not new
 This is similar to an earlier project or problem
 But it is also applied in the majority of situations

26
WBS Techniques
 Bottom-up
 Start at lowest level tasks
 Aggregate into summaries and higher
levels
 Cons
 Time consuming
 Needs more complete requirements
 Pros
 Detailed
27
WBS Techniques
 Analogy
 Base WBS upon that of a “similar” project
 Use a template
 Analogy also can be estimation basis
 Pros
 Based on past actual experience
 Cons
 Needs comparable project

28
WBS Techniques
 Brainstorming
 Generate all activities you can think of that need
to be done
 Group them into categories
 Both Top-down and Brainstorming can be
used on the same WBS
 Remember to get the people who will be
doing the work involved (buy-in matters!)
29
WBS – Basis of Many Things
 Network scheduling
 Costing
 Risk analysis
 Organizational structure
 Control
 Measurement

30
WBS Guidelines Part 1
 Should be easy to understand
 Some companies have corporate standards
for these schemes
 Some top-level items, like Project Mgmt. are
in WBS for each project
 Others vary by project
 What often hurts most is what’s missing
 Break down until you can generate accurate
time & cost estimates
 Ensure each element corresponds to a
deliverable
31
WBS Guidelines Part 2
 How detailed should it be?
 Each level should have no more than 7 items
 It can evolve over time
 What tool should you use?
 Excel, Word, Project
 Org chart diagramming tool (Visio, etc)
 Specialized commercial apps
 Re-use a “template” if you have one

32
SOFTWARE COST ESTIMATION

33
Estimations
 Very difficult to do, but needed often
 Created, used or refined during
 Strategic planning
 Feasibility study and/or SOW
 Proposals
 Vendor and sub-contractor evaluation
 Project planning (iteratively)
 Basic process
1) Estimate the size of the product
2) Estimate the effort (man-months)
3) Estimate the schedule
 NOTE: Not all of these steps are always explicitly
performed
34
Estimations
 Remember, an “exact estimate” is an oxymoron
 Oxymoron loosely means "contradiction in terms”
 Estimate how long it will take you to get home from
class tonight
 On what basis did you do that?
 Experience right?
 Likely as an “average” probability
 For most software projects there is no such ‘average’
 Most software estimations are off by 25-100%
35
Estimation
 Target vs. Committed Dates
 Target: Proposed by business or marketing
 Do not commit to this too soon!
 Committed: Team agrees to this
 After you’ve developed a schedule

36
Cone of Uncertainty

At the beginning of the project life cycle (i.e., before requirements are gathered), estimates generally have an uncertainty of a factor of 4: the actual duration can be 4 times, or 1/4th of, the first estimate.
37
Effect of Size in Estimation
 Size is also a factor:
 Small projects (10-99 FPs), variance of 7%
from post-requirements estimates
 Medium (100-999 FPs), 22% variance
 Large (1000-9999 FPs) 38% variance
 Very large (> 10K FPs) 51% variance

38
Estimation Methodologies
 Top-down
 Bottom-up
 Analogy
 Expert Judgment
 Priced to Win
 Parametric or Algorithmic Method
 Using formulas and equations

39
Top-down Estimation
 Based on overall characteristics of project
 Some of the others can be “types” of top-down (Analogy,
Expert Judgment, and Algorithmic methods)
 Advantages
 Easy to calculate
 Effective early on (like initial cost estimates)
 Disadvantages
 Some models are questionable or may not fit
 Less accurate because it doesn’t look at details

40
Bottom-up Estimation
 Create WBS
 Add from the bottom-up
 Advantages
 Works well if activities well understood
 Disadvantages
 Specific activities not always known
 More time consuming

41
Expert Judgment
 Use somebody who has recent
experience on a similar project
 You get a “guesstimate”
 Accuracy depends on their ‘real’
expertise
 Comparable application(s) must be
accurately chosen
 Systematic
 Can use a weighted-average of opinions
42
Estimation by Analogy
 Use past project
 Must be sufficiently similar (technology, type,
organization)
 Find comparable attributes (ex: # of inputs/outputs)
 Can create a function
 Advantages
 Based on actual historical data
 Disadvantages
 Difficulty ‘matching’ project types
 Prior data may have been mis-measured
 How to measure differences – no two exactly same
43
Priced to Win
 Just follow other estimates
 Save on doing full estimate
 Needs information on other estimates (or
prices)
 Purchaser must closely watch trade-offs
 Priced to lose?

44
Algorithmic Measures
 Lines of Code (LOC)
 Function points
 Feature points or object points
 Other possible
 Number of bubbles on a DFD
 Number of ERD entities
 Number of processes on a structure chart
 LOC and function points most common
 (of the algorithmic approaches)
 Majority of projects use none of the above
45
ESTIMATION OF LINES OF CODE (LOC)
 The first step in LOC-based estimation is to estimate
the number of lines of code in the finished project.
 This can be done based on experience, size of
previous projects, size of a competitor’s solution, or
by breaking down the project into smaller pieces
and then estimating the size of each of the smaller
pieces.
 A standard approach is, for each piece i, to estimate the maximum possible size (max_i), the minimum possible size (min_i), and the "best guess" size (best_i)
 The estimate for each piece is 1/6 of the sum of the maximum, the minimum, and 4 times the best guess:
Estimate_i = (min_i + 4*best_i + max_i)/6
 The estimate for the whole project is the sum of the piece estimates
46
Standard deviation
 Standard deviation of the size of piece i:
SD_i = (max_i − min_i)/6
 Standard deviation of the total size (and of the effort derived from it):
SD = (Σ_i SD_i²)^(1/2)
 NB: SD measures the amount of variation or dispersion from the average
 A low standard deviation indicates that the data points tend to be very close to the mean
 A high standard deviation indicates that the data points are spread out over a large range of values
47
EXAMPLE
 Team WRT had identified seven subpieces of their project, as shown in the table below, with their estimates of the size of each subpiece.

Subpiece Size Estimates (in LOC)
Part  Min Size  Best Guess  Max Size
1     20        30          50
2     10        15          25
3     25        30          45
4     30        35          40
5     15        20          25
6     10        12          14
7     20        22          25
48
Example: LOC Estimation
The estimate for the whole project is 1/6 of the sum of the maximums, the minimums, and 4 times the best guesses.

First find the estimate for each part:
 P1: (20 + 4*30 + 50)/6 = 190/6 = 31.67
 P2: (10 + 4*15 + 25)/6 = 95/6 = 15.83
 P3: (25 + 4*30 + 45)/6 = 190/6 = 31.67
 P4: (30 + 4*35 + 40)/6 = 210/6 = 35.00
 P5: (15 + 4*20 + 25)/6 = 120/6 = 20.00
 P6: (10 + 4*12 + 14)/6 = 72/6 = 12.00
 P7: (20 + 4*22 + 25)/6 = 133/6 = 22.17

The estimate for the whole project is the sum of the estimates for each part:
Whole = 31.67 + 15.83 + 31.67 + 35.00 + 20.00 + 12.00 + 22.17 = 168.34 LOC
49
Estimate for the standard
deviation of the estimate

Standard deviation
= (1/6) * ((50 − 20)² + (25 − 10)² + (45 − 25)² + (40 − 30)² + (25 − 15)² + (14 − 10)² + (25 − 20)²)^(1/2)
= (1/6) * (900 + 225 + 400 + 100 + 100 + 16 + 25)^(1/2)
= 42.03/6
≈ 7.0 LOC
50
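The three-point estimate and standard deviation calculations above are easy to script. A minimal Python sketch using Team WRT's numbers (the variable names are illustrative, not from the slides):

```python
# Three-point (PERT-style) size estimate:
#   estimate_i = (min_i + 4*best_i + max_i) / 6,  sd_i = (max_i - min_i) / 6
# Total estimate = sum of piece estimates; total SD = sqrt(sum of sd_i**2).

pieces = [  # (min, best, max) in LOC -- Team WRT's seven subpieces
    (20, 30, 50), (10, 15, 25), (25, 30, 45), (30, 35, 40),
    (15, 20, 25), (10, 12, 14), (20, 22, 25),
]

total = sum((mn + 4 * best + mx) / 6 for mn, best, mx in pieces)
sd = sum(((mx - mn) / 6) ** 2 for mn, _, mx in pieces) ** 0.5

print(round(total, 1))  # 168.3 LOC
print(round(sd, 1))     # 7.0 LOC
```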
LOC-BASED COST ESTIMATION

 The basic LOC approach is a formula that matches the historical data. The basic formula has three parameters:
COST = α * (KLOC)^β + γ, where
 Alpha, α, is the marginal cost per KLOC (thousand lines of code).
 This is the added cost for an additional thousand lines of code.
 Beta, β, is an exponent that reflects the nonlinearity of the relationship.
 A value of beta greater than 1 means that the cost per KLOC increases as the size of the project increases. This is a diseconomy of scale: increased per-line-of-code cost.
51
LOC-BASED COST ESTIMATION
COST = α * (KLOC)^β + γ
 A value of beta less than 1 reflects an economy of scale.
 Economies of scale: cost per unit of output generally decreases with increasing scale, as fixed costs are spread out over more units of output
 Some studies have found betas greater than 1, and other studies have betas less than 1.
 The parameter gamma, γ, reflects the fixed cost of doing any project.
 Studies have found both positive gammas and zero gammas
52
EXAMPLE
 Company LMN has recorded the following data
from previous projects.
 Estimate what the parameters for the cost
estimation formula should be and how much
effort a new project of 30 KLOC should take
Historical Data
Proj ID  Size (KLOC)  Effort (PM)
1        50           120
2        80           192
3        40           96
4        10           24
5        20           48

53
[Plot: Effort (PM) vs. Size (KLOC) for the five historical projects]

 Analyzing or plotting this data would show a linear relationship between size and effort
 The slope of the line is 2.4. This would be 𝜶 in the LOC-
based cost estimation formula.
 Since the line is straight (linear relationship), then 𝜷 is 1.
 The 𝜸 value would be zero. 54
Example

COST = α * (KLOC)^β + γ
= 2.4 * 30^1 + 0
= 2.4 * 30
= 72 PM

55
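The parametric model can be packaged as a small function. A Python sketch with LMN's fitted parameters as defaults (the function name is mine):

```python
# Parametric LOC-based cost model: effort = alpha * KLOC**beta + gamma.
# Defaults are the parameters fitted to company LMN's history:
# alpha = 2.4, beta = 1 (linear), gamma = 0 (no fixed cost).

def effort_pm(kloc, alpha=2.4, beta=1.0, gamma=0.0):
    """Estimated effort in programmer-months for a project of `kloc` KLOC."""
    return alpha * kloc ** beta + gamma

# Sanity-check against the historical data: 50 KLOC -> 120 PM, etc.
history = {50: 120, 80: 192, 40: 96, 10: 24, 20: 48}
for kloc, pm in history.items():
    assert abs(effort_pm(kloc) - pm) < 1e-6

print(effort_pm(30))  # 72.0 PM for the new 30-KLOC project
```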
Example 2 of LOC-Based
Estimation
Function Estimated LOC
User interface and control facilities (UICF) 2,300
Two-dimensional geometric analysis (2DGA) 5,300
Three-dimensional geometric analysis (3DGA) 6,800
Database management (DBM) 3,350
Computer graphics display facilities (CGDF) 4,950
Peripheral control function (PCF) 2,100
Design analysis modules (DAM) 8,400

Estimated lines of code 33,200

56
Example 3 of LOC-Based
Estimation
 Estimated lines of code = W = 33,200
 Let,
 Average productivity = 620 LOC/pm = X
 Labor rate = $8,000 per month = Y
 So,
 Cost per line of code = Z = Y/X = $13 (approx.)
 Total estimated project cost = W*Z = $431,000
(approx.)
 Estimated effort = W/X = 54 person-months
(approx)

57
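The arithmetic in this example, sketched in Python (the slide's rounding is noted in the comments):

```python
# Cost and effort from estimated size, productivity, and labor rate (Example 3).
loc = 33_200          # estimated lines of code (W)
productivity = 620    # LOC per person-month (X)
labor_rate = 8_000    # dollars per person-month (Y)

cost_per_loc = labor_rate / productivity   # Z ~= $12.90/LOC (slide rounds to $13)
total_cost = loc * cost_per_loc            # ~= $428,400 (~$431,000 using $13/LOC)
effort = loc / productivity                # ~= 53.5, i.e. ~54 person-months
```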
CONSTRUCTIVE COST MODEL
(COCOMO)
 COCOMO is the classic LOC cost-estimation
formula.
 It was created by Barry Boehm in the 1970s.
 He used thousand delivered source instructions
(KDSI) as his unit of size.
 KLOC is equivalent.
 His unit of effort is the programmer-month (PM).

58
COCOMO project types
 Boehm divided the historical project data
into three types of projects:
1. Application (separate, organic, e.g., data
processing, scientific)
2. Utility programs (semidetached, e.g.,
compilers, linkers, analyzers)
3. System programs (embedded)

59
COCOMO PARAMETERS
 He determined the values of the parameters
for the cost model for determining effort:
 1. Application programs:
 PM = 2.4 * (KDSI)^1.05
 2. Utility programs:
 PM = 3.0 * (KDSI)^1.12
 3. Systems programs:
 PM = 3.6 * (KDSI)^1.20
60
EXAMPLE
 Calculate the programmer effort for projects from 5 to 45 KDSI
 First column calculations:
1. Application programs:
 PM = 2.4 * (KDSI)^1.05 = 2.4 * 5^1.05 = 13.0056 ≈ 13.0
2. Utility programs:
 PM = 3.0 * (KDSI)^1.12 = 3.0 * 5^1.12 = 18.1957 ≈ 18.2
3. Systems programs:
 PM = 3.6 * (KDSI)^1.20 = 3.6 * 5^1.20 = 24.8351 ≈ 24.8

Size  5K    10K   15K   20K    25K    30K    35K    40K    45K
Appl  13.0  26.9  41.2  55.8   70.5   85.3   100.3  115.4  130.6
Util  18.2  39.5  62.2  86.0   110.4  135.3  160.8  186.8  213.2
Sys   24.8  57.1  92.8  131.1  171.3  213.2  256.6  301.1  346.9
61
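The COCOMO table can be reproduced with a few lines of Python (a sketch; the dictionary keys are my labels for Boehm's three project types):

```python
# Basic COCOMO: PM = a * KDSI**b, with (a, b) depending on project type.
COCOMO = {
    "application": (2.4, 1.05),  # organic
    "utility":     (3.0, 1.12),  # semidetached
    "system":      (3.6, 1.20),  # embedded
}

def cocomo_pm(kdsi, kind):
    """Programmer-months of effort for a project of `kdsi` KDSI."""
    a, b = COCOMO[kind]
    return a * kdsi ** b

# Reproduce the 5K..45K effort table, one row per project type.
for kind in COCOMO:
    print(kind, [round(cocomo_pm(k, kind), 1) for k in range(5, 50, 5)])
```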
Code-based Estimates
 LOC Advantages
 Commonly understood metric

 Permits specific comparison

 Actuals easily measured

 LOC Disadvantages
 Difficult to estimate early in cycle

 Counts vary by language

 Many costs not considered (ex: requirements)

 Code generators produce excess code


62
LOC Estimate Issues
 How do you know how many lines of code in advance?
 What about different languages?
 What about programmer style?
 Statistically: avg. programmer productivity:
3,000 LOC/yr
 Most algorithmic approaches are more
effective after requirements

63
Effort Estimation
 Now that you know the “size”, determine the
“effort” needed to build it
 Various models: empirical, mathematical,
subjective
 Expressed in units of duration
 Man-months (or ‘staff-months’ now)

64
Effort Estimation
 McConnell shows schedule tables for conversion of size to
effort
 As with parametric size estimation, these techniques
perform better with historical data
 Again, not seen in ‘average’ projects
 Often the size and effort estimation steps are combined
(not that this is recommended, but is what often is done)
 “Commitment-Based” Scheduling is what is often done
 Ask developer to ‘commit’ to an estimate (his or her own)

65
FUNCTION POINT ANALYSIS
 Software size is measured by number &
complexity of functions it performs
 More methodical than LOC counts
 House analogy
 House’s Square Feet ~= Software LOC
 # Bedrooms & Baths ~= Function points
 Former is size only, latter is size & function

66
FUNCTION POINT ANALYSIS
 The idea of function points is to identify and
quantify the functionality required for the
project.
 The idea is to count things that will require
processing.
 The classic items to count are as follows:
 Inputs, Outputs, Inquiries, Internal files,
External interfaces
67
Inquiries
 are request-response pairs that do not
change the internal data.
 For example, a request for the address of a
specified employee is an inquiry.
 The whole sequence of asking, supplying the
name, and getting the address would count
as one inquiry.

68
Inputs
 are items of application data that are
supplied to the program.
 The logical input is usually considered
one item and individual fields are not
usually counted separately.
 For example, the input of personal data
for an employee might be considered
one input
69
Outputs
 Outputs are displays of application data.
 This could be a report, a screen, display, or an error
message.
 Again, individual fields are usually not considered
separate outputs.
 If the report has multiple lines, for instance, a line
for each employee in the department, these lines
would all be counted as one output.
 However, some authorities would count summary lines as separate outputs.
70
Internal files
 Internal files are the logical files that the customer
understands must be maintained by the system.
 If an actual file contained 1000 entries of personnel
data, it would probably be counted as one file.
 However, if the file contained personnel data,
department summary data, and other department
data, it would probably be counted as three separate
files for the purposes of counting function points

71
External interfaces
 External interfaces are data that is shared
with other programs.
 For example, the personnel file might be
used by human resources for promotion and
for payroll.
 Thus, it would be considered an interface in
both systems.

72
Counting Unadjusted Function
Points
 The individual function point items are
identified and then classified as simple,
average, or complex.
 The weights from Table on next slide are
then assigned to each item and the total is
summed.
 This total is called the unadjusted
function points.
73
Function Point Weights
Simple Average Complex
Outputs 4 5 7
Inquiries 3 4 6
Inputs 3 4 6
Files 7 10 15
Interfaces 5 7 10

74
An Example of FP-Based
Estimation (1)

Information Domain Value          Count      Weighting factor         FP
                                             (Simple/Average/Complex)
External Inputs (EIs)             3     ×    3  (4, 6)           =    9
External Outputs (EOs)            2     ×    4  (5, 7)           =    8
External Inquiries (EQs)          2     ×    3  (4, 6)           =    6
Internal Logical Files (ILFs)     1     ×    7  (10, 15)         =    7
External Interface Files (EIFs)   4     ×    5  (7, 10)          =   20
Count Total                                                          50

(All items in this example are rated simple, so the simple weights apply.)

Computing function points
75
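A sketch of the unadjusted function point computation in Python, using the weights table above and the counts from this example (the names are mine):

```python
# Unadjusted function points: sum of (count * weight) over the five item types.
WEIGHTS = {  # simple / average / complex weights from the table above
    "inputs":     {"simple": 3, "average": 4,  "complex": 6},
    "outputs":    {"simple": 4, "average": 5,  "complex": 7},
    "inquiries":  {"simple": 3, "average": 4,  "complex": 6},
    "files":      {"simple": 7, "average": 10, "complex": 15},
    "interfaces": {"simple": 5, "average": 7,  "complex": 10},
}

counts = [  # (item type, complexity, count) for the example project
    ("inputs", "simple", 3), ("outputs", "simple", 2),
    ("inquiries", "simple", 2), ("files", "simple", 1),
    ("interfaces", "simple", 4),
]

ufp = sum(n * WEIGHTS[kind][cx] for kind, cx, n in counts)
print(ufp)  # 50 unadjusted function points
```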


An Example of FP-Based
Estimation (2)

76
EXAMPLE 3 of FP Estimation
How to Rate Functions
 The department wants a program that assigns
times and rooms for each course and creates a line
schedule for the courses.
 The department has a list of courses with the name
of the assigned professor and the anticipated size
 The department also has a list of rooms with the
maximum number of students each room will hold
 There are also sets of classes that cannot be taught
at the same time.
 Additionally, professors cannot teach two courses at the same time.
77
EXAMPLE
 This program is much more difficult than the complexity of its inputs and outputs suggests.
 It has two main inputs:
 the file with the list of sections, assigned
professor, and anticipated size,
 and the file with the list of rooms with the
maximum size.
 These two files, although simple to read, will
be difficult to process, so they will be rated complex
78
EXAMPLE
 There will be an additional file with the sets of
classes that cannot be taught at the same time.
 Again, this file is simple in structure but will be difficult to
process.
 The last line has a restriction that is not an input or
output.
 There is an output, the line schedule.
 This is a complex output.
 There are no inquiries or interfaces mentioned, nor
any mention about files being maintained
79
PRODUCTIVITY
 One of the important measures is the productivity of
the software developers.
 This is determined by dividing the total size of the
finished product by the total effort of all the
programmers.
 This has units of LOC/programmer-day.
 An alternative is to measure the productivity in terms
of function points per programmer-day.
 Note that productivity includes all the effort spent in all phases of the software life cycle
80
EXAMPLE
 Company XYZ spent the following effort for
each life cycle phase of the latest project (see
Table next slide).
 Calculate the effort in terms of
LOC/programmer-day and in terms of function
points/programmer day.
 The function point estimate was 50 unadjusted
function points.
 The finished project included 950 lines of code
81
EXAMPLE
Phase Programmer-Days
Requirements 20
Design 10
Implementation 10
Testing 15
Documentation 10
The total effort was 65 programmer-days.
This gives a productivity of:
950/65 = 14.6 lines of code/programmer-day.
Using unadjusted function points (fp),
the productivity is 50 fp/65 days = 0.77 fp/programmer-day.
82
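The productivity arithmetic above as a short Python sketch:

```python
# Productivity = total size / total effort across all life cycle phases.
phases = {  # programmer-days per phase, company XYZ's project
    "Requirements": 20, "Design": 10, "Implementation": 10,
    "Testing": 15, "Documentation": 10,
}
total_days = sum(phases.values())      # 65 programmer-days

loc_productivity = 950 / total_days    # ~14.6 LOC/programmer-day
fp_productivity = 50 / total_days      # ~0.77 fp/programmer-day
```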
AUTOMATED ESTIMATION
TOOLS
 Numerous tools are available on the Internet
that will calculate COCOMO or COCOMO2.
Most have very simple interfaces.
 Search for COCOMO using any browser, and
it should find multiple sites

83
Parametric Method Issues
 Remember: most projects you'll run into don't use these
 Which is 'normal', so don't be surprised
 Or come into a new job and say "Hey, let's use COCOMO"
 These are more effective on large projects
 Where a past historical base exists
 Primary issues for most projects are
 Lack of similar projects
 Thus lack of comparable data
 Catch-22: how to get started
84
Code Reuse & Estimation
 Does not come for free
 Code types: New, Modified, Reused
 If code is more than 50% modified, it’s “new”
 Reuse factors have wide range
 Reused code takes 30% effort of new
 Modified is 60% of new
 Integration effort with reused code almost as
expensive as with new code

85
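One common way to fold these reuse factors into a size estimate is an "equivalent new LOC" figure. A hedged sketch using the percentages above (the helper name and the sample figures are illustrative, not from the slides):

```python
# Equivalent-new-size sketch: reused code counts ~30% of the effort of new
# code, modified code ~60% (code more than 50% modified counts as new).

def equivalent_loc(new, modified, reused):
    """Size in 'equivalent new LOC' for effort estimation."""
    return new + 0.6 * modified + 0.3 * reused

# Hypothetical project: 10,000 new + 4,000 modified + 6,000 reused LOC
print(equivalent_loc(10_000, 4_000, 6_000))  # 14200.0 equivalent new LOC
```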
Estimation Issues
 Quality estimations needed early but information is
limited
 Precise estimation data available at end but not
needed
 Or is it? What about the next project?
 Best estimates are based on past experience
 Politics of estimation:
 You may anticipate a “cut” by upper management
 For many software projects there is little or none
 Technologies change
 Historical data unavailable
 Wide variance in project experiences/types
 Subjective nature of software estimation
86
Over and Under Estimation
 Over estimation issues
 The project will not be funded
 Conservative estimates guaranteeing 100% success may mean
funding probability of zero.
 Parkinson’s Law: Work expands to take the time allowed
 Danger of feature and scope creep
 Be aware of “double-padding”: team member + manager
 Under estimation issues
 Quality issues (short changing key phases like testing)
 Inability to meet deadlines
 Morale and other team motivation issues

87
Estimation Guidelines
 Estimate iteratively!
 Process of gradual refinement
 Make your best estimates at each planning stage
 Refine estimates and adjust plans iteratively
 Plans and decisions can be refined in response
 Balance: too many revisions vs. too few

88
