
SE403

SOFTWARE PROJECT
MANAGEMENT

CHAPTER 4
SOFTWARE SIZE &
EFFORT ESTIMATION
Assist. Prof. Dr. Volkan TUNALI
Faculty of Engineering / Maltepe University
Overview

- Introduction to Software Size & Effort Estimation
- Software Measurement
- Basics of Software Estimation
- Software Size Estimation Techniques
  - Technical
  - Functional
- Software Effort Estimation Techniques
  - Algorithmic
  - Non-Algorithmic
Introduction

- Size, effort, and cost estimation are hard and important problems faced by developers and managers at the beginning of the software development process.
- Software project management requires good planning of time & work based on measurements & estimates.
Software Metrics & Measurement

- A software metric is any measurable property of software or of a software project.
- The use of software metrics has been gaining attention and importance.
- Organizations use software metrics for:
  - Understanding & modelling software projects
  - Guiding management of software projects
  - Guiding software process development & improvement

Software Metrics & Measurement

- A successful project is one delivered
  - on time,
  - within budget,
  - with the required quality.
- Realistic estimates are therefore crucial.
- Software measurement makes it possible to determine factors such as elapsed time (duration), project size, and quality.
- Organizations can make estimates about future projects based on these measurement data.
- Quality improvement in software projects depends on the right measurement methods.
- More than one estimation method can be used.
Five Essential Software Metrics

- Size
- Effort
- Cost
- Duration
- Quality
Software Size Estimation Techniques

- Divided into 2 categories:
  - Technical size estimation techniques
  - Functional size estimation techniques
Technical Size Estimation Techniques

- Lines of Code – LOC
  - A conventional & commonly used way of understanding the size of a project
  - Counts the number of code lines in a computer program
  - Simple & directly measurable
  - How to estimate:
    - Divide the project into subunits
    - Estimate LOC for each subunit: Minimum, Average (most likely), Maximum
    - Estimation = (Min + 4 x Avg + Max) / 6
Technical Size Estimation Techniques

- Lines of Code – LOC
  - 1000 LOC of a Java program is 10 times larger than 100 LOC of a Java program.
  - What about comment lines?
  - Effect of experience & expertise (the same feature in less code)
  - Effect of programming language (Assembly vs. Java)
  - Shall we count variable declarations?
  - a.k.a. SLOC – Source Lines of Code
  - Usually expressed as KLOC (thousands of lines of code; K = Kilo = 1000)
Technical Size Estimation Techniques

- Lines of Code – LOC
  - 2 types of LOC measurement:
    - Physical LOC
    - Logical LOC
  - Example 1:

    for (i=0; i<100; ++i) printf("hello"); /* How many lines of code is this? */

    - 1 physical code line
    - 2 logical code lines (for & printf)
    - 1 comment line
Technical Size Estimation Techniques

- Lines of Code – LOC
  - Example 2: the same code written with a different coding style & layout:

    for (i=0; i<100; ++i)
    {
        printf("hello");
    } /* Now how many lines of code is this? */

    - 4 physical code lines
    - 2 logical code lines (for & printf)
    - 1 comment line
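A minimal sketch of a physical-LOC counter in Python; it counts only non-blank physical lines, since logical-LOC and comment-line counting rules are tool- and language-specific, as the two examples above show:

```python
def count_physical_loc(source: str) -> int:
    """Naive physical LOC: every non-blank line counts, including
    lines that hold only a brace or a comment."""
    return sum(1 for line in source.splitlines() if line.strip())

# The C snippet from Example 2 above.
snippet = """for (i=0; i<100; ++i)
{
    printf("hello");
} /* Now how many lines of code is this? */"""
```

Here `count_physical_loc(snippet)` gives 4, matching the slide; a real counter would additionally separate comment-only lines and logical statements.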
Functional Size Estimation Techniques

- Functional Size Measurement (FSM) is based on the functionality of the software delivered to the customer.
- FSM is measured according to the complexity & size of the software.
- Technical size metrics take the viewpoint of the developer.
- Functional size metrics take the viewpoint of the user.
Functional Size Estimation Techniques

- Function Points – FP
- IFPUG Function Point Analysis – IFPUG FPA
- Mark II Function Points – MK II FP
- Nesma Function Points
- Full Function Points – FFP
- COSMIC Full Function Points – COSMIC FFP
- Object Points
- Object-Oriented Function Points – OO FP
- Object-Oriented Method Function Points – OOmFP
Function Points – FP

- FP allows measurement of productivity in terms of function points per man-month.
- If we can quantify the benefits of the software to the user in terms of "inputs" & "outputs", we can calculate function points.
- FP can then be converted to LOC.
- From there, it is possible to estimate cost, effort & duration.
Function Points – FP

[Diagram: FP counting flow. The counts of External Inputs, External Outputs, External Inquiries, Logical Internal Files, and External Interface Files are adjusted with weight factors and with technical complexity factors to obtain FP; for the conversion of FP to SLOC, factors determined according to the programming language are used.]
Function Points – FP

UFP = ExtInputs x W(1) + ExtOutputs x W(2) + ExtInquiries x W(3) +
      LogicalInternalFiles x W(4) + ExtInterfaceFiles x W(5)

Components                     Simple   Average   Complex
(1) External Inputs               3        5         6
(2) External Outputs              4        6         7
(3) External Inquiries            3        5         6
(4) Logical Internal Files        7       13        15
(5) External Interface Files      5        9        10

The difficulty level of each component is rated as simple, average, or complex using the weights given in the table. Summing these weighted counts yields the Unadjusted Function Points – UFP.
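As a sketch, the UFP formula above can be expressed as a small Python helper; the dictionary layout and names are illustrative, the weights come from the table:

```python
# Weights from the table above (component type x complexity rating).
FP_WEIGHTS = {
    "external_inputs":          {"simple": 3, "average": 5,  "complex": 6},
    "external_outputs":         {"simple": 4, "average": 6,  "complex": 7},
    "external_inquiries":       {"simple": 3, "average": 5,  "complex": 6},
    "logical_internal_files":   {"simple": 7, "average": 13, "complex": 15},
    "external_interface_files": {"simple": 5, "average": 9,  "complex": 10},
}

def unadjusted_fp(counts):
    """counts maps a component type to {rating: number of components}."""
    return sum(FP_WEIGHTS[component][rating] * n
               for component, ratings in counts.items()
               for rating, n in ratings.items())
```

For example, 6 complex external inputs and 4 average logical internal files alone contribute 6 x 6 + 4 x 13 = 88 UFP.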
Function Points – FP

- There are 14 factors that can influence the degree of difficulty associated with implementing the system, each rated:
  - 0: does not exist or no influence
  - 1: insignificant influence
  - 2: low influence
  - 3: medium-level influence
  - 4: high influence
  - 5: strong influence
- DI = Σ (i=1..14) Answer_i
- TCF = 0.65 + 0.01 x DI
- DI: Total Degree of Influence
- TCF: Technical Complexity Factor
Function Points – FP

General System Characteristics (brief description):
1. Data communications: How many communication facilities are there to aid in the transfer or exchange of information with the application or system?
2. Distributed data processing: How are distributed data and processing functions handled?
3. Performance: Was response time or throughput required by the user?
4. Heavily used configuration: How heavily used is the current hardware platform where the application will be executed?
5. Transaction rate: How frequently are transactions executed: daily, weekly, monthly, etc.?
6. On-line data entry: What percentage of the information is entered on-line?
7. End-user efficiency: Was the application designed for end-user efficiency?
8. On-line update: How many ILFs are updated by on-line transactions?
9. Complex processing: Does the application have extensive logical or mathematical processing?
10. Reusability: Was the application developed to meet one or many users' needs?
11. Installation ease: How difficult is conversion and installation?
12. Operational ease: How effective and/or automated are start-up, back-up, and recovery procedures?
13. Multiple sites: Was the application specifically designed, developed, and supported to be installed at multiple sites for multiple organizations?
14. Facilitate change: Was the application specifically designed, developed, and supported to facilitate change?
Function Points – FP

- FP is calculated with the following formula:
  - FP = UFP x TCF
- FP is converted to LOC as:
  - LOC = FP x Programming Language LOC Coefficient

Programming Language   LOC/FP
C++                      53
COBOL                   107
DELPHI 5                 18
JAVA 2                   46
VISUAL BASIC 6           24
SQL                      13
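The two formulas above can be sketched in Python; the function names are illustrative, the coefficients come from the table:

```python
# LOC-per-FP coefficients from the table above.
LOC_PER_FP = {"C++": 53, "COBOL": 107, "DELPHI 5": 18,
              "JAVA 2": 46, "VISUAL BASIC 6": 24, "SQL": 13}

def adjusted_fp(ufp, di):
    """FP = UFP x TCF, where TCF = 0.65 + 0.01 x DI."""
    return ufp * (0.65 + 0.01 * di)

def fp_to_loc(fp, language):
    """Convert FP to LOC using the language coefficient."""
    return fp * LOC_PER_FP[language]
```

With UFP = 144 and DI = 53 (the sample project that follows), this yields FP = 169.92 and about 7816 LOC of Java.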
Function Points – FP: Sample Project

1. In the same city there are 5 branches of a medical laboratory which performs blood analyses. There are about 10 data entry operators at each branch.
2. The system will store the prices of the analyses.
3. The system will store patient information.
4. New analyses can be added & updated.
5. An analysis result can be seen only after the authorized lab officer approves the results.
6. The system will calculate the cost of the requested analyses and will print an invoice.
7. The system will print result reports. If there are previous records of the patient, the report will include the previous results.
8. The system will allow customers to log in over the Net using the passwords given to them, and allow them to retrieve analysis results using their analysis numbers.
9. The system will have an interface to the blood analysis device, and results will be transferred to the system directly.
10. The system will do material management, and the stock of each material will be stored. Every lab will make a weekly material request from the main store. The main store will reside in one of the labs. Every material movement into the store & out to the branches will be recorded. The system will give a warning message and a report for every material that is under the critical stock level. The system will provide a monthly material report for each branch.
11. The main server of the labs will reside in one of the branches. In case of failure, the system will continue operation seamlessly using the backup server, which will reside in another branch. Communication will be provided over a leased-line infrastructure.
12. The development team of the system has medium-level experience with Java and system analysis. The team has almost no experience with the functionality. The team has previously developed a few information systems. Developers will use a CASE tool.
Sample Project: Architecture

[Architecture diagram: branch systems B, C, D, and E connect over a fiber-optic link, router, and switch to the Main Server; the Backup Server resides in another branch.]
Sample Project: Lab System

[Context diagram of the Laboratory System. Inputs: patient info, analysis info, analysis approval, invoice data, material request, stock data (store in/out), results from the blood analysis device, and web inquiries (password, analysis number). Outputs: analysis result report, comparative analysis result report, invoice, monthly material report, critical stock level warning, critical stock level report, and the web result.]
Sample Project: Unadjusted FP

- Inputs: 6, Complex
- Outputs: 6, Complex
- Internal Files: 4, Average
  - Patient File
  - Analyses File
  - Invoice File
  - Material Stock File
- External Inquiries: 1, Average
- External Interface Files: 1, Average

UFP = ExtInputs x W(1) + ExtOutputs x W(2) + ExtInquiries x W(3) +
      LogicalInternalFiles x W(4) + ExtInterfaceFiles x W(5)

UFP = 6x6 + 6x7 + 4x13 + 1x5 + 1x9 = 144

Components                     Simple   Average   Complex
(1) External Inputs               3        5         6
(2) External Outputs              4        6         7
(3) External Inquiries            3        5         6
(4) Logical Internal Files        7       13        15
(5) External Interface Files      5        9        10
Sample Project: Adjusted FP

1. How many communication facilities are there to aid in the transfer or exchange of information with the application or system? (3)
2. How are distributed data and processing functions handled? (3)
3. Was response time or throughput required by the user? (4)
4. How heavily used is the current hardware platform where the application will be executed? (4)
5. How frequently are transactions executed: daily, weekly, monthly, etc.? (5)
6. What percentage of the information is entered on-line? (5)
7. Was the application designed for end-user efficiency? (3)
8. How many ILFs are updated by on-line transactions? (5)
9. Does the application have extensive logical or mathematical processing? (2)
10. Was the application developed to meet one or many users' needs? (5)
11. How difficult is conversion and installation? (3)
12. How effective and/or automated are start-up, back-up, and recovery procedures? (5)
13. Was the application specifically designed, developed, and supported to be installed at multiple sites for multiple organizations? (3)
14. Was the application specifically designed, developed, and supported to facilitate change? (3)

DI = Σ (i=1..14) Answer_i = 53
FP = UFP x (0.65 + 0.01 x DI) = 144 x (0.65 + 0.53) = 144 x 1.18 = 169.92
LOC = 46 x 169.92 = 7816.32 ≈ 7816
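The arithmetic on this slide can be checked with a short script (the 14 ratings are copied from the list above):

```python
# The 14 General System Characteristic ratings for the sample project.
di_answers = [3, 3, 4, 4, 5, 5, 3, 5, 2, 5, 3, 5, 3, 3]

di = sum(di_answers)          # Total Degree of Influence
tcf = 0.65 + 0.01 * di        # Technical Complexity Factor
fp = 144 * tcf                # UFP = 144 from the previous slide
loc = 46 * fp                 # 46 = LOC/FP coefficient for Java 2
```

This reproduces DI = 53, FP = 169.92, and roughly 7816 LOC.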
IFPUG Function Point Analysis

- IFPUG – International Function Point Users Group (1984)
- IFPUG encourages the use of FPA for managing application software development and maintenance activities.
- Official IFPUG measurement application guidelines were published in 1986, 1988, 1990, 1994, 1999, 2004 & 2009.
- IFPUG FPA is the most commonly used FSM method.

[Figure: IFPUG function point counting procedure]
Mark II Function Points

- Developed in the 1980s in the UK.
- Essentially an improvement on & replacement of the Albrecht FP method (IFPUG).
- MK II recognizes that a system delivering the same functionality as another may be more difficult to implement (but also more valuable to the users) because of additional technical requirements.
  - E.g. an additional security measure may increase the amount of effort needed to deliver the system.
- MK II is focused on estimating effort rather than estimating SLOC.
- MK II FP is calculated as:
  - (Wi x Input Data Elements) + (We x Entity Types Referenced) + (Wo x Output Data Elements)
- The weights Wi, We, Wo are derived by asking developers the proportions of effort spent in previous projects on the code dealing with inputs, accessing and modifying stored data, and processing outputs, respectively.
- Weights are normalized to a total of 2.5.
- Industry averages are 0.58, 1.66, and 0.26.
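A minimal sketch of the MK II formula, assuming the industry-average weights quoted above (a calibrated organization would substitute its own):

```python
# Industry-average weights, normalized so that they total 2.5.
W_INPUT, W_ENTITY, W_OUTPUT = 0.58, 1.66, 0.26

def mk2_fp(input_elements, entities_referenced, output_elements):
    """MK II FP = Wi x inputs + We x entities + Wo x outputs."""
    return (W_INPUT * input_elements +
            W_ENTITY * entities_referenced +
            W_OUTPUT * output_elements)
```

For instance, a transaction with 10 input data elements, 5 entity types referenced, and 20 output data elements scores 0.58x10 + 1.66x5 + 0.26x20 = 19.3 MK II FP.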
Nesma Function Points

- Netherlands Software Metrics Users Association – NESMA, 1989.
- NESMA is the largest FPA user group in Europe.
- The first version of the FPA application guide containing definitions and measurements was published in 1990.
- The Nesma FP method is based on the IFPUG FPA method. Like IFPUG FPA, Nesma uses External Input, External Output, External Inquiry, Internal Logical File, and External Interface File for measuring functionality.
COSMIC Full Function Points

- COSMIC – Common Software Measurement International Consortium
- COSMIC FFP was published in November 1999 as a new functional size measurement method.
- Previous methods were suitable for information systems, but they are not helpful for real-time or embedded systems.
- COSMIC FFP measures the functional size of software based on 4 data movement types:
  - Entries (E)
  - Exits (X)
  - Reads (R)
  - Writes (W)
Effort Estimation

- Effort (work) is usually measured and expressed in man-hours, man-days, or man-months.
- 10 man-months can be thought of as:
  - 10 people for 1 month
  - 1 person for 10 months
  - 2 people for 5 months
Software Effort Estimation Techniques

Size estimation techniques (LOC, Function Points) and past project data feed effort estimation:
- Using past project data:
  - Effort = Size x Productivity Rate
- Using models:
  - Constructive Cost Model – COCOMO (Boehm)
  - Putnam's Model (SLIM)
  - Use-Case Points
  - Class Points
  - UML Points
Software Effort Estimation Techniques

- Algorithmic estimation techniques
  - COCOMO (Constructive Cost Model)
  - Use-Case Points
  - Class Points
  - UML Points
- Non-algorithmic estimation techniques
  - Expert judgement
  - Estimation by analogy
  - Comparison by size data
Algorithmic Effort Estimation Techniques

- These techniques use mathematical models (formulae) for effort estimation.
- Project effort is computed from estimates of product attributes, such as size, and process characteristics, such as the experience of the staff involved.
- Some parameters of these models may need "calibration" to the environment.
COCOMO (Constructive Cost Model)

- COCOMO is an algorithmic software cost estimation method developed by Barry Boehm in 1981.
- The basic model is built around the equation:
  - Effort = c x (Size)^k
- There are 3 models according to the scope of the calculations: simple, medium, and detailed.
- Systems are classified into 3 modes according to the technical nature of the system and the development environment:
  - Organic mode
  - Embedded mode
  - Semi-detached mode
COCOMO (Constructive Cost Model)

- COCOMO is an open model; that is, its equations and assumptions are open to modification.
- The original COCOMO was based on a study of 63 projects. Of these only 7 were information systems, so the model can be used with applications other than information systems.

[Figure: LOC is fed into the COCOMO model, which outputs Effort and Duration.]
COCOMO – Project Modes

- Organic mode
  - Relatively small software with flexible interface requirements
  - Developed by a small team of developers
  - In a highly familiar in-house environment
  - Software like information systems running on a LAN
- Embedded mode
  - Software has to operate within very tight constraints
  - Changes to the system are very costly
- Semi-detached mode
  - Combines elements of both organic and embedded modes
COCOMO (Constructive Cost Model)

- The simple COCOMO model is suitable for quick estimates for small/medium-size projects.

Project        Effort                         Duration
Organic        Effort = 2.4 (KLOC)^1.05       Duration = 2.5 (Effort)^0.38
Semi-detached  Effort = 3.0 (KLOC)^1.12       Duration = 2.5 (Effort)^0.35
Embedded       Effort = 3.6 (KLOC)^1.20       Duration = 2.5 (Effort)^0.32
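The simple-model equations in the table can be sketched directly in Python (effort in man-months, duration in months):

```python
# (c, k, d) per mode: Effort = c * KLOC**k, Duration = 2.5 * Effort**d.
SIMPLE_COCOMO = {
    "organic":       (2.4, 1.05, 0.38),
    "semi-detached": (3.0, 1.12, 0.35),
    "embedded":      (3.6, 1.20, 0.32),
}

def simple_cocomo(kloc, mode):
    c, k, d = SIMPLE_COCOMO[mode]
    effort = c * kloc ** k          # man-months
    duration = 2.5 * effort ** d    # months
    return effort, duration
```

For a 10 KLOC organic project this gives roughly 26.9 man-months of effort over about 8.7 months.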


COCOMO (Constructive Cost Model)

- The medium COCOMO model takes into consideration such factors as required reliability, database size, execution & storage constraints, personnel attributes, software tools, etc.
- Duration is calculated as in the simple COCOMO.
- EAF = Effort Adjustment Factor

Project        Effort
Organic        Effort = 3.2 (KLOC)^1.05 x EAF
Semi-detached  Effort = 3.0 (KLOC)^1.12 x EAF
Embedded       Effort = 2.8 (KLOC)^1.20 x EAF
COCOMO (Constructive Cost Model)

- The factors (cost drivers) used for the Effort Adjustment Factor are divided into 4 categories.
- EAF is used for the medium and detailed COCOMO.

Cost Drivers                            very low   low   nominal   high   very high   extra high
Product Attributes
RELY  Required Software Reliability        0.75    0.88      1      1.15      1.40
DATA  Database Size                                0.94      1      1.08      1.16
CPLX  Product Complexity                   0.70    0.85      1      1.15      1.30        1.65
Computer Attributes
TIME  Execution Time Constraint                              1      1.11      1.30        1.66
STOR  Main Storage Constraint                                1      1.06      1.21        1.56
VIRT  Virtual Machine Volatility                   0.87      1      1.15      1.30
TURN  Computer Turnaround Time                     0.87      1      1.05      1.15
Personnel Attributes
ACAP  Analyst Capability                   1.46    1.19      1      0.86      0.71
AEXP  Application Experience               1.29    1.13      1      0.91      0.82
PCAP  Programmer Capability                1.42    1.17      1      0.86      0.70
VEXP  Virtual Machine Experience           1.21    1.10      1      0.90
LEXP  Programming Language Experience      1.14    1.07      1      0.95
Project Attributes
MODP  Modern Programming Practices         1.24    1.10      1      0.91      0.82
TOOL  Use of Software Tools                1.24    1.10      1      0.91      0.83
SCED  Schedule Constraints                 1.23    1.08      1      1.04      1.10
COCOMO – Effort Adjustment Factor

- Product Attributes
  - RELY – Required Software Reliability
  - DATA – Database Size
  - CPLX – Product Complexity
- Computer Attributes
  - TIME – Execution Time Constraint
  - STOR – Main Storage Constraint
  - VIRT – Virtual Machine Volatility
  - TURN – Computer Turnaround Time
COCOMO – Effort Adjustment Factor

- Personnel Attributes
  - ACAP – Analyst Capability
  - AEXP – Application Experience
  - PCAP – Programmer Capability
  - VEXP – Virtual Machine Experience
  - LEXP – Programming Language Experience
- Project Attributes
  - MODP – Modern Programming Practices
  - TOOL – Use of Software Tools
  - SCED – Schedule Constraints
COCOMO – Sample Project

Cost Drivers                            very low   low   nominal   high   very high   extra high   Project Rate
Product Attributes
RELY  Required Software Reliability        0.75    0.88      1      1.15      1.40                     1.40
DATA  Database Size                                0.94      1      1.08      1.16                     1
CPLX  Product Complexity                   0.70    0.85      1      1.15      1.30        1.65         1
Computer Attributes
TIME  Execution Time Constraint                              1      1.11      1.30        1.66         1.11
STOR  Main Storage Constraint                                1      1.06      1.21        1.56         1.06
VIRT  Virtual Machine Volatility                   0.87      1      1.15      1.30                     0.87
TURN  Computer Turnaround Time                     0.87      1      1.05      1.15                     1
Personnel Attributes
ACAP  Analyst Capability                   1.46    1.19      1      0.86      0.71                     1
AEXP  Application Experience               1.29    1.13      1      0.91      0.82                     1
PCAP  Programmer Capability                1.42    1.17      1      0.86      0.70                     1
VEXP  Virtual Machine Experience           1.21    1.10      1      0.90                               1
LEXP  Programming Language Experience      1.14    1.07      1      0.95                               1
Project Attributes
MODP  Modern Programming Practices         1.24    1.10      1      0.91      0.82                     0.91
TOOL  Use of Software Tools                1.24    1.10      1      0.91      0.83                     0.91
SCED  Schedule Constraints                 1.23    1.08      1      1.04      1.10                     1.04

Effort Adjustment Factor – EAF (product of the project rates) = 1.23
COCOMO – Sample Project

- Effort = 3.0 x (KLOC)^1.12 x EAF
- Effort = 3.0 x (7.816)^1.12 x 1.23 = 36.9 man-months
- Duration = 2.5 x (Effort)^0.38 = 2.5 x (36.9)^0.38 = 9.84 months (development time)
- N = Effort / Duration (N: average personnel count)
- N = 36.9 / 9.84 = 3.75 ≈ 4 people
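The sample calculation above can be verified with a few lines of Python (note that the size must be expressed in KLOC, i.e. 7.816, not 7816):

```python
kloc = 7.816   # from the FP-based estimate of ~7816 LOC
eaf = 1.23     # Effort Adjustment Factor from the previous slide

effort = 3.0 * kloc ** 1.12 * eaf    # semi-detached, medium COCOMO
duration = 2.5 * effort ** 0.38      # months
people = effort / duration           # average personnel count
```

This reproduces roughly 36.9 man-months, 9.85 months, and about 4 people.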
Use-Case Points – UCP

- The UCP technique was developed by Gustav Karner in 1993 to estimate the software size of object-oriented systems.
- UCP is based on the use-case count.
- In object-oriented software development, use-cases define the functional requirements.
Use-Case Points – UCP

- Use-Case Points – UCP can be obtained by analyzing the use-cases of the system.
- The 1st step in use-case analysis is the classification of actors.

Actor Classification   Type of Actor                                                      Weight
Simple                 External system that interacts with the system using a
                       well-defined API                                                      1
Average                External system that interacts with the system using standard
                       communication protocols (e.g. TCP/IP, FTP, HTTP, database)            2
Complex                Human actor using a GUI application interface                         3

- Unadjusted Actor Weight (UAW)
  - UAW = (No. of Simple actors x 1) + (No. of Average actors x 2) + (No. of Complex actors x 3)
Use-Case Points – UCP

- The 2nd step in use-case analysis is the classification of use-cases.

Use-Case Classification   No. of Transactions      Weight
Simple                    1 to 3 transactions         5
Average                   4 to 7 transactions        10
Complex                   8 or more transactions     15

- Unadjusted Use-Case Weight (UUCW)
  - UUCW = (No. of Simple use-cases x 5) + (No. of Average use-cases x 10) + (No. of Complex use-cases x 15)
- The 3rd step is the calculation of Unadjusted Use-Case Points – UUCP:
  - UUCP = UAW + UUCW
Use-Case Points – UCP

- The 4th step in use-case analysis is the calculation of the Technical Complexity Factor – TCF:
  - TCF = 0.6 + (0.01 x TF)
  - TF is the sum, over the factors in the table, of each factor's assigned value (0–5) multiplied by its weight.

Factor   Description                             Weight
T1       Distributed system                        2.0
T2       Response time/performance objectives      1.0
T3       End-user efficiency                       1.0
T4       Internal processing complexity            1.0
T5       Code reusability                          1.0
T6       Easy to install                           0.5
T7       Easy to use                               0.5
T8       Portability to other platforms            2.0
T9       System maintenance                        1.0
T10      Concurrent/parallel processing            1.0
T11      Security features                         1.0
T12      Access for third parties                  1.0
T13      End-user training                         1.0
Use-Case Points – UCP

- The 5th step in use-case analysis is the calculation of the Environmental Complexity Factor – ECF:
  - ECF = 1.4 + (-0.03 x EF)
  - EF is the sum, over the factors in the table, of each factor's assigned value (0–5) multiplied by its weight.

Factor   Description                                    Weight
E1       Familiarity with the development process used    1.5
E2       Application experience                           0.5
E3       Object-oriented experience of the team           1.0
E4       Lead analyst capability                          0.5
E5       Motivation of the team                           1.0
E6       Stability of requirements                        2.0
E7       Part-time staff                                 -1.0
E8       Difficult programming language                  -1.0

- The 6th step is the calculation of the Use-Case Points – UCP:
  - UCP = UUCP x TCF x ECF
- The last step is the calculation of Effort:
  - Effort = UCP x PF
  - PF: Productivity Factor (20 man-hours per Use-Case Point on average)
Use-Case Points – UCP: Sample Projects

Project information                                                        A      B      C
Actor counts
  Simple: represents another system with a well-defined API                0      1      0
  Average: represents another system communicating via a protocol
           such as TCP/IP                                                  0      6      4
  Complex: represents a user interacting through a web page or GUI         5     11      7
Unadjusted Actor Weight (UAW)                                             15     46     29

Use-case counts
  Simple: simple user interface; communicates with a single database
          object; the normal (success) scenario has 3 or fewer steps
          and the design has 5 or fewer classes                            8      0      2
  Average: average user interface; communicates with two or more
           database objects; the normal (success) scenario has 4 to 7
           steps and the design has 5 to 10 classes                       12     21     17
  Complex: complex user interface; communicates with three or more
           database objects; the normal (success) scenario has 8 or
           more steps and the design has 11 or more classes                5     63      8
Unadjusted Use-Case Weight (UUCW)                                        235   1155    300

Technical factors
  T1  Distributed system                                                   1      5      4
  T2  Response or throughput performance objectives                        3      3      3
  T3  End-user efficiency                                                  3      4      4
  T4  Complex internal processing                                          3      3      1
  T5  Code reusability                                                     0      3      1
  T6  Ease of installation                                                 0      1      0
  T7  Ease of use                                                          5      5      5
  T8  Portability                                                          0      3      0
  T9  Ease of change                                                       3      3      2
  T10 Concurrency                                                          0      5      1
  T11 Special security features                                            0      5      1
  T12 Direct access for third-party software                               3      3      1
  T13 User training requirement                                            0      3      1
Technical Complexity Factor (TCF)                                      0.795   1.11  0.855

Environmental factors
  E1  Familiarity with UML                                                 5      4      3
  E2  Application experience                                               3      4      1
  E3  Object-oriented experience                                           5      5      3
  E4  Lead analyst capability                                              5      4      3
  E5  Motivation                                                           5      5      4
  E6  Stable requirements                                                  3      2      3
  E7  Part-time staff                                                      0      0      0
  E8  Difficult programming language                                       0      4      0
Environmental Complexity Factor (ECF)                                  0.575   0.80  0.815

Productivity Factor                                                       20     20     20
Use-Case Points (UCP)                                                    114   1066    229
Estimated effort (man-hours)                                            2280  21320   4580
Actual effort spent (man-hours)                                         3623  25686   5948
Deviation (1 - Estimate / Actual)                                       0.37   0.17   0.23
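Project A's row in the table can be reproduced end to end with a short script; the factor weights are those from the TCF and ECF tables above, and the table appears to truncate UCP to an integer before multiplying by the Productivity Factor:

```python
# Project A figures from the table.
uaw, uucw = 15, 235
t_weights = [2.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0]
t_ratings = [1, 3, 3, 3, 0, 0, 5, 0, 3, 0, 0, 3, 0]
e_weights = [1.5, 0.5, 1.0, 0.5, 1.0, 2.0, -1.0, -1.0]
e_ratings = [5, 3, 5, 5, 5, 3, 0, 0]

tcf = 0.6 + 0.01 * sum(w * r for w, r in zip(t_weights, t_ratings))
ecf = 1.4 - 0.03 * sum(w * r for w, r in zip(e_weights, e_ratings))
ucp = (uaw + uucw) * tcf * ecf
effort_hours = int(ucp) * 20   # UCP truncated to 114, then PF = 20 h/UCP
```

This reproduces TCF = 0.795, ECF = 0.575, UCP = 114, and 2280 man-hours.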
Class Points – CP

- An FP-like approach
- Focuses on the size of an object-oriented system
- System-level estimation
- Focuses on classes (logical building blocks):
  - Local methods
  - Interactions of the class
  - The attributes
Class Points – CP

- There are 4 main phases:
  - Identification and classification of classes
  - Evaluation of the complexity level of each class
  - Estimation of the Total Unadjusted Class Point
  - Technical Complexity Factor estimation
Class Points – CP

1. Class identification & classification
   - While analyzing the design document, 4 types of system components are used:
     - PDT (Problem Domain Type)
       - Classes representing real-world entities
       - e.g. RegisterCourses, ViewReportCard, MaintainClassStatus
     - HIT (Human Interaction Type)
       - Information visualization and human-computer interaction (e.g. RegisterForm, RegisterConfirmButton)
     - DMT (Data Management Type)
       - Data storage and retrieval (e.g. CourseManagement subsystem)
     - TMT (Task Management Type)
       - Definition and control of tasks (e.g. ManageRegistrationControl, ClassMaintainControl)
Class Points – CP

2. Evaluation of the complexity level of each class
   - Assign a complexity level to each class.
   - CP1 uses:
     - NEM (Number of External Methods)
     - NSR (Number of Services Requested), a measure of the interconnection of system components
   - CP2 uses:
     - NEM and NSR as above, plus NOA (Number of Attributes)
   - Evaluate the complexity level from these counts.
Class Points – CP

Evaluation of the complexity level of a class for CP1:

            0–4 NEM   5–8 NEM   ≥9 NEM
0–1 NSR     Low       Low       Average
2–3 NSR     Low       Average   High
≥4 NSR      Average   High      High

Evaluation of the complexity level of a class for CP2:

(a) 0–2 NSR    0–5 NOA   6–9 NOA   ≥10 NOA
    0–4 NEM    Low       Low       Average
    5–8 NEM    Low       Average   High
    ≥9 NEM     Average   High      High

(b) 3–4 NSR    0–4 NOA   5–8 NOA   ≥9 NOA
    0–3 NEM    Low       Low       Average
    4–7 NEM    Low       Average   High
    ≥8 NEM     Average   High      High

(c) ≥5 NSR     0–3 NOA   4–7 NOA   ≥8 NOA
    0–2 NEM    Low       Low       Average
    3–6 NEM    Low       Average   High
    ≥7 NEM     Average   High      High
Class Points – CP

- Calculation of the Total Unadjusted Class Point – TUCP:

System Component Type   Description         Low          Average      High          Total
PDT                     Problem Domain      … x 3 = …    … x 6 = …    … x 10 = …    …
HIT                     Human Interaction   … x 4 = …    … x 7 = …    … x 12 = …    …
DMT                     Data Management     … x 5 = …    … x 8 = …    … x 13 = …    …
TMT                     Task Management     … x 4 = …    … x 6 = …    … x 9 = …     …

TUCP = Σ (i=1..4) Σ (j=1..3) x_ij x w_ij

- x_ij is the number of classes of component type i with complexity level j
- w_ij is the weighting value for type i and complexity level j
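The TUCP double sum above can be sketched in Python; the dictionary layout is illustrative, the weights w_ij come from the table:

```python
# Weights w_ij from the table (component type x complexity level).
CP_WEIGHTS = {
    "PDT": {"low": 3, "average": 6, "high": 10},
    "HIT": {"low": 4, "average": 7, "high": 12},
    "DMT": {"low": 5, "average": 8, "high": 13},
    "TMT": {"low": 4, "average": 6, "high": 9},
}

def tucp(class_counts):
    """class_counts maps a component type to {level: number of classes}."""
    return sum(CP_WEIGHTS[ctype][level] * n
               for ctype, levels in class_counts.items()
               for level, n in levels.items())
```

For example, 2 low and 1 high problem-domain classes plus 3 average human-interaction classes give 2x3 + 1x10 + 3x7 = 37 TUCP.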
Class Points – CP

- Calculation of the Technical Complexity Factor – TCF.
- Each of the 18 system characteristics is assigned a Degree of Influence (DI):
  - Not present or no influence = 0
  - Insignificant influence = 1
  - Moderate influence = 2
  - Average influence = 3
  - Significant influence = 4
  - Strong influence = 5

C1  Data Communication           C10  Reusability
C2  Distributed Functions        C11  Installation Ease
C3  Performance                  C12  Operational Ease
C4  Heavily Used Configuration   C13  Multiple Sites
C5  Transaction Rate             C14  Facilitation of Change
C6  Online Data Entry            C15  User Adaptivity
C7  End-User Efficiency          C16  Rapid Prototyping
C8  Online Update                C17  Multiuser Interactivity
C9  Complex Processing           C18  Multiple Interfaces

- Summing the DI values gives the Total Degree of Influence – TDI.
Non-Algorithmic Estimation Techniques

- Expert judgement
- Estimation by analogy
- Comparison by size data
Expert Judgement

- Expert judgement is the most widely used effort estimation technique in the software industry.
- It is usually based on past experience with similar projects.
- This method is also used for estimating the effort needed to change an existing piece of software.
- Opinions of more than one expert may need to be combined for a more accurate estimate.
Estimation by Analogy

- In this method, many attributes of the projects are identified.
- These attributes are used to select the past projects most similar to the one being estimated.
- The effort estimate for the new project is derived from the differences between the new project and the past ones.
- This approach requires a database of a large number of past projects.
Comparison by Size Data

- Based on the idea that effort is a function of size and delivery rate:
  - Effort = Project Size / Delivery Rate
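As a sketch, the formula is a one-liner; the units here (size in FP, delivery rate in FP per man-month) are an assumed example, since any consistent size and rate units work:

```python
def effort_from_size(size_fp, delivery_rate_fp_per_month):
    """Effort (man-months) = Project Size / Delivery Rate."""
    return size_fp / delivery_rate_fp_per_month
```

For instance, a 170 FP project delivered at 10 FP per man-month requires about 17 man-months.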
