Why do software projects fail?


Empirical evidence and relevant metrics

Maurizio Morisio, Evgenia Egorova, Marco Torchiano

Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
{maurizio.morisio, eugenia.egorova, marco.torchiano}@polito.it

Abstract. In this paper we present an empirical study on the factors that cause the success
or failure of a software project. The study deals with the definition and validation of
metrics and indicators that make the meaning of success (and, conversely, failure) of a
project more objective, and correlates characteristics of a project (such as the experience
of the staff and of the project manager, the techniques used for requirements engineering,
and so on) with its result. The study, performed on a number of Italian companies, can
easily be replicated in other countries or within a single company, in order to assess its
software processes and steer its process improvement initiatives.

1 Introduction

Software metrics are meant to be measures in the scientific sense of the term, with
the goal of giving objective views of software-related objects and phenomena.
The well-known problem of software development is that there is very little
scientific foundation to the subject. Electrical engineers can use physics as the
foundation of their activities. As a result, the measures they use in their daily work,
such as the volt and the ampere, correspond to well-defined phenomena (electric
potential, electric current), and satisfy the rules and properties defined by theories of
measurement, such as the representational theory [11]. The models they use, such as
Ohm’s law, are experimentally validated and expressed in mathematical form.
In software engineering we have a hard time defining the objects and properties
(what exactly is the size of a software program? And its complexity?). Not
surprisingly, measuring an ill-defined property is a hard task (are Lines of Code a valid
measure of size?). Models relating properties are therefore hard to find and validate.
Some models, for instance the relationship between effort and size, correspond to the
common perception of engineers (larger programs require more effort) but are hard to
express in a general form (is the relationship linear or exponential?). Other models are
simply envisaged but still very fuzzy (what is the relationship between effort in
integration testing and the post-release defect rate?).
Initially, the software engineering community attacked the problem of defining
software measures as a hard-science problem. The Software Science approach by
Halstead [15] in the mid-seventies can be mentioned as an example of the optimism
of that period. However, these approaches completely missed the validation step, and
the importance of human factors.
The key point here is that software development is not a hard science: it is a
human, brain-intensive activity. Human factors play a key role (it is well known that
productivity varies by a factor of ten and more among developers [10]); therefore
software development should be regarded more as a soft science, such as sociology,
economics, and, partially, medicine. The misunderstanding here comes from several
context factors:

• Software runs on computers, hard objects based on hard science and hard
technology.

• A software program in itself is a hard object, always producing the same
results with the same inputs (and we know this is not always true).

• And finally, software programs were developed, especially at the beginning
of the profession, by engineers trained in hard sciences.

But the process of developing a program is not a hard process; it is a design activity
mostly based on human work. On the other hand, calling software development a
soft science may be a compliment to it, and a detriment to sociology and the other soft
sciences: sociologists and economists collect and process a huge amount of hard data
that empirical software engineers can only dream of [7].
Recognizing these factors led some pioneer researchers (such as Vic Basili and
Barry Boehm) to adapt experimentation to the software field. In practice, the idea is to
always start bottom up, observing software development in vivo (in industry) or in
vitro (experiments with students) to identify the key factors that drive the process and
possibly identify relationships among them. Seminal results include the Cocomo
model [4], a model of effort vs. size developed using statistical analysis on data from
a few dozen projects, and the comparison of testing techniques [2].
In the ESE (Experimental Software Engineering) view, which simply applies the
scientific empirical method, a metric or a theory (or model) should be validated before
it can be used successfully in industrial practice. In practice:
1. a metric (or a model) is proposed;
2. empirical studies are planned to validate it;
3. as evidence builds in its favor, the metric or model can be used by other
researchers or practitioners.
The latter phase corresponds to the general use of a metric/model in industrial practice,
which is traditionally the object of interest of the industrial metrics community.
However, as with all scientific theories, a metric/model is considered valid only
temporarily, until a new experiment possibly falsifies it.
One of the key questions for practitioners, and specifically for project managers, is:
what makes a project successful? Are there factors that can help predict whether a
project will be successful? Research on success factors for software projects has been
performed, and it is a notable example of ESE activity that can help practitioners.
In this paper we present an example of an ESE study on success factors for
software projects. The goal is twofold: to present the current status of ESE research on
this topic, and to present and discuss the process of developing and performing such an
experiment. While this study has been performed on projects from different companies,
it could easily be run by a software company on internal projects, as a way to start or
assess a process improvement initiative.
2 State of the art

Forty years after the start of the discipline of software engineering, software
projects still fail. And, according to Cusumano [9], they fail for reasons very similar
to those observed at the beginning of the history of the discipline, usually dated to the
Garmisch NATO Software Engineering Conference in 1968.
While a plethora of papers has been written on the topic, very often we find
confusion between two quite different concepts: what we mean by a successful
project or product (we will call this a success indicator) and what causes a project to be
successful (we will call these success or failure factors). Both indicators and success
factors are, in the end, measures, and should be defined and validated accordingly.
Despite the importance of analyzing key success and failure factors in software
projects, there is only a limited number of studies on the topic. The Chaos report [25] is
the one with the largest sample of projects analyzed, and it points out an impressive
failure rate. However, the methodology followed by the study has never been
published in detail. Glass [14] points out that the sample of projects considered could
be biased towards failures.
What is project success? (Or, in other words, what are the indicators of a successful
project?) The most widely adopted definition is that of the Standish Group [25], i.e.
within planned budget and schedule, and meeting business objectives. Kerzner [17]
adds to the previous definition meeting performance requirements and obtaining user
acceptance.
So what is a project failure? Is a project that is “one day late, or one dollar over
budget, or one requirement short” [20] an unsuccessful one? As Ryan Nelson [20]
states, the definition of success or failure is highly subjective, and it depends on the
stakeholders.
Different authors have asked software developers, managers and other stakeholders
about their perception of success. Several studies have been made to analyze
successful/unsuccessful projects and to define key success factors.
Agarwal and Rathod [1] define three groups of stakeholders:
Programmers/Developers, Project Managers and Customer Account Managers. The
authors conclude that for these stakeholders “meeting the scope of software project,
which comprises the functionality and quality of the project outcome, is the highest
determinant of success”. Their study is based on 105 answered questionnaires from 19
organizations.
Procaccino et al. [23, 21, 22] conducted three empirical studies in 1999, 2001 and
2003, analyzing the responses of 21, 31 and 76 IT professionals about their perception
of success. The later studies confirmed the results of the earlier ones. The findings
suggest that the most important factors are mainly two, “namely the importance of
personal aspects of the work and customer/user involvement” [21]. Personal aspects
include, for example, the feeling of doing a good job and professional growth.
Besides, there are some differences between managers’ and developers’ perceptions
of a project’s success. From the developers’ point of view, the most important factors
are realistic expectations of customers and users, and a well-defined scope of the
project. Managers find the participation of customers and users in estimating project
schedules most important [23]. User involvement is not only an indicator of project
success but also an important critical factor. Besides this one, critical
factors include sponsor support, project manager authority, enough time for
requirements elicitation [23], clear and understandable requirements, a skilled team,
feedback from the project manager and a defined methodology [24].
Verner et al. [28] analyzed 102 projects in the USA and Australia. They identify as
success factors a well-defined scope for the project, confidence of the customer in the
development team, correct management of the requirements, and effective risk
management by the project manager.
Verner and Evanco [27] surveyed 101 respondents about 122 projects. They found
that the following factors explain the major part of project success: a good project
manager and his or her communication skills, good requirements and project vision.
Berntsson-Svensson and Aurum [3] performed an analysis of 27 projects in
15 Swedish and Australian companies. They find that the key success factors are
complete and precise requirements, good size estimation, and involvement of the
customer. They also use a more comprehensive definition of success: beyond the
standard, ‘internal’ definition of a successful project (within schedule, within budget),
they add an ‘external’ view, namely adding value for the customer, measured in terms
of customer satisfaction.
Taylor [26] analyzed 25 outsourced IT projects in Hong Kong. The study identifies
as success factors schedule and budget management, vendor staffing, understanding of
requirements, client expectations, and change management.
From a literature review, Nah et al. [19] identify top management support, a business
plan and vision, and effective communication as critical software project success factors.
Wohlin and Andrews [29] add factors such as project planning and the ability
to follow the plans as crucial for project success.
Besides, there are other studies, such as the NASA report [18] with its DOs and
DON’Ts for project success. In addition to what other authors have pointed out, they
suggest mainly minimizing bureaucracy and not relaxing the quality goals; relaxing
them demotivates developers, since most of them are quality oriented.
In summary, we have seen that the standard definition of project success from the
project management literature, such as “in time, within budget”, has been extended by
many researchers to include other stakeholders’ points of view. For developers, a
project is successful when they understand the requirements, are not under schedule
pressure, do a good job and grow professionally. For managers, user satisfaction is
an indicator of a project’s success. Robert Glass has proposed this formula to
summarize it all: “User satisfaction = quality product + meets requirements + delivered
when needed + appropriate cost.” [12]. The critical factors that explain project success
are mainly: customer trust and involvement; experienced managers who perform risk,
schedule and requirements management and negotiation; complete and precise
requirements; good estimation of effort and workload; support from sponsors and top
management.

3 The case study

Our goal is to gain more knowledge on the issue of failures and successes of
software projects. On the one hand, the previous studies have a limited scope, both in the
number of projects considered and in their geographical location. On the other hand,
software development has changed in recent years, because of outsourcing, off-shoring,
the emphasis on web-based, distributed applications, and the integration of open source
or commercial components.
Besides, from a methodological point of view, the results of the previous studies
can only partially be merged. The model underlying all the studies is the one reported in
Figure 1, but factors and project success or failure are defined in different ways.

[Figure 1 (diagram): factors -> software project -> result (success or failure)]
Figure 1. The model relating factors and the result of a software project.

Therefore, the starting point of any study could be a careful definition of success or
failure, and a comprehensive definition of factors.
As discussed in the previous section, the more general definition of the result of a
project considers two points of view:

• internal, or strictly related to project management: satisfaction of budget,
scope and schedule constraints;

• external, or more related to customer satisfaction: the project produces a
software product or service that is used by, and is useful for, the customer.
Unfortunately, there are too many different definitions of success (both internal and
external); therefore in our study we do not provide a definition of success. We prefer
to ask the respondents to point out whether a project is successful or not. In other
words, the goal here is to derive a definition of success from the answers of the
interviewees, instead of imposing our own definition.
Next, a study has to provide a list of factors that can influence the result of a
project towards success or failure. We use the same list of factors as [3] in order to be
able to combine the results from the two studies. The factors are listed in Table 1
together with a short description. Basically, these factors cover the size of the project
(in terms of effort spent and number of staff involved), the project manager (in terms
of his/her experience), the requirements engineering process, and the project
management process (including whether the project ended on time and within budget).

Stated explicitly, the research question underlying this study is:

“What is the effect of certain factors on the success or failure of projects?”
Metric         Type                            Description

Dependent
S_F            Nominal {FAILURE, SUCCESS}      Outcome of the project

Independent
PM_EXP         Ordinal: 6-point Likert scale   Experience of PM
PM_INSIGHT     Ordinal: 6-point Likert scale   Does PM understand client/market requests
PM_CHANGE      Boolean                         Was PM changed
PM_EXTRA       Boolean                         Did PM allow extra hours
STAFF_EXTRA    Boolean                         Were extra hours paid
REQ_DEF        Boolean                         Tools/methods used for requirements elicitation and definition
REQ_INIT       Boolean                         Requirements collected before starting project
REQ_AFTER      Boolean                         If not before, requirements completed during project
REQ_RES_TIME   Boolean                         Enough time devoted to requirements definition
OBJ_DEF        Boolean                         Objectives well defined
PROJ_PLAN      Boolean                         Project plan was well defined
PLAN_QUAL      Ordinal: 6-point scale          Project plan quality
IN_BUDGET      Boolean                         Project ended within budget
IN_TIME        Boolean                         Project ended within schedule
STAFF_ADD      Boolean                         Staff added in-course to keep up with schedule
CUST_PAY       Boolean                         Customer paid for development
CUST_INVOLV    Boolean                         Customer was closely involved in the project
RISK_IDENT     Boolean                         Risks were identified at project beginning

Table 1: Success factors.
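For illustration only, the data collected for each project can be thought of as a record with one field per metric in Table 1. The sketch below is written in Python, a language of our choosing (the study itself does not prescribe any encoding); the field names follow Table 1, and the types reflect our reading of the Type column.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Outcome(Enum):
    FAILURE = 0
    SUCCESS = 1

@dataclass
class ProjectRecord:
    # Dependent variable (S_F in Table 1)
    s_f: Outcome               # outcome of the project
    # Independent variables: ordinal answers as integers in 1..6,
    # boolean answers as bool, following the Type column of Table 1
    pm_exp: int                # experience of PM (1..6)
    pm_insight: int            # PM understands client/market requests (1..6)
    pm_change: bool            # was PM changed
    pm_extra: bool             # did PM allow extra hours
    staff_extra: bool          # were extra hours paid
    req_def: bool              # tools/methods used for requirements elicitation and definition
    req_init: bool             # requirements collected before starting project
    req_after: Optional[bool]  # if not collected before, completed during project
    req_res_time: bool         # enough time devoted to requirements definition
    obj_def: bool              # objectives well defined
    proj_plan: bool            # project plan was well defined
    plan_qual: int             # project plan quality (1..6)
    in_budget: bool            # project ended within budget
    in_time: bool              # project ended within schedule
    staff_add: bool            # staff added in-course to keep up with schedule
    cust_pay: bool             # customer paid for development
    cust_involv: bool          # customer was closely involved in the project
    risk_ident: bool           # risks were identified at project beginning

Keeping the ordinal answers as plain integers makes it straightforward to feed them to the rank-based tests used in Section 3.1.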

In other words, are there any actions, behaviors or attitudes that are clearly different
in a successful project than in a failed one? To support this comparison of successful
and failed projects, each respondent was asked to consider two projects he or she knew
well, one failed and one successful, and to answer the questions twice, once per
project.
We designed the empirical investigation as a case-control study: as already stated,
each interviewee answers about a successful project and a failed one. With such a
design we will not be able to find out the percentage of projects that fail. That figure
is of course considered strictly confidential by software organizations, and it is in any
case outside the scope of our study.
The questions are framed in a questionnaire, largely inspired by [3], that is made of
4 sections:

• General information about respondent and company

• Information about a successful project

• Information about a failed project

• Features that a successful project or product should possess (in general)

The questionnaire was administered, over the period January 2007 to March 2007,
through telephone and personal interviews. The contacts selected were software-related
companies located in the north-west of Italy, sampled randomly from the Italian
yellow pages database.
We actually collected many more measurements than presented here; for the sake
of space we focus only on the metrics that are relevant to the research question under
consideration.
In particular (as shown in Table 1), we have one dependent variable (i.e. the success or
failure of the project) and a set of independent variables representing mostly success
factors. Actually, two of them (whether the project was completed within time or
budget) are success indicators.

3.1 Results

We received answers from 14 companies about 38 projects (20 successful and 18
failed). The staff size per project ranged from 2 to 20, with an average of 5.5 persons.
The effort ranged from 2 to 24 person-months, with an average of 10.
The majority of the projects were bespoke (25), some were mixed (6) or in-house (5),
and just a few were market-driven (2). This distribution reflects well the Italian
software industry as we know it from direct experience.
We performed statistical tests to identify which, if any, of the independent
variables could be a discriminant in determining the success or failure of a project.
For the variables measured on a 6-point scale we applied the Mann-Whitney test; for
the boolean variables we built contingency tables and applied Fisher’s exact test. In all
the tests we fixed an alpha level of 5%.
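As a purely illustrative sketch (the study’s own analysis was carried out in R, as noted below), the two tests can be reproduced with Python and scipy roughly as follows; the variable names and the toy data are ours, not taken from the study.

# Sketch of the two statistical tests described above.
# The numbers below are made up for illustration; they are not the study's data.
from scipy.stats import mannwhitneyu, fisher_exact

ALPHA = 0.05  # significance level used in all the tests

# Ordinal (6-point) variable, e.g. PLAN_QUAL, split by project outcome:
# the Mann-Whitney test compares the two groups without assuming normality.
plan_qual_success = [5, 5, 4, 6, 5, 5, 4, 6]   # hypothetical answers
plan_qual_failure = [3, 2, 4, 3, 3, 2, 4, 3]   # hypothetical answers
u_stat, p_mw = mannwhitneyu(plan_qual_success, plan_qual_failure,
                            alternative="two-sided")
print(f"Mann-Whitney: U={u_stat:.1f}, p={p_mw:.3f}, "
      f"significant={p_mw < ALPHA}")

# Boolean variable, e.g. PROJ_PLAN: build a 2x2 contingency table
# (rows: successful/failed projects, columns: answer yes/no) and
# apply Fisher's exact test.
table = [[17, 3],    # hypothetical counts for successful projects (yes, no)
         [6, 12]]    # hypothetical counts for failed projects (yes, no)
odds_ratio, p_fisher = fisher_exact(table, alternative="two-sided")
print(f"Fisher exact: p={p_fisher:.3f}, significant={p_fisher < ALPHA}")

A factor is then retained as a discriminant between successful and failed projects when the resulting p-value does not exceed the 5% alpha level.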
The analysis was performed using the R statistical package¹. The results from the
tests are summarized in Table 2; the statistically significant differences are marked
with an asterisk.
An important difference with respect to the original work presented in [3] originates
from the structure of the questionnaire: we sampled one failed and one successful
project per respondent. This approach enabled us to meet the conditions needed to
apply statistical tests to identify the discriminating factors.
The results of our analysis confirm several laws and hypotheses presented in
the software engineering literature [10] and previously hinted at or validated by the
other studies cited in Section 2.

1 http://www.r-project.org/
Metric         Test     Median (Success)  Median (Failure)  p-value
PM_EXP         M-W      5                 4                 0.28
PM_INSIGHT     M-W      5                 4                 0.12
PM_CHANGE      -        (only 1 in 38 projects changed PM)  -
PM_EXTRA       -        (in all projects PMs allowed extra hours)  -
STAFF_EXTRA    Fisher   -                 -                 0.65
REQ_DEF        Fisher   -                 -                 0.47
REQ_INIT       Fisher   -                 -                 0.05 *
REQ_AFTER      Fisher   -                 -                 0.01 *
REQ_RES_TIME   Fisher   -                 -                 0.47
OBJ_DEF        Fisher   -                 -                 0.16
PROJ_PLAN      Fisher   -                 -                 <0.01 *
PLAN_QUAL      M-W      5                 3                 <0.01 *
IN_BUDGET      Fisher   -                 -                 0.52
IN_TIME        Fisher   -                 -                 0.18
STAFF_ADD      Fisher   -                 -                 0.02 *
CUST_PAY       Fisher   -                 -                 0.39
CUST_INVOLV    M-W      5                 4                 0.05 *
RISK_IDENT     M-W      4                 3                 0.01 *

Table 2: Test results (* = statistically significant at the 5% level).

3.2 Discussion

The definition of requirements before the project starts or, if that is not possible, their
completion in the initial phases is a success factor (p=0.05 and p=0.01, respectively).
This supports Glass’s law: requirement deficiencies are the prime source of
project failures [13]. This finding is also in agreement with [3] and [28].
Another important factor for project success is performing project planning
(p<0.01) and doing it well (p<0.01). This finding supports the presence of project
planning (PP) as one of the key process areas defined for level 2 in the Capability
Maturity Model [16].
In the observed projects, we found that adding people to the project led to failure
(p=0.02). We thus get yet another confirmation of Brooks’s law: adding manpower to a
late software project makes it later [6]; or, as we found, it may even make it fail.
Customer involvement is another feature that likely (p=0.05) favors project
success, in agreement with [3]. This view is also supported by the Agile Software
Development Manifesto [8], where customer collaboration is valued over contract
negotiation.
A correct identification of risks at project start will likely (p=0.01) lead to
project success. This finding supports the Boehm hypothesis: project risks can be
resolved or mitigated by addressing them early [5]. This is also in agreement with
[28].
From our data it is evident that being on time or within budget is not an indicator of
success: actually, 70% of the successful projects surveyed were not completed on time,
or within budget, or both. This is in plain contrast with the rather naive Standish Group
definition [25].

4 Conclusions

We have presented an empirical study on the factors that may lead to the success or
failure of a software project. We start from a model with success/failure factors, as
attributes of a project, and success/failure indicators, as attributes that identify the
positive or negative outcome of a project.
At the level of success/failure factors the study confirms results of other studies
and common practices, such as the importance of sound requirements engineering and
project management, involvement of the customer and early identification of risks.
At the level of success/failure indicators, our study suggests that strict project
management metrics (such as on time, on budget) are not perceived by the
respondents as essential.
From a project management perspective, this study adds more evidence to the
identification of successful practices for industry, and it could be replicated within a
company to identify and support practices in a more specific, local way. From a
measurement perspective, it can be seen as an example of how metrics can be
defined from empirical data, instead of from abstract models, to reflect knowledge
that is spread among software projects and developers.

Acknowledgements

We would like to thank Aybuke Aurum for her collaboration, and the master’s
students who worked in the field to collect the data.

References

1. Agarwal, N. and Rathod, U.: Defining success for software projects: an exploratory
revelation. International Journal of Project Management, vol. 24, pp. 358-370, 2006.
2. Basili, V. and Selby, R.: Comparing the Effectiveness of Software Testing Strategies,
IEEE Transactions on Software Engineering, 13(12), 1987, pp. 1278-1296.
3. Berntsson-Svensson, R. and Aurum, A.: Successful Software Project and Products:
An Empirical Investigation. Proceedings Int. Symposium on Empirical Software
Engineering (ISESE 2006), pp.144-153, 2006.
4. Boehm, B.: Software Engineering Economics. Prentice Hall, 1981.
5. Boehm, B.: A Spiral Model of Software Development and Enhancement. IEEE
Computer 21(5), pp. 61-72, 1988.
6. Brooks, F.: The Mythical Man-Month – Essays on Software Engineering. Addison-
Wesley, 1995.
7. Castells, M.: The Information Age: Economy, Society, and Culture. Blackwell Pub,
1999.
8. Cockburn, A.: Agile Software Development, Addison-Wesley, 2002.
9. Cusumano, M.: The business of software, Free Press, 2004.
10. Endres, A. and Rombach, D.: A Handbook of Software and System Engineering –
Empirical Observations, Laws and Theories, Addison-Wesley, 2003.
11. Fenton, N. and Pfleeger, S.: Software Metrics: A Rigorous and Practical Approach.
Second ed., Thomson Computer Press, 1998.
12. Glass, R.: Frequently forgotten fundamental facts about software engineering.
Software, IEEE, 18(3), 2001, pp. 110-112.
13. Glass, R.: Software Runaways. Lessons learned from massive project failures.
Prentice Hall, 1998.
14. Glass, R.: The Standish Report: Does It Really Describe a Software Crisis?
Communications of the ACM, 49(8), pp.15-16, 2006.
15. Halstead, M.: Elements of Software Science. Elsevier Science, 1977.
16. Humphrey, W.: Managing the Software Process. Addison-Wesley, 1989.
17. Kerzner, H.: Project Management: A Systems Approach to Planning, Scheduling, and
Controlling. Van Nostrand Reinhold, United States of America, 1995.
18. Landis, L. et al.: Recommended Approach to Software Development. SEL-81-305,
Greenbelt, Maryland: NASA Goddard Space Flight Center, NASA, 1992.
19. Nah, F., Lau, J. and Kuang J.: Critical factors for successful implementation of
enterprise systems. Business Process Management Journal, 7(3) 2001, pp. 285-296.
20. Nelson, R.: Project Retrospectives: Evaluating Success, Failure and Everything in
Between. MIS Quarterly Executive, 4(3) 2005, pp. 361-372.
21. Procaccino, J. and Verner, J.: Software practitioner’s perception of project success: a
pilot study. International Journal of Computers, The Internet and Management 10(1)
2002, pp. 20-30.
22. Procaccino, J. and Verner, J.: Software project managers and project success: An
exploratory study. Journal of Systems and Software, 79(11), 2006, pp. 1541-1551.
23. Procaccino, J., Verner, J., Overmyer, S. and Darter, M.: Case study: Factors for early
prediction of software development success. Information and Software Technology,
44(1) 2002, 53-62.
24. Procaccino, J., Verner, J., Shelfer, J. and Gefen, D.: What do software practitioners
really think about project success: an exploratory study. Journal of Systems and
Software, 78(2) 2005, pp. 194-203.
25. Standish Group, “The Chaos report”, 1994, accessed at
http://www.standishgroup.com/sample_research/chaos_1994_1.php
26. Taylor, H.: Critical risks in outsourced IT projects: the intractable and the
unforeseen. Communication of ACM, 49(11), pp.74-79, 2006.
27. Verner, J. and Evanco, W.: In–house Software Development: What Software Project
Management Practices Lead to Success? IEEE Software, 22(1), 2005, pp. 86-93.
28. Verner, J., Cox, K. and Bleistein, K.: Predicting Good Requirements for In-house
Development Projects. Proceedings Int. Symposium on Empirical Software
Engineering (ISESE 2006), pp.154-163.
29. Wohlin, C. and Andrews, A.: Prioritizing and Assessing Software Project Success
Factors and Project Characteristics using Subjective Data. Empirical Software
Engineering, 8(3), 2003, pp. 285-308.
