Conference Paper · November 2014


SOFTWARE PROJECT CHARACTERISTICS AND THEIR
MEASURES: TOWARDS A COMPREHENSIVE FRAMEWORK

Manish Kumar, Infosys Limited, Electronic City, Bangalore, India,


Manish_Kumar28@infosys.com

Shilpi Jain, International Management Institute, New Delhi, India,


shilpijain@imi.edu

Sheetal Payyavula, Infosys Limited, Electronic City, Bangalore, India,


sheetal_payyavula@infosys.com

Jude Fernandez, Infosys Limited, Electronic City, Bangalore, India,


judef@infosys.com

Avitash Purohit, Infosys Limited, Electronic City, Bangalore, India,


avitash_purohit@infosys.com

Abstract
In this paper we present a comprehensive list of project characteristics based on research conducted
in one of the largest software development and IT services organizations, which runs hundreds of
concurrent offshore outsourcing software projects at any time. This list of characteristics is based on
data from three sources: a) existing literature, b) internal company knowledge based on the
experience of past projects, and c) the opinions of industry experts. In addition to aggregating 55
project characteristics available across different research works, we added another 21. We have
defined the characteristics and suggested suitable measures.
This research will help in better understanding the characteristics of outsourced software projects.
A correct understanding of project characteristics can aid in predicting project challenges, which can
then be proactively managed. The characteristics, their definitions, and their measures will be useful
in future empirical research involving the measurement of variables related to software projects.
Keywords: Software Engineering, Software Projects, Software Project Characteristics, Measurement
of Project Characteristics.
1 INTRODUCTION
1.1 Motivation for research
Software projects, specifically offshore outsourcing projects, face increasing demands from their
stakeholders to deliver consistently as planned, in an environment that is becoming more dynamic
and challenging. To meet these needs, one of the conundrums faced by software project managers is
deciding on the most suitable set of processes for their project from the various options available.
This needs to be done in a structured way so as to enable the project to meet its goals in the best
manner. While several process frameworks such as CMMI exist, project managers are sometimes
unable to decide how to tailor their processes to suit specific situations.
Several researchers have accepted ‘software situational factors’ or ‘project characteristics’ (referred
to as Project Characteristics, abbreviated ‘PC’, in the rest of this paper) as important guides for
process tailoring (Kelly & Lee, 2010). In other words, a comprehensive detailing of situational
characteristics, or PC, is of significant value during process planning for a successful project
(Ferratt & Mai, 2010; Kelly & Lee, 2010). A software project characteristic can be defined as any
characteristic or attribute of the project which has a bearing on its overall execution. These
characteristics can cover different aspects, such as the nature of the team (size, capability, location,
levels of experience, etc.), the nature of the application being built (size, technology, complexity,
etc.), and the nature of the customer (domain and technology capabilities, involvement in the
project, culture, etc.).
On analysing the existing literature, we found significant attempts by different researchers to
categorize and define project characteristics. However, most of these characteristics have been
studied for their influence on project success, especially in in-house product development; they are
not readily applicable to software development organizations where development work is
outsourced and offshored. We found few or no studies in the area of IT outsourcing and offshoring.
This paper presents the research we carried out to build a PC framework that can be used in large
software development organizations and in outsourced projects. The framework builds on existing
research and leverages industry learning and insights from experienced industry experts. In this
framework we have retained 55 characteristics from past research and added 21 new characteristics.
We developed definitions for all the PC and also developed measures, in the form of questions, to
measure them.
In section 2 we review the related literature and derive an initial set of PC. In section 3 we discuss
the research design, which defines the research questions and the research method. In section 4 we
present the data collection and analysis: the interviews with industry experts and the categorization
of the PC using KJ analysis (Tomer, 2012). This is followed by the conclusion, limitations and
future research, and references; the Appendix details the list of PC, their definitions, and measures.
2 LITERATURE REVIEW
A detailed literature analysis was conducted; articles were collected through research databases, such
as EBSCOHOST, PMI (Project Management Institute Journal), Informs, Emerald, ABI-Informs, and
Sage. Additionally, conference proceedings (published by PMI, IEEE, and ACM) were also read in
depth for relevant articles. We retrieved 47 articles in which software project characteristics were
used as a main parameter of the research; however, many of these articles repetitively used the same
few characteristics (such as project size, project type, and cost). Those articles were excluded and
the rest were used in the analysis, although the recurring parameters themselves were added to the
list. A number of researchers have worked in this space, several of them indirectly, as PC impacted
their main area of research, while a few have made PC a focus area. For example, Boehm and
Turner (2003) proposed five factors (size, criticality, dynamism, personnel, and culture) to enable
the right balance between agility and the software development process. Kelly and Lee (2010)
examined the impact of PC (such as project innovativeness, strategic relatedness, and resource
requirements) on the role of the direct manager. In another analytical study, McLain (2009) showed
how four PCs (interdependencies among activities, limited information about activity durations,
unfamiliarity, and variety in project work) can quantify uncertainty. Barry et al. (2002) developed a
two-stage conceptual framework to indicate a positive and significant relationship between project
duration and project effort, controlling for other characteristics such as project size and team skills.
In these studies, the number of characteristics was contextual and not comprehensive.
Five research studies deserve special attention because they developed comprehensive sets of
contextual PC, although each study identified the categories under different names. Butler and
Fitzgerald (1999) provide a set of factors affecting the software development process, based on
work limited to one organization. Bern et al. (2007) developed a set of nine contextual factors that
have an impact on the software development process. This list provided a good starting point, but
we found it incomplete when compared with industry practices, as it lacks factors such as ‘project
finance’, ‘business unit sponsorship’, and ‘contract management’, which may have a significant
impact on the emergence of challenges, the selection of the development process, and project
performance. These findings emerged in our interactions with industry experts. A similar effort was
made by Bekkers et al. (2008) in the context of software product management, where they
investigated the most important situational factors (PC) influencing the selection of a method for
software product management. Most of these characteristics were out of our current scope as they
were specific to agile projects, the industry, and the country of origin. In a further study, Dede and
Lioufko (2010) adopted some of the characteristics proposed by Bern et al. (2007) and Bekkers et
al. (2008). They evaluated the gap between acquired and deployed technology for selected agile
process implementations and identified situational factors which influence software development
process selection. Clarke and O’Connor (2012) conducted an elaborate review of the existing
literature in this space and developed a reference framework of PC consisting of 8 classifications
and 44 factors using grounded theory.
A detailed analysis of the literature reveals that it generally agrees on the characteristics that define
a project; yet the grouping of those characteristics varies from one study to another, as shown in
Table 1.

Category: Variables

Contract Management: Scope of Project (Bern et al., 2007), Contract Type (Clarke & O’Connor, 2012; Dede & Lioufko, 2010), Duration of the Project (Barry et al., 2002; Dede & Lioufko, 2010), Schedule Commitment (Barry et al., 2002)

Unit Sponsorship: Unit Management Performance (Ferratt & Mai, 2010), Unit Management Experience (Clarke & O’Connor, 2012), Stability of the Management (Wallace & Keil, 2004), Influence of Organization’s Strategy (Bern et al., 2007), Governance Structure (Wallace & Keil, 2004; Delany & Cunningham, 2010)

Project Finance: Cost Estimate (Clarke & O’Connor, 2012), Available Budget (Delany & Cunningham, 2010)

Customer’s Organization Attributes: Type of Industry (Bekkers et al., 2008), Cooperativeness of the Customer (Boehm & Turner, 2003; Pinto & Pinto, 1990; Schmid, 2006), Departmental Distance between the Client Sponsor Unit and Implementation Team (Lamersdorf & Münch, 2010), Customer Culture (Boehm & Turner, 2003), Time Zone (Lamersdorf & Münch, 2010)

Customer Capability: Technical Knowledge (IT) of the Customer (Schmid, 2006), Domain Knowledge of the Customer (Bern et al., 2007)

Customer Relationship: Type of Customer Involvement (Boehm & Turner, 2003; Dede & Lioufko, 2010), Client’s Commitment on Intermittent Signoffs / Requirement Reviews (Bern et al., 2007)

Project Organization: Multi-Vendor Project (Bekkers et al., 2008; Dede & Lioufko, 2010)

Project Process: Type of Methodology (Zumud, 1980), Nature of Agile Methodologies (Dyba & Dingsoyr, 2008), Programming Practices (Bern et al., 2007)

Infrastructure: Communication Channels (Pinto & Pinto, 1990), Network Access (Lamersdorf & Münch, 2010), Availability of Required Software Tools (Bern et al., 2007), Stability of Hardware and Software (Bern et al., 2007)

Team Capability: Technical Experience of the Team (Wallace & Keil, 2004; Ferratt & Mai, 2010), Domain Knowledge of the Team (Dede & Lioufko, 2010; Wallace & Keil, 2004), Team Size (Boehm & Turner, 2003; Wallace & Keil, 2004), Team Structure (Yang & Tang, 2004)

Team Engagement: Motivation of Team Members (Schmid, 2006), Team Maturity (Clarke & O’Connor, 2012), Cooperation (Clarke & O’Connor, 2012; Pinto & Pinto, 1990), Commitment (Wallace & Keil, 2004), Team Culture (Dede & Lioufko, 2010), Stability of Team (Ferratt & Mai, 2010; Wallace & Keil, 2004), Multi-Language Teams and Location (Lamersdorf & Münch, 2010)

Application Characteristics: Application Type (Clarke & O’Connor, 2012), Technological Complexity (Wallace & Keil, 2004; Tani & Cimatti, 2008), System Complexity (Rasch, Cuccia, & Amer, 1995), Type of Architecture (Bern et al., 2007)

Project Scope: Quality of Requirements (Boehm & Turner, 2003; Clarke & O’Connor, 2012), Scope of the Project (Abdel-Hamid, Sengupta, & Swett, 1999; Dede & Lioufko, 2010), Type of Requirements (Zumud, 1980), Non-Functional Requirements (Glinz, 2007), Documentation of Requirements (Zumud, 1980; Wallace & Keil, 2004), Repeatability Required (Zumud, 1980; Clarke & O’Connor, 2012), Stability of Requirements (Boehm & Turner, 2003; Ferratt & Mai, 2010), Demanding Statutory and Regulatory Requirements (Clarke & O’Connor, 2012), Project Type (Software Education, 2008)

Table 1. Dimensions of Project Characteristics (from Literature)


Post analysis, it was observed that the extant literature lacks a comprehensive list of software
project characteristics of the kind typically needed to evaluate an outsourcing and offshoring
project. Most importantly, the available evidence exhibits a shortage of contributions from industry.
Hence the current research was conceptualized with the following objectives:
1. What are the various PC identified in the academic literature and/or used in industry?
2. How are these characteristics defined?
3. How are these characteristics classified? And,
4. How can they be measured?
We plan to empirically validate the questionnaire with a large population of users in the next phase
of this research; at present, the measures are not empirically validated.
3 RESEARCH DESIGN
A sequential mixed-methods design was used: a literature survey, interviews along with a structured
survey, and finally a KJ analysis (Rasch, Cuccia, & Amer, 1995; Benaroch & Appari, 2010; Spool,
2004). The literature survey of research articles and industry documents helped in developing an
exhaustive list of project characteristics (PC). Several industry documents and templates used in
project planning and risk analysis were also read, which helped add more PC to the list. At this
point, we avoided deleting any characteristic unless it was similar in meaning to another. The next
step was interviews with subject matter experts, mainly to verify whether the available list was
sufficient and which characteristics were missing. Finally, the KJ analysis helped to develop the
classification of these characteristics and to define their measures. The classification was also
triangulated with the literature.
In summary, the study was designed to meet the requirements of positivism, subjectivism and
generalization. To meet the requirements of positivism, we observed the characteristics of several
ongoing and completed projects, reviewed their templates and risk assessment documents. Similarly,
to validate subjectivism we met several project managers to capture their opinions on available and
possible new PC. To meet the requirement of generalization, we followed a systematic scientific
process as discussed and shown in Figure 1.
Figure 1. Research Design

3.1 Exploratory Industry Research

We reviewed several process documents and templates that the software organization uses in
planning, executing, monitoring, and analysing projects. It is important to mention that the
organization has executed thousands of projects, so the list of PC observed in these documents
embodies the accumulated organizational knowledge of that vast experience. In this section, we
discuss some of these PC briefly.
Many of the existing studies use data from product companies and do not seem to have considered
data from IT service companies, including IT offshore vendors. Considering the rapid growth in IT
services outsourcing and offshoring, it is important to include this dimension. In our exploratory
study, for example, respondents mentioned that factors such as ‘customer’s maturity with the
outsourcing model’ and ‘customer’s project management maturity’ have an impact on project
execution and management; nevertheless, we could not find such propositions in the existing
literature. Customers who are familiar with outsourcing, both inshore and offshore, are less wary of
vendors. They can continue to focus on their core competency and comfortably outsource non-core
work such as IT to expert vendors whose core competency is software development. Customers
with adequate project management maturity are easier to work with, as following an established
process leads to better and more predictable outcomes. For example, requirements that are
documented and signed off in a formal requirements document are easier to manage and track than
ad-hoc requirements provided by various members of the customer’s team in e-mails. In scenarios
where multiple teams work on a project, having a common language for both verbal and written
communication (e.g., English) causes less miscommunication and is hence seen as an important
project characteristic.
Contract management is crucial and impacts more than just the immediate project’s outcome; it may
affect the overall relationship between the outsourcing company and its vendors. Ensuring that the
Statement of Work is signed by all stakeholders before project commencement is very important so
that all expectations are set correctly at the start. The contract should contain clauses related to the
service level agreement where applicable. Legal compliance clauses should be included, as every
country has its own laws regarding working-hour policies, visas, overtime, and so on. Liability
clauses should be included to protect the interests of all stakeholders.
Some project-intrinsic PC were also identified, e.g., project processes and application characteristics
such as technology maturity and development language. Working with nascent technologies can
pose unforeseen challenges, as early adopters typically discover. It is also easier to source team
members for commonly used development languages such as Java or .NET.
Requirements are often communicated across organizations. For example, bank A needs to develop
a website to promote a product to its customers. Bank A gives the requirements to its technology
arm B; B outsources the IT project to an offshore company C, whose teams are themselves
distributed across two or more locations. All the team members across these organizations and
locations need to know the criticality of the project to bank A and the business value of the
requirements; this results in more successful project delivery. The list of 21 PC derived from the
industry documents is provided below.
 Familiarity with Outsourcing / Global Delivery Model: Examines the familiarity of the
customer with offshoring and globally distributed teams
 Project Management Maturity: Assessment of the Customer's project management maturity
 Customer's Organization maturity for Outsourcing: Customers who have been doing IT
outsourcing for years have their organizational processes tuned to accommodate the special issues
of outsourcing. Customers new to outsourcing tend to pose additional difficulty in executing the
project successfully.
 Relationship Maturity: Measure of the length of the relationship between the Customer and the
vendor
 IP Usage Policies: Whether the client is willing to use vendor IP or open source. Clients
sometimes perceive high risk in using open source or vendor IP; they may like the IP but ask for
equivalent functionality to be developed for them instead, which increases the effort in the project.
 Communication Language of the Customer
 IT Vendor’s Delivery Leadership’s sponsorship of project: Sponsorship of the project by
Delivery leadership indicates that the management is directly involved in the project.
 Statement of Work (SoW) available (contract): An agreed statement of work defines a
contract which will govern the project.
 Type of SLAs: Service Level Agreements for vendor as well as customer reinforce a mechanism
to monitor it better. Stringent SLA ensures better compliance.
 Legal Compliance Clauses: Examines the nature of the due diligence process followed for
ensuring legal compliance to all relevant local and national laws.
 Liability Clauses: Sometimes there are liability clauses in the Statement of Work which can
increase risk for the vendor
 Multi-unit Project Organization: Multiple units / verticals of the vendor working on parts of
the same program
 Project Profitability: This parameter examines the profit margin of the project
 Testing Scope: The scope of software testing includes examination and execution of the code in
various environments and conditions to ensure that the code does what it is supposed to do. It
elaborates the various types of testing needed along with test coverage. A well-defined test scope
saves time and effort by concentrating on what is important and what is not relevant.
 Testing Methods: Testing methods can cover functional testing approaches such as black box /
white box testing or other approaches such as static analysis / dynamic analysis etc. (These
methods are in addition to the software stage-wise testing, i.e., unit testing, integration testing
etc.).
 Design Methods: Design techniques could include formal methods (e.g., using rigorous
notations, mathematics etc.) and informal methods (e.g., using graphical notations) and could be
top-down or bottom-up approaches. Additionally, design patterns are also used in several
situations.
 Technology Maturity: This captures the aspect of whether the technologies used in the project
have been successfully used in application development across the world
 Development Language: Profile of the development languages needed for the development
 Hardware Constraints: Hardware infrastructure includes servers, client-server machines,
quality of network etc.
 Criticality of the Project for Customer: An IT system is more business critical if its failure
results in some serious catastrophe or loss of business. E.g. systems which support trading on
stock exchanges are more critical than the admission system of a university.
 Business Value of the Project to the Customer: The expected value of the project to the
customer. Business value also drives cooperativeness: if the customer anticipates high business
value from a project, it increases their stake in seeing the project through.
4 DATA COLLECTION AND ANALYSIS
4.1 Interviews
In order to gain more insight into project characteristics and validate our list of PC, we conducted
interviews. The interviews were semi-structured, as qualitatively rich data was expected to help the
research team identify a large number of PC based on project managers’ experiences and
observations. We sent invitation emails to 75 respondents who were evaluating or anchoring
various projects. Most of them were group project leaders, project managers, or solution architects,
or belonged to the client facing group (CFG). In total, we received 26 responses. The respondents
had an average of 15 years’ experience in project evaluation or management and had managed
projects of diverse nature with a minimum team size of 7. Nine of the 26 respondents had worked as
risk analysts in the past; we considered this favorable for our analysis, as risk analysts deal with PC
in detail as part of their job. The average duration of each interview was 75 minutes. Two
researchers participated in each interview so that extensive notes could be taken. A small
questionnaire was prepared, and during the interview session each interviewee was asked to rate the
given PC on a 5-point Likert scale (from least important to highly important). Characteristics with
an average score of 3 or above were retained. A few characteristics were renamed as per the
respondents’ suggestions, and a few were deleted if found similar to others. For example, the
characteristic ‘development team’s ability’ was similar to ‘technical knowledge of team’. Two
characteristics, ‘DBMS type’ and ‘memory constraints’, were cited as low-level details and hence
deleted.
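The retention rule described above can be sketched as a simple filter. This is an illustrative sketch only: the ratings and characteristic names below are invented examples, not the study's actual interview data.

```python
# Sketch of the retention rule: keep a characteristic only if its average
# rating across interviewees is at least 3 on the 5-point Likert scale.
# The ratings below are illustrative, not the study's actual data.

def retained(ratings, threshold=3.0):
    """Return True if the mean Likert rating meets the retention threshold."""
    return sum(ratings) / len(ratings) >= threshold

ratings_by_pc = {
    "Project Profitability": [4, 5, 3, 4],   # mean 4.0 -> retained
    "DBMS Type": [2, 1, 3, 2],               # mean 2.0 -> dropped
}

kept = [pc for pc, r in ratings_by_pc.items() if retained(r)]
```

The same filter extends naturally to the full inventory of PC by loading all interviewees' ratings into the dictionary.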
At the end of the questionnaire, one open-ended question was included to explore whether anything
was missing and whether the respondents wished to add any other characteristic. After deleting
synonyms and mapping against the existing list, 21 new PC were added.

4.2 Data Analysis and findings

Since the list of project characteristics was large and many characteristics looked similar, we
planned to conduct a KJ analysis. The KJ method, also known as the ‘affinity diagram’, is named
for its inventor, Jiro Kawakita (Rasch, Cuccia, & Amer, 1995; Benaroch & Appari, 2010; Spool,
2004). It is a group process for establishing priorities or reaching consensus based on subjective or
qualitative data. During the process, different groups can analyze the same data and will often come
to the same results. The process, in brief, is as follows:

1. Appoint a facilitator
2. Put ideas / thoughts / parameters / keywords on post-it notes
3. Invite subject matter experts to view those post-its
4. Group / categorize the post-its into columns based on the discussion
5. Repeat steps 3-4 until the experts reach consensus
6. Name these categories by common consensus
7. Ask respondents to identify the top 3 or 5 groups that seem most important and explore their
order of preference.
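The outcome of the steps above can be represented with simple data structures: post-its, consensus groups, named categories, and a preference ranking. This is a minimal sketch; the post-it texts, groupings, and category names are invented examples, not the study's output.

```python
# Minimal data-structure sketch of the KJ steps: post-its (step 2) are grouped
# by consensus (steps 4-5), categories are named (step 6), and respondents
# rank the categories (step 7). All values here are illustrative assumptions.

postits = ["Team Size", "Team Structure", "Cost Estimate", "Available Budget"]

# Steps 4-5: an example consensus grouping of the post-its.
groups = [postits[:2], postits[2:]]

# Step 6: name each category by common consensus.
named = dict(zip(["Team Capability", "Project Finance"], groups))

# Step 7: respondents list categories in order of preference; take the top 3.
preferences = ["Team Capability", "Project Finance"]
top = preferences[:3]
```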
For the exercise, we invited a team of 7 experts, all project managers with over 15 years of
experience in project management and a good understanding of software projects. To facilitate the
session and avoid conflicts among the participants, two members of our research team acted as
anchors.
All characteristics were read out one at a time in front of the managers. Each characteristic was
rated on a 3-point Likert scale (least important, somewhat important, and highly important) and
bucketed according to the consensus and scores. In this process, we removed 14 characteristics from
the given inventory. After that, each characteristic was discussed in detail for its classification and
possible measures. The whole exercise was completed in 6 sessions, each lasting 60 minutes. The
final list comprised 76 characteristics grouped into 14 categories. These categories are further
grouped as shown in Figure 2. The detailed definitions and measures for the PC are provided in the
appendix.
The software PC derive partly from the project sponsor organization (the customer), partly from the
IT vendor organization (the IT implementation unit), partly from the relationship between the two,
and partly from the intrinsic characteristics of the project itself. In the rest of this section, we briefly
describe the categories and their component PC.
The first group is named Customer Unit which contains the following categories: customer capability,
customer commitment and customer’s organizational attributes.
The customer capability category includes five characteristics, namely, the customer’s technical
knowledge, domain knowledge, familiarity with the global delivery model, project management
maturity, and organizational maturity for outsourcing. It is easier to arrive at common ground in
negotiations with a customer who has more knowledge about the technology, domain, and
outsourcing, and who is more involved in project management.
The customer commitment category consists of four PC, namely, cooperativeness of the customer,
relationship maturity, type of customer involvement, and client’s commitment on intermittent
signoffs.

The customer organizational attributes category consists of six characteristics, namely, industry,
departmental distance between the client sponsor unit and the implementation team, customer
culture, IP usage policies, time zone / customer location, and communication language of the
customer. The customer’s industry domain has its own specific characteristics which can affect the
software project. For example, the retail industry is very competitive and thrives on fast response
and flexibility, while government or defense tends to employ more systematic approaches while
emphasizing efficient, rule-based decision making. Some industries, like retail, are early adopters of
technology and need faster rollouts, while others, like healthcare and government, are regulation-
heavy. If the customer directly interacts with the development vendor (low customer distance), there
is more clarity in requirements, which leads to timely sign-offs. If the customer culture is similar to
the vendor culture, there are fewer misunderstandings and smoother project execution.
Communication overheads are reduced when the customer and vendor teams have time-zone
overlaps and/or share a common communication language.
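The time-zone overlap mentioned above can be estimated with simple interval arithmetic. The sketch below is illustrative; the office hours and UTC offsets are assumptions chosen for the example, not values from the paper.

```python
# Hedged sketch: estimating daily working-hour overlap between a customer and
# a vendor team, one factor behind the communication overheads noted above.

def overlap_hours(start_a, end_a, start_b, end_b):
    """Working-hour overlap per day, with all times expressed in UTC hours."""
    return max(0, min(end_a, end_b) - max(start_a, start_b))

# Vendor in India (UTC+5:30): 9:00-17:00 local = 3.5-11.5 UTC.
# Customer on the US East Coast (UTC-5): 9:00-17:00 local = 14.0-22.0 UTC.
daily_overlap = overlap_hours(3.5, 11.5, 14.0, 22.0)   # no overlap

# A UK customer (UTC+0, 9:00-17:00) overlaps the Indian working day.
uk_overlap = overlap_hours(3.5, 11.5, 9.0, 17.0)
```

A near-zero overlap suggests that hand-off protocols and asynchronous communication will carry more weight in the project plan.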
The second group, the IT implementation unit, comprises the following categories: team capability,
team engagement, and unit sponsorship.

Figure 2. Categorization of Project Characteristics


Team capability consists of four characteristics, namely, technical experience of the team, domain
knowledge of the team, team size, and role ratio (team structure). A lack of domain knowledge,
technical knowledge, or experienced team members adversely affects the project goals. Role ratio
measures the mix of roles in the project team. It is important to strike an ideal role ratio, as a top-
heavy team or a team comprising a majority of inexperienced members may lead to cost and
schedule overruns.
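One simple way to operationalize the role ratio measure is as the share of senior roles in the team. This is an illustrative sketch: the role names and the top-heaviness threshold are assumptions for the example, not values prescribed by the paper.

```python
from collections import Counter

# Sketch of the "role ratio" measure: the proportion of senior roles in a
# team. Role names and the 0.5 threshold are illustrative assumptions.

def role_ratio(team_roles, senior_roles=frozenset({"manager", "architect", "lead"})):
    """Ratio of senior roles to total team size."""
    counts = Counter(team_roles)
    seniors = sum(n for role, n in counts.items() if role in senior_roles)
    return seniors / len(team_roles)

team = ["manager", "lead", "developer", "developer",
        "developer", "tester", "tester", "developer"]
ratio = role_ratio(team)        # 2 seniors out of 8 members
top_heavy = ratio > 0.5         # flag teams skewed toward senior roles
```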
Team Engagement consists of eight characteristics, namely Motivation of Team Members, Team
Maturity, Cooperation, Commitment, Team Culture, Stability of the team (Attrition), Multi-language
Teams, and Location (Geo Distribution). These eight characteristics influence the effectiveness of the
project team.
Unit Sponsorship refers to the IT vendor’s delivery unit’s sponsorship of the project and contains six
PC, namely, Unit Management Performance (success rate), Unit Management Experience, Delivery
Leadership’s Sponsorship of the Project, Stability of the Management, Influence of Organization’s
Strategy on the Project, and Governance Structure. A clear, capable, and stable unit governance
structure helps provide effective management support to a project.
The third group is named Relationship Properties and consists of three categories: contract
management, project organization and project finance.
Contract Management consists of eight PC, namely, Statement of Work Available (contract),
Management of Change in Project Scope, Type of SLAs, Contract Type (type of billing), Legal
Compliance Clauses, Duration of the Project, Schedule Commitments, and Liability Clauses. A
clear and complete contract detailing the interests, commitments, and liabilities of all stakeholders
involved aids in project governance.
Project Organization consists of two PC, namely, Multi-Vendor Project and Multi-Unit Project
Organization. It is common to outsource projects to multiple vendors, or to a single vendor with
multiple units of the vendor organization participating in the project. Well-established
communication protocols and higher coordination effort are needed when the number of teams
involved in the project is high. Large engagements with very high dependency on the customer
and/or other vendors, coupled with weak or non-existent program-level oversight and monitoring
mechanisms, are high risk and need due diligence.
The Project Finance category consists of four PC: Project Profitability, Cost Estimate, Available
Budget, and Effort Estimate. Project financials directly affect decision making during the project. If
there is a vast difference between the cost estimate and the available budget, the scope may need to
be revised accordingly, or the project executed in phases after requirement prioritization.
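The budget check described above amounts to comparing the cost estimate against the available budget with some tolerance. This is a minimal sketch; the 20% tolerance and the figures are illustrative assumptions, not values from the paper.

```python
# Sketch of the project-finance check: flag a project for re-scoping or
# phased execution when the cost estimate far exceeds the available budget.
# The 20% tolerance threshold is an illustrative assumption.

def needs_rescoping(cost_estimate, available_budget, tolerance=0.20):
    """True if the estimated cost exceeds the budget by more than tolerance."""
    return cost_estimate > available_budget * (1 + tolerance)

flag = needs_rescoping(cost_estimate=1_500_000, available_budget=1_000_000)
ok = needs_rescoping(cost_estimate=1_100_000, available_budget=1_000_000)
```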
The fourth group is named Project Intrinsic properties and captures the categories Project Scope,
Project process, Application characteristics, Infrastructure, and Project Success Criteria.
Project Scope consists of nine PC: Quality of Requirements, Scope of the Project, Type of
requirement documents available, Non Functional Requirements (NFRs), Documentation of
Requirements, Testing Scope, Repeatability Required, Stability of Requirements, Demanding
Statutory and Regulatory Requirements. Volatile requirements or delayed scope sign-off may cause
revenue leakage. Stringent NFRs and strict statutory and legal requirements need extra due diligence
during the requirement gathering phase.
Project Process consists of six characteristics, namely, Type of Methodology, Nature of Agile
Methodology, Programming Practices, Testing Methods, Design Methods and Project Type.
Established project processes enable teams to learn best practices from data of similar projects.
Application Characteristics category consists of seven PC, namely, Number of Technical Skills
Involved, Application Type, System Complexity, Type of Architecture, Technological Complexity,
Technology Maturity and Development Language. The larger the number of technical skills needed
for a project, the larger will be the team size. It is easier to staff teams to work on mature technologies
and commonly used development languages as compared to newer languages and technologies.
Infrastructure category consists of five PC namely Communication Channels, Network Access,
Availability of Required Software tools, Hardware Constraints, and Stability of Hardware and
Software. Availability of reliable and accessible infrastructure is usually assumed during project
planning; thus any infrastructure glitches may throw the project plan off-track.
Project Success Criteria consists of two characteristics namely Criticality of the Project for
Customer and Business Value of the Project to the Customer. Highly critical projects need close
monitoring. Determining the business value of requirements aids in prioritizing and scoping the
requirements when the budget is limited, and also in effective communication to all stakeholders.
Definitions and Measures of PC

For the definitions and measures of these project characteristics, a process was defined by the
authors, summarized below:
1. Adopt definitions for various characteristics from available sources, namely, academic
literature and industry documents.
2. Generate consensus on these definitions from subject matter experts (SMEs); in our case, we
took advice from solution architects, group project managers and client-facing groups. A few
definitions were further modified based on their opinions.
3. Define a scale for each characteristic, first through literature, and then validate it with SMEs.
Sometimes the measures had company-specific terminology, which we modified to make
them more generic. A few measures from the literature were difficult for industry experts to
understand and were thus also modified in consultation with them. The definitions and
measures are given in the annexure.
Validity means that the data must be unbiased and relevant to the characteristic being measured. We
took special care regarding all three types of validity, namely, content validity, criterion validity and
construct validity. Content validity is assessed by the personal judgment of experts in the field. A
few steps were taken during this research to improve the content validity of the measurement
instrument: first, the instrument was built with the help of industry experts; second, to remove
chances of systemic bias, it was validated by three external researchers; third, pre-testing was done
at a limited scale with a different set of industry experts. Criterion validity refers to the validity of
an instrument with respect to external criteria, which can be another measurement instrument. We did
not administer our instrument and an external instrument to the same subjects, but we did adopt the
questions available in academic literature and in industry documents and templates. Here, we relied
more on web-based industry templates, because these tools have been used by hundreds of industry
experts and have been modified over time to accommodate all types of software projects executed
in the company. Construct validity consists of convergent and discriminant
validity. Convergent validity refers to correspondence in results between attempts to measure the
same construct by two or more independent methods. In practice, when we measure a construct by
more than one question, both the questions should measure the same quantity.
It is important to mention here that since we had 76 PC, we measured each PC by one question only, so
that the overall questionnaire length remained manageable. Thus, within the instrument, convergent
validity was not an issue. However, the measures of some characteristics are contextual and can be
extended as per previous studies.
Discriminant validity refers to the fact that different constructs should measure different things. In
practice, for multi-item constructs, the items belonging to a construct should correlate highly with
their own construct and less with other constructs. With single-item constructs, this was not an issue.
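As an illustration of this screening idea for multi-item constructs, the following sketch uses synthetic data and a plain (uncorrected) item-total Pearson correlation; it is not part of the study's analysis:

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

random.seed(0)
n = 200  # simulated respondents

# Two hypothetical 3-item constructs; items within a construct share a
# common latent factor plus independent measurement noise.
factor_a = [random.gauss(0, 1) for _ in range(n)]
factor_b = [random.gauss(0, 1) for _ in range(n)]
items_a = [[f + random.gauss(0, 0.4) for f in factor_a] for _ in range(3)]
items_b = [[f + random.gauss(0, 0.4) for f in factor_b] for _ in range(3)]

# Construct totals (uncorrected: each item is included in its own total).
score_a = [sum(vals) for vals in zip(*items_a)]
score_b = [sum(vals) for vals in zip(*items_b)]

# Discriminant screening: each item should correlate more strongly with
# its own construct's total than with the other construct's total.
for item in items_a:
    assert pearson(item, score_a) > abs(pearson(item, score_b))
```

In practice one would use the corrected item-total correlation (excluding the item from its own total) or a factor-analytic procedure, but the same comparison logic applies.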

5 CONCLUSION
The first objective of this research was to understand and define the PC by assimilating knowledge
from industry practice and academic literature.
Modern software projects are commonly outsourced and executed by multiple teams belonging to
multiple vendors, located across multiple locations and time zones. In the context of outsourced
offshore projects, we observed that the existing set of PC from the literature is not exhaustive and does
not comprehensively define the characteristics of such projects. Consequently, inputs from industry
on how projects have been evaluated in practice are essential. In this work, we have defined 76 PC
incorporating these aspects of modern outsourced software projects. The second objective was to
define them; some were easy to define, while for others we consulted domain experts. The third
objective was to classify them; the KJ method was used, resulting in 14 categories. Finally, the
fourth objective was to propose a scale to measure each PC. We developed a questionnaire which
included the definition and measures for each PC. Pretesting was also done to check the understanding
of the definitions and scales of measure before finalizing them. The definitions and measures of all 76
PC are given in the annexure. This research contributes a comprehensive listing of project
characteristics that contains specific new elements important in IT services and outsourcing projects.
Researchers can build on and further refine this work, and it can also add value to practitioners
during project planning and management.

6 LIMITATIONS AND FUTURE RESEARCH DIRECTIONS


In this study, the respondents belonged to a single large software development organization with
mature project processes. There may be additional characteristics observable in smaller and more
niche software product or services companies and in startups. We aim to refine the measurement
scale after a survey of the characteristics. The PC defined in this paper can be used in further
studies of software projects where PC affect project success, project risk, useful project management
techniques, etc. We plan to use the PC to predict project challenges so that suitable techniques can
be suggested to project managers to achieve the project goals.

References
Abdel-Hamid, T. K., Sengupta, K., & Swett, C. (1999). The Impact of Goals on
Software Project Management: An Experimental Investigation. MIS Quarterly,
23(4), 531-555.
Barry, E. J., Mukhopadhyay, T., & Slaughter, S. A. (2002). Software Project Duration
and Effort: An Empirical Study. Information Technology and Management, 3(1-2),
113-137.
Bekkers, W., Van de Weerd, I., Brinkkemper, S., & Mahieu, A. (2008). The Influence
of Situational Factors in Software Product Management - An Empirical Study.
Second International Workshop on Software Product Management, 2008. IWSPM
'08. (pp. 1 - 8). Barcelona, Catalunya: IEEE Xplore.
Benaroch, M., & Appari, A. (2010). Financial Pricing of Software Development Risk
Factors. IEEE Software, 27, 65-73.
Bern, A., Pasi, S. J., Nikula, U., & Smolander, K. (2007). Contextual Factors
Affecting the Software Development Process – An Initial View. The Second AIS
SIGSAND: European Symposium on Systems Analysis and Design. Poland:
University of Gdansk Press.
Boehm, B., & Turner, R. (2003). Observations on Balancing Discipline and Agility.
Agile Development Conference, 2003 (pp. 32-39). IEEE Xplore.
Butler, T., & Fitzgerald, B. (1999). Unpacking the Systems Development Process: An
Empirical Application of the CSF Concept in a Research Context. Journal of
Strategic Information Systems, 8, 351-371.
Clarke, P., & O’Connor, R. (2012). The situational factors that affect the software
development process: Towards a comprehensive reference framework. Journal of
Information Software and Technology, 54(5), 433-447.
Dede, B., & Lioufko, I. (2010). Situational factors Affecting Software Development
Process Selection. University of Gothenburg: Thesis work for Master of Science in
Software Engineering and Management.
Delany, S. J., & Cunningham, P. (2010). The application of case-based reasoning to
early software project cost estimation and risk assessment. Retrieved September
12, 2012, from A Citeseer Website:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.9932&rep=rep1&type
=pdf
Dyba, T., & Dingsoyr, T. (2008). Empirical studies of agile software development: A
systematic review. Information and Software Technology.
Ferratt, T., & Mai, B. (2010). Tailoring software development. SIGMIS-CPR '10,
Proceedings of the 2010 Special Interest Group on Management Information
System's 48th Annual Conference on Computer Personnel Research on Computer
Personnel Research, 20, pp. 165-170.
Glinz, M. (2007). On Non Functional Requirements. 5th IEEE International
Requirements Engineering Conference (pp. 21-27). IEEE.
Kelly, D., & Lee, H. (2010). Managing Innovation Champions: The Impact of Project
Characteristics on the Direct Manager Role. Journal of Product Innovation
Management, 27(7), 1007-1019.
Lamersdorf, A., & Münch, J. (2010). A multi-criteria distribution model for global
software development projects. Journal of the Brazilian Computer Society, 16(2),
97-115.
McLain, D. (2009). Quantifying project characteristics related to uncertainty. Project
Management Journal, 40(4), 60-73.
Pinto, M. B., & Pinto, J. K. (1990). Project Team Communication and Cross-
Functional Cooperation in New Program Development. Journal of Product
Innovation Management, 200-212.
Rasch, R. H., Cuccia, A. D., & Amer, T. (1995). The Relationship between
Software Project Characteristics, Case Technology and Software Development
Productivity. Journal of Information Technology Management, 6(1), 1-11.
Schmid, B. (2006). Motivation in Project Management: The Project Manager's
Perspective. Electronic Theses, Treatises and Dissertations. Florida, USA.
Retrieved June 15, 2012
Software Education. (2008, January). Project Classification. Retrieved September 14,
2012, from http://www.softed.com/resources/Docs/ProjectClassification.pdf
Spool, J. M. (2004, May 11). The KJ-Technique: A Group Process for Establishing
Priorities. Retrieved July 12, 2012, from A User Interface Engineering Website:
http://www.uie.com/articles/kj_technique/
Tani, G., & Cimatti, B. (2008). Technological Complexity: a Support to Management
Decisions for Product Engineering and Manufacturing. Proceedings of the 2008
IEEM (pp. 6-12). IEEE.
Tomer, S. (2012). It's Our Research-Getting Stakeholder Buy-in for User Experience
Research Projects. Elsevier Inc.
Wallace, L., & Keil, M. (2004). Software project risks and their effect on outcomes.
Communications of the ACM, 47(4), 68-73.
Yang, H.-L., & Tang, J.-H. (2004). Team structure and team performance in IS
development: a social network perspective. Information & Management, 41(3),
335–349.
Zmud, R. W. (1980). Management of Large Software Development Efforts. MIS
Quarterly, 4(2), 45-55.
7 APPENDIX: CATEGORIES, VARIABLES, DEFINITIONS, AND
MEASURES
The 76 PC, grouped into 14 categories, are given here along with their definitions and measures.

Customer Capability

 Technical Knowledge (IT) of the Customer


Definition: This measures the technical knowledge of the customer in the projects' technology.
Measure: The customer is adequately knowledgeable on the relevant technology of the project.
 Domain Knowledge of the Customer
Definition: This measures the knowledge of the customer in the project's business domain.
Measure: The customer is adequately knowledgeable on the business domain of the project.
 Familiarity with the GDM
Definition: Examines the familiarity of the customer with offshoring and globally distributed
teams
Measure: Customer is familiar with offshoring and globally distributed teams
 Project Management Maturity
Definition: Assessment of the Customer's project management maturity
Measure: Customer is very process oriented and has appropriate project management processes
for managing outsourcing projects.
 Customer's Organization maturity for Outsourcing
Definition: Customers who have been doing IT outsourcing for years have their organizational
processes tuned to accommodate the special issues of outsourcing. Customers new to outsourcing
tend to pose additional difficulty in executing the project successfully.
Measure: How evolved are the processes in the customer organization to facilitate outsourcing?

Customer Commitment

 Cooperativeness of the customer


Definition: Examines how the customer team responds to queries and provides reviews / signoffs
in the stipulated time.
Measure: How cooperative is the customer in responding to queries and providing reviews and
signoffs in the stipulated time?
 Relationship Maturity
Definition: Measure of the length of the relationship between the Customer and the vendor
Measure: Number of years since first contract was signed with the customer
 Type of Customer Involvement
Definition: Customers sometimes micromanage the outsourced work and require the vendor
to follow their stipulated project management process, while in other cases they may give the
requirements and get involved only during the final testing stages.
Measure: How involved is the customer during project management?
 Client's commitment on Intermittent Signoffs / Requirement Reviews
Definition: Sometimes clients do not review / sign off on the requirements in a timely manner and
as a result, important feedback is not available.
Measure: Does the client review and sign off the requirements in stipulated time as per contract?

Customer's Organization Attributes

 Type of Industry
Definition: Industry domain e.g.: Banking, Retail
Measure: What is the industry domain of the customer?
 Departmental Distance between the Client Sponsor Unit and Implementation Team.
Definition: Examines how close or distant the implementation team is from the project sponsor.
Measure: What is the departmental distance between project sponsor and project implementation
team?
 Customer Culture
Definition: Indication of differences / issues that arise from the Customer's culture (language
issues, adherence to schedule etc.). E.g., mismatch in holidays, hours of work, language variations,
differences in focus on quality and perfection amongst others.
Measure: How similar or different is the Customer culture from the culture of the team executing
the project?
 IP Usage Policies
Definition: Examines whether the client is willing to use vendor IP or open source. Clients
sometimes perceive high risk in using open source or vendor IP; they may like the IP but ask for
equivalent functionality to be developed for them, which increases the effort in the project.
Measure: Is the client willing to use open source or the development company's IP?
 Time zone / Customer Location
Definition: Customer location/ number of time zones of the customer locations
Measure: What is the time zone of customer location? If there are multiple locations, choose all
the relevant ones.
 Communication Language of the Customer
Definition: Communication language of the customer
Measure: What is the normal communication language of the client?

Team Capability

 Technical Experience of the team


Definition: It designates the technical skill level of the project team members
Measure: The technology experience of the team members suited the needs of the project
 Domain Knowledge of the Team
Definition: It designates the business domain experience level of the project team members
Measure: The business domain experience of the team members suited the needs of the project
 Team Size
Definition: Average number of members in your project
Measure: What was the average size of your project team?
 Role Ratio (Team Structure)
Definition: Measures the ratio of the roles of different players in the project
Measure: Please specify the role ratio in the project for: Project Manager: Technology Lead:
Technology Analyst: Senior Software Engineer + Software Engineer: Others.

Team Engagement

 Motivation of Team members


Definition: Motivation implies the excitement level and the drive to work in the project.
Stimulating and challenging work, fair compensation and suitable rewards and recognition usually
lead to higher motivation.
Measure: Team members in the project were highly motivated
 Team Maturity
Definition: Indicates how long the team members have worked together in the past
Measure: A majority of the team members have worked together in the past on at least one project
 Cooperation
Definition: Cooperation implies the quality of interpersonal relations at work. Highly cooperative
teams resolve conflicts quickly, help each other in addressing problems, and openly share ideas
and resources to complete their work.
Measure: The cooperation in the team was very high
 Commitment
Definition: Commitment to the project among team members
Measure: Team members were highly committed to their work
 Team Culture
Definition: Commonly shared values and behaviors that have an impact on the project.
Measure: The team members shared a strong work culture.
 Stability of the team (Attrition)
Definition: Turnover of personnel, either in the form of moving out or resources shifting from the
project due to changes in organizational priorities
Measure: Attrition in the project was less than 10%.
 Multi-language Teams
Definition: Multi-lingual teams need to have a single, common shared language for ease of
communication at work
Measure: Team members shared a common working language which was comfortable for all of
them.
 Location (Geo Distribution)
Definition: Explores the extent of dispersion of the project execution team.
Measure: Please provide the list of locations where team members were located.

Vendor Unit Sponsorship

 Unit Management Performance (success rate)


Definition: This parameter measures the vendor's Delivery leadership's capability in achieving
the unit's goals. It gives an indication of management capabilities and performance in past projects.
Measure: How do you rate your Delivery unit's performance in similar agile projects in the past?
 Unit Management Experience
Definition: This measures the maturity of the vendor’s Delivery leadership in terms of past
experience in similar (Agile) projects
Measure: How do you rate your Delivery unit's experience in similar agile projects in the past?
 Delivery Leadership’s Sponsorship of project
Definition: Sponsorship of the project by Delivery leadership indicates that the management is
directly involved in the project.
Measure: How will you rate the sponsorship of the project by Delivery leadership?
 Stability of the Management
Definition: If the delivery leadership structure is going through a re-organization, points of
handoffs and responsibilities become ambiguous and decision making becomes tentative.
Measure: Is the Delivery unit leadership structure stable or is it going through an organizational
change (at any point during project execution)?
 Influence of Organization's strategy on the project
Definition: Examines if the project is being taken up to fulfill strategic motives of the vendor. E.g.
if the vendor wants to take over from competition / expand portfolio in certain geographies /
domains etc., then it is easier to get additional resources and / or accept the project at lower
margins.
Measure: Is the project important for the vendor from strategic reasons, like entering a new
domain, a new client, a new geography, or taking over from competition?
 Governance Structure
Definition: Examines how well the governance structure has been defined across the program /
project. E.g. are the points of contacts / relationship managers identified between the client and
the vendor?
Measure: Is the governance structure for project execution clear to the client as well as the
company team?

Contract Management

 Schedule of Work Available (contract)


Definition: An agreed statement of work defines a contract which will govern the project.
Measure: Was a Statement of Work document, prepared by the vendor and detailing the type of
work and the payment terms and conditions, reviewed by all stakeholders and signed in agreement?
 Management of change in Scope of Project
Definition: In software engineering it is a common practice to define a threshold for scope change
Measure: Is there a defined threshold for scope change in your project?
 Type of SLAs
Definition: Service Level Agreements for the vendor as well as the customer reinforce a mechanism
for better monitoring. Stringent SLAs ensure better compliance.
Measure: Please indicate the nature of the SLA with the client for this project
 Contract Type (type of billing)
Definition: This parameter seeks to identify the nature of the contract for this project e.g. fixed
price, time and material etc.
Measure: Please indicate the type of contract for the project
 Legal Compliance Clauses
Definition: Examines the nature of the due diligence process followed for ensuring legal
compliance with all relevant local and national laws.
Measure: For ensuring legal compliance for the project on various dimensions, please indicate the
due diligence process followed in the project:
 Duration of the Project
Definition: Captures the duration of the project in calendar months
Measure: Please mention the duration of the project in months.
 Schedule Commitments
Definition: After the estimation of the project size in terms of effort, an optimal project schedule
is charted taking into account the effort estimate, resource availability, etc. Sometimes this
schedule is crashed based on customer needs. Stringent schedules can become unrealistic if the
crashing exceeds 20% from the optimal schedule.
Measure: Has the project committed to a stringent schedule 20% tighter than optimal?
 Liability Clauses
Definition: Sometimes there are liability clauses in the Statement of Work which can increase risk
for the vendor
Measure: Did the project face high risk exposure due to liability clauses?

Project Organization

 Multi-vendor project
Definition: Multiple vendors working on parts of the same program
Measure: How many vendors are working on parts of the same project? Write one if only a single
vendor is involved.
 Multi-unit Project Organization
Definition: Multiple units / verticals of the vendor working on parts of the same program
Measure: How many units / verticals of the company are working on parts of the same program?

Project Finance

 Project Profitability
Definition: This parameter examines the profit margin of the project
Measure: What was the initial estimated margin (%)?
 Cost Estimate
Definition: This is the cost of the project estimated by the software development company (in
dollar terms)
Measure: Please provide the approximate cost of the project in USD.
 Available Budget
Definition: Approximate Budget available with the customer for this project in dollars
Measure: How much is the budget allocated for the project?
 Effort Estimate
Definition: Effort in terms of estimated person months required for the project
Measure: What was the initial estimate of the effort in terms of person months required for the
project?

Project Scope

 Quality of Requirements
Definition: This examines whether the requirements are clearly defined and understood. A
well-defined requirements model focuses on adequacy, consistency, and verifiability, amongst others.
Measure: The requirements of the project were of good quality
 Scope of the Project
Definition: Project scope includes goals, costs, requirements, tasks, and deliverables which are
defined by the work breakdown structure and WBS Dictionary.
Measure: The scope of the project in consideration was well defined
 Type of Requirement Documents Available
Definition: This is to understand what type of requirement documents (detailed, short summary,
etc.) were available with the team. Detailed requirements documents received at project start are
helpful in project planning
Measure: A detailed requirement document was available with the team before commencing the
project.
 NFRs
Definition: Non-functional requirements such as availability SLAs, response time, security
features, portability etc. are clearly known to the teams.
Measure: Our customer had explicitly stated the Non-Functional Requirements (e.g.
availability, security, data privacy etc.)
Scale: Yes/no
 Documentation of Requirements
Definition: All requirements (including changes) are documented in the standard format with
customer signoffs
Measure: We had documented every set of requirements and customer sign offs obtained
 Testing Scope
Definition: The scope of software testing includes examination and execution of the code in
various environments and conditions to ensure that the code does what it is supposed to do. It
elaborates the various types of testing needed along with test coverage. A well-defined test scope
saves time and effort by concentrating on what is important and what is not relevant.
Measure: The scope of testing for the project was well-defined
Scale: Yes/No
 Repeatability Required
Definition: The project is required to be launched in multiple versions. The customer will
introduce more features in new versions and will require reuse of project components
Measure: Is the project required to be released subsequently in multiple versions requiring reuse
of project components?
 Stability of Requirements
Definition: This parameter seeks to assess the extent of changes in baseline requirements during
the project / sprint
Measure: Please comment on the extent of requirements changes during the project / sprint
 Demanding statutory and regulatory requirements
Definition: This parameter explores the impact of statutory regulations on the business
requirements of the project
Measure: Please mention whether the requirements of the project had certain specific regulatory
requirements.
Project Process

 Type of Methodology
Definition: It is a framework that is used to structure, plan, and control the process of developing
an application (e.g., Waterfall, Agile, etc.)
Measure: Please specify the methodology used for the development
 Nature of Agile Methodology
Definition: Examines the Agile methodology used for project execution
Measure: Please mention the specific agile methodology used in your project
 Programming Practices
Definition: A set of informal rules to improve the quality of applications and simplify their
maintenance (e.g., coding standards, defect logging and tracking, file storage and management
etc.)
Measure: The teams followed standard programming practices in the project
 Testing Methods
Definition: Testing methods can cover functional testing approaches such as black box / white
box testing or other approaches such as static analysis / dynamic analysis etc. (These methods are
in addition to the software stage-wise testing, i.e., unit testing, integration testing etc.).
Measure: Please list the different testing techniques employed in the project
 Design Methods
Definition: Design techniques could include formal methods (e.g., using rigorous notations,
mathematics etc.) and informal methods (e.g., using graphical notations) and could be top-down
or bottom-up approaches. Additionally, design patterns are also used in several situations.
Measure: Please outline the different design techniques used in the project.
 Project Type
Definition: Project types can be Greenfield application development, legacy migration,
enhancements, package implementation, etc.
Measure: Please mention the type of project.

Application Characteristics

 Number of Technical Skills Involved


Definition: Diversity of technology skills required for the project
Measure: Please specify how many technologies were required to be known before commencing
the project
 Application Type
Definition: Examines the nature of the application being developed. E.g. business process
automation, database migration etc.
Measure: Please mention the type of application developed by your project.
 System Complexity
Definition: Examines the complexity of the system along possible dimensions. The complexity
could range from simple code to highly complex code requiring re-entrant and recursive coding,
and scheduling of multiple resources with dynamically changing priorities
Measure: The system complexity was high
 Type of Architecture
Definition: Type of architecture required to develop the project (e.g. client server, n-tier or app
development architecture)
Measure: Please specify the type of architecture used in the project
 Technological Complexity
Definition: Technological complexity means that the technology cannot be understood and
designed by a single technologist, cannot be expressed in detail, and exchanged spanning time and
space.
Measure: The underlying technologies used to develop the project was of a high complexity
 Technology Maturity
Definition: This captures the aspect of whether the technologies used in the project have been
successfully used in application development across the world
Measure: The technology used in the project was mature enough
 Development Language
Definition: Profile of the development languages needed for the development
Measure: Please specify the main development language(s) used in the project.

Infrastructure

 Communication Channels
Definition: Communication channels imply mechanisms to cater to the communication needs of
the project (video conferencing, telephone, shareable editing tools, etc.)
Measure: Please mention all communication channels used by the team in the project
 Network Access
Definition: Implies a common network platform available to all team members for the project
development
Measure: Please specify the network on which the project was executed.
 Availability of Required Software tools
Definition: Software tools are used to create, debug, maintain, or otherwise support applications
(e.g. Quality Center, Rational suite etc.)
Measure: The project was provided with the required tools for efficient execution
 Hardware Constraints
Definition: Hardware infrastructure includes servers, client-server machines, quality of network
etc.
Measure: The project was always provided with the hardware of required specifications and good
network connectivity
 Stability of Hardware and Software
Definition: This explores the stability of all hardware and software resources used in the project
Measure: The hardware and software used in the project were stable and remained unchanged
throughout the project.

Project Success Criteria

 Criticality of the Project for Customer


Definition: An IT system is more business critical if its failure results in some serious catastrophe
or loss of business. E.g. systems which support trading on stock exchanges are more critical than
the admission system of a university.
Measure: How serious will be the impact for the client if the system does not work as intended?
 Business Value of the Project to the Customer
Definition: What is the expected value of the project to the customer? Business value also drives
cooperativeness: if the customer anticipates high business value from a project, their stake in
seeing the project through increases.
Measure: What is the business value of the project for the customer?
