Software Project Characteristics and Their Measures: Towards A Comprehensive Framework
Abstract
In this paper we present a comprehensive list of project characteristics based on research conducted
in one of the largest software development and IT services organizations, which runs hundreds of
concurrent offshore outsourcing software projects at any time. The list of characteristics is based on
data from three sources: a) existing literature, b) internal company knowledge based on the
experience of past projects, and c) the opinion of industry experts. In addition to aggregating 55 project
characteristics available across different research works, we added another 21. We have
defined the characteristics and suggested suitable measures.
This research will help in understanding outsourced software project characteristics better. A
correct understanding of project characteristics can aid in predicting project challenges, which can
then be proactively managed. The characteristics, their definitions, and their measures will be useful
in future empirical research involving measurement of variables related to software projects.
Keywords: Software Engineering, Software Projects, Software Project Characteristics, Measurement
of Project Characteristics.
1 INTRODUCTION
1.1 Motivation for research
Software projects, specifically offshore outsourcing projects, face increasing demands from
their stakeholders to deliver consistently as planned in an environment that is becoming more dynamic
and challenging. To meet these needs, one of the conundrums software project managers face is
deciding on the most suitable set of processes for their project from the various options available.
Obviously, this needs to be done in a structured way so as to enable the project to meet its goals in the
best manner. While several process frameworks such as CMMI exist, project managers are
sometimes unable to decide how to tailor their processes to suit specific situations.
Several researchers have accepted ‘software situational factors’ or ‘project characteristics’ (we will
use the term Project Characteristics, abbreviated as ‘PC’, in the rest of the paper) as
important guides for process tailoring (Kelly & Lee, 2010). In other words, a comprehensive detailing
of ‘situational characteristics’ or PC is of significant value when planning the process for a successful
project (Ferratt & Mai, 2010; Kelly & Lee, 2010). A software project characteristic can be defined as
any characteristic or attribute of the project which has a bearing on its overall execution.
These characteristics can cover different aspects, such as the nature of the team (size, capability,
location, levels of experience, etc.), the nature of the application being built (size, technology,
complexity, etc.), and the nature of the customer (domain / technology capabilities, customer’s
involvement in the project, customer culture, etc.).
On analysing existing literature, we found significant attempts by different researchers to categorize
and define project characteristics.
However, most of these characteristics have been studied for their influence on project success,
especially in in-house product development. They are not readily applicable to software
development organizations where development work is outsourced and offshored; we found few or
no studies in the area of IT outsourcing and offshoring. This paper presents the research we carried
out to build a PC framework which can be used in large software development organizations and
for outsourced projects. The framework builds on existing research and leverages industry
learning and insights from experienced industry experts. In this framework we have retained 55
characteristics from past research and added 21 new ones. We developed definitions for all
the PC and also developed measures, in the form of questions, for each of them.
Section 2 reviews the existing literature on PC. Section 3 examines the PC derived from industry
sources. Section 4 presents the data collection and analysis, including the categorization of the PC
using KJ analysis (Tomer, 2012). Section 5 concludes and discusses limitations and future research.
References are given in section 6, and the Appendix details the list of PC, their definitions, and their
measures.
2 LITERATURE REVIEW
A detailed literature analysis was conducted; articles were collected through research databases such
as EBSCOhost, PMI (Project Management Institute) journals, INFORMS, Emerald, ABI/INFORM, and
Sage. Additionally, conference proceedings (published by PMI, IEEE, and ACM) were read in
depth for relevant articles. We retrieved 47 articles in which software project characteristics were
used as a main research parameter; however, in many of these articles the same few characteristics
(project size, project type, and cost) were used repetitively. Those articles were therefore excluded
and the rest were used in the analysis, though the repeated parameters were added to the list. A
number of researchers have worked in this space, several of them indirectly, as PC affected their main
area of research, while a few have made PC a focus area. For example, Boehm and Turner
(2003) proposed five factors (size, criticality, dynamism, personnel, and culture) to enable the right
balance between agility and discipline in the software development process. Kelly and Lee (2010)
examined the impact of PC (such as project innovativeness, strategic relatedness, and resource
requirements) on the role of the direct manager. In another analytical study, McLain (2009)
showed how four PC (interdependencies among activities, limited information about
activity durations, unfamiliarity, and variety in project work) can quantify uncertainty. Barry et al.
(2002) developed a two-stage conceptual framework indicating a positive and significant relationship
between project duration and project effort, controlling for other characteristics such as project size
and team skills. In these studies, the set of characteristics was contextual rather than comprehensive.
Five research studies merit special attention as they developed comprehensive sets of
contextual PC, though each identified the categories under different names.
Butler and Fitzgerald (1999) provide a set of factors affecting the software development process, based on
work limited to one organization. Bern et al. (2007) developed a set of nine contextual factors
that have an impact on the software development process. This list provided us a good starting point,
but we found it incomplete when compared with industry practice, as it lacks factors such as
‘project finance’, ‘business unit sponsorship’, and ‘contract management’, which may have a
significant impact on the emergence of challenges, the selection of the development process, and project
performance; such findings emerged in our interactions with industry experts. A similar effort was
made by Bekkers et al. (2008) in the context of software product management, where they
investigated the most important situational factors (PC) influencing the selection of a method for
software product management. Most of these characteristics were outside our current scope as they
were specific to agile projects, the industry, and the country of origin. In a further study, Dede
and Lioufko (2010) adopted some of the characteristics proposed by both Bern et al. (2007) and
Bekkers et al. (2008); they evaluated the gap between acquired and deployed
technology for selected agile process implementations and identified situational factors which
influence software development process selection. Clarke and O’Connor (2012) conducted an
elaborate review of the existing literature in this space and developed a reference framework of PC
consisting of 8 classifications and 44 factors using grounded theory.
A detailed analysis of the literature reveals that it generally agrees on the
characteristics that define a project. Yet the grouping of the characteristics changes from one study
to another, as shown in Table 1.
Contract Management: Scope of Project (Bern et al., 2007), Contract Type (Clarke & O’Connor, 2012; Dede & Lioufko, 2010), Duration of the Project (Barry et al., 2002; Dede & Lioufko, 2010), Schedule Commitment (Barry et al., 2002)

Unit Sponsorship: Unit Management Performance (Ferratt & Mai, 2010), Unit Management Experience (Clarke & O’Connor, 2012), Stability of the Management (Wallace & Keil, 2004), Influence of Organization’s Strategy (Bern et al., 2007), Governance Structure (Wallace & Keil, 2004; Delany & Cunningham, 2010)

Project Finance: Cost Estimate (Clarke & O’Connor, 2012), Available Budget (Delany & Cunningham, 2010)

Customer’s Organization Attributes: Type of Industry (Bekkers et al., 2008), Cooperativeness of the Customer (Boehm & Turner, 2003; Pinto & Pinto, 1990; Schmid, 2006), Departmental Distance between the Client Sponsor Unit and Implementation Team (Lamersdorf & Münch, 2010), Customer Culture (Boehm & Turner, 2003), Time Zone (Lamersdorf & Münch, 2010)

Customer Capability: Technical Knowledge (IT) of the Customer (Schmid, 2006), Domain Knowledge of the Customer (Bern et al., 2007)

Customer Relationship: Type of Customer Involvement (Boehm & Turner, 2003; Dede & Lioufko, 2010), Client’s Commitment on Intermittent Signoffs / Requirement Reviews (Bern et al., 2007)

Project Organization: Multi-Vendor Project (Bekkers et al., 2008; Dede & Lioufko, 2010)

Project Process: Type of Methodology (Zumud, 1980), Nature of Agile Methodologies (Dyba & Dingsoyr, 2008), Programming Practices (Bern et al., 2007)

Infrastructure: Communication Channels (Pinto & Pinto, 1990), Network Access (Lamersdorf & Münch, 2010), Availability of Required Software Tools (Bern et al., 2007), Stability of Hardware and Software (Bern et al., 2007)

Team Capability: Technical Experience of the Team (Wallace & Keil, 2004; Ferratt & Mai, 2010), Domain Knowledge of the Team (Dede & Lioufko, 2010; Wallace & Keil, 2004), Team Size (Boehm & Turner, 2003; Wallace & Keil, 2004), Team Structure (Yang & Tang, 2004)

Team Engagement: Motivation of Team Members (Schmid, 2006), Team Maturity (Clarke & O’Connor, 2012), Cooperation (Clarke & O’Connor, 2012; Pinto & Pinto, 1990), Commitment (Wallace & Keil, 2004), Team Culture (Dede & Lioufko, 2010), Stability of Team (Ferratt & Mai, 2010; Wallace & Keil, 2004), Multi-Language Teams and Location (Lamersdorf & Münch, 2010)

Application Characteristics: Application Type (Clarke & O’Connor, 2012), Technological Complexity (Wallace & Keil, 2004; Tani & Cimatti, 2008), System Complexity (Rasch, Cuccia, & Amer, 1995), Type of Architecture (Bern et al., 2007)

Project Scope: Quality of Requirements (Boehm & Turner, 2003; Clarke & O’Connor, 2012), Scope of the Project (Abdel-Hamid, Sengupta, & SwettSource, 1999; Dede & Lioufko, 2010), Type of Requirements (Zumud, 1980), Non-Functional Requirements (Glinz, 2007), Documentation of Requirements (Zumud, 1980; Wallace & Keil, 2004), Repeatability Required (Zumud, 1980; Clarke & O’Connor, 2012), Stability of Requirements (Boehm & Turner, 2003; Ferratt & Mai, 2010), Demanding Statutory and Regulatory Requirements (Clarke & O’Connor, 2012), Project Type (Software Education, 2008)

Table 1. Grouping of project characteristics in existing literature.
3 PROJECT CHARACTERISTICS FROM INDUSTRY SOURCES
We reviewed several process documents and templates used in the software organization for
planning, executing, monitoring, and analysing projects. It is important to mention that the
organization has executed thousands of projects, so the list of PC observed in these documents
carries the accumulated organizational knowledge of that vast experience. In this section, we discuss
some of these PC briefly.
Many of the existing studies use data from product companies and do not seem to have considered
data from IT service companies, including offshore IT vendors. Considering the rapid growth in
IT services outsourcing and offshoring, it is important to include this dimension. In our exploratory
study, for example, respondents mentioned that factors such as ‘customer’s maturity with the outsourcing
model’ and ‘customer’s project management maturity’ have an impact on project execution and
management, yet we could not find such propositions in the existing literature. Customers
who are familiar with outsourcing, both inshore and offshore, are less wary of vendors. They can
continue to focus on their core competency and comfortably outsource non-core work such as IT to
expert vendors whose core competency is software development. Customers with adequate project
management maturity are easier to work with, as following an established process leads to better and
more predictable outcomes. For example, requirements that are documented and signed off in a formal
requirements document are easier to manage and track than ad hoc requirements provided by various
members of the customer’s team over e-mail. Where multiple teams work on a
project, having a common language for both verbal and written communication (e.g., English) causes
less miscommunication and is hence seen as an important project characteristic.
Contract management is crucial, and its impact extends beyond the immediate project’s outcome: it may
affect the overall relationship between the outsourcing company and its vendors. Ensuring that the
Statement of Work is signed by all stakeholders before project commencement is very important
so that all expectations are set correctly at the start. The contract should contain clauses related to the
service level agreement where applicable. Legal compliance clauses should be included in the
contract, as every country has its own laws regarding working hours, visas, overtime, etc.
Liability clauses should be included to protect the interests of all stakeholders.
Some project-intrinsic PC were also identified, e.g., project processes and application characteristics
such as technology maturity and development language. Working with nascent technologies can pose
unforeseen challenges, as early adopters typically experience. It is also easier to source team members for
commonly used development languages such as Java or .NET.
Requirements are often communicated across organizations. For example, bank A needs to develop a
website to promote a product to its customers. Bank A gives the requirements to its technology arm B,
and B outsources the IT project to an offshore company C whose teams are themselves distributed across two or
more locations. All team members across these organizations and locations need to know the
criticality of the project to bank A and the business value of the requirements; this results in a more
successful project delivery. The list of 21 PC derived from the industry documents is provided
below.
Familiarity with Outsourcing / Global Delivery Model: Examines the familiarity of the
customer with offshoring and globally distributed teams
Project Management Maturity: Assessment of the Customer's project management maturity
Customer's Organization maturity for Outsourcing: Customers who have been doing IT
outsourcing for years have their organizational processes tuned to accommodate the special issues
of outsourcing. Customers new to outsourcing tend to pose additional difficulty in executing the
project successfully.
Relationship Maturity: Measure of the length of the relationship between the Customer and the
vendor
IP Usage Policies: Examines whether the client is willing to use vendor IP or open source. Clients
sometimes perceive a high risk in using open source or vendor IP; they may like the IP but ask for it
to be developed afresh for them, which increases the effort in the project.
Communication Language of the Customer
IT Vendor’s Delivery Leadership’s sponsorship of project: Sponsorship of the project by
Delivery leadership indicates that the management is directly involved in the project.
Statement of Work (SoW) available (contract): An agreed statement of work defines a
contract which will govern the project.
Type of SLAs: Service Level Agreements for both the vendor and the customer reinforce a mechanism
for monitoring the project better. Stringent SLAs ensure better compliance.
Legal Compliance Clauses: Examines the nature of the due diligence process followed for
ensuring legal compliance to all relevant local and national laws.
Liability Clauses: Sometimes there are liability clauses in the Statement of Work which can
increase the risk for the vendor.
Multi-unit Project Organization: Multiple units / verticals of the vendor working on parts of
the same program
Project Profitability: This parameter examines the profit margin of the project
Testing Scope: The scope of software testing includes examination and execution of the code in
various environments and conditions to ensure that the code does what it is supposed to do. It
elaborates the various types of testing needed along with the test coverage. A well-defined test scope
saves time and effort by concentrating on what is important and excluding what is not relevant.
Testing Methods: Testing methods can cover functional testing approaches such as black box /
white box testing or other approaches such as static analysis / dynamic analysis etc. (These
methods are in addition to the software stage-wise testing, i.e., unit testing, integration testing
etc.).
Design Methods: Design techniques could include formal methods (e.g., using rigorous
notations, mathematics etc.) and informal methods (e.g., using graphical notations) and could be
top-down or bottom-up approaches. Additionally, design patterns are also used in several
situations.
Technology Maturity: This captures whether the technologies used in the project
have been successfully used in application development across the world.
Development Language: Profile of the development languages needed for the development
Hardware Constraints: Hardware infrastructure includes servers, client-server machines,
quality of network etc.
Criticality of the Project for Customer: An IT system is more business-critical if its failure
results in a serious catastrophe or loss of business. E.g., systems which support trading on
stock exchanges are more critical than a university’s admission system.
Business Value of the Project to the Customer: What is the expected value of the project to the
customer? Business value also drives cooperativeness: if the customer anticipates high business
value from a project, their stake in seeing the project through increases.
4 DATA COLLECTION AND ANALYSIS
4.1 Interviews
In order to gain more insight about project characteristics and to validate our list of PC, we conducted
interviews. The interviews were semi-structured, as rich qualitative data was expected to help the
research team identify a large number of PC based on project managers’ experiences and
observations. We sent invitation emails to 75 respondents who were evaluating or anchoring various
projects. Most of them were group project leaders, project managers, or solution architects, or belonged to
the client facing group (CFG). In total, we received 26 responses. The respondents had an average
of 15 years of experience in project evaluation or management and had managed projects of diverse
nature with a minimum team size of 7. Of the 26, nine respondents had worked as risk analysts in
the past. We considered this favorable for our analysis, as risk analysts need to deal with PC in detail
as part of their job. The average duration of each interview was 75 minutes. Two researchers
participated in each interview so that extensive notes could be taken. A short questionnaire was
prepared, and during the interview session each interviewee was asked to rate the given PC on a 5-point
Likert scale (from least important to highly important). Characteristics with an average score of
3 or above were retained. A few characteristics were renamed as per the respondents’ suggestions,
and a few were deleted if found to duplicate others. For example, the characteristic ‘development team’s ability’
was similar to ‘technical knowledge of team’. Two characteristics, ‘DBMS type’ and ‘memory
constraints’, were cited as low-level details and hence deleted.
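The retention rule just described (keep any characteristic whose mean rating on the 5-point Likert scale is 3 or above) can be sketched in a few lines of Python. The characteristic names and ratings below are illustrative stand-ins, not the study's actual data.

```python
# Retain project characteristics whose mean Likert rating (1-5) is >= 3.
# The ratings below are hypothetical examples, not data from the study.
from statistics import mean

ratings = {
    "Technical knowledge of team": [4, 5, 3, 4],
    "DBMS type": [2, 1, 2, 3],          # low-level detail, falls below cutoff
    "Relationship maturity": [3, 4, 3, 3],
}

RETENTION_THRESHOLD = 3

retained = {pc: mean(scores)
            for pc, scores in ratings.items()
            if mean(scores) >= RETENTION_THRESHOLD}

for pc, avg in sorted(retained.items()):
    print(f"{pc}: mean = {avg:.2f}")
```

With this toy data, 'DBMS type' averages 2.0 and is dropped, mirroring the deletion described above.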
At the end of the questionnaire, one open-ended question was included to explore whether anything was
missing and whether the respondents wished to add any other characteristic. After deleting
synonyms and mapping against the existing list, 21 new PC were added.
Since the list of project characteristics was large and many characteristics looked similar, we planned
to conduct a KJ analysis. The KJ method, also known as the ‘affinity diagram’, is named after its inventor, Jiro
Kawakita (Rasch, Cuccia, & Amer, 1995; Benaroch & Appari, 2010; Spool, 2004). It is a group
process for establishing priorities or reaching consensus based on subjective or qualitative
data. During the process, different groups can analyse the same data and will often come to the same
results. In brief, the process is as follows:
1. Appoint a facilitator.
2. Put ideas / thoughts / parameters / keywords on post-it notes.
3. Invite subject matter experts to view the post-its.
4. Group / categorize the post-its into columns based on the discussion.
5. Repeat steps 3 and 4 until the experts reach consensus.
6. Name the resulting categories by common consensus.
7. Ask respondents to identify the top 3 to 5 groups that seem the most important and
explore their order of preference.
For the exercise, we invited a team of 7 experts, project managers with over 15 years of
experience in project management and a good understanding of software projects. To facilitate
the session and avoid conflicts among the participants, two members of our
research team acted as anchors.
All characteristics were read out one at a time to the managers. Each characteristic was rated on a 3-point
Likert scale (least important, somewhat important, highly important) and bucketed
according to the consensus and scores. In this process, we removed 14 characteristics from the
inventory. After that, each characteristic was discussed in detail for its classification and possible
measures. The whole exercise took 6 sessions of 60 minutes each. The
final list comprised 76 characteristics grouped into 14 categories. These categories are further
grouped as shown in Figure 2. The detailed definitions and measures for the PC are provided in
the appendix.
The software PC derive partly from the project sponsor organization (the customer), partly
from the IT vendor organization (the IT implementation unit), partly from the relationship
between the two, and partly from the intrinsic characteristics of the project itself. In the rest
of this section, we briefly describe the categories and their component PC.
The first group is named Customer Unit which contains the following categories: customer capability,
customer commitment and customer’s organizational attributes.
The customer capability category includes five characteristics, namely, customer’s technical knowledge,
domain knowledge, familiarity with the global delivery model, project management maturity, and
customer’s organizational maturity for outsourcing. It is easier to arrive at common ground in
negotiations with a customer who has more knowledge of the technology, the domain, and
outsourcing, and who is more involved in project management.
The customer commitment category consists of four PC, namely, cooperativeness of the customer,
relationship maturity, type of customer involvement, and client’s commitment on intermittent signoffs.
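One way to picture the grouping described above (groups containing categories, which in turn contain individual PC) is as a nested mapping. Only the Customer Unit group named in the text is shown, and representing the framework as a Python dict is our illustrative choice, not something the study prescribes.

```python
# Illustrative nested representation of the PC framework's hierarchy:
# group -> category -> list of project characteristics. Only the
# "Customer Unit" group described in the text is sketched here.
framework = {
    "Customer Unit": {
        "Customer Capability": [
            "Customer's technical knowledge",
            "Domain knowledge",
            "Familiarity with global delivery model",
            "Project management maturity",
            "Customer's organizational maturity for outsourcing",
        ],
        "Customer Commitment": [
            "Cooperativeness of the customer",
            "Relationship maturity",
            "Type of customer involvement",
            "Client's commitment on intermittent signoffs",
        ],
        "Customer's Organizational Attributes": [],  # PC not enumerated here
    },
}

# Flatten to (group, category, characteristic) rows, e.g. for a questionnaire.
rows = [(g, c, pc)
        for g, cats in framework.items()
        for c, pcs in cats.items()
        for pc in pcs]
print(len(rows), "characteristics listed")
```

The flattened rows give one line per PC, which is the shape a per-characteristic questionnaire would need.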
For the definitions and measures of these project characteristics, the authors followed a defined
procedure, summarized as follows:
1. Adopt definitions for the various characteristics from available sources, namely, academic
literature and industry documents.
2. Generate consensus on these definitions from subject matter experts (SMEs); in our case we
took advice from solution architects, group project managers, and client facing groups. A few
definitions were further modified based on their opinions.
3. Define a scale for each characteristic, derived first from the literature and then validated by SMEs.
Some measures used company-specific terminology, which we modified to make
them more generic. A few measures from the literature were difficult for industry
experts to understand and were likewise modified in consultation with them. The definitions and
measures are given in the annexure.
Validity means that the data must be unbiased and relevant to the characteristic being measured. We
took special care with all three types of validity, namely, content validity, criterion validity, and
construct validity. Content validity is assessed by the personal judgment of experts in the field. A
few steps were taken during this research to improve the content validity of the measurement
instrument: first, the instrument was built with the help of industry experts; second, to remove
chances of systemic bias, it was validated by three external researchers; third, pre-testing was done
at a limited scale with a different set of industry experts. Criterion validity refers to the validity of an
instrument with respect to external criteria, which can be another measurement instrument. We did
not administer our instrument and an external instrument to the same subjects, but we did adopt
questions available in academic literature and in industry documents and templates. Here, we relied
more on the web-based industry templates because these tools have been used by hundreds of
industry experts and have been modified over time to accommodate all types of software
projects executed in the company. Construct validity consists of convergent and discriminant
validity. Convergent validity refers to correspondence between attempts to measure the
same construct by two or more independent methods; in practice, when we measure a construct with
more than one question, all the questions should measure the same quantity.
It is important to mention here that since we had 76 PC, we measured each PC with only one question, so
that the overall questionnaire length remained manageable. Thus, within the instrument, convergent validity was
not an issue. However, measures of some of the characteristics are contextual and can be extended
following previous studies.
Discriminant validity refers to the requirement that different constructs measure different things. In
practice, in multi-item constructs, the items belonging to a construct should correlate highly
with their own construct and less with other constructs. With single-item constructs, this was not an issue.
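As a sketch of the correlation logic behind these validity checks, the following Python compares an item's Pearson correlation with its own construct's total against its correlation with an unrelated construct's total. All numbers are invented for illustration and are not data from the study.

```python
# Illustrative discriminant-validity check: an item should correlate more
# strongly with its own construct's total than with another construct's
# total. All numbers below are invented for illustration only.
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical 5-point Likert responses from six respondents.
item            = [5, 4, 4, 2, 1, 3]   # one item of construct A
construct_a_tot = [9, 8, 7, 4, 2, 6]   # total of construct A's other items
construct_b_tot = [3, 6, 2, 7, 5, 4]   # total of an unrelated construct B

r_own   = pearson(item, construct_a_tot)
r_other = pearson(item, construct_b_tot)

# Discriminant validity holds (in this toy data) when r_own > r_other.
print(f"own: {r_own:.2f}, other: {r_other:.2f}")
```

In this fabricated data the item tracks its own construct closely and is nearly unrelated to the other construct, which is the pattern discriminant validity requires.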
5 CONCLUSION
The first objective of this research was to understand and define the PC by assimilating knowledge
from industry practice and academic literature.
Modern software projects are commonly outsourced and executed by multiple teams belonging to
multiple vendors, located across multiple locations and time zones. In the context of outsourced and
offshored projects, we observed that the existing set of PC from the literature is not exhaustive and does
not comprehensively define the characteristics of such projects. Consequently, inputs from industry,
reflecting how practitioners have been evaluating their projects, became essential. In this work, we have
defined 76 PC incorporating these aspects of modern outsourced software projects. The second objective was to
define them; some were easy to define, while for others we consulted domain experts. The third
objective was to classify them; the KJ method was used, resulting in 14
classifications. Finally, the fourth objective was to propose a scale to measure each PC. For this, we
developed a questionnaire which included the definition and measures for each PC, and pre-testing
was done to check the understanding of the definitions and scales of measure before finalizing them.
The definitions and measures of all 76 PC are given in the annexure. This research contributes by
providing a comprehensive listing of project characteristics which contains specific new elements that
are important in IT services and outsourcing projects. Researchers can build on and refine this work
further, and it can also add value to practitioners during project planning and management.
6 REFERENCES
Abdel-Hamid, T. K., Sengupta, K., & SwettSource, C. (1999). The Impact of Goals on
Software Project Management: An Experimental Investigation. MIS Quarterly,
23(4), 531-555.
Barry, E. J., Mukhopadhyay, T., & Slaughter, S. A. (2002). Software Project Duration
and Effort: An Empirical Study. Information Technology and Management, 3(1-2),
113-137.
Bekkers, W., Van de Weerd, I., Brinkkemper, S., & Mahieu, A. (2008). The Influence
of Situational Factors in Software Product Management - An Empirical Study.
Second International Workshop on Software Product Management, 2008. IWSPM
'08. (pp. 1 - 8). Barcelona, Catalunya: IEEE Xplore.
Benaroch, M., & Appari, A. (2010). Financial Pricing of Software Development Risk
Factors. IEEE Software, 27, 65-73.
Bern, A., Pasi, S. J., Nikula, U., & Smolander, K. (2007). Contextual Factors
Affecting the Software Development Process – An Initial View. The Second AIS
SIGSAND: European Symposium on Systems Analysis and Design. Poland:
University of Gdansk Press.
Boehm, B., & Turner, R. (2003). Observations on Balancing Discipline and Agility.
Agile Development Conference, 2003 (pp. 32-39). IEEE Xplore.
Butler, T., & Fitzgerald, B. (1999). Unpacking the Systems Development Process: An
Empirical Application of the CSF Concept in a Research Context. Journal of
Strategic Information Systems, 8, 351-371.
Clarke, P., & O’Connor, R. (2012). The situational factors that affect the software
development process: Towards a comprehensive reference framework. Journal of
Information Software and Technology, 54(5), 433-447.
Dede, B., & Lioufko, I. (2010). Situational Factors Affecting Software Development
Process Selection. University of Gothenburg: Thesis work for Master of Science in
Software Engineering and Management.
Delany, S. J., & Cunningham, P. (2010). The application of case-based reasoning to
early software project cost estimation and risk assessment. Retrieved September
12, 2012, from A Citeseer Website:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42.9932&rep=rep1&type
=pdf
Dyba, T., & Dingsoyr, T. (2008). Empirical studies of agile software development: A
systematic review. Information and Software Technology.
Ferratt, T., & Mai, B. (2010). Tailoring software development. SIGMIS-CPR '10,
Proceedings of the 2010 Special Interest Group on Management Information
System's 48th Annual Conference on Computer Personnel Research on Computer
Personnel Research, 20, pp. 165-170.
Glinz, M. (2007). On Non Functional Requirements. 5th IEEE International
Requirements Engineering Conference (pp. 21-27). IEEE.
Kelly, D., & Lee, H. (2010). Managing Innovation Champions: The Impact of Project
Characteristics on the Direct Manager Role. Journal of Product Innovation
Management, 27(7), 1007-1019.
Lamersdorf, A., & Münch, J. (2010). A multi-criteria distribution model for global
software development projects. Journal of the Brazillian Computer Society, 16(2),
97-115.
McLain, D. (2009). Quantifying project characteristics related to uncertainty. Project
Management Journal, 40(4), 60-73.
Pinto, M. B., & Pinto, J. K. (1990). Project Team Communication and Cross-
Functional Cooperation in New Program Development. Journal of Product
Innovation Management, 200-212.
Rasch, R. H., Cuccia, A. D., & Amer, T. (1995). (1995). The Relationship between
Software Project Characteristics, Case Technology and Software Development
Productivity. Journal of Information Technology Management, 6(1), 1-11.
Schmid, B. (2006). Motivation in Project Management: The Project Manager's
Perspective. Electronic Theses, Treatise and Dissertations. Florida, USA.
Retrieved June 15, 2012
Software Education. (2008, January). Project Classification. Retrieved September 14,
2012, from http://www.softed.com/resources/Docs/ProjectClassification.pdf
Spool, J. M. (2004, May 11). The KJ-Technique: A Group Process for Establishing
Priorities. Retrieved July 12, 2012, from A User Interface Engineering Website:
http://www.uie.com/articles/kj_technique/
Tani, G., & Cimatti, B. (2008). Technological Complexity: a Support to Management
Decisions for Product Engineering and Manufacturing. Proceedings of the 2008
IEEM (pp. 6-12). IEEE.
Tomer, S. (2012). It's Our Research-Getting Stakeholder Buy-in for User Experience
Research Projects. Elsevier Inc.
Wallace, L., & Keil, M. (2004). Software project risks and their effect on outcomes.
Communications of the ACM,, 47(4), 68-73.
Yang, H.-L., & Tang, J.-H. (2004). Team structure and team performance in IS
development: a social network perspective. Information & Management, 41(3),
335–349.
Zumud, R. W. (1980). Management of Large Software Development Efforts. MIS
Quarterly, 4(2), 45-55.
7 APPENDIX: CATEGORIES, VARIABLES, DEFINITIONS, AND
MEASURES
The 76 project characteristics (PC), grouped into 14 categories, are listed here along with their definitions and measures.
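Each entry in this appendix pairs a characteristic with a definition, a measure (the survey question), and sometimes a scale. That structure can be captured as a simple record; the following is a hypothetical sketch only — the paper prescribes no data model, and the field names and the category placement in the example are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ProjectCharacteristic:
    """One row of the appendix: a characteristic and how it is measured."""
    category: str    # one of the 14 categories, e.g. "Project Finance"
    name: str        # the characteristic (variable), e.g. "Type of Industry"
    definition: str  # what the characteristic captures
    measure: str     # the survey question used to elicit it
    scale: str = ""  # response scale where one is stated, e.g. "Yes/No"

# One entry transcribed from this appendix (category placement illustrative):
type_of_industry = ProjectCharacteristic(
    category="Customer Capability",
    name="Type of Industry",
    definition="Industry domain of the customer, e.g. Banking, Retail",
    measure="What is the industry domain of the customer?",
)
```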
Customer Capability
Customer Commitment
Type of Industry
Definition: Industry domain of the customer, e.g., Banking, Retail.
Measure: What is the industry domain of the customer?
Departmental Distance between the Client Sponsor Unit and Implementation Team.
Definition: Examines how close or distant the implementation team is from the project sponsor.
Measure: What is the departmental distance between project sponsor and project implementation
team?
Customer Culture
Definition: Indication of differences or issues arising from the customer's culture, e.g., language
variations, adherence to schedule, mismatches in holidays and hours of work, and differences in
the emphasis placed on quality and perfection, amongst others.
Measure: How similar or different is the Customer culture from the culture of the team executing
the project?
IP Usage Policies
Definition: Indicates whether the client is willing to use the vendor's IP or open-source
components. Clients sometimes perceive a high risk in using open source or vendor IP; they may
like the IP but ask for equivalent functionality to be developed for them, which increases project
effort.
Measure: Is the client willing to use open source or the development company's IP?
Time zone / Customer Location
Definition: The customer's location(s) and the number of time zones spanned by those locations
Measure: What is the time zone of customer location? If there are multiple locations, choose all
the relevant ones.
Communication Language of the Customer
Definition: Communication language of the customer
Measure: What is the normal communication language of the client?
Team Capability
Team Engagement
Contract Management
Project Organization
Multi-vendor project
Definition: Multiple vendors working on parts of the same program
Measure: How many vendors are working on parts of the same program? Enter 1 if only a single
vendor is involved.
Multi-unit Project Organization
Definition: Multiple units / verticals of the vendor working on parts of the same program
Measure: How many units / verticals of the company are working on parts of the same program?
Project Finance
Project Profitability
Definition: This parameter examines the profit margin of the project.
Measure: What was the initially estimated profit margin (%)?
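The paper does not state how the margin percentage is computed. A common convention, shown here purely as an assumption, expresses margin as profit over revenue; the function name and the dollar figures are illustrative:

```python
def estimated_margin_pct(revenue_usd: float, cost_usd: float) -> float:
    """Estimated profit margin as a percentage of revenue.

    One common convention; the paper does not specify the formula
    behind the margin figure reported by respondents.
    """
    return 100.0 * (revenue_usd - cost_usd) / revenue_usd

# e.g. a $1.2M contract with a $1.0M cost estimate:
margin = estimated_margin_pct(1_200_000, 1_000_000)  # ~16.7%
```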
Cost Estimate
Definition: This is the cost of the project as estimated by the software development company (in
US dollars)
Measure: Please provide the approximate cost of the project in USD.
Available Budget
Definition: Approximate Budget available with the customer for this project in dollars
Measure: How much is the budget allocated for the project?
Effort Estimate
Definition: Effort in terms of estimated person months required for the project
Measure: What was the initial estimate of the effort in terms of person months required for the
project?
Project Scope
Quality of Requirements
Definition: This examines whether the requirements are clearly defined and understood. A well-
defined requirements model focuses on adequacy, consistency, verifiability amongst others.
Measure: The requirements of the project were of good quality
Scope of the Project
Definition: Project scope includes goals, costs, requirements, tasks, and deliverables which are
defined by the work breakdown structure and WBS Dictionary.
Measure: The scope of the project in consideration was well defined
Type of Requirement Documents Available
Definition: This is to understand what type of requirement documents (detailed specification,
short summary, etc.) were available to the team. Detailed requirements documents received at
project start are helpful in project planning
Measure: A detailed requirement document was available with the team before commencing the
project.
NFRs
Definition: Non-functional requirements, such as availability SLAs, response time, security
features, and portability, are clearly known to the teams.
Measure: Our customer had explicitly stated the non-functional requirements (e.g., availability,
security, data privacy)
Scale: Yes/no
Documentation of Requirements
Definition: All requirements (including changes) are documented in the standard format with
customer signoffs
Measure: We documented every set of requirements and obtained customer sign-offs
Testing Scope
Definition: The scope of software testing includes examination and execution of the code in
various environments and conditions to ensure that the code does what it is supposed to do. It
elaborates the various types of testing needed along with the test coverage. A well-defined test
scope saves time and effort by concentrating on what is important and excluding what is not.
Measure: The scope of testing for the project was well-defined
Scale: Yes/No
Repeatability Required
Definition: The project is required to be launched in multiple versions. The customer will
introduce more features in new versions and will require reuse of project components
Measure: Is the project required to be released subsequently in multiple versions requiring reuse
of project components?
Stability of Requirements
Definition: This parameter seeks to assess the extent of changes in baseline requirements during
the project / sprint
Measure: Please comment on the extent of requirements changes during the project / sprint
Demanding statutory and regulatory requirements
Definition: This parameter explores the impact of statutory regulations on the business
requirements of the project
Measure: Please mention whether the project was subject to specific statutory or regulatory
requirements.
Project Process
Type of Methodology
Definition: It is a framework that is used to structure, plan, and control the process of developing
an application (e.g., Waterfall, Agile, etc.)
Measure: Please specify the methodology used for the development
Nature of Agile Methodology
Definition: Examines the Agile methodology used for project execution
Measure: Please mention the specific agile methodology used in your project
Programming Practices
Definition: A set of informal rules to improve the quality of applications and simplify their
maintenance (e.g., coding standards, defect logging and tracking, file storage and management
etc.)
Measure: The teams followed standard programming practices in the project
Testing Methods
Definition: Testing methods can cover functional testing approaches such as black box / white
box testing or other approaches such as static analysis / dynamic analysis etc. (These methods are
in addition to the software stage-wise testing, i.e., unit testing, integration testing etc.).
Measure: Please list the different testing techniques employed in the project
Design Methods
Definition: Design techniques could include formal methods (e.g., using rigorous notations,
mathematics etc.) and informal methods (e.g., using graphical notations) and could be top-down
or bottom-up approaches. Additionally, design patterns are also used in several situations.
Measure: Please outline the different design techniques used in the project.
Project Type
Definition: Project types can be Greenfield application development, legacy migration,
enhancements, package implementation, etc.
Measure: Please mention the type of project.
Application Characteristics
Infrastructure
Communication Channels
Definition: Communication channels are the mechanisms that cater to the communication needs
of the project (video conferencing, telephone, shared editing tools, etc.)
Measure: Please mention all communication channels used by the team in the project
Network Access
Definition: Implies a common network platform available to all team members for the project
development
Measure: Please specify the network on which the project was executed.
Availability of Required Software tools
Definition: Software tools are used to create, debug, maintain, or otherwise support applications
(e.g. Quality Center, Rational suite etc.)
Measure: The project was provided with the required tools for efficient execution
Hardware Constraints
Definition: Hardware infrastructure includes servers, client-server machines, quality of network
etc.
Measure: The project was always provided with the hardware of required specifications and good
network connectivity
Stability of Hardware and Software
Definition: This explores the stability of all hardware and software resources used in the project
Measure: The hardware and software used in the project were stable and remained unchanged
throughout the project.