
Computers & Industrial Engineering 183 (2023) 109505

Contents lists available at ScienceDirect

Computers & Industrial Engineering


journal homepage: www.elsevier.com/locate/caie

A comparison of activity ranking methods for taking corrective actions during project control

Forough Vaseghi a, Mario Vanhoucke a,b,c,∗

a Faculty of Economics and Business Administration, Ghent University, Tweekerkenstraat 2, 9000 Ghent, Belgium
b Operations and Technology Centre, Vlerick Business School, Reep 1, 9000 Ghent, Belgium
c UCL School of Management, University College London, 1 Canada Square, London E14 5AA, United Kingdom

ARTICLE INFO

Keywords:
Activity ranking
Activity sensitivity
Corrective actions
Project control

ABSTRACT

Monitoring and controlling projects in progress is key to support corrective actions in case of delays and to deliver these projects timely to the client. Various project control methodologies have been proposed in the literature to include activity variability in the project schedule and measure the performance of projects in progress. Many of these studies rely on a schedule risk analysis to rank activities according to their time sensitivity and expected impact on the total project duration.

This paper compares two classes of activity ranking methods to improve the corrective action process of projects under uncertainty. Each method ranks activities based on certain criteria and places the highest ranked activities in a so-called action set that is then used to take corrective actions. The first method is the analytical ranking method, which relies on exact or approximate analytical calculations to provide a ranking of activities. This analytical ranking method is compared with a second, simulation-based ranking method that relies on Monte Carlo simulations to measure the sensitivity of each activity.

Results on a set of artificial projects show that the analytical ranking method and one specific simulation-based ranking outperform all other methods, not only for predicting the contribution of actions to the expected project duration and its variability, but also in the efficiency of the project manager's control.

1. Introduction

Project management entails planning, executing and controlling projects such that the project objectives can be achieved on time and within budget. The concept of dynamic scheduling integrates three main components of quantitative project management, namely baseline scheduling, risk analysis and project control (Vanhoucke, 2012a). The first component is the construction of a baseline schedule, which includes the start and end times of each activity, the precedence relations between them, and their allocated budget and resources. This baseline schedule acts as an initial plan and point of reference for the two other components. The second component, referred to as schedule risk analysis, determines the risks and uncertainties in the baseline schedule to analyze to what extent each activity might affect the project outcome. While these two components are executed before the project start, the third and last component, referred to as project control, is relevant during the project's progress. During project control, the actual progress of the project in terms of time and cost is monitored and compared to the baseline schedule to identify deviations from the initial plan. Since uncertainty and variability are inevitable during the execution of projects, these deviations are unavoidable, and when they occur, the project manager should take corrective actions to get the project back on track (in case delays or budget overruns are observed) or exploit potential opportunities (in case the project is ahead of schedule or under budget).

In the field of schedule risk analysis, previous research has primarily focused on introducing various sensitivity metrics for activities. These metrics, often derived by Monte Carlo simulations, provide a quantification of activity criticality or sensitivity, typically represented as a percentage, with higher percentages denoting more sensitive activities (e.g., Van Slyke, 1963 and Williams, 1992). In a recent study, Vaseghi et al. (2022) employed a novel analytical approach to determine activity sensitivity as an alternative to Monte Carlo simulations. This approach measures the impact of changes in activity durations on the total project duration. Both simulation-based and analytical methods aim to enhance our understanding of the activities that significantly influence project outcomes. Such insights can assist project managers

∗ Corresponding author at: Faculty of Economics and Business Administration, Ghent University, Tweekerkenstraat 2, 9000 Ghent, Belgium.
E-mail addresses: forough.vaseghi@ugent.be (F. Vaseghi), mario.vanhoucke@ugent.be, mario.vanhoucke@vlerick.com, m.vanhoucke@ucl.ac.uk
(M. Vanhoucke).

https://doi.org/10.1016/j.cie.2023.109505
Received 23 March 2023; Received in revised form 26 June 2023; Accepted 31 July 2023
Available online 11 August 2023
0360-8352/© 2023 Elsevier Ltd. All rights reserved.

in making more informed decisions regarding corrective actions during project execution, where actions targeting the most sensitive activities may yield greater impact compared to those with low sensitivity values.

Furthermore, extensive research in the field of project control has examined the implementation of corrective actions during project execution. These studies have used diverse monitoring methods to assess the project's performance at specific time points, such as identifying delays or cost overruns. The primary objective of these monitoring approaches is to provide early warning signals, alerting the project manager to the need for corrective actions. One widely used method in these studies is earned value management (Fleming & Koppelman, 2010), which has been investigated across various project settings such as different project network structures and degrees of activity variability.

Despite the extensive body of research in both fields, there are still several unresolved questions that we aim to address in this study. Our study focuses on two key aspects. Firstly, project managers face limitations in the number of actions they can undertake throughout the project's lifecycle, constrained by factors such as budget and resources. While a warning system can indicate the need for action, it often provides insufficient guidance on which specific activities should be targeted. Given the constraints on action budgets, the accurate selection of activities becomes crucial for the project's progression. In this study, we utilize simulation-based and analytical sensitivity metrics to establish an activity ranking that enhances the decision-making process in selecting appropriate actions. Secondly, the impact of corrective actions on the project outcome is not solely dependent on the chosen activities but also on the nature of the action itself. However, there is limited research exploring the relationship between activity uncertainty and the type of action employed. In our study, we examine two classes of actions: one primarily targeting the average duration of selected activities, and another aiming to reduce the variability in activity durations. By investigating this relationship, we contribute to a better understanding of how different types of actions interact with activity uncertainty.

The contribution of the paper therefore lies in conducting a comprehensive comparison between simulation-based and analytical methods, utilizing them to rank activities based on their expected impact on the project objective. More specifically, we employ and evaluate these two activity ranking procedures through an extensive computer experiment using artificial project data. The best ranking must provide valuable insights for project managers in identifying activities that warrant corrective actions with significant impacts on the project outcome. Furthermore, we explore the relationship between the number of actions taken and the project outcome, allowing project managers to make informed decisions regarding the prioritization of activities for corrective actions. By establishing this link, our study provides a comprehensive analysis of activity ranking methods and their implications for project management.

The outline of this paper is as follows. Section 2 reviews the most important research efforts on risk analysis and the corrective action taking process in project control and gives the research outline for this study. Section 3 presents the research methodology of this study, in which two project control strategies are introduced. In Section 4, the parameter settings and the computational experiments are discussed in detail. Finally, the conclusions of this research are summarized in Section 5.

2. Literature study

This section provides a literature review of the schedule risk analysis and project control components of the dynamic scheduling framework, which are the main focus of our paper. The literature review is supported by a summary given in Table 2. The table is split up in two blocks, indicated by the rows ''Risk analysis'' and ''Project control'', which will be explained in Sections 2.1 and 2.2, respectively. The table shows for each reference the main characteristics relevant for our research, which are briefly outlined along the following lines:

• Control strategy: The strategy used to control a project can be split up into a so-called top-down project control strategy or a bottom-up control strategy, as originally proposed by Vanhoucke (2011). In the current study, the focus lies on the bottom-up control strategy, which makes use of activity sensitivity information to steer the corrective actions. The alternative approach, which is not studied here, makes use of earned value management (Fleming & Koppelman, 2010) metrics to monitor and control the project's performance.
• Corrective actions: The corrective actions taken on activities to bring the project back on track can consist of three different types, known as activity crashing, variability reduction and fast tracking, which will be reviewed in Section 2.2.
• Activity selection: Some studies select activities for corrective actions based on the values of sensitivity metrics (as is also the case in the current study), while others take a much simpler approach and only take actions when an activity lies on the critical path.
• Sensitivity: The sensitivity of activities is measured in three fundamentally different ways across studies. Some studies made use of analytical calculations to propose new metrics to measure an activity's sensitivity, while others proposed new sensitivity metrics based on Monte Carlo simulations. A third class of studies performed a detailed sensitivity analysis to measure the impact of changes in the activity estimates on the project objective, without explicitly proposing a new metric. As will be discussed later, this study compares both the analytical and the simulation-based metrics in our computational experiments.
• Methodology: Most studies use a control system to measure the performance of a project by using action thresholds that specify a certain value after which, once exceeded, an action must be taken. Only one study, by Vaseghi et al. (2022), defines a so-called action set that ranks activities according to certain criteria and then determines which activities are most suitable for actions.
• Topology: Only a few studies measure the relation between the sensitivity of each activity and the structure of the network, and show that the network topology has a significant influence on the accuracy of the results. In our study, the network structure will be measured by the serial/parallel indicator (SP) proposed by Vanhoucke et al. (2008) (although SP was called I2 in that study).

In order to provide a comprehensive understanding, Sections 2.1 and 2.2 present an overview of the literature on schedule risk analysis and project control, respectively. Further, Section 2.3 outlines the research framework for our study.

2.1. Risk analysis

Since the baseline schedule is constructed using point estimates for the activity durations, while in reality these durations are subject to uncertainty and variability, the baseline project duration is an underestimation of the actual project duration. Therefore, determining the amount of uncertainty for each project activity and analyzing its impact on the project outcome is essential to help project managers focus on those parts of the project that need their attention. In this section, the sensitivity measures introduced in the literature are reviewed. Some of these metrics are based on Monte Carlo simulation, while others are based on analytical and approximation methodologies. These studies are shown in the first part of Table 2 (the rows labeled ''Risk analysis'').

Simulation-based sensitivity metrics: Schedule Risk Analysis (SRA, Hulett, 1996) consists of calculating the simulation-based sensitivity metrics by assigning distributions to the activity durations and then performing Monte Carlo simulation runs. The sensitivity metrics from SRA that have been proposed in different studies are as follows. The Criticality Index (CI) measures the probability that an activity lies on the


critical path. This sensitivity metric was initially introduced by Martin (1965) and has since been extended in other studies (e.g. Dodin & Elmaghraby, 1985; Bowman, 1995; Fatemi Ghomi & Teimouri, 2002). It has also been studied by other authors such as Van Slyke (1963) and Kulkarni and Adlakha (1986). To address the limitations of the CI metric, Williams (1992) introduced the Significance Index (SI) and Cruciality Index (CRI) metrics. The SI measures the relative importance and impact of individual activities on the project outcome, and the CRI measures the correlation between the activity duration and the total project duration in three different ways: Pearson's product-moment correlation coefficient (CRI(r)), Spearman's rank correlation coefficient (CRI(ρ)) and Kendall's tau rank correlation coefficient (CRI(τ)). The CRI differs from the other simulation-based sensitivity metrics since it simply measures the linear relationship between two variables and does not explicitly use the network structure in its calculations. These metrics have demonstrated their capability to provide more reliable information on the relative importance of an activity compared to the CI metric (Vanhoucke, 2010b). Following these studies, PMBOK (2004) proposed to combine the standard deviation of the activity duration and the project duration with the CI, which is exactly a merge of the impact of uncertainty and the probability of activity criticality. Vanhoucke (2010b) referred to such a metric as the Schedule Sensitivity Index (SSI), which measures the relative importance of each activity taking the CI into account. Further, motivated by the idea that the network topological structure can also be an important factor (Tavares et al., 2002), Madadi and Iranmanesh (2012) proposed the Management-Oriented Index (MOI) to combine the activities' variability and the effect of activities on the project mean duration with topological network information. Finally, Ballesteros-Pérez et al. (2019) proposed the Criticality-Slack-Sensitivity index (CSS), which improves on the SSI and MOI metrics by adding a third term considering the difference between the activity's slack when all activity durations are stochastic versus deterministic. The eight abovementioned SRA metrics that will be used during the computational experiments of this paper are abbreviated in Table 1. An overview and detailed discussion of the formulas of each sensitivity metric is already presented in many existing research studies, such as Vanhoucke (2010a) and Ballesteros-Pérez et al. (2019). In order not to unnecessarily lengthen this paper, they are not repeated here.

Table 1
Activity-based SRA metrics.

Abbreviation   Metric
CI             Criticality Index
SI             Significance Index
CRI(r)         Cruciality Index based on Pearson's product-moment correlation
CRI(ρ)         Cruciality Index based on Spearman's rank correlation
CRI(τ)         Cruciality Index based on Kendall's rank correlation
SSI            Schedule Sensitivity Index
MOI            Management-Oriented Index
CSS            Criticality-Slack-Sensitivity index

Analytical sensitivity metrics: In addition to the simulation-based sensitivity metrics from SRA, some activity-based sensitivity metrics have been proposed based on an analytical analysis or other approximation methodologies. Cho and Yum (1997) argue that the relationship between activity duration and project duration often follows a non-linear pattern, in contrast to the CRI, which focuses on the linear relationship. Further, they introduced the Uncertainty Importance Measure (UIM) to measure the effect of the variability in activity durations on the variability of the project completion time. They used Taguchi's sampling method with some modifications and orthogonal array designs. The UIM can be used for projects in which the activity distributions are symmetric, but its implementation in large projects is demanding and even impossible due to limitations in the size of the Taguchi and orthogonal array designs. Following this study, several studies have been conducted to analyze the criticality and sensitivity of the mean and variance of the project completion time in response to changes in the mean and variance of individual activities. Elmaghraby (1999) investigated the impact of changing the mean duration of an activity on the variability of the project duration and concluded that increasing or decreasing the mean duration of each activity could either increase or decrease the variance of the project duration. Building on this, Gutierrez and Paul (2000) presented an analytical approach to measure the changes in expected project duration following changes in the activity variance. Cho and Yum (2004) assessed the magnitude of change in expected project completion time with respect to a change in expected activity duration. Furthermore, Elmaghraby (2000) conducted an evaluation of the existing research on sensitivity, focusing on the impact of changes in the mean and variability of activity duration on the mean and variability of the project duration. To bridge these studies, Vaseghi et al. (2022) developed an analytical procedure to rank activities according to their expected impact on the project duration distribution when subjected to corrective action. This activity ranking serves as a basis for determining the number of actions to be taken and selecting the set of activities that will be controlled.

2.2. Project control

During the project execution, the actual progress of the project must be monitored, and when the performance of the project falls below a certain threshold, the project manager needs to take corrective actions to bring the project back on track. In this section, studies that investigate the corrective action taking process are reviewed (cf. the bottom part of Table 2, category ''Project control'').

Three types of corrective actions have been studied in the literature, referred to as activity crashing, variability reduction and fast tracking. First, activity crashing is a technique in which the duration of an activity is reduced by increasing the effort, in order to reduce the project duration. Hegazy and Petzold (2003) proposed a practical model that considers time, cost, and resource constraints to determine an optimized approach for activity crashing during project control. Bowman (2006) utilized activity crashing as a corrective action to optimize activity tolerance limits, aiming to attain specific on-time probabilities while minimizing costs. In Vanhoucke (2010b, 2011), the implemented corrective actions have been modeled as an activity duration reduction of 50% of the planned duration. Hu et al. (2016) used the sensitivity metrics information to improve the project schedule performance during the corrective action process. Furthermore, Song et al. (2020) investigated a limited budget to implement activity crashing. Given that activity crashing in resource-constrained projects might increase resource usage, potentially causing conflicts and project delays, Song et al. (2021) proposed a sequential strategy for selecting activities that minimize the project's makespan through effective crashing. Second, in the variability reducing technique, the goal is to reduce the variability of the project by reducing the variability and uncertainty of activities. In Madadi and Iranmanesh (2012), variability reduction has been investigated deterministically, i.e. by reducing the variability of the project activities once, before the start of the project. Further, Martens and Vanhoucke (2019) applied this type of action during the project progress and quantified the relation between the effort and the amount of reduction. Finally, in fast tracking, the project network structure is overruled by executing partially precedence-related activities in parallel in order to reduce the project completion time. Krishnan et al. (1997) introduced one of the first mathematical formulations for fast tracking and discussed the difficulties of overlapping product development activities. Further, Vanhoucke and Debels (2008) investigated the impact of fast tracking subparts of activities for the Resource-Constrained Project Scheduling Problem (RCPSP). Finally, a stochastic model for schedule fast tracking has been proposed by Ballesteros-Pérez (2017).
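To make the simulation-based metrics of Section 2.1 concrete, the sketch below estimates the Criticality Index and the Schedule Sensitivity Index on a small hypothetical four-activity network. The network, the lognormal parameters, and the run count are illustrative assumptions, and the SSI is computed with the combination commonly given in the SRA literature (CI multiplied by the ratio of the activity and project standard deviations); this is not the authors' exact experimental setup.

```python
import random
import statistics

# Hypothetical 4-activity diamond network: activity -> set of predecessors.
PREDECESSORS = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
ORDER = ["A", "B", "C", "D"]  # a topological order
SUCCESSORS = {a: [s for s in ORDER if a in PREDECESSORS[s]] for a in ORDER}
# Illustrative lognormal parameters (mu, sigma of the underlying normal).
PARAMS = {"A": (2.0, 0.3), "B": (2.5, 0.5), "C": (2.4, 0.2), "D": (1.8, 0.4)}

def simulate(n_runs=20_000, seed=42):
    """Estimate the Criticality Index (CI) and Schedule Sensitivity Index
    (SSI) of each activity by Monte Carlo simulation."""
    rng = random.Random(seed)
    critical = {a: 0 for a in ORDER}
    samples = {a: [] for a in ORDER}
    makespans = []
    for _ in range(n_runs):
        dur = {a: rng.lognormvariate(*PARAMS[a]) for a in ORDER}
        ef = {}  # forward pass: earliest finish times
        for a in ORDER:
            ef[a] = max((ef[p] for p in PREDECESSORS[a]), default=0.0) + dur[a]
        makespan = max(ef.values())
        lf = {}  # backward pass: latest finish times
        for a in reversed(ORDER):
            lf[a] = min((lf[s] - dur[s] for s in SUCCESSORS[a]), default=makespan)
            if abs(lf[a] - ef[a]) < 1e-9:  # zero total slack -> critical
                critical[a] += 1
        for a in ORDER:
            samples[a].append(dur[a])
        makespans.append(makespan)
    sigma_p = statistics.pstdev(makespans)
    ci = {a: critical[a] / n_runs for a in ORDER}
    # SSI combines criticality with relative variability: CI * sigma_i / sigma_P.
    ssi = {a: ci[a] * statistics.pstdev(samples[a]) / sigma_p for a in ORDER}
    return ci, ssi

ci, ssi = simulate()
ranking = sorted(ssi, key=ssi.get, reverse=True)  # candidate order for an action set
```

Sorting the activities by a metric such as the SSI yields exactly the kind of ranking that the activity selection approaches above rely on: the highest ranked activities become the candidates for corrective action.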


Table 2
Literature list. Columns, in order: Control strategy (Top-down, Bottom-up), Corrective actions (Crashing, Variability, Fast tracking), Activity selection (Sensitive, Critical), Sensitivity (Analytical, Simulation), Methodology (Analysis, Threshold, Set), Topology (SP).

Risk analysis:
Van Slyke (1963)                  – – – – – – – – ✓ – – – –
Williams (1992)                   – – – – – – – – ✓ – – – –
Vanhoucke (2010b)                 – ✓ ✓ – – ✓ – – ✓ – ✓ – ✓
Madadi and Iranmanesh (2012)      – – – ✓ – ✓ – – ✓ – – – –
Ballesteros-Pérez et al. (2019)   – – – – – – – – ✓ – – – ✓
Cho and Yum (1997)                – – – ✓ – – – ✓ – ✓ – – –
Elmaghraby et al. (1999)          – – ✓ – – – – – – ✓ – – –
Gutierrez and Paul (2000)         – – – ✓ – – – – – ✓ – – –
Elmaghraby (2000)                 – – ✓ ✓ – – – – – ✓ – – –
Cho and Yum (2004)                – – ✓ – – – – – – ✓ – – –
Vaseghi et al. (2022)             – – ✓ ✓ – – – ✓ – ✓ – – ✓

Project control:
Hegazy and Petzold (2003)         – – ✓ – – ✓ – – – – – – –
Bowman (2006)                     – – ✓ – – ✓ – – – – – – –
Vanhoucke (2011)                  ✓ ✓ ✓ – – ✓ ✓ – – – ✓ – ✓
Vanhoucke (2012b)                 ✓ – ✓ – – ✓ – – – – – – –
Hu et al. (2016)                  ✓ – ✓ – – ✓ – – – – ✓ – –
Song et al. (2020)                ✓ – ✓ – – – ✓ – – – ✓ – ✓
Song et al. (2021)                – ✓ ✓ – – ✓ – – – – ✓ – ✓
Martens and Vanhoucke (2019)      ✓ – – ✓ – ✓ – – – – ✓ – ✓
Krishnan et al. (1997)            – – – – ✓ ✓ – – – – – – –
Vanhoucke and Debels (2008)       – – – – ✓ ✓ – – – – – – –
Ballesteros-Pérez (2017)          – – – – ✓ ✓ – – – – – – –

# studies                         5 3 12 6 3 12 2 2 5 6 6 – 7
Current study                     – ✓ ✓ ✓ – ✓ – ✓ – ✓ – ✓ ✓
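The serial/parallel indicator (SP) used as the topology measure in Table 2 can be sketched as follows. This uses the common formulation SP = (m − 1)/(n − 1), with n the number of activities and m the number of progressive levels (the longest chain, counted in activities); the example networks are hypothetical.

```python
def sp_indicator(predecessors):
    """Serial/parallel indicator SP = (m - 1) / (n - 1), where n is the
    number of activities and m is the number of progressive levels (length
    of the longest chain, counted in activities). SP = 0 for a fully
    parallel network and SP = 1 for a fully serial one."""
    n = len(predecessors)
    if n <= 1:
        return 1.0  # convention for a single-activity network
    level = {}
    def progressive_level(a):
        if a not in level:
            level[a] = 1 + max((progressive_level(p) for p in predecessors[a]),
                               default=0)
        return level[a]
    m = max(progressive_level(a) for a in predecessors)
    return (m - 1) / (n - 1)

# Hypothetical examples: a 3-activity chain and 3 independent activities.
chain = {"A": set(), "B": {"A"}, "C": {"B"}}
independent = {"A": set(), "B": set(), "C": set()}
```

For the chain the indicator is 1.0 and for the independent activities it is 0.0; a four-activity diamond (two parallel activities between a common start and end) falls in between, at 2/3.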

2.3. Research outline

Although the risk analysis and project control components of dynamic scheduling have been investigated by several research studies (cf. Sections 2.1 and 2.2), little research has been conducted on the combined implementation of these components. Some studies measured and compared the performance of simulation-based sensitivity metrics during the corrective action taking process. In particular, Vanhoucke (2010b, 2011) proposed an activity-based bottom-up project control approach, in which corrective actions are taken when a sensitivity metric exceeds a pre-defined action threshold. These studies reviewed the sensitivity metrics CI, SI, SSI, CRI(r), CRI(ρ), and CRI(τ) and considered activity crashing for taking corrective actions. The results showed that a corrective action approach based on activity sensitivity metrics is relevant and that sensitivity information reduces the effort that has to be spent by the project manager, while improving the results for the project. In addition, the SSI and some versions of the CRI provide relatively better results than the CI and SI when evaluating the impact of the corrective actions taken during project tracking. Further, Madadi and Iranmanesh (2012) compared the performance of their proposed MOI metric to the CI, SI, and CRI(r) throughout a variability reduction corrective action taking process and showed that the MOI was the top performing sensitivity metric. Finally, Ballesteros-Pérez et al. (2019) measured the performance of the seven abovementioned metrics and their proposed CSS. They showed that most sensitivity metrics do not perform well unless they are applied iteratively.

While previous studies only integrated the use of simulation-based sensitivity metrics into the corrective action taking process, this study will implement both simulation-based sensitivity metrics and analytical ranking procedures and compare their performance in an activity-based bottom-up project control approach. The considered simulation-based metrics are introduced in Section 2.1 and summarized in Table 1. The analytical measures, which have only been briefly discussed in Section 2.1, will be reviewed in greater detail in Section 3.1.

3. Research methodology

This section introduces the general approach of our study using a new activity ranking system (Section 3.1) and then provides a summary of the methodology used for our computational experiments (Section 3.2). As already briefly mentioned in Section 2.3, this study relies on a so-called activity-based bottom-up project control approach for integrating risk analysis and project control, which was initially presented by Vanhoucke (2011). In this approach, the selection of the activities on which corrective actions must be taken is based on simulation-based sensitivity metrics which rank the activities on their potential impact on the project objective. Based on such a ranking, a pre-defined action threshold value must be set which makes a distinction between sensitive activities (that must be subject to control and actions) and insensitive activities (which do not require much attention and will not be subject to actions). In the study of Vanhoucke (2011), it has been argued that setting such an action threshold value is important to keep the project manager's effort as low as possible while still guaranteeing that the impact of the actions is high enough to bring the endangered project back on the right path. It has been shown that such an approach works better for parallel projects than for serial projects.

This study builds further on that principle of selecting a subset of the project activities on which to take corrective actions, but implements the activity threshold in a fundamentally different way. When a fixed action threshold value is set in advance, the number of activities exceeding this value can differ substantially between different projects, making it harder to define the effort of control and the number of activities subject to corrective actions. Therefore, this study introduces a new concept, called the action set, which identifies the most sensitive activities by two different activity ranking methodologies and then selects the highest ranked activities to be subject to control. The two ranking methods used to design the action set are the traditional Monte Carlo based sensitivity metrics, which will be compared with the recently presented analytical ranking method proposed by Vaseghi et al. (2022). The detail of the new action set concept is the subject of the next section.

3.1. Action set

As discussed earlier, the activity ranking can be based on simulation-based sensitivity metrics or an analytical ranking method. The simulation-based sensitivity metrics have been introduced in Section 2.1 and rely on Monte Carlo simulations to obtain values for all sensitivity measures of Table 1. These sensitivity metrics have been used in several studies in the past decade, and it has been shown that the schedule sensitivity index (SSI) performs best for controlling projects under uncertainty. The analytical ranking method has only recently been proposed by Vaseghi et al. (2022) and has, to the best of our knowledge, not been applied in other risk and control studies. Given the novelty of this method, its main approach, which consists of three main steps, will be briefly summarized along the following lines.

Step 1. Project distribution: In the first step, the probability distribution for the project duration is determined based on lognormal activity duration distributions. This project distribution is determined using analytical calculations, which can be done in two


ways, depending on the value of the Complexity Index¹ of the network (Bein et al., 1992). If the complexity index of the project network is equal to zero, the network is said to be a series/parallel graph, and the exact distribution of the project makespan can be derived by using a sequence of convolution operators for the series reductions and product operators for the parallel reductions (which corresponds to Algorithm 1 in Vaseghi et al. (2022)). However, for projects with strictly positive complexity index values, the project network is said to be an irreducible graph, and determining the exact project duration distribution is a #P-complete problem (Hagstrom, 1988). Therefore, approximation methods must be used to obtain lower and upper bounds for the project completion time distribution. The main approach of these bounding methods is to transform the irreducible network into a serial/parallel graph for which the makespan distribution can be computed easily. An upper bound on the project completion time overestimates the duration of the project (pessimistic case) and a lower bound underestimates it (optimistic case). By determining the tightest upper and lower bounds using several approximation methods, the uncertainty area in which the actual distribution is situated is reduced, and Vaseghi et al. (2022) have shown that the average of the upper and lower bounds provides a good approximation of the actual distribution of the project makespan.

Step 2. Corrective actions: In the second step, corrective actions on an activity are modeled as modifications of the original activity duration distributions to account for the inherent activity variability. More precisely, the original activity duration distributions are modified by adjusting the original lognormal parameters, either the average value or the standard deviation, which corresponds to two fundamentally different actions. These two actions, known as activity crashing (changing the mean) and variability reduction (changing the standard deviation), will also be used in the computational experiments of Section 4.

Step 3. Ranking metrics: In the third step, the new probability distribution of the project is determined after taking the actions of Step 2

The size of the action set (|AS|) indicates the number of activities included in the set. In order to test the relevance of this new concept, two fundamentally different simulation experiments will be used, as explained in Section 3.2.

3.2. Research instrument

The aim of this study is to compare the two ranking methods to construct the action set and to measure their impact on the project objectives after taking corrective actions. To that purpose, two control strategies are defined that will be used in the computational experiments of Section 4. These two strategies are discussed along the following lines and are illustrated graphically in Fig. 1.

The preventive control strategy stipulates that the corrective actions are taken on the activities in the action set in advance, before the project has even started. This strategy creates an action set by ranking the activities by either the analytical approach or the simulation-based metrics, and selects the highest ranked activities, which are the activities that must be changed preventively. This strategy makes use of one Monte Carlo simulation to determine the values of the simulation-based metrics, but measures the performance of the corrective actions in an analytical way. More precisely, the impact of these preventive actions is then measured analytically using the performance metric total contribution of Vaseghi et al. (2022) and other performance metrics (which will be reviewed later in Eqs. (3) to (8) of Section 4.1.2). The purpose of using the preventive control strategy is to compare the analytical ranking method with the simulation-based ranking method to investigate whether they lead to similar results in a static environment (i.e. without assuming that actions are taken dynamically during the project progress). Only when the results of the two alternative methods are not significantly different can the methods be used in a dynamic environment, which will be analyzed in the protective control strategy.

The protective control strategy is used as a periodic monitoring process of taking corrective actions on activities in the action set along the
in a similar way than was done in Step 1. It is therefore important to
project progress. This strategy reviews the activities of the action set
note that the impact of the corrective actions of Step 2 are performed
that are in progress at regular moments in time. More specifically, the
iteratively for each individual activity, i.e. by taking an action on one
review is made at equal intervals from 0% to 100% completion in steps
activity while leaving all other activities unchanged (no action). Based
of 5%, and when a delay is observed in one of these activities, correc-
on this new project probability distribution, the authors have proposed
tive actions are taken to bring the project back on track. This strategy
two ranking metrics that measure the total contribution of individual
is very similar to the bottom-up project control approach of Vanhoucke
activities when their distribution parameters are changed due to the
(2011), but the threshold values for the sensitivity metrics of this study
action. More specifically, a first version using the total contribution
are replaced by the action set to determine which activities require
measures the impact of a change in the mean of the distribution of each
actions. Unlike the preventive control strategy, the protective control
activity on the total project duration (activity crashing ) while keeping
strategy requires two Monte Carlo simulations (in case of using the
all other activity distributions unchanged (no action). This analytical
simulation-based metrics). The first Monte Carlo simulation is identical
ranking method is further abbreviated as the AR𝑚 method. Likewise,
to the simulation of the preventive control strategy and serves to
a second version of the total contribution measures the impact of a
calculate the values for the activity sensitivity metrics. We will refer to
reduction in the variance of the distribution of each activity (variability
such simulation as the static simulation to express that it is done prior
reduction), while also keeping the distribution of all other activities
to the project start. Such simulation is obviously not necessary for the
unchanged. This method is further abbreviated as the AR𝑠 method.
analytical ranking method. The second simulation is used for both the
The authors have shown that ranking activities according to these two
simulation-based ranking and analytical ranking and aims at imitating
ranking measures allows project managers to better focus on only a
the project progress to take possible corrective actions at each review
subset of the highest ranked activities, which corresponds to the action
period (i.e. at each multiple of 5% completion). Since this simulation is
set concept introduced in our current paper. The activities with the
used to imitate the dynamic progress of the project, it will be referred
highest values for the AR𝑚 and AR𝑠 ranking are therefore added to the
to as the dynamic simulation. The right part of Fig. 1 shows the iterative
action set.
nature of this control strategy, as the process will be repeated until all
The general idea of the action set concept is that only the 𝑥% highest
activities are finished.
ranked activities are considered to be the most sensitive activities
The goal of this protective control strategy is to investigate whether
having the biggest impact on the project duration after taking actions.
the analytical ranking method can compete with, or outperform the
existing simulation-based sensitivity metrics for dynamic project control
1
Bein et al. (1992) introduced the term Reduction Complexity to denote the (i.e. for projects in progress). Most of the academic studies in bottom-
minimum number of node reduction that are required to reduce a network 𝐷 up control rely on these sensitivity metrics for taking corrective actions
to a single arc. Later, De Reyck and Herroelen (1996) renamed it to Complexity under different settings (under a restricted budget (Song et al., 2020),
index. with and without resource constraints (Song et al., 2021), incorporation
Fig. 1. Preventive (left) and protective (right) control strategies.

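As a toy illustration of the preventive strategy (left panel of Fig. 1), the sketch below crashes one activity of a small example network before the project start and estimates the resulting reduction of the expected project duration by Monte Carlo. The network, the numbers and the helper names are ours, not the paper's experimental setup; only the lognormal parameters and the crashing factor α = 0.6 follow the settings described later in Section 4.1.1.

```python
import random
import statistics

# Toy four-activity network: activity -> direct predecessors.
PREDS = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
ORDER = ["A", "B", "C", "D"]                          # a topological order
BASE = {"A": 4.0, "B": 8.0, "C": 3.0, "D": 5.0}       # baseline estimates d_i
MU, SIGMA, ALPHA = 0.0594, 0.2679, 0.6

def makespan(dur):
    # Forward pass: earliest finish time of every activity.
    finish = {}
    for a in ORDER:
        start = max((finish[p] for p in PREDS[a]), default=0.0)
        finish[a] = start + dur[a]
    return max(finish.values())

def expected_makespan(action_set, runs=2000, seed=1):
    rng = random.Random(seed)
    totals = []
    for _ in range(runs):
        dur = {a: BASE[a] * rng.lognormvariate(MU, SIGMA) for a in ORDER}
        for a in action_set:
            # Activity crashing: mu' = mu + ln(alpha), i.e. multiply by alpha.
            dur[a] *= ALPHA
        totals.append(makespan(dur))
    return statistics.mean(totals)

m_before = expected_makespan(set())
m_after = expected_makespan({"B"})          # crash the dominant activity
tc_m = (m_before - m_after) / m_before * 100
```

Because both calls use the same random stream, the comparison is paired: crashing the near-critical activity B lowers the expected makespan in essentially every run, yielding a strictly positive total contribution.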
Other studies incorporate SRA into buffer management (Hu et al., 2016). All of these studies aim at maximizing the impact of the actions with the minimum effort. However, none of them explicitly takes the action set into account to a priori restrict the number of activities under control. Consequently, little is known about the ideal size of the action set to maximize the impact of actions, nor about how to best rank the activities in this set. The procedure for ranking activities and defining the action set is detailed with a numerical example in the appendix for further clarification.

Table 3
Overview of parameters.

Parameters                            Standard parameter values
Network data       # activities       30
                   SP                 {0.1, 0.2, ..., 0.9}
Distribution data  Lognormal          μ = 0.0594 + ln(d̂_i), σ = 0.2679
Action data        μ reductions       α = 0.6, β = 1
                   σ reductions       α = 1, β = 0.6
                   |AS|               {2, 4, 6, 8, 10, 12, 18, 24, 30}

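As a quick sanity check on the lognormal parameters of Table 3, the standard lognormal moment formulas recover the arithmetic mean and standard deviation of the relative durations (a small sketch; the function name is ours):

```python
import math

MU, SIGMA = 0.0594, 0.2679   # lognormal parameters of d_i / d̂_i (Table 3)

def lognormal_moments(mu, sigma):
    # Arithmetic mean and standard deviation of a lognormal(mu, sigma).
    m = math.exp(mu + sigma ** 2 / 2)
    s = m * math.sqrt(math.exp(sigma ** 2) - 1)
    return m, s

m, s = lognormal_moments(MU, SIGMA)   # ≈ (1.10, 0.30)
```

These values match the arithmetic mean of 1.1 and standard deviation of 0.3 reported for this parameter setting in Section 4.1.1.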
4. Computational experiment

This section presents the experimental results that validate the action set concept and compare the analytical and simulation-based ranking methods. Section 4.1 provides an overview of the parameter settings and performance metrics used in the analysis. The experiments are divided into four sections. In Section 4.2, the relevance of the action set is demonstrated by comparing three different ways of ranking activities. Sections 4.3 and 4.4 focus on comparing simulation-based sensitivity metrics and analytical ranking measures using the preventive and protective control strategies, respectively. The robustness of the analysis is assessed in Section 4.5 by varying the values of the distribution and action parameters. Lastly, Section 4.6 presents an approach for determining a single ranking measure based on the proposed analytical ranking measures.

4.1. Design of experiments

This section gives an overview of the project data parameters used for the experiments (Section 4.1.1), followed by the performance metrics used to discuss the results of each experiment (Section 4.1.2).

4.1.1. Data parameter settings

The experiments of this paper make use of a randomly generated project network dataset, extended with activity distributions and corrective actions using the parameter values described along the following lines and summarized in Table 3.

Network data. The project network data is generated with the RanGen1 network generator, initially proposed by Demeulemeester et al. (2003) and later updated by Vanhoucke et al. (2008). The generation process makes use of two input parameters to randomly generate projects: the number of project activities (# activities), which has been set to 30, and the order strength (OS, Mastor (1970)), which has been varied from 0 to 1 in steps of 0.01. For each of these OS values, 1,000 networks have been generated, for which the Complexity Index of Bein et al. (1992) is automatically calculated. It is known from Vaseghi et al. (2022) that the analytical ranking method can only provide exact results for project networks with a Complexity Index value equal to zero, while it must be replaced by approximation methods when the value is strictly positive. For each project, the series/parallel indicator (SP) has also been calculated, an alternative network parameter representing the closeness to a completely serial (SP = 1) or parallel (SP = 0) network. SP has shown a clear relation with risk analysis performance and project control methodologies (Vanhoucke, 2011), and Vaseghi et al. (2022) have shown that it has a bigger impact than OS or the Complexity Index.
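The two topological measures just mentioned can be illustrated with a minimal sketch. This is our own simplified implementation: OS is computed as the fraction of precedence-related activity pairs on the transitive closure of the network (Mastor, 1970), and SP is assumed to follow the common formulation (m − 1)/(n − 1), with m the number of progressive levels.

```python
def order_strength(preds):
    # OS: number of (transitively) related activity pairs over n(n-1)/2.
    anc = {a: set(preds[a]) for a in preds}
    changed = True
    while changed:                       # fixed-point transitive closure
        changed = False
        for a in preds:
            extra = set().union(*(anc[p] for p in anc[a])) if anc[a] else set()
            if not extra <= anc[a]:
                anc[a] |= extra
                changed = True
    n = len(preds)
    return sum(len(s) for s in anc.values()) / (n * (n - 1) / 2)

def serial_parallel(preds):
    # SP = (m - 1) / (n - 1), with m the length of the longest chain.
    level = {}
    def lv(a):
        if a not in level:
            level[a] = 1 + max((lv(p) for p in preds[a]), default=0)
        return level[a]
    m = max(lv(a) for a in preds)
    return (m - 1) / (len(preds) - 1)

chain = {"A": [], "B": ["A"], "C": ["B"], "D": ["C"]}   # fully serial
fan = {"A": [], "B": [], "C": [], "D": []}              # fully parallel
```

For the fully serial chain both indicators equal 1, and for the fully parallel network both equal 0, matching the interpretation given above.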
Given this stronger impact on the project duration distribution, the analysis in the remaining sections only reports values for the SP. The tested dataset consists of 50 projects for each value of SP, resulting in 9 × 50 = 450 projects in total.

Distribution data. Since the activity durations are assumed to be stochastic variables instead of deterministic values, the activity duration distributions must be defined in advance. Given that activity durations cannot take negative values and are assumed to be right-skewed, their variability is modeled using the lognormal distribution in this paper, which has been used to model activity durations in several studies (e.g., Bie et al., 2012; Hu et al., 2016). Trietsch et al. (2012) have given some sound theoretical arguments why such a distribution might hold for many projects, and assumed the relative durations d_i/d̂_i (where d_i and d̂_i represent the actual duration and the baseline estimate of the duration of activity i) to be lognormally distributed. Therefore, in this paper, the relative durations d_i/d̂_i are assumed to be lognormally distributed with μ = 0.0594 and σ = 0.2679 (where μ and σ represent the geometric mean and standard deviation of the activity duration distribution), which results in an arithmetic mean (m) and standard deviation (s) of 1.1 and 0.3, respectively. An arithmetic mean of 1.1 implies that the actual durations can vary around the baseline estimate. Accordingly, the actual duration of an activity i (d_i) is assumed to be lognormally distributed with μ = 0.0594 + ln(d̂_i) and σ = 0.2679. In studies by Colin and Vanhoucke (2016) and Vanhoucke (2019), it has been shown that the lognormality of d_i/d̂_i can be verified on a wide set of empirical project data. Note that this parameter setting has been applied in previous research studies (e.g., Martens & Vanhoucke, 2019). In Section 4.5, we will perform various robustness checks to test the reliability of our analysis by employing different values of μ and σ.

Action data. In order to test the impact of the corrective actions, the original distributions of the activity durations will be modified in two different ways (corresponding to two types of actions). Eqs. (1) and (2) show the distribution parameter modifications for action types 1 and 2, respectively.

μ′ = μ + ln(α), with α = 0.6, β = 1    (1)

σ′ = βσ, with α = 1, β = 0.6    (2)

Action 1 (activity crashing) modifies the μ parameter of the distribution by setting parameters α and β to 0.6 and 1, respectively. This results in an activity duration distribution with μ′ = 0.0594 + ln(d̂_i) + ln(0.6) = −0.4514 + ln(d̂_i) and σ′ = 1 × 0.2679 = 0.2679. Similarly, for Action 2 (variability reduction), a modification of σ with β = 0.6 while setting α to 1 results in a modified activity duration distribution with μ′ = 0.0594 + ln(d̂_i) and σ′ = 0.6 × 0.2679 = 0.1607. Note that in the experiments of this paper, the values of α and β are set to 0.6, since this modifies the μ and σ parameters of the activity duration distribution with a medium-sized action. In Section 4.5, however, a series of robustness checks with different values of α and β will be conducted to assess the reliability of our analysis; a similar approach has been taken in previous studies by Martens and Vanhoucke (2019) and Vaseghi et al. (2022). In all experiments, the size of the action set (|AS|) is varied from 2 to 30, as shown in Table 3, to test the validity of our research for different levels of control effort.

4.1.2. Performance metrics

Three types of performance metrics are used in the computational experiments, taken from previous research studies and known as the total contribution and two definitions of control efficiency. The total contribution metrics of Vaseghi et al. (2022) measure the relative impact (contribution) of the actions taken on the selected activities on the two parameter values of the project duration. The total contribution on the mean, TC_m, measures the relative impact of the actions on the mean project duration, as shown in Eq. (3):

TC_m = (m_p,before − m_p,after) / m_p,before × 100    (3)

with m_p,before (m_p,after) the mean of the project duration before (after) the actions. Likewise, the total contribution on the standard deviation, TC_s, measures the relative impact of the actions on the standard deviation of the project duration, as shown in Eq. (4):

TC_s = (s_p,before − s_p,after) / s_p,before × 100    (4)

with s_p,before (s_p,after) the standard deviation of the project duration before (after) the actions. Obviously, higher values are preferred: the higher the values for the two metrics, the more the mean m_p and standard deviation s_p of the project have been reduced, and hence, the higher the total contribution of the corrective actions.

It has been argued earlier that the size of the action set allows the project manager to better control the number of activities subject to potential corrective actions, and hence, the effort of control. This concept was initially introduced by Vanhoucke (2010a), who argued that the impact of the corrective actions should always be measured in relation to the effort of control. This so-called control efficiency concept is also applied here by defining either the size of the action set or the size of the corrective actions as a proxy for the effort of control, leading to two different formulas.

In the first definition of control efficiency, the number of activities in the action set is used to define the effort of control, which can range from one single activity to the complete activity set of the project. The control efficiency is therefore expressed as the total contribution divided by the effort of control, as presented in Eqs. (5) and (6). This definition of control efficiency is very similar to the definition proposed by Vanhoucke (2011), who uses the number of evaluated activities (NEA) in the denominator and measures the contribution as the absolute decrease in the project duration after actions in the numerator. We have replaced the numerator by the relative decrease in the mean or standard deviation, since we assume probability distributions on activity durations instead of single-point estimates, but the general idea is the same.

CE¹_m = TC_m / |AS|    (5)

CE¹_s = TC_s / |AS|    (6)

with |AS| equal to the number of activities put in the action set (effort) and the total contribution for Action 1 (TC_m) or Action 2 (TC_s) in the numerator. When the action set is too small, only a few activities will be subject to control and corrective actions, and it is then not very likely that project problems will be solved easily. Likewise, when the action set is equal to the total activity set, all activities will be subject to corrective actions, which will most likely solve all project problems but will be too time-consuming for the project manager, who has to continuously check all activities in progress (or, more generally, will require too much effort).

The control efficiency of Eqs. (5) and (6) only measures the number of activities under control as the effort of control, which can be considered a proxy for the time a project manager spends on controlling a subset of the project (i.e. the activities in the action set). However, this definition does not measure anything about the size of the actions taken on these activities. Therefore, an alternative control efficiency definition takes this new effort into account by comparing the total contribution with the total size of the actions taken on the activities in the action set. Consequently, the effort of control is no longer measured as the time the project manager spends on controlling the project, but rather by the cost of the actions.
Fig. 2. Comparison of good, bad and random activity ranking (Actions 1 and 2).

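The total contribution (Eqs. (3) and (4)) and the first control efficiency definition (Eqs. (5) and (6)) translate directly into code; a small sketch with made-up numbers (the helper names are ours):

```python
def total_contribution(before: float, after: float) -> float:
    # Eqs. (3)/(4): relative reduction (in %) of a project-level parameter
    # (mean or standard deviation) caused by the corrective actions.
    return (before - after) / before * 100

def control_efficiency_1(tc: float, action_set_size: int) -> float:
    # Eqs. (5)/(6): total contribution per activity in the action set.
    return tc / action_set_size

# Hypothetical numbers: the mean project duration drops from 110 to 95.
tc_m = total_contribution(110.0, 95.0)    # ≈ 13.64 (%)
ce1_m = control_efficiency_1(tc_m, 6)     # |AS| = 6 → ≈ 2.27 (% per activity)
```

The same two helpers apply unchanged to the standard deviation, giving TC_s and CE¹_s.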
The bigger the size of the actions, the higher their cost. The formulas are shown in Eqs. (7) and (8):

CE²_m = (m_p,before − m_p,after) / Σ_i (m_i,before − m_i,after) × 100    (7)

CE²_s = (s_p,before − s_p,after) / Σ_i (s_i,before − s_i,after) × 100    (8)

with m_i,before (s_i,before) and m_i,after (s_i,after) the mean (standard deviation) of the activity duration before and after the actions. This definition of control efficiency is somewhat similar to the so-called time efficiency in the study of Song et al. (2020) and the unit contribution in the study of Vanhoucke (2010a), and measures the average return of all actions taken on the selected activities from the action set on the parameter values of the project duration. The formula has been adapted, similar to the previous control efficiency formula, to cope with the probability distribution parameters (μ and σ) instead of single-point estimates.

It should be noted that the control efficiency metrics will be used as secondary metrics and are only relevant for comparing methods with equal (and preferably high) values for the total contribution, as the maximization of the impact on the project duration (mean and standard deviation) is the main objective and much more important than optimizing the control efficiency of the project manager's time and/or cost. We will nevertheless show that the results of the computational experiments show some strong similarities with previous studies, which illustrates the relevance of our new action set concept.

4.2. Experiment 1. Action set relevance

The concept of an action set is based on a ranking system to obtain the x highest ranked activities, which are said to be the most sensitive activities for which corrective actions have a bigger potential impact. However, this concept only has relevance if the analytical or simulation-based ranking is indeed able to select the most sensitive activities first, and this experiment aims at investigating which ranking system performs best. To that purpose, the best-performing simulation-based sensitivity metric and the two analytical rankings are tested under three different settings for both types of corrective actions. The simulation-based ranking method only relies on the SSI metric, since this metric outperforms the other simulation-based sensitivity metrics, as will be shown in our later experiments. This has also been concluded in a handful of other studies. Vanhoucke (2011) was the first to compare different simulation-based sensitivity metrics, and could show on a set of artificial projects that the SSI outperforms all others. In later experiments, this observation was confirmed in other project control studies, such as Ballesteros-Pérez et al. (2019), although other studies argue that it is not better than other sensitivity metrics (Elshaer, 2013). For the analytical ranking method, the AR_m and AR_s metrics are used for Action 1 (activity crashing) and Action 2 (variability reduction), respectively.

For each of the three ranking metrics (SSI, AR_m and AR_s), the action set has been constructed in three different ways, each time consisting of up to six activities (experiments with other values of the action set size show similar results). First and foremost, the good ranking puts the activities with the highest values in the action set, as they are assumed to be the most sensitive activities. This is the normal approach and should work better than the two other approaches. The random ranking approach randomly selects up to six activities and therefore completely ignores the activity ranking. Finally, the bad ranking approach selects the lowest ranked activities and should perform very badly in our study.

The results are shown in Fig. 2 for Action 1 (left) and Action 2 (right) and confirm our statements. The graphs show the total contribution for the mean (Action 1) and the variance (Action 2).
Table 4
Preventive strategy — Actions 1 and 2.

Performance   Analytical       Simulation-based
measures      AR_m    AR_s     CI     SI     SSI    CRI(r)  CRI(ρ)  CRI(τ)  MOI    CSS

Action 1
TC_m          20.29   –        15.52  15.88  20.29  18.26   18.08   16.33   18.71  13.19
CE¹_m         3.65    –        2.69   2.77   3.63   3.29    3.27    3.01    3.29   2.29
CE²_m         83.27   –        84.81  84.30  82.85  80.54   80.57   80.64   77.99  70.78
TC_s          22.60   –        15.00  15.63  22.85  20.31   19.80   18.12   20.96  12.76
CE¹_s         4.07    –        2.57   2.69   4.11   3.70    3.61    3.38    3.67   2.20
CE²_s         26.94   –        20.79  20.98  27.05  25.62   25.28   25.47   24.42  16.65

Action 2
TC_s          –       25.98    17.52  18.09  25.68  22.90   22.70   20.69   22.72  14.94
CE¹_s         –       4.77     3.01   3.15   4.67   4.17    4.15    3.85    4.03   2.68
CE²_s         –       27.87    21.50  21.62  27.13  25.37   25.49   25.41   23.81  18.09

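The CE² rows of Table 4 follow the alternative control efficiency definition of Eqs. (7) and (8): the project-level reduction is divided by the summed activity-level reductions. A small sketch with hypothetical numbers (the helper name is ours):

```python
def control_efficiency_2(p_before, p_after, act_before, act_after):
    # Eqs. (7)/(8): project-level reduction relative to the summed
    # activity-level reductions, in percent.
    act_total = sum(b - a for b, a in zip(act_before, act_after))
    return (p_before - p_after) / act_total * 100

# Hypothetical example: crashing three activities reduces their mean
# durations by 4, 2 and 2 days, but the mean project duration by only 6,
# because one crashed activity is not on the critical path.
ce2_m = control_efficiency_2(110.0, 104.0,
                             [10.0, 5.0, 8.0], [6.0, 3.0, 6.0])   # → 75.0
```

A value close to 100% means that almost every unit of action effort translates into a reduction of the project duration parameter.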
The total contribution is plotted on the y-axis for an increasing number of activities in the action set (x-axis), up to six. The results show that the good ranking outperforms the two other approaches, which illustrates the importance of ranking the activities prior to the project start. Moreover, the more actions in the action set, the bigger the total contribution (obviously), but also the higher the importance of a good ranking system. The results also indicate that the simulation-based SSI indicator can compete with the analytical indicators, both for activity crashing (Action 1) and variability reduction (Action 2).

A detailed look at the results shows that for Action 1, selecting the lowest ranked activities as candidates for the action set does not result in any good value for TC_m, and it even sometimes results in negative values. Thus, choosing these activities for corrective actions could possibly have an inverse effect and increase the mean project duration. Moreover, using the random selection only increases TC_m to around 4%, while it increases up to 13% when the good ranking method is used. Similarly, for Action 2 of Fig. 2, selecting the lowest ranked activities for action has a negligible and sometimes even a negative impact on TC_s. The random method with six activities increases TC_s to around 5%, while the good method reaches values up to 25%.

4.3. Experiment 2. Preventive control strategy

The simulation-based ranking method of the previous experiment relied on the schedule sensitivity index (SSI) and did not make any comparison with the other well-known simulation-based metrics. Even though it is known from the literature that the SSI outperforms the other sensitivity metrics, no results are available that validate this better performance for corrective actions on activity distributions. As a matter of fact, all previous studies assume a deterministic baseline schedule using single-point estimates (instead of distributions), and only add uncertainty for calculating the sensitivity metrics and corrective actions during project progress. However, the analytical ranking method of our study already incorporates both probability distributions and corrective actions during the construction of the ranking, which, to the best of our knowledge, has only been done before in the study by Vaseghi et al. (2022).

In order not to mix the impact of uncertainty for constructing the action set with the use of uncertainty for imitating project control, this section makes use of the preventive control strategy, which assumes that all actions are taken in advance, prior to the project start. Consequently, this strategy can be considered a ''pure'' control approach, since it cannot be influenced by the quality of the simulations for imitating project progress (which do not exist here), only by the simulations to obtain the sensitivity metrics. Table 4 displays the results of the preventive control strategy for both action types, and compares the two total contribution metrics of the analytical method with the eight sensitivity metrics obtained from Monte Carlo simulations. The results indicate that the SSI shows a similarly high performance to the AR_m (Action 1) and AR_s (Action 2) approaches, and outperforms all other simulation-based metrics. The results not only show the relevance and accuracy of the analytical method, but also confirm the previous studies claiming that the SSI has a much better potential to focus on the most sensitive activities during bottom-up project control (Vanhoucke, 2011). It is also interesting to see that the control efficiency values are almost the highest for both the analytical and the SSI-based ranking method, which confirms earlier findings in the literature (about the SSI). The ranking metrics CI and SI often have identical values for most activities in the project (certainly for serial projects) and are therefore not able to differentiate the activities. Without such a clean differentiation, they cannot provide a well-performing ranking list to construct the action set. As a result, they show a low total contribution (TC) and a low control efficiency in terms of the number of actions (CE¹_m and CE¹_s). However, the CI and SI do have a high control efficiency on the mean project duration for the average return of all actions taken on the selected activities (CE²_m) for Action 1. This can be explained by the fact that these metrics only consider the mean of the activity duration distributions and, consequently, propose an activity ranking without taking into account the stochastic nature of the project network measured by the variability. As a result, the reduction in the mean project duration (m_p,before − m_p,after) is closer to the summed reductions in the means of the activity durations (Σ_i (m_i,before − m_i,after)), and the control efficiency (CE²_m) is closer to 100%. The analytical ranking measures and the SSI, on the other hand, take into account both dimensions of stochastic project networks (mean and standard deviation), which might explain why they provide a better ranking with the highest total contribution. Therefore, we argue that the selection of a good ranking method should be done using the three types of performance metrics (TC, CE¹ and CE²) for the mean (m) and standard deviation (s).

While these results experimentally show that the analytical and the SSI-based simulation ranking methods provide high-quality rankings to construct an action set, they do not allow us to conclude anything about the relevance of this action set concept for monitoring and controlling projects in progress. This will be tested with the protective control strategy in the next experiment.

4.4. Experiment 3. Protective control strategy

This section discusses the results of the protective control strategy, which assumes that corrective actions are taken on the action set in a dynamic way at periodic time intervals, by monitoring the project performance and acting only in case the project is in danger. This section not only measures the impact of the network topology and the size of the action set on the total contribution of the project, but also puts this total contribution in the right perspective by taking the control efficiency into account. The results of the analytical ranking method will be compared with all simulation-based sensitivity metrics to confirm the results found in earlier studies in which the protective control strategy is used.

Network structure: First, the impact of the network structure on the performance of the two ranking methods is tested.
Fig. 3. Relative performance of all ranking methods split in 3 groups (Actions 1 and 2).

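For reference, the SSI used as a benchmark throughout these comparisons is typically estimated from the Monte Carlo runs as SSI_i = CI_i × (σ_i / σ_p), with CI_i the criticality index of activity i, σ_i the standard deviation of its duration and σ_p that of the project duration. The sketch below estimates it on a toy four-activity network; the network and numbers are ours, and the paper's exact computation may differ:

```python
import random
import statistics

PREDS = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
SUCCS = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
ORDER = ["A", "B", "C", "D"]
BASE = {"A": 4.0, "B": 8.0, "C": 3.0, "D": 5.0}

def estimate_ssi(runs=3000, seed=7):
    rng = random.Random(seed)
    spans, samples = [], {a: [] for a in ORDER}
    critical = {a: 0 for a in ORDER}
    for _ in range(runs):
        dur = {a: BASE[a] * rng.lognormvariate(0.0594, 0.2679) for a in ORDER}
        finish = {}
        for a in ORDER:                              # forward pass
            finish[a] = max((finish[p] for p in PREDS[a]), default=0.0) + dur[a]
        t = max(finish.values())
        late = {}
        for a in reversed(ORDER):                    # backward pass
            late[a] = min((late[s] - dur[s] for s in SUCCS[a]), default=t)
        spans.append(t)
        for a in ORDER:
            samples[a].append(dur[a])
            if late[a] - finish[a] < 1e-9:           # zero total float
                critical[a] += 1
    sigma_p = statistics.pstdev(spans)
    return {a: (critical[a] / runs) * statistics.pstdev(samples[a]) / sigma_p
            for a in ORDER}

scores = estimate_ssi()   # activity B should clearly dominate activity C
```

In this toy network the long branch B is almost always critical while the short branch C almost never is, so the SSI ranking separates them sharply.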
To that purpose, the experiments have been analyzed for different values of the network structure. It has been shown in other studies that the serial/parallel indicator (SP) is a main driver to predict the accuracy of time predictions and control efficiency (Martens & Vanhoucke, 2019; Vanhoucke, 2011). These results have been verified and confirmed in extended project control studies using budget constraints (Song et al., 2020) and resource constraints (Song et al., 2021). For this reason, we have analyzed the results of our experiment in two different ways. Fig. 3 shows the relative performance (total contribution, y-axis) of all ranking methods compared to the best performing methods, for increasing values of the SP (x-axis) and for Action 1 (left) and Action 2 (right).

Our results showed that the SSI metric and the two analytical methods performed best and almost equally well, which is why they are used as benchmark methods in Fig. 3 (top line at 100%). The results indeed show that the performance of the SSI lies close or is almost identical to that of the analytical method, and since the latter makes use of an exact method (for series/parallel reducible networks) and very good approximations for irreducible networks, this indicates that the SSI's good performance is hard to improve further. As mentioned earlier, the good performance of the SSI metric is completely in line with previous studies that argue that the SSI outperforms all other sensitivity-based methods. We have clustered all methods for further discussion into three distinctive groups.

• Group 1. The first group consists of the best performing methods: as said, the analytical ranking methods and the SSI-based sensitivity method.
• Group 2. The second group consists of the four simulation-based metrics (MOI and the three versions of the CRI metric), which show a relatively stable deviation from Group 1. This means that they never perform as well as the best performing metrics, but their difference from them is independent of the network structure.
• Group 3. Three sensitivity-based metrics, SI, CI and CSS, show a strange behavior: they perform relatively well for parallel networks (where they can compete with the best methods), but their good performance drops quickly when projects are more serial in structure. At the very serial side, these sensitivity metrics have a very weak performance compared to the other ranking methods, and should therefore not be used at all for activity ranking.

In order to visualize the absolute performance of the ranking methods and to measure the impact of the serial/parallel indicator, the results of all ranking methods have been shown for the three groups separately in six graphs. Fig. 4 shows the performance of the grouped ranking methods by the total contribution and the two definitions of control efficiency (as presented in Section 4.1.2). The graphs show that the performance of all methods depends on the serial/parallel indicator SP, both for Action 1 and Action 2, and deteriorates for increasing values. This is, once again, completely in line with the literature, as it has been shown that bottom-up project control (using sensitivity metrics) works much better for parallel projects and much less well for serial networks. One notable exception is the graph for the CE²_m definition for Action 1, in which the groups are slightly different than previously discussed. In this graph, the CI and SI, normally belonging to Group 3, perform as well as the best performing metrics (Group 1), which has also been observed by Vanhoucke (2010b) (although CE²_m was called unit contribution in that study). This is in line with the observation in Section 4.3.

Action set: The impact of the size of the action set is shown in Fig. 5. The upper line represents the methods of Group 1, the lower line the least-performing method of Group 3, and all other ranking methods lie between these two extremes, as shown by the gray area. The graph obviously shows that a growing number of activities in the set leads to improved results.


F. Vaseghi and M. Vanhoucke Computers & Industrial Engineering 183 (2023) 109505

Fig. 4. Absolute performance of all ranking methods split in 3 groups (Actions 1 and 2).

better results. However, it also shows that the total contribution increases only marginally for larger sets, which illustrates that care should be taken not to make the action set too big. It can be seen that the performance of the methods converges for increasing values of the action set size. In the extreme, they all show the same performance, since if all activities of the project are put in the action set, activity ranking is no longer necessary. This shows, once again, that for relatively small action set sizes, selecting the appropriate ranking method is crucial to obtain a good performance.

4.5. Robustness of our analysis

In order to verify that the results of the experiments are also valid for other input parameter values, several alternative robustness checks have been performed. More precisely, the impact of changing the activity duration distribution parameters or the corrective action parameters is investigated in this section. In the previous experiments, the parameter values used to rank activities (using either the analytical method or the simulation-based method) were set to identical values during the dynamic simulations to imitate project progress, and this is, in reality, not always the case. As a matter of fact, the dynamic simulations of the protective strategy are used to imitate real project progress, and there is no reason why we could realistically assume that they are perfectly in line with the parameters used for activity ranking. Therefore, it might be possible, and even realistic, to assume that the project progress is slightly, or even completely, different than was thought before its start when constructing the action set. This section will therefore test the robustness of the previously found results for different parameter values used to model activity duration variability and corrective actions. A summary of the parameters used for the ranking methods and simulation runs is given along the following lines.

• The analytical ranking method makes use of probability distributions with known values for the average and standard deviation (referred to as 𝜇1 and 𝜎1) and also incorporates the action parameters during the construction of the action set (referred to as 𝛼1 and 𝛽1). This approach is called analytical since it does not rely on Monte Carlo simulations but on exact and approximate analytical calculations.
• The simulation-based ranking makes use of the static simulations to obtain the values of the sensitivity metrics, using the same probability distributions as the analytical ranking method to model activity duration variability (with parameter values 𝜇1 and 𝜎1). These simulations do not take any actions into account, and the action parameters 𝛼1 and 𝛽1 are therefore not used in this method.
• The dynamic simulations to imitate project progress are used for both ranking methods to test the relevance of a good action set and the impact of corrective actions. For this simulation, the activity variability must be defined by probability distributions with parameters 𝜇2 and 𝜎2 which can differ from the initial parameters 𝜇1 and 𝜎1. Likewise, the corrective actions taken on the activities in the action set make use of action parameters 𝛼2 and 𝛽2 which can differ from the 𝛼1 and 𝛽1 parameters.

Recall that we have set the initial values for the activity duration distributions to 𝜇1 = 0.0594 and 𝜎1 = 0.2679 and for the action data to 𝛼1 = 0.6 and 𝛽1 = 1 (Action 1) and 𝛼1 = 1 and 𝛽1 = 0.6 (Action 2), as


Fig. 5. Performance metrics for increasing sizes of the action set (Actions 1 and 2).

Table 5
Initial and modified parameters in the robustness analysis (Actions 1 and 2).
Modified parameter | Initial value | Modified values
Action 1
𝜇 0.0594 {0.1, 0.2, 0.3, 0.4, 0.5}
𝛼 0.6 {0.5, 0.4, 0.3}
Action 2
𝜎 0.2679 {0.3, 0.35, 0.4, 0.45, 0.5}
𝛽 0.6 {0.5, 0.4, 0.3}

summarized in Table 5. In order to test the robustness of our approach, the parameters 𝜇2, 𝜎2, 𝛼2 and 𝛽2 have been set to different values, as displayed in Table 5.

The values of the performance measures total contribution on mean (TC𝑚) and total contribution on standard deviation (TC𝑠) for both Action 1 and Action 2 with the modified parameters are shown in Table 6. The table shows the average values for the TC𝑚 and TC𝑠 measures for the best performing analytical ranking methods (AR𝑚 and AR𝑠) and the best performing simulation-based ranking method (SSI). These results can be compared with the performance of the other simulation-based ranking methods, for which the best, average and worst performance is displayed. The results are averaged over all SP values from 0.1 to 0.9 and over action set sizes between 2 and 10, as well as over all values for the modified parameters 𝜇2, 𝜎2, 𝛼2, 𝛽2. As can be seen in the table, the best ranking methods still outperform the other simulation-based rankings (on average), and even the second best performing simulation-based method (since SSI performs best) still performs worse than the AR and SSI methods, which clearly shows the robustness of our method. Note that, in the worst case, the performance drops to half of the performance of the AR and SSI methods, which demonstrates the effectiveness of our ranking method.

4.6. Determination of a single ranking measure

In all experiments of this paper, the AR𝑚 and AR𝑠 ranking measures have been analyzed and compared with the simulation-based sensitivity metrics for both Actions 1 and 2. It has been shown that the AR𝑚 ranking works best for Action 1 (activity crashing to decrease the mean project duration) while the AR𝑠 ranking is found to be the most suitable ranking measure for Action 2 (which aims at reducing the standard deviation of the activity distribution). However, using two different ranking methods is not always desirable, as the project manager does not know in advance which action will be the most appropriate for the project. Therefore, this final experiment aims at finding out whether a single ranking method, defined as a combination of the two methods, can be used for ranking activities. More precisely, the single unified ranking measure, UAR, will be defined as a weighted combination of the two ranking methods as UAR = 𝛾 AR𝑚 + 𝜂 AR𝑠, where 𝛾 and 𝜂 represent the weights assigned to each ranking measure, reflecting their respective importance.
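To make the weighted combination concrete, the ranking that UAR induces can be sketched in a few lines of Python. This is our own sketch (the helper function name is hypothetical); the activity scores used below are the TC values of the illustrative example in Appendix A, taken here purely for demonstration:

```python
def uar_ranking(ar_m, ar_s, gamma, eta):
    """Rank activities by UAR = gamma * AR_m + eta * AR_s, highest first."""
    uar = {a: gamma * ar_m[a] + eta * ar_s[a] for a in ar_m}
    return sorted(uar, key=uar.get, reverse=True)

# Illustrative per-activity scores (TC_m / TC_s values of Table A.2, Appendix A)
ar_m = {1: 0.32, 2: 12.92, 3: 0.51, 4: 22.48}
ar_s = {1: -1.66, 2: 11.29, 3: -2.32, 4: 32.55}
print(uar_ranking(ar_m, ar_s, gamma=0.5, eta=0.5))  # [4, 2, 1, 3]
```

Note that the ranking can change with the weights: with equal weights, activity 1 overtakes activity 3 because of its less negative AR𝑠 score, while a pure AR𝑚 ranking (𝛾 = 1, 𝜂 = 0) would give {4, 2, 3, 1}.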


Table 6
Robustness analysis under changed parameters (Actions 1 and 2).
Modified parameter | Performance measure | Best ranking measures: AR𝑚, AR𝑠, SSI | Other ranking measures: Best, Average, Worst
Action 1
𝜇 TC𝑚 11.36 – 11.25 10.27 9.00 7.06
𝛼 TC𝑚 10.08 – 10.02 9.46 7.60 4.98
Action 2
𝜎 TC𝑠 – 14.59 14.40 13.45 10.32 6.38
𝛽 TC𝑠 – 17.46 17.35 16.43 12.60 7.42
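The worst-case claim (performance dropping to roughly half of the AR/SSI level) can be checked directly on the averages of Table 6. A quick sanity-check script of ours, with the values copied from the table:

```python
# (best analytical AR value, worst other-ranking value) per Table 6 row
rows = {
    "Action 1, mu":    (11.36, 7.06),
    "Action 1, alpha": (10.08, 4.98),
    "Action 2, sigma": (14.59, 6.38),
    "Action 2, beta":  (17.46, 7.42),
}
ratios = {name: worst / best for name, (best, worst) in rows.items()}
for name, r in ratios.items():
    print(f"{name}: worst/best = {r:.2f}")
# ratios lie between about 0.42 and 0.63, i.e. around one half
```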

Fig. 6. Performance metrics TC𝑚 and TC𝑠 for different weights (Actions 1 and 2).

Fig. 7. Performance metrics 𝐶𝐸𝑚1 and 𝐶𝐸𝑠1 for different weights (Actions 1 and 2).

In the new experiment, we have varied the 𝛾 and 𝜂 values between 0 and 1 in steps of 0.1, and the results are displayed in three figures (Figs. 6–8 for the TC, CE1 and CE2 metrics, respectively). Each figure displays the best working performance metrics (e.g. TC𝑚 for Action 1 and TC𝑠 for Action 2) on the vertical axes on the left. Moreover, the other metrics (TC𝑠 for Action 1 and TC𝑚 for Action 2) that are not very appropriate for the actions are also displayed (on the secondary vertical axes on the right). In the graphs, the results are displayed for the various combinations of 𝛾 and 𝜂 used to determine the UAR (under the preventive control strategy).

First and foremost, the circular areas indicate that the highest performance metrics, TC𝑚ᵐᵃˣ and TC𝑠ᵐᵃˣ, are achieved by using the best ranking measures, AR𝑚 for Action 1 and AR𝑠 for Action 2, respectively. This finding confirms the conclusions drawn by Vaseghi et al. (2022) and used in our previous experiments. However, the results also show that the UAR method can be used as a valid alternative for ranking the activities, as some of the values for the performance measures are almost as high as those of the extreme ranking methods shown by the circles. In order to define a good UAR ranking method, the values for the TC𝑚 and TC𝑠 performance metrics are displayed in Table 7 for different 𝛾 and 𝜂. The table also displays the differences from their respective maximum values (TC𝑚ᵐᵃˣ and TC𝑠ᵐᵃˣ) for different combinations of 𝛾 and 𝜂 for Actions 1 and 2. These differences are calculated using the formulas 𝛥TC𝑚 = TC𝑚ᵐᵃˣ − TC𝑚 and 𝛥TC𝑠 = TC𝑠ᵐᵃˣ − TC𝑠. The last column of the table represents the sum of these differences for Actions 1 and 2, denoted as (𝛥TC𝑚 + 𝛥TC𝑠)Action 1 + (𝛥TC𝑚 + 𝛥TC𝑠)Action 2. The minimal difference is obtained for 𝛾 = 0.8 and 𝜂 = 0.2, which is the proposed UAR ranking from this experiment and which works relatively well for both types of actions.
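The selection of 𝛾 = 0.8 can be reproduced from the TC columns of Table 7 alone: recompute each 𝛥 against the column maximum and pick the weight pair that minimizes the summed deviation. A sketch of ours, with the TC values copied from Table 7 (small rounding differences against the table's 𝛥 columns are expected, since the table was computed from unrounded values):

```python
# (TC_m, TC_s) per action for gamma = 1.0, 0.9, ..., 0.0 (eta = 1 - gamma), Table 7
gammas = [round(1.0 - 0.1 * k, 1) for k in range(11)]
tc_action1 = [(20.30, 22.60), (20.09, 23.58), (19.71, 24.18), (19.26, 24.12),
              (18.57, 24.06), (18.01, 24.08), (17.74, 24.10), (17.57, 24.06),
              (17.37, 24.08), (17.18, 24.08), (17.11, 24.06)]
tc_action2 = [(2.36, 25.34), (2.34, 25.91), (2.33, 25.91), (2.32, 25.93),
              (2.32, 25.92), (2.32, 25.93), (2.31, 25.94), (2.31, 25.93),
              (2.31, 25.93), (2.30, 25.95), (2.29, 25.98)]

def delta_sum(i):
    """Summed deviation from the column maxima for weight index i."""
    total = 0.0
    for table in (tc_action1, tc_action2):
        for col in (0, 1):
            total += max(row[col] for row in table) - table[i][col]
    return total

best = min(range(len(gammas)), key=delta_sum)
print(gammas[best])  # 0.8, the proposed UAR weight
```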


Table 7
Performance metrics TC𝑚 and TC𝑠 for different weights (Actions 1 and 2).
Weights | Action 1: TC𝑚, TC𝑠, 𝛥TC𝑚, 𝛥TC𝑠 | Action 2: TC𝑚, TC𝑠, 𝛥TC𝑚, 𝛥TC𝑠 | Sum
𝛾 = 1, 𝜂 = 0 20.30 22.60 0.00 1.58 2.36 25.34 0.00 0.64 2.22
𝛾 = 0.9, 𝜂 = 0.1 20.09 23.58 0.21 0.60 2.34 25.91 0.02 0.07 0.90
𝛾 = 0.8, 𝜂 = 0.2 19.71 24.18 0.58 0.00 2.33 25.91 0.03 0.07 0.68
𝛾 = 0.7, 𝜂 = 0.3 19.26 24.12 1.04 0.05 2.32 25.93 0.04 0.05 1.18
𝛾 = 0.6, 𝜂 = 0.4 18.57 24.06 1.73 0.12 2.32 25.92 0.04 0.06 1.94
𝛾 = 0.5, 𝜂 = 0.5 18.01 24.08 2.29 0.10 2.32 25.93 0.05 0.05 2.47
𝛾 = 0.4, 𝜂 = 0.6 17.74 24.10 2.55 0.07 2.31 25.94 0.05 0.04 2.72
𝛾 = 0.3, 𝜂 = 0.7 17.57 24.06 2.72 0.11 2.31 25.93 0.05 0.05 2.94
𝛾 = 0.2, 𝜂 = 0.8 17.37 24.08 2.93 0.09 2.31 25.93 0.06 0.05 3.12
𝛾 = 0.1, 𝜂 = 0.9 17.18 24.08 3.11 0.10 2.30 25.95 0.06 0.03 3.31
𝛾 = 0, 𝜂 = 1 17.11 24.06 3.18 0.12 2.29 25.98 0.07 0.00 3.37

Fig. 8. Performance metrics 𝐶𝐸𝑚2 and 𝐶𝐸𝑠2 for different weights (Actions 1 and 2).

5. Conclusion

This paper presented a new method to rank activities and to evaluate the quality of corrective actions during the project control phase of the project life cycle. A new concept, referred to as the action set, is used to determine the most sensitive activities that are most likely to have the biggest impact on the project objective. A comparison is made between two analytical ranking methods and several simulation-based methods, and results show that the analytical methods and the SSI-based simulation method outperform all other methods.

These concepts have been tested using two types of simulation runs to model a preventive and a protective strategy using two types of corrective actions. The preventive strategy entails taking action on activities in the action set before the start of the project execution, while the protective strategy involves taking actions during project execution while the activities are in progress. Two different performance measures have been proposed to measure the total contribution, and the link is made to existing efficiency metrics used in similar previous studies.

The computational experiments of this study have shown the relevance and usefulness of the action set as well as the ability of the ranking methods to make a distinction between highly sensitive and less sensitive activities. These results demonstrate that using a good ranking procedure to select the activities for the corrective action process could result in a better project outcome compared to a random or bad ranking method. Furthermore, the results show the impact of the network topology (SP) and confirm previous findings in the literature that these methods work better for parallel networks than for serial networks. In line with the literature, it was shown that the best simulation-based method, using the schedule sensitivity index SSI, outperforms the other simulation-based methods. The CI, SI and CSS resulted in the lowest performance, except for extremely parallel networks. Finally, it is shown that the analytical and SSI-based methods are less sensitive to input parameter changes than the other methods, which once again illustrates that these methods should be used in future project control studies.

We believe that this study is relevant for academics and practitioners. First and foremost, the new insights can provide guidelines on how to carry out future academic project control studies with or without the use of simulations, and we recommend using either analytical ranking methods (which only work for predefined distributions) or the SSI-based simulation alternative. We believe that the results can also provide insights to practitioners since they show the importance of having a quantitative metric to assess the impact of actions. We therefore believe that future research should focus on testing the new concepts in a practical setting using empirical data. Such an extension is challenging, since it requires the availability of case-specific parameters for the distributions as well as knowledge about actions taken during the project progress. Another possible extension of this study is to take other ranking methods and other possible actions into account, possibly leading to changes to the general methodology of this study.

CRediT authorship contribution statement

Forough Vaseghi: Conceptualization, Methodology, Software, Formal analysis, Validation, Writing – original draft. Mario Vanhoucke: Conceptualization, Formal analysis, Validation, Writing – review & editing, Funding acquisition, Project administration, Resources.


Data availability

Data will be made available on request.

Acknowledgments

This work was supported by the Bijzonder Onderzoeksfonds (BOF) under Grant BOF24Y207001101, and by the Wetenschappelijk Onderzoek (FWO) under Grant FWOOPR2017001301.

Fig. A.1. The project network.

Appendix A. Illustrative example

This Appendix illustrates the procedure for defining the action set by a numerical example using the project network of Fig. A.1. The project network is displayed in an activity-on-the-node format with 4 non-dummy activities and with the baseline durations denoted above each node. The five-step procedure to define the action set for the corrective action process is described along the following lines:

Table A.1
The parameters of the activity duration distributions.
Activity 𝑑̂𝑖 𝜇𝑖 𝜎𝑖
1 3 1.158 0.2679
2 5 1.668 0.2679
3 6 1.851 0.2679
4 9 2.256 0.2679

Step 1. Determine the project duration distribution: The analytical project duration distribution (and its mean and standard deviation) is determined based on given distribution functions for each activity duration. It is assumed that the actual activity durations follow a lognormal distribution with 𝜇 = 0.0594 + ln(𝑑̂) and 𝜎 = 0.2679, where 𝑑̂ is the baseline estimate of the activity duration (cf. Table A.1). As an example, the logarithmic mean and standard deviation of activity 1 will be equal to 𝜇1 = 0.0594 + ln(3) = 1.158 and 𝜎1 = 0.2679, respectively. With these values, the probability density function of activity 1 is given by

𝑓1(𝑥) = (1 / (0.2679 𝑥 √(2𝜋))) exp(−(ln(𝑥) − 1.158)² / (2 × 0.2679²))

and the cumulative distribution function is equal to

𝐹1(𝑥) = 0.5 [1 + erf((ln(𝑥) − 1.158) / (0.2679 √2))].

The density functions are shown above each activity in Fig. A.2. Note that the arithmetic mean and standard deviation of the activity duration distribution will be equal to 𝑚1 = ∫_{−∞}^{+∞} 𝑥𝑓1(𝑥)𝑑𝑥 = 3.3 and 𝑠1 = √(∫_{−∞}^{+∞} (𝑥 − 𝑚1)²𝑓1(𝑥)𝑑𝑥) = 0.9, respectively, which will not be used during the procedure of determining the project duration distribution.

The project duration distribution (mean and standard deviation) can be determined using a sequence of convolution and product operators (Bein et al., 1992), and since the project network is a reducible graph with a complexity index of 0, it can be done by reducing the project to a single node using only series and parallel reductions². These consecutive steps are shown in Fig. A.2 and summarized in the following lines:

• Series reduction 1 – 3: Since activities 1 and 3 are serial activities with only one successor and one predecessor, respectively, a series reduction replaces these activities by activity 5 with an updated probability density function 𝑓5 (as the convolution of their density functions) and distribution function 𝐹5, represented as follows:

𝑓5(𝑥) = 𝑓1(𝑥) ∗ 𝑓3(𝑥) = ∫_{−∞}^{+∞} 𝑓1(𝑡)𝑓3(𝑥 − 𝑡)𝑑𝑡
= ∫_{−∞}^{+∞} (1 / (0.2679 𝑡 √(2𝜋))) exp(−(ln(𝑡) − 1.158)² / (2 × 0.2679²)) × (1 / (0.2679 (𝑥 − 𝑡) √(2𝜋))) exp(−(ln(𝑥 − 𝑡) − 1.851)² / (2 × 0.2679²)) 𝑑𝑡

𝐹5(𝑥) = ∫_{−∞}^{𝑥} 𝑓5(𝑢)𝑑𝑢

• Series reduction 2 – 4: Similarly, the joint probability density function 𝑓6 and distribution function 𝐹6 after reducing activities 2 and 4 will result in updated functions as follows:

𝑓6(𝑥) = 𝑓2(𝑥) ∗ 𝑓4(𝑥) = ∫_{−∞}^{+∞} 𝑓2(𝑡)𝑓4(𝑥 − 𝑡)𝑑𝑡
= ∫_{−∞}^{+∞} (1 / (0.2679 𝑡 √(2𝜋))) exp(−(ln(𝑡) − 1.668)² / (2 × 0.2679²)) × (1 / (0.2679 (𝑥 − 𝑡) √(2𝜋))) exp(−(ln(𝑥 − 𝑡) − 2.256)² / (2 × 0.2679²)) 𝑑𝑡

𝐹6(𝑥) = ∫_{−∞}^{𝑥} 𝑓6(𝑢)𝑑𝑢

• Parallel reduction 5 – 6: The new activities 5 and 6 are in parallel and have the same predecessor and successor. A parallel reduction replaces these activities by activity 7 with distribution function 𝐹7 (as the product of their distribution functions) and probability density function 𝑓7 as follows:

𝐹7(𝑥) = 𝐹5(𝑥) · 𝐹6(𝑥)
𝑓7(𝑥) = ∂𝐹7(𝑥)/∂𝑥 = 𝑓5(𝑥)𝐹6(𝑥) + 𝐹5(𝑥)𝑓6(𝑥)

The probability density function and distribution function of node 7 represent the functions of the project duration (𝑓𝑃 = 𝑓7 and 𝐹𝑃 = 𝐹7), with the mean project duration equal to 𝑚𝑝 = ∫_{−∞}^{+∞} 𝑥𝑓𝑃(𝑥)𝑑𝑥 = 15.48 and its standard deviation 𝑠𝑝 = √(∫_{−∞}^{+∞} (𝑥 − 𝑚𝑝)²𝑓𝑃(𝑥)𝑑𝑥) = 3.01.

Step 2. Model analytical corrective actions (for each activity individually): Corrective actions are applied on each project activity by changing the parameters of the activity distributions. In the example, Action 1 is used, which modifies the logarithmic mean of the activity duration distribution (𝜇′ = 𝜇 + ln(𝛼)) (Eq. (1)) but keeps the logarithmic standard deviation (𝜎) unchanged³. After having made all changes to all activities, Step 1 is applied again to determine the new density function 𝑓𝑃,𝑎𝑓𝑡𝑒𝑟 and distribution function 𝐹𝑃,𝑎𝑓𝑡𝑒𝑟 of the project completion time after the corrective actions, as well as the mean project duration 𝑚𝑝,𝑎𝑓𝑡𝑒𝑟 and its standard deviation 𝑠𝑝,𝑎𝑓𝑡𝑒𝑟.

Assuming 𝛼 to be 0.6, the logarithmic mean of activity 1 after Action 1 will be equal to 𝜇1′ = 1.158 + ln(0.6) = 0.647 (cf. Table A.2) and consequently, the density and distribution functions of this activity will be modified to 𝑓1′(𝑥) = (1 / (0.2679 𝑥 √(2𝜋))) exp(−(ln(𝑥) − 0.647)² / (2 × 0.2679²)) and 𝐹1′(𝑥) = 0.5[1 + erf((ln(𝑥) − 0.647) / (0.2679 √2))], respectively.

² For irreducible graphs (networks with a complexity index higher than 0), Vaseghi et al. (2022) have calculated upper and lower bounds for the project duration distribution using the duplication process by Dodin (1985) and the pairwise disjoint chains by Spelde (1976), but for the illustrative example, the exact project distribution can be calculated.
³ For Action 2, the modifications will be applied only on the logarithmic standard deviation (𝜎′ = 𝛽𝜎) (Eq. (2)) but the remaining calculations will remain the same.
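The analytical values of Steps 1 and 2 (𝑚𝑝 = 15.48, 𝑠𝑝 = 3.01, and 𝑚𝑝,𝑎𝑓𝑡𝑒𝑟 = 15.43 when only activity 1 is crashed) can be cross-checked with a small Monte Carlo simulation: sample the four lognormal durations of Table A.1 and take the longer of the two serial paths 1–3 and 2–4. This verification sketch is our own addition, not part of the paper's analytical procedure:

```python
import math
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
sigma = 0.2679
mu = {1: 1.158, 2: 1.668, 3: 1.851, 4: 2.256}  # logarithmic means, Table A.1
z = {i: rng.standard_normal(n) for i in mu}    # common random numbers

def project_durations(mu_map):
    """Sampled project durations: longest of paths 1-3 and 2-4 (Fig. A.1)."""
    d = {i: np.exp(mu_map[i] + sigma * z[i]) for i in mu_map}
    return np.maximum(d[1] + d[3], d[2] + d[4])

base = project_durations(mu)
print(round(base.mean(), 2), round(base.std(), 2))  # close to 15.48 and 3.01

# Step 2 check: crash activity 1 (mu' = mu + ln(0.6)); the mean drops only slightly
crashed = project_durations({**mu, 1: mu[1] + math.log(0.6)})
print(round(crashed.mean(), 2))  # close to 15.43 (Table A.2)
```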


Table A.2
Modifications after action.
Activity | 𝜇𝑖′ (activity modification) | 𝑚𝑝,𝑎𝑓𝑡𝑒𝑟 | 𝑠𝑝,𝑎𝑓𝑡𝑒𝑟 | TC𝑚 | TC𝑠 (project outcome)
1 0.647 15.43 3.06 0.32 −1.66
2 1.158 13.48 2.67 12.92 11.29
3 1.340 15.40 3.08 0.51 −2.32
4 1.745 12.00 2.03 22.48 32.55

Table A.3
Performance measures for different action sets.
|𝐴𝑆| | 𝐴𝑆 | TC𝑚 | TC𝑠 | CE1𝑚 | CE1𝑠 | CE2𝑚 | CE2𝑠
1 {4} 22.48 32.55 22.64 33.97 88.18 93.74
2 {4, 2} 31.26 41.86 15.64 20.78 78.55 75.11
3 {4, 2, 3} 38.85 43.60 12.95 14.53 68.55 55.88
4 {4, 2, 3, 1} 39.97 40.47 9.99 10.11 62.54 42.93
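The diminishing returns visible in Table A.3 can be made explicit by computing the marginal gain of each additional activity in the action set. A small script of ours, with the values copied from the table:

```python
tc_m = [22.48, 31.26, 38.85, 39.97]  # TC_m for |AS| = 1..4 (Table A.3)
tc_s = [32.55, 41.86, 43.60, 40.47]  # TC_s for |AS| = 1..4

marginal_m = [round(b - a, 2) for a, b in zip(tc_m, tc_m[1:])]
marginal_s = [round(b - a, 2) for a, b in zip(tc_s, tc_s[1:])]
print(marginal_m)  # [8.78, 7.59, 1.12]: each extra activity adds less
print(marginal_s)  # [9.31, 1.74, -3.13]: TC_s even decreases for |AS| = 4
```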

Fig. A.2. Reduction procedure.

Doing this for each activity will result in updated density and distribution functions of the project duration using the procedure of Step 1, with a mean project duration and its standard deviation equal to 𝑚𝑝,𝑎𝑓𝑡𝑒𝑟 = ∫_{−∞}^{+∞} 𝑥𝑓𝑃,𝑎𝑓𝑡𝑒𝑟(𝑥)𝑑𝑥 = 15.43 and 𝑠𝑝,𝑎𝑓𝑡𝑒𝑟 = √(∫_{−∞}^{+∞} (𝑥 − 𝑚𝑝,𝑎𝑓𝑡𝑒𝑟)²𝑓𝑃,𝑎𝑓𝑡𝑒𝑟(𝑥)𝑑𝑥) = 3.06.

Step 3. Measure action contributions (for each activity individually): The contribution of the action on each individual activity is determined using two performance measures TC𝑚 and TC𝑠 (Eqs. (3) and (4)). For the example, the total contribution on mean is equal to TC𝑚 = (15.48 − 15.43)/15.48 × 100 = 0.32 and the total contribution on the standard deviation is TC𝑠 = (3.01 − 3.06)/3.01 × 100 = −1.66, as shown in the columns "project outcome" (after taking actions) in Table A.2.

Step 4. Rank the activities (using a selected ranking measure): Activities are now ranked according to a selected performance measure. Since the main goal of taking Action 1 is to decrease the mean project duration, the TC𝑚 performance measure is used. With the values of TC𝑚 in Table A.2, the ranking list will be {4, 2, 3, 1}⁴.

Step 5. Evaluate impact of actions (on a set of activities): Actions can now be taken on the highest ranked activities, varying from the minimum set with only one activity (i.e. action set |AS| = 1) to the set containing all project activities (|AS| = 4). For example, when |AS| = 2, activities 2 and 4 are put in the action set and used for the corrective action process. In a preventive control strategy, the logarithmic means of these activities will be modified to 𝜇2′ = 1.668 + ln(0.6) = 1.158 and 𝜇4′ = 2.256 + ln(0.6) = 1.745, which results in modified density and distribution functions for the activity durations. After determining the project duration distribution, the mean project duration and its standard deviation after taking action will be equal to 𝑚𝑝,𝑎𝑓𝑡𝑒𝑟 = 10.64 and 𝑠𝑝,𝑎𝑓𝑡𝑒𝑟 = 1.75, respectively. Consequently, the contribution of taking these two actions on the mean project duration and its standard deviation will be equal to TC𝑚 = (15.48 − 10.64)/15.48 × 100 = 31.26 and TC𝑠 = (3.01 − 1.75)/3.01 × 100 = 41.86, respectively. Table A.3⁵ shows the performance measures TC𝑚 and TC𝑠 for the different action sets (as well as other measures used in our study).

Fig. A.3. Performance measures TC𝑚 and TC𝑠 vs. |AS|.

⁴ The TC𝑚 is referred to as the AR𝑚 ranking measure in Vaseghi et al. (2022). In this study, it is shown that other analytical ranking measures as well as simulation-based ranking measures can be used, as explained in Section 2.1.
⁵ Note that the preventive control strategy is used in the example. The protective control strategy could also have been used to measure the impact of the action set on project performance. In that case, the calculations are similar but the activities of the action set will be chosen along the project progress instead of prior to the project start.


Finally, Fig. A.3 displays the impact of the size of the action set on the performance measures TC𝑚 and TC𝑠 for the example project, and shows that a marginally increasing size results in better performance of taking the actions. It also shows that the TC𝑠 can decrease after some point, which means that taking actions on too many activities not only increases the project manager's effort of control, but can also lead to an increase in the variability of the project duration.
in the variability of the project duration. Operations Research.
Madadi, M., & Iranmanesh, H. (2012). A management oriented approach to reduce a
References project duration and its risk (variability). European Journal of Operational Research,
219(3), 751–761.
Martens, A., & Vanhoucke, M. (2019). The impact of applying effort to reduce
Ballesteros-Pérez, P. (2017). Modelling the boundaries of project fast-tracking.
activity uncertainty on the project time and cost performance. European Journal
Automation in Construction, 84, 231–241.
of Operational Research, 277(2), 442–453
Ballesteros-Pérez, P., Cerezo-Narváez, A., Otero-Mateo, M., Pasto-Fernández, A., &
Martin, J. (1965). Distribution of the time through a directed, acyclic network.
Vanhoucke, M. (2019). Performance comparison of activity sensitivity metrics in
Operations Research, 13, 46–66.
schedule risk analysis. Automation in Construction, 106, 1–11.
Mastor, A. (1970). An experimental and comparative evaluation of production line
Bein, W., Kamburowski, J., & Stallmann, M. (1992). Optimal reduction of two-terminal
balancing techniques. Management Science, 16, 728–746.
directed acyclic graphs. SIAM Journal on Computing, 21, 1112–1129.
PMBOK (2004). A guide to the project management body of knowledge (3rd ed.). Newtown
Bie, L., Cui, N., & Zhang, X. (2012). Buffer sizing approach with dependence assumption
Square, Pa.: Project Management Institute, Inc.
between activities in critical chain scheduling. International Journal of Production
Song, J., Martens, A., & Vanhoucke, M. (2020). The impact of a limited budget on
Research, 50(24), 7343–7356.
the corrective action taking process. European Journal of Operational Research, 286,
Bowman, R. A. (1995). Efficient estimation of arc criticalities in stochastic activity
1070–1086.
networks. Management Science, 41, 58–67.
Song, J., Martens, A., & Vanhoucke, M. (2021). Using schedule risk analysis with
Bowman, R. A. (2006). Developing activity duration specification limits for effective
resource constraints for project control. European Journal of Operational Research,
project control. European Journal of Operational Research, 174(2), 1191–1204.
288, 736–752.
Cho, J., & Yum, B. (1997). An uncertainty importance measure of activities in PERT
Spelde, H. (1976). Stochastische Netzplane und ihre Anwendung im Baubetrieb (Ph.D.
networks. International Journal of Production Research, 35, 2737–2758.
thesis).
Cho, J., & Yum, B. (2004). Functional estimation of activity criticality indices and
Tavares, L., Ferreira, J., & Coelho, J. (2002). A comparative morphologic analysis of
sensitivity analysis of expected project completion time. Journal of the Operational
benchmark sets of project networks. International Journal of Project Management, 20,
Research Society, 55, 850–859.
475–485.
Colin, J., & Vanhoucke, M. (2016). Empirical perspective on activity durations for
Trietsch, D., Mazmanyan, L., Govergyan, L., & Baker, K. R. (2012). Modeling activity
project-management simulation studies. Journal of Construction Engineering and
times by the parkinson distribution with a lognormal core: Theory and validation.
Management, 142(1).
European Journal of Operational Research, 216, 386–396.
De Reyck, B., & Herroelen, W. (1996). On the use of the complexity index as a measure
Van Slyke, R. (1963). Monte Carlo methods and the PERT problem. Operations Research,
of complexity in activity networks. European Journal of Operational Research, 91,
11, 839–860.
347–366.
Vanhoucke, M. (2010a). International series in operations research and management
Demeulemeester, E., Vanhoucke, M., & Herroelen, W. (2003). RanGen: A random
science: vol. 136, Measuring time - Improving project performance using earned value
network generator for activity-on-the-node networks. Journal of Scheduling, 6,
management. Springer.
17–38.
Vanhoucke, M. (2010b). Using activity sensitivity and network topology information to
Dodin, B. (1985). Bounding the project completion time distribution in PERT networks.
monitor project time performance. Omega the International Journal of Management
Operations Research, 33(4), 862–881.
Science, 38, 359–370.
Dodin, B., & Elmaghraby, S. (1985). Approximating the criticality indices of the
Vanhoucke, M. (2011). On the dynamic use of project performance and schedule risk
activities in PERT networks. Management Science, 31, 207–223.
information during project tracking. Omega the International Journal of Management
Elmaghraby, S. (1999). Optimal resource allocation and budget estimation in
Science, 39, 416–426.
multimodal activity networks.
Vanhoucke, M. (2012a). Project management with dynamic scheduling: Baseline scheduling,
Elmaghraby, S. (2000). On criticality and sensitivity in activity networks. European
risk analysis and project control, vol. XVIII. Springer.
Journal of Operational Research, 127, 220–238.
Vanhoucke, M. (2012b). Measuring the efficiency of project control using fictitious and
Elmaghraby, S., Fathi, Y., & Taner, M. (1999). On the sensitivity of project variability to
empirical project data. International Journal of Project Management, 30, 252–263.
activity mean duration. International Journal of Production Economics, 62, 219–232.
Vanhoucke, M. (2019). Tolerance limits for project control: an overview of different
Elshaer, R. (2013). Impact of sensitivity information on the prediction of project’s
approaches. Computers & Industrial Engineering, 127, 467–479.
duration using earned schedule method. International Journal of Project Management,
Vanhoucke, M., Coelho, J., Debels, D., Maenhout, B., & Tavares, L. (2008). An
31, 579–588.
evaluation of the adequacy of project network generators with systematically
Fatemi Ghomi, S., & Teimouri, E. (2002). Path critical index and activity critical index
sampled networks. European Journal of Operational Research, 187, 511–524.
in PERT networks. European Journal of Operational Research, 141, 147–152.
Vanhoucke, M., & Debels, D. (2008). The impact of various activity assumptions on
Fleming, Q., & Koppelman, J. (2010). Earned value project management (3rd ed.). Newton
the lead time and resource utilization of resource-constrained projects. Computers
Square, Pennsylvania: Project Management Institute.
& Industrial Engineering, 54, 140–154.
Gutierrez, G., & Paul, A. (2000). Analysis of the effects of uncertainty, risk-pooling,
Vaseghi, F., Martens, A., & Vanhoucke, M. (2022). Analysis of the impact of corrective
and subcontracting mechanisms on project performance. Operations Research, 48,
actions for stochastic project networks (submitted for publication).
927–938.
Williams, T. (1992). Criticality in stochastic networks. Journal of the Operational
Hagstrom, J. (1988). Computational complexity of PERT problems. Networks,
Research Society, 43, 353–357.
18(139–147).
